WO2024021801A1 - Packet forwarding device and method, communication chip, and network device - Google Patents

Packet forwarding device and method, communication chip, and network device

Info

Publication number
WO2024021801A1
WO2024021801A1 (PCT/CN2023/095571)
Authority
WO
WIPO (PCT)
Prior art keywords
message
header
packet
new
network processor
Application number
PCT/CN2023/095571
Other languages
English (en)
French (fr)
Inventor
徐洪波
李中华
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Publication of WO2024021801A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00: Packet switching elements
    • H04L49/25: Routing or path finding in a switch fabric
    • H04L49/90: Buffering arrangements

Definitions

  • the present application relates to the field of communication technology, and in particular to a packet forwarding device and method, a communication chip, and a network device.
  • a switch is a network device used for forwarding network data packets. It can connect multiple devices to a computer network and forward data to the destination through data packet switching. In the data forwarding process, the forwarding chip in the switch plays a key role.
  • in current forwarding chips, a packet forwarded and processed by the network processor (NP) is usually cached separately by the cache inside the network processor, and the same packet, when processed by the traffic manager (TM), is cached again separately by the cache inside the traffic manager. This increases the power consumption caused by cache access and also reduces cache utilization. There is therefore an urgent need for a solution to these problems.
  • the purpose of this application is to provide a packet forwarding device and method, a communication chip, and a network device, so as to solve the problem that the network processor and the traffic manager inside a forwarding chip must repeatedly read and write the same packet when forwarding it, thereby reducing the power consumption caused by accessing the cache and improving cache utilization.
  • in a first aspect, a packet forwarding device is provided, which includes a network processor and a traffic manager connected to the network processor, and a cache module is provided inside the network processor.
  • the network processor is configured to send at least one descriptor to the traffic manager, where the at least one descriptor is used to indicate a storage address of at least one message in the cache module.
  • the traffic manager is configured to send the first information of the packet to be forwarded to the network processor based on the at least one descriptor.
  • the message to be forwarded is one of at least one message; the first information is used to indicate the storage address of the message to be forwarded in the cache module.
  • the network processor is configured to obtain the message to be forwarded based on the first information, and send the obtained message to be forwarded to the traffic manager.
  • the traffic manager is configured to receive and forward the packet to be forwarded. Based on this, the packet only needs to be read and written once, in the cache module of the network processor, and does not need to be read and written again in a cache inside the traffic manager, which not only reduces the power consumption caused by accessing the cache module but also improves cache utilization.
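The exchange described in this first aspect can be illustrated with a minimal, hypothetical sketch (class and field names are ours, not the patent's): the traffic manager handles only descriptors and first information, while the packet itself is written and read exactly once, in the network processor's cache module.

```python
class NetworkProcessor:
    def __init__(self):
        self.cache = {}          # cache module: storage address -> packet bytes
        self.next_addr = 0

    def store(self, packet):
        addr = self.next_addr
        self.cache[addr] = packet                      # the only write of the packet
        self.next_addr += 1
        return {"addr": addr, "length": len(packet)}   # descriptor sent to the TM

    def fetch(self, first_info):
        return self.cache[first_info["addr"]]          # the only read of the packet

class TrafficManager:
    def __init__(self, np_):
        self.np = np_
        self.descriptors = []

    def receive_descriptor(self, desc):
        self.descriptors.append(desc)

    def schedule_and_forward(self):
        desc = self.descriptors.pop(0)
        first_info = {"addr": desc["addr"]}   # indicates the storage address only
        return self.np.fetch(first_info)      # TM forwards without re-caching

np_side = NetworkProcessor()
tm_side = TrafficManager(np_side)
tm_side.receive_descriptor(np_side.store(b"hello-packet"))
forwarded = tm_side.schedule_and_forward()
assert forwarded == b"hello-packet"   # packet crossed the TM without a second cache
```

Note that the TM never holds the packet bytes in a structure of its own; only descriptor metadata crosses over until the final forward.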
  • the above-mentioned at least one descriptor is also used to indicate the packet priority and packet length of at least one packet.
  • the traffic manager is further configured to determine a first mark of the message to be forwarded based on the above-mentioned packet priority and packet length; the first mark is used to indicate the forwarding order of the message to be forwarded.
  • when the traffic manager sends the first information to the network processor, the first mark is also carried in the first information. Based on this, the network processor can read the messages to be forwarded in their forwarding order, which improves the accuracy and efficiency of reading them.
  • before the network processor sends the at least one descriptor to the traffic manager, it is further configured to first store the received at least one message in the cache module in units of cells.
  • the at least one message includes an original message header, message data and a message terminator, and each cell corresponds to a storage address in the cache module.
  • the network processor reads the original message header of the message from the cache module, and generates a new message header based on the original message header and a preset forwarding routing table. Finally, the network processor writes the new message header into the cache module. Based on this, the network processor only needs to read the original message header from the cache module and generate the new header, without reading the message data, which also reduces power consumption.
  • when the packet is a unicast packet and the length of the new packet header exceeds the length of the original packet header, the network processor is configured to divide the new packet header into a first part and a second part, replace the original packet header with the first part, and store the second part in the free part of the storage address corresponding to the cell where the message terminator is located. Based on this, when the new packet header is longer than the original one, there is no need to allocate an additional storage address, further reducing power consumption.
  • when the length of the second part exceeds the free part of the storage address corresponding to the cell where the message terminator is located, the network processor is further configured to divide the second part into a first segment and a second segment, store the first segment in that free part, and store the second segment in a newly added storage address. Based on this, the cache space can be fully utilized.
  • alternatively, when the packet is a unicast packet and the length of the new packet header exceeds the length of the original packet header, the network processor is further configured to divide the new packet header into a first part and a second part, replace the original message header with the first part, and store the second part in a newly added storage address. Based on this, the network processor no longer needs to compare the length of the second part with the free part of the storage address corresponding to the cell where the message terminator is located, which shortens the processing flow and improves storage efficiency.
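The header-placement variants above reduce to a simple splitting rule. The following illustrative helper (the function name and byte-level layout are assumptions, not the patent's format) splits a longer new header into the part that overwrites the original header, the part that fits into the last cell's free space, and the overflow that needs a newly added storage address:

```python
def place_new_header(new_hdr, orig_hdr_len, last_cell_free):
    """Split a new header into (part1, part2, part3): part1 overwrites the
    original header in the first cell, part2 fills the free space of the last
    cell, and part3 goes to a newly allocated cell. Hypothetical helper;
    all lengths are in bytes."""
    if len(new_hdr) <= orig_hdr_len:
        return new_hdr, b"", b""          # fits in place, no split needed
    part1, rest = new_hdr[:orig_hdr_len], new_hdr[orig_hdr_len:]
    part2, part3 = rest[:last_cell_free], rest[last_cell_free:]
    return part1, part2, part3
```

For example, an 8-byte new header replacing a 4-byte original with 2 bytes of free space in the last cell yields a 4-byte part1, a 2-byte part2, and a 2-byte part3 for the new cell.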
  • when the packet is a multicast packet, the network processor is further configured to store the new message header in a newly added storage address. Based on this, changing the structure of the multicast packet can be avoided.
  • in a second aspect, a packet forwarding method is provided, which is applied to a packet forwarding device including a network processor and a traffic manager.
  • the method includes: first, the network processor sends at least one descriptor to the traffic manager; the at least one descriptor is used to indicate the storage address of at least one message in the cache module.
  • the traffic manager sends the first information of the message to be forwarded to the network processor based on the above at least one descriptor, where the message to be forwarded is one of the at least one message, and the first information is used to indicate the storage address of the message to be forwarded in the cache module.
  • the network processor obtains the packet to be forwarded based on the first information, and sends the packet to be forwarded to the traffic manager.
  • the traffic manager receives and forwards the packets to be forwarded.
  • the above-mentioned at least one descriptor is also used to indicate the packet priority and packet length of the at least one packet; the traffic manager can also use the packet priority and packet length to determine a first mark of the packet to be forwarded, where the first mark is used to indicate the forwarding order of the packet to be forwarded.
  • the traffic manager sends the first information of the packet to be forwarded to the network processor based on each of the above descriptors, it may also carry the first tag in the first information and send it to the network processor.
  • before sending the at least one descriptor, the network processor may first store at least one received message in the cache module in units of cells, where each cell corresponds to a storage address in the cache module, and each message includes an original message header, message data, a message terminator, etc. Then, the network processor obtains the original message header from the cache module. Finally, based on the original message header and a preset forwarding routing table, the network processor generates a new message header and stores it in the cache module.
  • when the network processor generates a new packet header based on the original packet header and the preset forwarding routing table and stores it in the cache module, it can also determine whether the packet is a unicast packet. If the packet is a unicast packet and the length of the new packet header exceeds the length of the original packet header, the network processor divides the new packet header into a first part and a second part, replaces the original packet header with the first part, and stores the second part in the free part of the storage address corresponding to the cell where the message terminator is located.
  • when the network processor stores the second part in the free part of the storage address corresponding to the cell where the message terminator is located, it may also determine whether the length of the second part exceeds that free part. If it does, the network processor divides the second part into a first segment and a second segment, stores the first segment in the free part of the storage address corresponding to the cell where the message terminator is located, and stores the second segment in a newly added storage address.
  • alternatively, when the network processor generates a new packet header and stores it in the cache module, if it determines that the packet is a unicast packet and the length of the new packet header exceeds the length of the original packet header, it can divide the new packet header into a first part and a second part, replace the original packet header with the first part, and store the second part in a newly added storage address.
  • if the packet is a multicast packet, the new message header can be stored in a newly added storage address.
  • in a third aspect, a communication chip is provided, including a processor and a memory, where the memory is used to store program instructions, and the processor is used to execute the program instructions in the memory to implement the method in any possible implementation manner of the second aspect.
  • in a fourth aspect, a network device is provided, which includes the packet forwarding device in any possible implementation manner of the first aspect.
  • in a fifth aspect, a computer-readable storage medium is provided. Computer-readable program instructions are stored in the storage medium; when the program instructions are run on a message forwarding device, the method in any possible implementation manner of the second aspect is implemented.
  • in a sixth aspect, a computer program product is provided; when it is run on a message forwarding device, the method in any possible implementation manner of the second aspect is implemented.
  • Figure 1 is a schematic diagram of the working principle of an existing switch provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of the hardware structure of a message forwarding device provided by an embodiment of the present application.
  • Figure 3 is a schematic flow chart of a message forwarding method provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of a storage method of an original message header provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of a new message header storage method provided by an embodiment of the present application.
  • Figure 6 is a schematic flowchart of a new message header storage process provided by an embodiment of the present application.
  • Figure 7 is a schematic diagram of the storage process of another new message header provided by the embodiment of the present application.
  • Figure 8 is a schematic diagram of the storage process of yet another new message header provided by the embodiment of the present application.
  • Figure 9 is a schematic topological structure diagram of a message forwarding device provided by an embodiment of the present application.
  • Figure 10 is a schematic diagram of the topology of another message forwarding device provided by an embodiment of the present application.
  • in this application, “A/B” can mean A or B; “and/or” merely describes an association relationship between associated objects, indicating that three relationships can exist.
  • for example, “A and/or B” can mean: A exists alone, A and B exist simultaneously, or B exists alone, where A and B can be singular or plural.
  • “plural” means two or more. “At least one of the following” or similar expressions refer to any combination of these items, including any combination of single items or plural items.
  • for example, “at least one of a, b, or c” can mean: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b and c can each be single or multiple.
  • words such as “first” and “second” are used to distinguish identical or similar items with basically the same functions and effects. Those skilled in the art can understand that such words do not limit the number or the execution order.
  • words such as “exemplary” or “for example” are used to represent examples, illustrations or explanations. Any embodiment or design described as “exemplary” or “such as” in the embodiments of the present application is not to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as “exemplary” or “such as” is intended to present related concepts in a concrete manner that is easier to understand.
  • an embodiment of the present application provides an architectural diagram of a switching network.
  • the external network 20 and the switching network 30 are connected through network devices.
  • the network device can be a switch, a router with switch function, and other devices.
  • the existing switch 10 usually consists of a network interface module 101, a message forwarding device 100, a switching network interface module 102, and so on.
  • the network interface module 101 is used to connect to the external network 20 and receive messages sent by the external network 20 .
  • the message forwarding device 100 is connected to the network interface module 101 for generating a forwarding message based on the message, and then transmits the forwarding message to the corresponding switching network 30 through the switching network interface module 102 .
  • the above-mentioned message forwarding device 100 usually uses a forwarding chip including a network processor (NP) 110 and a traffic manager (TM) 120. The NP is a programmable device that can be used in the communication field for packet processing, protocol analysis, route lookup, etc.; the TM can complete functions such as service data caching, traffic shaping, and congestion avoidance.
  • this caching method means that the same message must be completely written and read once in the NP, and then completely written and read again in the TM. The same message is thus read and written in full twice, which increases the power consumption caused by accessing the cache and reduces cache utilization.
  • embodiments of the present application provide a message forwarding method applied to the message forwarding device 100.
  • when the traffic manager 120 in this message forwarding method performs output scheduling, it obtains the message to be forwarded from the cache module 111 of the network processor 110, and then receives and forwards it, so that the message only needs one complete read and write operation, performed in the network processor 110, thereby reducing the power consumption caused by accessing the cache and improving cache utilization.
  • FIG. 2 is a schematic diagram of the hardware structure of a message forwarding device 200 provided by an embodiment of the present application.
  • the message forwarding device 200 includes a processor 201, a memory 202, and a network interface 203.
  • the processor 201 may be a general central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program of the present application.
  • the processor 201 may also include multiple CPUs, and the processor 201 may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • a processor here may refer to one or more devices, circuits, or processing cores for processing data (eg, computer program instructions).
  • the network processor (network processor, NP) 110 and the traffic manager (traffic manager, TM) 120 provided by the embodiment of the present application can be integrated into the processor 201, or can be implemented using discrete processors 201.
  • the memory 202 may be a device with a storage function. For example, it can be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions; it can also be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compressed optical discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but it is not limited thereto.
  • the memory 202 may exist independently and be connected to the processor 201 through a communication line.
  • the memory 202 may also be integrated with the processor 201.
  • the memory 202 is used to store computer execution instructions for executing the solution of the present application, and is controlled by the processor 201 for execution.
  • the processor 201 is configured to execute computer execution instructions stored in the memory 202, thereby implementing the message forwarding method in the embodiment of the present application.
  • the data stored in the memory 202 may include the messages provided in the embodiment of the present application, which is not limited by the specific embodiment of the present application.
  • the memory 202 may include the cache module 111 provided by the embodiment of the present application, and the cache module 111 and the processor 201 may be integrated on the same chip; or, the cache module 111 may also be independent from the processor 201 set up.
  • the processor 201 implements the message forwarding method provided by the embodiment of the present application by reading instructions stored in the memory 202, or the processor 201 implements the message forwarding method provided by the embodiment of the present application by using internally stored instructions.
  • the processor 201 implements the method in the above embodiment by reading the instructions stored in the memory 202
  • the memory 202 stores instructions for implementing the message forwarding method provided by the embodiment of the present application.
  • the network interface 203 is a wired interface (port), such as an FDDI or GE interface.
  • network interface 203 is a wireless interface. It should be understood that the network interface 203 includes multiple physical ports, and the network interface 203 is used for forwarding messages and the like.
  • the message forwarding device 200 also includes a bus 204.
  • the above-mentioned processor 201, memory 202, and network interface 203 are usually connected to each other through the bus 204, or are connected to each other in other ways.
  • the network processor receives at least one message and stores it in the cache module.
  • the message includes the original message header, message data and message terminator.
  • when the network processor stores a message in the cache module, it divides the message into cells in the order "First cell", "Second cell", ..., "Last cell", and then allocates storage addresses in the cache module 111 to store the message; at the same time, the network processor 110 records the corresponding storage linked list and saves it in the memory.
  • the cell size can be selected from 48 bytes, 64 bytes, 128 bytes, or 256 bytes. It can be understood that the size of the cell can also be set according to actual needs and is not limited to the above method.
  • the above storage linked list can record the storage address of the first cell and the sequential relationship between the storage addresses of the subsequent cells and that of the first cell.
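As a sketch of this cell-based storage step (the dict-based cache and address counter are illustrative assumptions, not the patent's data structures), a packet can be cut into fixed-size cells and the addresses recorded in order as the storage linked list:

```python
CELL_SIZE = 64   # one of the sizes mentioned above (48/64/128/256 bytes)

def store_in_cells(packet, cache, next_addr):
    """Split a packet into First cell, Second cell, ..., Last cell, write each
    cell to a freshly allocated storage address, and return the storage linked
    list (addresses in order) plus the next free address."""
    cells = [packet[i:i + CELL_SIZE] for i in range(0, len(packet), CELL_SIZE)]
    linked_list = []
    for cell in cells:
        cache[next_addr] = cell        # one write per cell into the cache module
        linked_list.append(next_addr)
        next_addr += 1
    return linked_list, next_addr

cache = {}
linked_list, _ = store_in_cells(bytes(150), cache, 0)
assert len(linked_list) == 3           # 150 bytes -> cells of 64 + 64 + 22 bytes
assert b"".join(cache[a] for a in linked_list) == bytes(150)
```

The real chip would keep the linked list in memory alongside the cache module, as described above; here a plain Python list stands in for it.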
  • the network processor reads the original message header of at least one message from the cache module.
  • the network processor 110 can read the original message header of the message from the cache module 111 according to the above-mentioned storage linked list.
  • the network processor generates a new packet header based on the read original packet header and the preset forwarding routing table.
  • the forwarding routing table stores multiple known forwarding paths; the forwarding path records the path information between the sending address and the receiving address corresponding to the message.
  • the original message header usually carries the sending address corresponding to the message.
  • the network processor can determine the corresponding forwarding path based on the sending address in the original message header and the forwarding routing table, determine the receiving address based on the forwarding path, and store the receiving address, message length, message priority and other contents in the new message header.
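A toy version of this lookup-and-build step, with illustrative field names (the patent does not specify a header layout), might look like:

```python
def build_new_header(orig_header, forwarding_table):
    """Look up the forwarding path for the packet's sending address and build
    a new header holding the receiving address, packet length and priority.
    Field names are illustrative assumptions, not the patent's wire format."""
    path = forwarding_table[orig_header["src"]]   # preset forwarding routing table
    return {
        "dst": path["dst"],                       # receiving address from the path
        "length": orig_header["length"],
        "priority": orig_header["priority"],
        "unicast": orig_header["unicast"],        # the flag bit is retained
    }
```

The `unicast` flag is carried over because, as described below, the placement of the new header in the cache depends on whether the packet is unicast or multicast.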
  • the network processor stores the new message header in the cache module.
  • the process of the network processor 110 storing the new packet header in the cache module 111 can be performed according to the following implementation manner.
  • the network processor 110 may store the new packet header of a unicast packet in the cache module 111 in several ways.
  • the embodiment of this application can divide the new message header into three parts, part1, part2 and part3, according to the actual situation:
  • part1 overwrites the original message header (Header) in the first cell of the message;
  • part2 is written into the free space (Free space) in the last cell of the message;
  • part3 is allocated a new cell (New cell).
  • new cells need to be allocated for each new packet header.
  • a flag bit can be set in the original header of the message to indicate whether the current message is a unicast message or a multicast message, and the flag bit can be retained. in the generated new message header.
  • the network processor 110 may first determine whether the packet is a unicast packet or a multicast packet based on the flag bit in the new packet header. If the identified flag bit corresponds to a unicast message, when the network processor 110 stores the new message header in the cache module 111, it may first determine whether the length of the new message header exceeds the length of the original message header.
  • if so, the network processor 110 divides the new packet header into a first part (part1) and a second part (part2), replaces the original packet header with the first part (part1), and stores the second part (part2) in the free part of the storage address corresponding to the cell where the message terminator is located. If the identified flag bit corresponds to a multicast message, the network processor 110 can directly store the new message header into a newly added storage address.
  • the second part of the new message header may be longer than the free part of the storage address corresponding to the cell where the message terminator is located. Therefore, as shown in Figure 7, a flag bit can be set in the original message header to indicate whether the current message is a unicast message or a multicast message, and this flag bit can be retained in the generated new message header. After the network processor 110 generates a new message header, it may first determine whether the message is a unicast message or a multicast message based on the flag bit in the new message header.
  • the network processor 110 when the network processor 110 stores the new packet header in the cache module 111, the original packet header may be replaced with the first part of the new packet header; and then Continue to determine whether the length of the second part of the new message header exceeds the free part of the storage address corresponding to the cell where the message end character is located. If the length of the second part of the new message header exceeds the free part of the storage address corresponding to the cell where the message end symbol is located, the network processor 110 divides the second part of the new message header into a first segment and a second segment.
  • the network processor 110 can directly store the new message header into the newly added storage address. In this way, the storage space of the cache module 111 can be more fully utilized.
  • if the free part of the storage address corresponding to the cell where the message terminator is located is shorter than the second part of the new message header, the second part is divided into a first segment and a second segment; the first segment is stored as part2 in the free part of the storage address corresponding to the cell where the message terminator is located, and the second segment is stored as part3 in a newly added storage address.
  • a flag bit can be set in the original message header of the message to indicate whether the current message is a unicast message or a multicast message. This flag bit can be retained in the generated new message. In the header.
  • after the network processor 110 generates a new packet header, it may first determine whether the packet is a unicast packet or a multicast packet based on the flag bit in the new packet header. If the flag bit corresponds to a unicast message, when the network processor 110 stores the new message header in the cache module 111, it may further determine whether the length of the second part of the new message header exceeds the free part of the storage address corresponding to the cell where the message terminator is located; if it does, the network processor 110 directly stores the second part of the new message header into a newly added storage address. If the flag bit corresponds to a multicast message, the network processor 110 likewise directly stores the new message header into a newly added storage address.
  • the network processor sends at least one descriptor to the traffic manager.
  • each descriptor contains information such as the storage address of the new message header of the corresponding message, the message priority and message length of each message.
  • the traffic manager receives at least one of the above descriptors.
  • the traffic manager generates first information of the packet to be forwarded based on the at least one descriptor and the preset forwarding routing table.
  • the traffic manager 120 may generate a first mark of the message to be forwarded based on the message priority and the message packet length; and carry the first mark in the first information.
  • the first mark is used to indicate the forwarding sequence of the packets to be forwarded.
  • the traffic manager 120 first determines the position of the corresponding packet to be forwarded in the forwarding scheduling queue based on the packet priority in each descriptor, and then numbers the packet within the output scheduling queue in units of cells based on the corresponding packet length. When numbering, it can start from the new message header and generate, cell by cell, the queue number of the packet to be forwarded in the forwarding scheduling queue as the first mark. Finally, the traffic manager 120 uses the storage address and queue number of the new message header, as well as the sequence number and queue number of each cell corresponding to the message data, as the first information. The queue number is used to indicate the forwarding order of the packets to be forwarded.
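The numbering scheme can be sketched as follows; treating the first mark as a run of per-cell queue numbers assigned in priority order is our illustrative reading, not the patent's exact algorithm:

```python
import math

def first_marks(descriptors, cell_size=64):
    """Assign queue numbers (the 'first mark') from descriptors: packets are
    ordered by priority (higher first), then each packet's cells, starting
    from the new header's cell, get consecutive numbers. Illustrative only."""
    order = sorted(descriptors, key=lambda d: -d["priority"])
    marks, next_no = {}, 0
    for d in order:
        n_cells = math.ceil(d["length"] / cell_size)   # cells per packet
        marks[d["addr"]] = list(range(next_no, next_no + n_cells))
        next_no += n_cells
    return marks
```

For example, with a 64-byte cell, a priority-7 packet of 64 bytes is scheduled before a priority-1 packet of 100 bytes, which occupies the next two queue numbers.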
  • the traffic manager sends the first information of the packet to be forwarded to the network processor.
  • the traffic manager 120 performs output scheduling at cell granularity, and the forwarding scheduling queue outputs on a per-packet basis.
  • when the traffic manager 120 schedules a new packet header, it outputs the storage address and queue number of the new packet header of the packet to be forwarded to the network processor 110; when it schedules packet data, it outputs to the network processor 110 the sequence number and queue number of each cell corresponding to the stored packet data.
  • the network processor receives the first information sent by the traffic manager.
  • the network processor obtains the message to be forwarded from the cache module based on the first information.
• the network processor 110 can read the corresponding new packet header from the cache module 111 according to the storage linked list and the received storage address of the new packet header, and read the packet data according to the queue number.
  • the storage linked list includes the storage address of the cell corresponding to the new message header, the sequence number and storage address of each cell corresponding to the message data, and the sequence relationship between the cells.
  • the network processor can sequentially read the new message header, message data and other contents according to the order relationship in the storage linked list, and encapsulate them to obtain the message to be forwarded.
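The read-out and encapsulation step described in the preceding bullets can be sketched as follows. The dict-based `cache` stands in for cache module 111, and the linked list is reduced to an ordered address sequence (header cell first); these simplifications are assumptions for illustration, not the patent's actual structures.

```python
def reassemble(cache, linked_list):
    """Walk the storage linked list in order and concatenate cell contents.

    cache: mapping of storage address -> cell bytes (stand-in for cache module 111)
    linked_list: ordered cell addresses, new-header cell first, data cells after
    """
    return b"".join(cache[addr] for addr in linked_list)

cache = {
    0x100: b"NEWHDR",    # cell holding the new message header
    0x200: b"payload-",  # first data cell
    0x300: b"end",       # last data cell (message terminator included)
}
packet = reassemble(cache, [0x100, 0x200, 0x300])
```

Because the order relationship is preserved in the linked list, the message data is never rewritten — it is only read once at output, which is the power-saving point the surrounding text makes.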
  • the network processor sends the obtained packet to be forwarded to the traffic manager.
  • the traffic manager receives and forwards the packet to be forwarded sent by the network processor.
• the traffic manager 120 may also encrypt the message to be forwarded according to an encryption protocol.
• any of various protocols in current use may serve as the encryption protocol; the embodiments of the present application do not specifically limit this.
  • embodiments of the present application provide a message forwarding device 100.
  • the message forwarding device 100 is configured to perform each step in the above message forwarding method.
• the embodiments of the present application may divide the message forwarding device into functional modules according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or software function modules.
  • the division of modules in the embodiments of this application is schematic and is only a logical function division. There may be other division methods in actual implementation.
• FIG. 9 and FIG. 10 show possible structural diagrams of the message forwarding device involved in the above embodiments.
  • the packet forwarding device 100 includes a network processor 110 and a traffic manager 120 connected to the network processor 110 .
  • the network processor 110 includes a cache module 111 .
  • the network processor 110 is configured to send at least one descriptor to the traffic manager 120; the at least one descriptor is used to indicate the storage address of at least one message in the cache module.
• the traffic manager 120 is configured to send the first information of the packet to be forwarded to the network processor 110 according to the at least one descriptor; the packet to be forwarded is one of the above at least one packet, and the first information is used to indicate the storage address of the packet to be forwarded in the cache module.
• the network processor 110 is also configured to obtain the packet to be forwarded based on the first information, and send the packet to be forwarded to the traffic manager 120.
  • the traffic manager 120 is also used to receive and forward messages to be forwarded.
• the traffic manager 120 can directly obtain the message to be forwarded from the cache module 111 of the network processor 110 without performing another read/write pass on the message in its own buffer, which reduces the power consumption caused by accessing the cache and also improves cache utilization.
  • the network processor 110 includes a cache module 111, a reassembly module 112 connected to the cache module 111, and a forwarding plane module 113 connected to the cache module 111.
• the above-mentioned reassembly module 112 and forwarding plane module 113 can be implemented by hardware coprocessors inside the network processor 110, and the cache module 111 can use static random access memory (SRAM).
• the above-mentioned at least one descriptor is also used to indicate the message priority and packet length of the at least one message. After receiving a descriptor, the traffic manager 120 can generate, according to the message priority and packet length, a first mark of the message to be forwarded in the forwarding scheduling queue, carry the first mark in the first information, and send it to the network processor. The reassembly module 112 then reads the new message header, message data and other contents of the message to be forwarded from the cache module 111 in the forwarding order indicated by the first mark.
• the traffic manager 120 may first determine the position of the packet corresponding to the descriptor in the forwarding scheduling queue based on the packet priority in the descriptor, and then number the packet in units of cells according to the packet length. Numbering may start from the new message header, generating the queue number of the message to be forwarded in the forwarding scheduling queue according to the sequence number of each cell as the first mark. At the same time, the traffic manager 120 saves the storage address and queue number of the new message header, together with the sequence number and queue number of each cell of the message data, in the forwarding scheduling list.
• the network processor 110 may also perform the following steps before sending at least one descriptor to the traffic manager 120: first, the network processor 110 stores the received message into the cache module 111 in units of cells; each cell corresponds to one storage address in the cache module 111, and the message includes an original message header, message data, a message terminator and other contents. Secondly, the network processor 110 generates a new packet header based on the original packet header and the preset forwarding routing table and stores it in the cache module 111.
  • the new message header is obtained as follows: first, the reassembly module 112 stores the received message in the cache module 111 in units of cells, and in the cache module 111 for each Cells are allocated storage addresses for storage, and the corresponding storage linked lists are recorded at the same time. Among them, the message includes the original message header, message data, message terminator and other contents. Then, the reassembly module 112 generates a new packet header based on the above-mentioned original packet header and the preset forwarding routing table and stores it in the cache module 111 .
• when the reassembly module 112 receives a message that includes the original message header, message data, and message terminator, it first divides the message in units of cells into multiple cells (the first cell (First cell), the second cell (Second cell), ..., the last cell (Last cell) shown in Figure 4), allocates a storage address in the cache module 111 for each cell, and records the corresponding storage linked list. Secondly, the reassembly module 112 reads the original message header of the message from the storage addresses allocated in the cache module 111 and sends it to the forwarding plane module 113.
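The write path just described — split into cells, allocate an address per cell, record the linked list — can be sketched as follows. The sequential address allocator and the base address are invented stand-ins; a real cache module would allocate free cells from a pool.

```python
import itertools

# Stand-in address allocator: hands out one address per 64-byte cell slot.
_next_addr = itertools.count(0x1000, 0x40)

def store_message(cache, message: bytes, cell_size: int = 64):
    """Split `message` into cells, store each at a fresh address, and return
    the storage linked list (addresses in First cell ... Last cell order)."""
    linked_list = []
    for off in range(0, len(message), cell_size):
        addr = next(_next_addr)
        cache[addr] = message[off:off + cell_size]
        linked_list.append(addr)
    return linked_list

cache = {}
# 10-byte original header followed by 150 bytes of data: 160 bytes -> 3 cells
chain = store_message(cache, b"H" * 10 + b"D" * 150)
```

Reading the chain back in order reproduces the message byte-for-byte, which is exactly what the reassembly step relies on.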
  • the forwarding plane module 113 generates a new packet header based on the original packet header and the preset forwarding routing table and stores it in the cache module 111; at the same time, the reassembly module 112 sends the descriptor of the new packet header to the traffic manager 120 .
• the traffic manager 120 then generates, according to the received descriptor of the new message header, the queue number of the corresponding message in the forwarding scheduling queue. The traffic manager 120 performs output scheduling according to the forwarding scheduling queue, and sends the queue number of the message to be forwarded and the storage address of its new message header to the reassembly module 112 as the first information.
• the reassembly module 112 reads the message to be forwarded corresponding to the queue number from the cache module 111 based on the first information and the storage linked list, and sends the read message to be forwarded to the traffic manager 120. Finally, the traffic manager 120 receives and forwards the message to be forwarded.
• when the traffic manager 120 performs output scheduling according to the forwarding scheduling queue, scheduling is done at cell granularity, and output within the forwarding scheduling queue is packet-based.
• in the process of obtaining the packet to be forwarded, if a new packet header is scheduled, the traffic manager 120 sends the storage address and queue number of the new packet header to the reassembly module 112; if packet data is scheduled, the traffic manager 120 sends the sequence number and queue number of each cell corresponding to the packet data to the reassembly module 112. After obtaining the storage address of the new packet header, the reassembly module 112 reads the new packet header and packet data corresponding to the queue number from the cache module 111 according to the storage linked list, and sends them to the traffic manager 120.
• the storage linked list is used to store the storage address of the first cell corresponding to the new message header, and the order relationship between the storage addresses of the cells corresponding to the subsequent message data and the storage address of the first cell.
  • the reassembly module 112 sequentially reads the new message header and message data according to the storage address of the first cell and the sequence relationship.
  • the traffic manager 120 may also encrypt the message to be forwarded according to the encryption protocol.
  • the reassembly module 112 can be implemented using a hardware coprocessor inside the network processor 110 .
  • the forwarding plane module 113 can also be implemented using a hardware coprocessor inside the network processor 110 .
  • the cache module 111 can use various caches that are currently widely used, and the embodiment of the present application does not specifically limit this.
  • the above-mentioned storage linked list can be stored through the memory.
• the memory includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, or optical memory; codes and data are stored in the memory.
• the data stored in the memory may also include transmission protocol information, forwarding routing tables, and the like; the specific contents are not limited in the embodiments of this application.
  • the message only needs to be written once when it is input and read again when it is output, which reduces the number of times of reading and writing the message, and also reduces the dynamic power consumption of the cache module 111.
• the packet data itself only needs to be read once when generating the forwarded packet output, which also greatly improves cache utilization.
• the embodiment of this application may, according to the actual situation, divide the new header into three parts, part1, part2 and part3: part1 overwrites the original message header (Header) in the first cell of the message, part2 is written into the free space (Free space) in the last cell of the message, and part3 is allocated a new cell (New cell).
• in addition, for multicast messages, since the original message header cannot be overwritten, a new cell needs to be allocated for each new message header. Based on this, specific implementations are described below.
• a flag bit can be set in the original message header of the message to indicate whether the current message is a unicast message or a multicast message; this flag bit can be retained in the generated new message header.
• the reassembly module 112 can first determine whether the message is a unicast message or a multicast message based on the flag bit in the new message header. If the identified flag bit corresponds to a unicast message, then while storing the new message header in the cache module 111, the reassembly module 112 may first determine whether the length of the new message header exceeds the length of the original message header. If it does, the reassembly module 112 divides the new message header into a first part (part1) and a second part (part2), replaces the original message header with the first part (part1), and stores the second part (part2) into the free part of the storage address corresponding to the cell where the message terminator is located. If the identified flag bit corresponds to a multicast message, the reassembly module 112 can directly store the new message header into a newly allocated storage address.
• considering that the second part of the new message header may be longer than the free part of the storage address corresponding to the cell where the message terminator is located, referring to Figure 7, a flag bit can be set in the original message header of the message to indicate whether the current message is a unicast message or a multicast message, and the flag bit can be retained in the generated new message header.
  • the reassembly module 112 can first determine whether the message is a unicast message or a multicast message based on the flag bit in the new message header.
• when the reassembly module 112 stores the new message header in the cache module 111, it may first replace the original message header with the first part of the new message header, and then determine whether the length of the second part of the new message header exceeds the free part of the storage address corresponding to the cell where the message terminator of the message is located. If it does, the reassembly module 112 divides the second part of the new message header into a first segment and a second segment, stores the first segment as part2 into the free part of the storage address corresponding to the cell where the message terminator is located, and stores the second segment as part3 into a newly allocated storage address. If the identified flag bit corresponds to a multicast message, the reassembly module 112 can directly store the new message header into a newly allocated storage address. In this way, the storage space of the cache module 111 can be more fully utilized.
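The placement decision from the two embodiments above can be sketched as one function. This is a simplifying illustration: lengths are plain byte counts, "allocate a new cell" is just a destination label, and all names are invented; only the branching logic mirrors the text.

```python
def place_new_header(new_len, orig_len, free_space, multicast):
    """Return a list of (part, destination, length) placements for a new header.

    new_len:    length of the new message header
    orig_len:   length of the original header (space available in the first cell)
    free_space: free bytes in the last cell (where the message terminator sits)
    multicast:  value of the unicast/multicast flag bit
    """
    if multicast:
        # the original header must not be overwritten: whole header in a new cell
        return [("part1", "new_cell", new_len)]
    if new_len <= orig_len:
        # fits entirely where the old header was
        return [("part1", "first_cell", new_len)]
    part2_len = new_len - orig_len
    placements = [("part1", "first_cell", orig_len)]
    if part2_len <= free_space:
        placements.append(("part2", "last_cell_free_space", part2_len))
    else:
        # Figure 7 variant: split the overflow into part2 (fits) and part3 (new cell)
        placements.append(("part2", "last_cell_free_space", free_space))
        placements.append(("part3", "new_cell", part2_len - free_space))
    return placements

placements = place_new_header(120, 64, 16, multicast=False)
```

With a 120-byte new header, a 64-byte original header, and 16 free bytes in the last cell, the overflow of 56 bytes is split into a 16-byte part2 and a 40-byte part3 in a new cell.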
• although the free part of the storage address corresponding to the cell where the message terminator is located may be shorter than the second part of the new message header, continuing to execute the process of dividing the second part into a first segment and a second segment, storing the first segment as part2 into that free part, and storing the second segment as part3 into a newly allocated storage address would lengthen the processing flow and reduce storage efficiency.
• therefore, a flag bit can be set in the original header of the message to indicate whether the current message is a unicast message or a multicast message, and the flag bit can be retained in the generated new message header.
• the reassembly module 112 can first determine whether the message is a unicast message or a multicast message based on the flag bit in the new message header. If the identified flag bit corresponds to a unicast message, then while storing the new message header in the cache module 111, the reassembly module 112 may further determine whether the length of the second part of the new message header exceeds the free part of the storage address corresponding to the cell where the message terminator is located; if it does, the reassembly module 112 directly stores the second part of the new message header into a newly allocated storage address. If the identified flag bit corresponds to a multicast message, the reassembly module 112 likewise directly stores the new message header into a newly allocated storage address.
  • the message forwarding device may be presented in the form of dividing various functional modules in an integrated manner.
• a "module" here may refer to an application-specific integrated circuit (ASIC), a circuit, a processor and memory executing one or more software or firmware programs, an integrated logic circuit, and/or other devices that can provide the above functions.
  • the message forwarding device may take the form of the message forwarding device shown in FIG. 2 .
• the processor 201 in the message forwarding device shown in Figure 2 can cause the message forwarding device 100 to execute the message forwarding method in the above method embodiments by calling the computer-executable instructions stored in the memory 202.
• the functions/implementation processes of the network processor 110 and the traffic manager 120 in Figure 9 can be implemented by the processor 201 in the message forwarding device 100 shown in Figure 2 calling the computer-executable instructions stored in the memory 202.
  • the network processor 110 and the traffic manager 120 may be integrated into the same processor 201, or may be provided in two separate processors 201.
• the embodiment of the present application also provides a message forwarding device (for example, the message forwarding device may be a communication chip or a chip system).
  • the message forwarding device includes a processor and an interface.
  • the processor is used to read instructions to execute the method in any of the above method embodiments.
  • the message forwarding device further includes a memory.
  • the memory is used to store necessary program instructions and data.
  • the processor can call the program code stored in the memory to instruct the message forwarding device to execute the method in any of the above method embodiments.
• alternatively, the memory may be located outside the communication device.
  • the message forwarding device is a chip system, it may be composed of a chip, or may include a chip and other discrete devices, which is not specifically limited in the embodiment of the present application.
  • the embodiments of the present application also provide a computer-readable storage medium.
• the computer-readable storage medium stores computer-readable program instructions; when the program instructions are run on the message forwarding device, the method in any of the above method embodiments is implemented.
• the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, a USB flash drive, an optical data storage device, or the like.
  • the embodiment of the present application also provides a computer program product.
• when the computer program product is run on a message forwarding device, it causes the message forwarding device to execute the method in any of the above embodiments.
  • the disclosed message forwarding device and message forwarding method can be implemented in other ways.
  • the embodiments of the message forwarding device described above are only illustrative.
• the division of modules or units is only a logical function division; in actual implementation there may be other division methods, for example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not performed.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • a unit described as a separate component may or may not be physically separate.
• a component shown as a unit may be one physical unit or multiple physical units; that is, it may be located in one place, or it may be distributed across multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional module in various embodiments of the present application can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
• the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product; the software product is stored in a storage medium and includes a number of instructions to cause a device (such as a microcontroller or a chip) or a processor to execute all or part of the steps of the methods of the various embodiments of the present application.
• the aforementioned storage media include any medium that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.

Abstract

Embodiments of the present application provide a message forwarding device and method, a communication chip and a network device, applied in the field of communication technology, to solve the problem that, when the network processor and the traffic manager in a forwarding chip forward a message, the message has to be processed separately by each of them, which increases cache access power consumption while reducing cache utilization. First, the network processor sends at least one descriptor to the traffic manager; the at least one descriptor is used to indicate the storage address of at least one message in the cache module. Second, the traffic manager sends first information of the message to be forwarded to the network processor according to the at least one descriptor and a preset forwarding routing table. Then, the network processor obtains the message to be forwarded according to the received first information and sends it to the traffic manager. Finally, the traffic manager receives and forwards the message to be forwarded.

Description

Message forwarding device and method, communication chip, and network device

This application claims priority to Chinese patent application No. 202210886409.2, entitled "Message forwarding device and method, communication chip and network device", filed with the China National Intellectual Property Administration on July 26, 2022, the entire contents of which are incorporated herein by reference.
Technical Field

The present application relates to the field of communication technology, and in particular to a message forwarding device and method, a communication chip and a network device.

Background

A switch is a network device used for forwarding network data packets; it can connect multiple devices to a computer network and forward data to its destination by means of packet switching. In the data forwarding process, the forwarding chip in the switch plays a key role. However, for forwarding scenarios that do not require off-chip caching, a message processed by the network processor (NP) in current forwarding chips is usually cached separately by a buffer inside the network processor, while the message processed by the traffic manager (TM) is cached separately again by a buffer inside the traffic manager. This increases the power consumption caused by buffer accesses and also reduces cache utilization. A solution to these problems is therefore urgently needed.
Summary

The purpose of the present application is to provide a message forwarding device and method, a communication chip and a network device, so as to solve the problem that the network processor and the traffic manager inside a forwarding chip repeatedly read and write the same message when forwarding it, thereby reducing the power consumption caused by accessing the buffer and improving cache utilization.

To achieve the above purpose, embodiments of the present application provide the following technical solutions:

In a first aspect, a message forwarding device is provided. The message forwarding device includes a network processor and a traffic manager connected to the network processor, and a cache module is provided inside the network processor. The network processor is configured to send at least one descriptor to the traffic manager, the at least one descriptor being used to indicate the storage address of at least one message in the cache module. The traffic manager is configured to send first information of a message to be forwarded to the network processor according to the at least one descriptor, where the message to be forwarded is one of the at least one message and the first information is used to indicate the storage address of the message to be forwarded in the cache module. The network processor is then configured to obtain the message to be forwarded according to the first information and send it to the traffic manager. Finally, the traffic manager is configured to receive and forward the message to be forwarded. On this basis, the message only needs to undergo one read/write pass in the cache module of the network processor, with no further read/write pass in a cache module of the traffic manager, which reduces the power consumption caused by accessing the cache module and also improves cache utilization.
In one possible implementation, the at least one descriptor is also used to indicate the message priority and packet length of the at least one message. The traffic manager is further configured to determine a first mark of the message to be forwarded according to the message priority and packet length; the first mark is used to indicate the forwarding order of the message to be forwarded. When sending the first information to the network processor, the traffic manager also carries the first mark in the first information. On this basis, the network processor can read the messages to be forwarded in that forwarding order, improving the accuracy and efficiency of reading.

In one possible implementation, before sending the at least one descriptor to the traffic manager, the network processor is further configured to first store the received at least one message into the cache module in units of cells, where the at least one message includes an original message header, message data and a message terminator, and each cell corresponds to one storage address in the cache module. The network processor then reads the original message header of the message from the cache module and generates a new message header according to the original message header and a preset forwarding routing table. Finally, the network processor writes the new message header into the cache module. On this basis, the network processor only needs to read the original message header in the cache module and generate the new message header, without reading the message data, which also reduces power consumption.

In one possible implementation, when the message is a unicast message and the length of the new message header exceeds the length of the original message header, the network processor is configured to divide the new message header into a first part and a second part, replace the original message header with the first part, and store the second part into the free part of the storage address corresponding to the cell where the message terminator is located. On this basis, when the length of the new message header exceeds that of the original message header, storage addresses do not need to be re-allocated, further reducing power consumption.

In one possible implementation, when the length of the second part exceeds the free part of the storage address corresponding to the cell where the message terminator is located, the network processor is further configured to divide the second part into a first segment and a second segment, store the first segment into the free part of the storage address corresponding to the cell where the message terminator is located, and store the second segment into a newly allocated storage address. On this basis, the cache space can be fully utilized.

In one possible implementation, when the message is a unicast message and the length of the new message header exceeds the length of the original message header, the network processor is further configured to divide the new message header into a first part and a second part, replace the original message header with the first part, and store the second part into a newly allocated storage address. On this basis, the network processor no longer needs to compare the length of the second part with the free part of the storage address corresponding to the cell where the message terminator is located, which simplifies the processing flow and improves storage efficiency.

In one possible implementation, when the message is a multicast message, the network processor is further configured to store the new message header into a newly allocated storage address. On this basis, changing the structure of the multicast message can be avoided.
In a second aspect, a message forwarding method is provided, applied to a message forwarding device including a network processor and a traffic manager. The method includes: first, the network processor sends at least one descriptor to the traffic manager; the at least one descriptor is used to indicate the storage address of at least one message in the cache module. Second, the traffic manager sends first information of a message to be forwarded to the network processor according to the at least one descriptor, where the message to be forwarded is one of the at least one message and the first information is used to indicate the storage address of the message to be forwarded in the cache module. Then, the network processor obtains the message to be forwarded according to the first information and sends it to the traffic manager. Finally, the traffic manager receives and forwards the message to be forwarded.

In one possible implementation, the at least one descriptor is also used to indicate the message priority and packet length of the at least one message; the traffic manager may further determine a first mark according to the message priority and packet length, the first mark indicating the forwarding order of the message to be forwarded. When sending the first information of the message to be forwarded to the network processor according to each descriptor, the traffic manager may also carry the first mark in the first information.

In one possible implementation, before the network processor sends the at least one descriptor to the traffic manager, it may first store the received at least one message into the cache module in units of cells, where each cell corresponds to one storage address in the cache module and each message includes an original message header, message data, a message terminator and other contents. The network processor then obtains the original message header from the cache module. Finally, the network processor generates a new message header according to the original message header and the preset forwarding routing table and stores it into the cache module.

In one possible implementation, while generating the new message header according to the original message header and the preset forwarding routing table and storing it into the cache module, the network processor may also determine whether the message is a unicast message. If the message is a unicast message and the length of the new message header exceeds that of the original message header, the network processor divides the new message header into a first part and a second part, replaces the original message header with the first part, and stores the second part into the free part of the storage address corresponding to the cell where the message terminator is located.

In one possible implementation, while storing the second part into the free part of the storage address corresponding to the cell where the message terminator is located, the network processor may also determine whether the length of the second part exceeds that free part. If it does, the network processor divides the second part into a first segment and a second segment, stores the first segment into the free part of the storage address corresponding to the cell where the message terminator is located, and stores the second segment into a newly allocated storage address.

In one possible implementation, while generating the new message header according to the original message header and the preset forwarding routing table and storing it into the cache module, if the network processor determines that the message is a unicast message and the length of the new message header exceeds that of the original message header, it may divide the new message header into a first part and a second part, replace the original message header with the first part, and store the second part into a newly allocated storage address.

In one possible implementation, while generating the new message header according to the original message header and the preset forwarding routing table and storing it into the cache module, if the network processor determines that the message is a multicast message, it may store the new message header into a newly allocated storage address.
In a third aspect, a communication chip is provided, including a processor and a memory, where the memory is used to store program instructions and the processor is used to execute the program instructions in the memory, so as to implement the method in any possible implementation of the second aspect.

In a fourth aspect, a network device is provided; the network device includes the message forwarding device in any possible implementation of the first aspect.

In a fifth aspect, a computer-readable storage medium is provided, storing computer-readable program instructions; when the program instructions are run on a message forwarding device, the method in any possible implementation of the second aspect is implemented.

In a sixth aspect, a computer program product is provided; when the computer program product runs on a computer, the method in any possible implementation of the second aspect is implemented.

For the technical effects achievable by the second to sixth aspects, reference may be made to the first aspect; details are not repeated here.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of the working principle of an existing switch according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the hardware structure of a message forwarding device according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a message forwarding method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a storage manner of an original message header according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a storage manner of a new message header according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of storing a new message header according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of another manner of storing a new message header according to an embodiment of the present application;
FIG. 8 is a schematic flowchart of yet another manner of storing a new message header according to an embodiment of the present application;
FIG. 9 is a schematic topology diagram of a message forwarding device according to an embodiment of the present application;
FIG. 10 is a schematic topology diagram of another message forwarding device according to an embodiment of the present application.
Detailed Description

The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings.

In the description of this application, unless otherwise stated, "/" indicates an "or" relationship between the associated objects; for example, A/B may mean A or B. "And/or" in this application merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. Moreover, in the description of this application, unless otherwise stated, "multiple" means two or more. "At least one of the following" or similar expressions refer to any combination of these items, including any combination of single or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where each of a, b and c may be single or multiple. In addition, to clearly describe the technical solutions of the embodiments of this application, words such as "first" and "second" are used to distinguish identical or similar items with substantially the same functions and effects. Those skilled in the art will understand that words such as "first" and "second" do not limit quantity or execution order, and do not necessarily indicate a difference. Meanwhile, in the embodiments of this application, words such as "exemplary" or "for example" are used to give examples, illustrations or explanations; any embodiment or design described as "exemplary" or "for example" should not be construed as more preferred or advantageous than other embodiments or designs. Rather, such words are intended to present related concepts in a concrete manner for ease of understanding.

The present invention is described in detail below with reference to the accompanying drawings and embodiments:

As shown in Figure 1, an embodiment of the present application provides an architecture diagram of a switching network. In the switching network, a network device connects an external network 20 with a switching fabric 30. The network device may be a switch, a router with switching functions, or similar equipment. Taking a switch as an example, an existing switch 10 usually consists of a network interface module 101, a message forwarding device 100, a switching fabric interface module 102 and the like. The network interface module 101 is used to connect to the external network 20 and receive messages sent by the external network 20. The message forwarding device 100, connected to the network interface module 101, generates a forwarded message from the received message and then delivers the forwarded message through the switching fabric interface module 102 to the corresponding switching fabric 30. The message forwarding device 100 usually adopts a forwarding chip including a network processor (NP) 110 and a traffic manager (TM) 120. The NP is a programmable device that can be used for packet processing, protocol analysis, route lookup and the like in the communication field; the TM can perform functions such as service data buffering, traffic shaping and congestion avoidance. Buffers are provided in both the NP and the TM, and in application scenarios that do not require off-chip caching, a message processed by the NP is usually cached separately by the buffer inside the NP, while the message processed by the TM is usually cached separately by the buffer inside the TM. However, this caching scheme means that the same message must be fully written and read once in the NP and then fully written and read once again in the TM. In this way, the same message is effectively read and written twice in full, which increases the power consumption caused by accessing the buffers while reducing cache utilization.
To address the problems described in the background, an embodiment of the present application provides a message forwarding method applied to the message forwarding device 100. When the traffic manager 120 in this message forwarding method performs output scheduling, it obtains the message to be forwarded from the cache module 111 of the network processor 110, and then receives and forwards it, so that the message only needs to undergo one complete read/write pass in the network processor 110, thereby reducing the power consumption caused by accessing the buffer while improving cache utilization.

Figure 2 is a schematic diagram of the hardware structure of a message forwarding device 200 provided by an embodiment of the present application; the message forwarding device 200 includes a processor 201, a memory 202 and a network interface 203.

The processor 201 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the solutions of this application. In a specific implementation, as an embodiment, the processor 201 may also include multiple CPUs, and the processor 201 may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. The processor here may refer to one or more devices, circuits, or processing cores for processing data (for example, computer program instructions). The network processor (NP) 110 and the traffic manager (TM) 120 provided by the embodiments of this application may be integrated into the processor 201; of course, the network processor (NP) 110 and the traffic manager (TM) 120 may also be implemented with separate processors 201.

The memory 202 may be a device with a storage function. For example, it may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, or an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compressed discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 202 may exist independently and be connected to the processor 201 through a communication line, or the memory 202 may be integrated with the processor 201.

The memory 202 is used to store computer-executable instructions for executing the solutions of this application, with execution controlled by the processor 201. Specifically, the processor 201 is used to execute the computer-executable instructions stored in the memory 202, thereby implementing the message forwarding method in the embodiments of this application. In the embodiments of this application, the data stored in the memory 202 may include the messages provided in the embodiments of this application; this is not specifically limited. It should be noted that the memory 202 may contain the cache module 111 provided by the embodiments of this application, and the cache module 111 may be integrated on the same chip as the processor 201; alternatively, the cache module 111 may be provided independently of the processor 201.

Optionally, the processor 201 implements the message forwarding method provided by the embodiments of this application by reading the instructions stored in the memory 202, or the processor 201 implements the message forwarding method provided by the embodiments of this application through internally stored instructions. In the case where the processor 201 implements the method in the above embodiments by reading the instructions stored in the memory 202, the memory 202 stores the instructions for implementing the message forwarding method provided by the embodiments of this application.

The network interface 203 is a wired interface (port), such as an FDDI or GE interface; alternatively, the network interface 203 is a wireless interface. It should be understood that the network interface 203 includes multiple physical ports, and the network interface 203 is used for forwarding messages and the like.

Optionally, the message forwarding device 200 further includes a bus 204; the processor 201, the memory 202 and the network interface 203 are usually connected to each other through the bus 204, or connected in other ways.
As shown in Figure 3, the overall flow of the message forwarding method implemented with the message forwarding device provided in Figure 2 above is as follows.

S301. The network processor receives at least one message and stores it into the cache module.

The message includes an original message header, message data and a message terminator. Exemplarily, as shown in Figure 4, when storing the message into the cache module, the network processor divides the message in units of cells in the form of "first cell (First cell), second cell (Second cell), ..., last cell (Last cell)", and then allocates storage addresses in the cache module 111 to store the message; at the same time, the network processor 110 records the corresponding storage linked list and saves it in the memory. When dividing the message, the cell size may be any one of 48 bytes, 64 bytes, 128 bytes or 256 bytes. It can be understood that the cell size may also be set according to actual requirements and is not limited to the above options. The storage linked list may record the storage address of the first cell, and the order relationship between the storage addresses of the subsequent cells and that of the first cell.

S302. The network processor reads the original message header of the at least one message from the cache module.

The network processor 110 may read the original message header of the message from the cache module 111 according to the above storage linked list.

S303. The network processor generates a new message header based on the read original message header and the preset forwarding routing table.

The forwarding routing table stores multiple known forwarding paths; each forwarding path records the path information between the source address and the destination address corresponding to a message. The original message header usually carries the source address of the message, so the network processor can determine the corresponding forwarding path according to the source address in the original message header and the forwarding routing table, and, based on that forwarding path, store the destination address, message length, message priority and other contents of the message into the new message header.
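The routing-table lookup in step S303 can be sketched as a toy illustration. The table contents, addresses and field names below are invented for the example; only the flow — look up the path by the source address in the original header, then populate the new header with the destination address, length and priority — follows the description. The retained unicast/multicast flag is an assumption carried over from the later Figure 6-8 discussion.

```python
# Hypothetical forwarding routing table: source address -> path information.
FORWARDING_TABLE = {
    "10.0.0.1": {"recv_addr": "10.0.9.9", "port": 3},
}

def make_new_header(orig_header, payload_len):
    """Build the new message header from the matched forwarding path."""
    path = FORWARDING_TABLE[orig_header["send_addr"]]
    return {
        "recv_addr": path["recv_addr"],                          # destination address
        "length": payload_len,                                   # message length
        "priority": orig_header.get("priority", 0),              # message priority
        "multicast_flag": orig_header.get("multicast_flag", 0),  # flag retained (assumed)
    }

hdr = make_new_header({"send_addr": "10.0.0.1", "priority": 5}, 256)
```

The resulting header is what later gets written back into the cache module as part1/part2/part3 in step S304.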
S304. The network processor stores the new message header into the cache module.

The network processor 110 may store the new message header into the cache module 111 according to the following implementations.

As shown in Figure 5, considering that the length of the new message header usually changes once it replaces the original message header, a secondary inflection point may appear when the network processor 110 stores the new message header of a unicast message into the cache module 111. To reduce the secondary inflection points caused by storing the new message header of a unicast message, the embodiment of this application may, according to the actual situation, divide the new message header into three parts, part1, part2 and part3, where part1 overwrites the original message header (Header) in the first cell of the message, part2 is written into the free space (Free space) in the last cell of the message, and part3 is allocated a new cell (New cell). In addition, for a multicast message, the original message header cannot be overwritten, so a new cell needs to be allocated for each new message header. On this basis, the implementations provided by the embodiments of this application are as follows.

In one implementation, as shown in Figure 6, a flag bit can be set in the original message header of the message to indicate whether the current message is a unicast message or a multicast message, and the flag bit can be retained in the generated new message header. After generating the new message header, the network processor 110 can first determine, according to the flag bit in the new message header, whether the message is a unicast message or a multicast message. If the identified flag bit corresponds to a unicast message, then while storing the new message header into the cache module 111, the network processor 110 can first determine whether the length of the new message header exceeds the length of the original message header; if it does, the network processor 110 divides the new message header into a first part (part1) and a second part (part2), replaces the original message header with the first part (part1), and stores the second part (part2) into the free part of the storage address corresponding to the cell where the message terminator is located. If the identified flag bit corresponds to a multicast message, the network processor 110 can directly store the new message header into a newly allocated storage address.

In one implementation, the second part of the new message header may be longer than the free part of the storage address corresponding to the cell where the message terminator is located. Therefore, as shown in Figure 7, a flag bit can be set in the original message header of the message to indicate whether the current message is a unicast message or a multicast message, and the flag bit can be retained in the generated new message header. After generating the new message header, the network processor 110 can first determine, according to the flag bit in the new message header, whether the message is a unicast message or a multicast message. If the identified flag bit corresponds to a unicast message, then while storing the new message header into the cache module 111, the network processor 110 can first replace the original message header with the first part of the new message header, and then determine whether the length of the second part of the new message header exceeds the free part of the storage address corresponding to the cell where the message terminator is located. If it does, the network processor 110 divides the second part of the new message header into a first segment and a second segment, stores the first segment as part2 into the free part of the storage address corresponding to the cell where the message terminator is located, and stores the second segment as part3 into a newly allocated storage address. If the identified flag bit corresponds to a multicast message, the network processor 110 can directly store the new message header into a newly allocated storage address. In this way, the storage space of the cache module 111 can be used more fully.

In one implementation, although the free part of the storage address corresponding to the cell where the message terminator is located may be shorter than the second part of the new message header, continuing to execute the process of "dividing the second part of the new message header into a first segment and a second segment, storing the first segment as part2 into the free part of the storage address corresponding to the cell where the message terminator is located, and storing the second segment as part3 into a newly allocated storage address" would lengthen the processing flow and reduce storage efficiency. Therefore, as shown in Figure 8, a flag bit can be set in the original message header of the message to indicate whether the current message is a unicast message or a multicast message, and the flag bit can be retained in the generated new message header. After generating the new message header, the network processor 110 can first determine, according to the flag bit in the new message header, whether the message is a unicast message or a multicast message. If the identified flag bit corresponds to a unicast message, then while storing the new message header into the cache module 111, the network processor 110 can further determine whether the length of the second part of the new message header exceeds the free part of the storage address corresponding to the cell where the message terminator is located; if it does, the network processor 110 directly stores the second part of the new message header into a newly allocated storage address. If the identified flag bit corresponds to a multicast message, the network processor 110 likewise directly stores the new message header into a newly allocated storage address.
S305. The network processor sends at least one descriptor to the traffic manager.

Each descriptor contains information such as the storage address of the new message header of the corresponding message, along with the message priority and message length of each message.

S306. The traffic manager receives the above at least one descriptor.

S307. The traffic manager generates first information of the message to be forwarded based on the above at least one descriptor and the preset forwarding routing table.

The traffic manager 120 may generate a first mark of the message to be forwarded based on the message priority and the packet length, and carry the first mark in the first information; the first mark is used to indicate the forwarding order of the message to be forwarded.

Exemplarily, the traffic manager 120 first determines the position of the corresponding message to be forwarded in the forwarding scheduling queue according to the message priority in each descriptor, and then numbers the message in the output scheduling queue in units of cells according to the corresponding message length. Numbering may start from the new message header, generating the queue number of the message to be forwarded in the forwarding scheduling queue according to the sequence number of each cell; this serves as the first mark. Finally, the traffic manager 120 takes the storage address and queue number of the new message header, together with the sequence number and queue number of each cell of the message data, as the first information. The queue number is used to indicate the forwarding order of the messages to be forwarded.
S308. The traffic manager sends the first information of the message to be forwarded to the network processor.

The traffic manager 120 performs output scheduling at cell granularity, and output within the forwarding scheduling queue is packet-based. When scheduling a new message header, the traffic manager 120 outputs the storage address and queue number of the new message header of the message to be forwarded to the network processor 110; when scheduling message data, it outputs the sequence number and queue number of each cell storing that message data to the network processor 110.

S309. The network processor receives the first information sent by the traffic manager.

S310. The network processor obtains the message to be forwarded from the cache module based on the first information.

The network processor 110 may read the corresponding new message header from the cache module 111 according to the above storage linked list and the received storage address of the new message header, and read the message data according to the queue number. The storage linked list includes the storage address of the cell corresponding to the new message header, the sequence number and storage address of each cell corresponding to the message data, and the order relationship between the cells. The network processor can read the new message header, message data and other contents in sequence according to the order relationship in the storage linked list, and encapsulate them to obtain the message to be forwarded.

S311. The network processor sends the obtained message to be forwarded to the traffic manager.

S312. The traffic manager receives and forwards the message to be forwarded sent by the network processor.

In the process of forwarding the message to be forwarded, the traffic manager 120 may also encrypt it according to an encryption protocol. It can be understood that any of various protocols in current use may serve as the encryption protocol; the embodiments of the present application do not specifically limit this.
Correspondingly, an embodiment of this application provides a packet forwarding device 100 configured to perform the steps of the above packet forwarding method. The embodiments of this application may divide the packet forwarding device into functional modules according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The division of modules in the embodiments of this application is illustrative and merely a logical functional division; other divisions are possible in actual implementation.
In the case where each functional module corresponds to one function, FIG. 9 and FIG. 10 show possible structural diagrams of the packet forwarding device involved in the above embodiments. As shown in FIG. 9, the packet forwarding device 100 includes a network processor 110 and a traffic manager 120 connected to the network processor 110. The network processor 110 includes a cache module 111.
The network processor 110 is configured to send at least one descriptor to the traffic manager 120; the at least one descriptor is used to indicate the storage address of at least one packet in the cache module. The traffic manager 120 is configured to send first information of a to-be-forwarded packet to the network processor 110 according to the at least one descriptor, where the to-be-forwarded packet is one of the at least one packet, and the first information is used to indicate the storage address of the to-be-forwarded packet in the cache module. The network processor 110 is further configured to obtain the to-be-forwarded packet according to the first information and send it to the traffic manager 120. The traffic manager 120 is further configured to receive and forward the to-be-forwarded packet.
In this way, when forwarding the to-be-forwarded packet, the traffic manager 120 can obtain it directly from the cache module 111 of the network processor 110, without performing another round of reading and writing in its own buffer, which reduces the power consumption caused by buffer accesses and also improves buffer utilization.
Specifically, as shown in FIG. 10, the network processor 110 includes the cache module 111, a reassembly module 112 connected to the cache module 111, and a forwarding-plane module 113 connected to the cache module 111. The reassembly module 112 and the forwarding-plane module 113 may be implemented by hardware coprocessors inside the network processor 110, and the cache module 111 may be a static random access memory (SRAM).
Further, the at least one descriptor is also used to indicate information such as the packet priority and packet length of the at least one packet. After receiving a descriptor, the traffic manager 120 may generate, according to the packet priority and packet length, a first tag of the to-be-forwarded packet in the forwarding scheduling queue, carry the first tag in the first information, and send it to the network processor. The reassembly module 112 then reads, from the cache module 111, the new packet header, packet data, and other content of the to-be-forwarded packet in the forwarding order indicated by the first tag.
Illustratively, the traffic manager 120 may first determine, according to the packet priority in a descriptor, the position of the corresponding packet in the forwarding scheduling queue, and then perform numbering on a per-cell basis according to the packet length. The numbering may start from the new packet header, generating queue numbers of the to-be-forwarded packet in the forwarding scheduling queue according to the sequence numbers of the cells, as the first tag. Meanwhile, the traffic manager 120 stores the storage address and queue number of the new packet header, together with the sequence numbers and queue numbers of the cells corresponding to the packet data, in a forwarding scheduling list.
In one embodiment, before the network processor 110 sends the at least one descriptor to the traffic manager 120, the following steps may be performed. First, the network processor 110 stores a received packet into the cache module 111 on a per-cell basis, each cell corresponding to one storage address in the cache module 111; the packet includes an original packet header, packet data, and an end-of-packet marker. Second, the network processor 110 generates a new packet header according to the original packet header and a preset forwarding routing table, and stores it in the cache module 111. With reference to the packet forwarding device shown in FIG. 10, the new packet header is obtained as follows: first, the reassembly module 112 stores the received packet into the cache module 111 on a per-cell basis, allocates a storage address for each cell in the cache module 111, and records the corresponding storage linked list, where the packet includes an original packet header, packet data, and an end-of-packet marker; then, the reassembly module 112 generates a new packet header according to the original packet header and the preset forwarding routing table, and stores it in the cache module 111.
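The per-cell storage step with linked-list recording can be sketched as follows. The consecutive-integer address allocator and the dict-based cache and chain are illustrative assumptions, not details of the application.

```python
def store_packet(cache, chain, packet, alloc_addr, cell_size=256):
    """Split a non-empty packet into fixed-size cells, store each cell at a
    freshly allocated address, and record the storage linked list.
    Returns the storage address of the first cell (holding the header)."""
    addrs = []
    for off in range(0, len(packet), cell_size):
        cache[alloc_addr] = packet[off:off + cell_size]  # store one cell
        addrs.append(alloc_addr)
        alloc_addr += 1
    # Record the ordering relationship among the cells.
    for cur, nxt in zip(addrs, addrs[1:]):
        chain[cur] = nxt
    chain[addrs[-1]] = None  # last cell terminates the list
    return addrs[0]
```

Storing and then walking the recorded chain reproduces the original byte sequence, which is the property the reassembly step relies on.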
Illustratively, upon receiving a packet containing an original packet header, packet data, and an end-of-packet marker, the reassembly module 112 first divides the packet into multiple cells (such as the first cell, second cell, ..., and last cell in FIG. 4), allocates a storage address for each cell in the cache module 111, and records the corresponding storage linked list. Next, the reassembly module 112 reads the original packet header of the packet from the allocated storage address in the cache module 111 and sends it to the forwarding-plane module 113. Then, the forwarding-plane module 113 generates a new packet header according to the original packet header and the preset forwarding routing table and stores it in the cache module 111; meanwhile, the reassembly module 112 sends the descriptor of the new packet header to the traffic manager 120. The traffic manager 120 then generates, according to the received descriptor of the new packet header, the queue number of the corresponding packet in the forwarding scheduling queue; the traffic manager 120 performs output scheduling according to the forwarding scheduling queue, and sends the queue number of the to-be-forwarded packet in the forwarding scheduling queue and the storage address of the new packet header of the to-be-forwarded packet, among other things, as the first information to the reassembly module 112. Next, the reassembly module 112 reads, from the cache module 111 according to the first information and the storage linked list, the to-be-forwarded packet corresponding to the queue number, and sends the read to-be-forwarded packet to the traffic manager 120. Finally, the traffic manager 120 receives and forwards the to-be-forwarded packet. When performing output scheduling according to the forwarding scheduling queue, the traffic manager 120 schedules at cell granularity, and output within the forwarding scheduling queue is packet-based. For example, in the process of obtaining the to-be-forwarded packet, if the new packet header is being scheduled, the traffic manager 120 sends the storage address and queue number of the new packet header to the reassembly module 112; if the packet data is being scheduled, the traffic manager 120 sends the sequence numbers and queue numbers of the cells corresponding to the packet data to the reassembly module 112. After obtaining the storage address of the new packet header, the reassembly module 112 reads, from the cache module 111 according to the storage linked list, the new packet header and packet data corresponding to the queue number, and sends them to the traffic manager 120. The storage linked list is used to store the storage address of the first cell corresponding to the new packet header, as well as the ordering relationship between the storage addresses of the cells corresponding to the subsequent packet data and the storage address of the first cell. The reassembly module 112 reads the new packet header and the packet data in sequence according to the storage address of the first cell and this ordering relationship. In addition, when forwarding the to-be-forwarded packet, the traffic manager 120 may further encrypt it according to an encryption protocol.
In the above implementation, the reassembly module 112 may be implemented by a hardware coprocessor inside the network processor 110, and the forwarding-plane module 113 may likewise be implemented by a hardware coprocessor inside the network processor 110. The cache module 111 may be any of the buffers currently in wide use, which the embodiments of this application do not specifically limit. The above storage linked list may be stored in a memory, including but not limited to random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, or optical memory. The memory holds code and data; for example, the data stored in the memory may further include transmission-protocol information, forwarding routing tables, and the like, which the embodiments of this application do not specifically limit.
Through the above implementation, a packet needs to be written only once on input and read only once on output, which reduces the number of read and write operations on the packet and thus lowers the dynamic power consumption of the cache module 111. Meanwhile, for a multicast packet, since only the original packet header needs to be replaced with a new packet header and the packet data itself needs to be read only once when the forwarded packet is generated for output, buffer utilization is also greatly improved.
In one embodiment, referring to FIG. 5, considering that the header length usually changes once the original packet header is replaced with the new packet header, storing the new packet header of a unicast packet in the cache module 111 may introduce a secondary inflection point. To reduce the secondary inflection point caused when storing the new packet header of a unicast packet, the embodiments of this application may, depending on the actual situation, divide the new packet header into three parts: part1, part2, and part3. Part1 overwrites the original header in the first cell of the packet, part2 is written into the free space of the last cell of the packet, and part3 is allocated a new cell. In addition, for a multicast packet, since the original packet header cannot be overwritten, every new packet header needs to be allocated a new cell. On this basis, specific embodiments are described below.
In one embodiment, referring to FIG. 6, a flag bit may be set in the original packet header of the packet to indicate whether the current packet is a unicast packet or a multicast packet, and this flag bit may be retained in the generated new packet header. After receiving the new packet header generated by the forwarding-plane module 113, the reassembly module 112 may first determine, from the flag bit in the new packet header, whether the packet is a unicast packet or a multicast packet. If the identified flag bit corresponds to a unicast packet, the reassembly module 112, when storing the new packet header into the cache module 111, may first determine whether the length of the new packet header exceeds the length of the original packet header; if it does, the reassembly module 112 divides the new packet header into a first part (part1) and a second part (part2), replaces the original packet header with the first part (part1), and stores the second part (part2) in the free portion of the storage address corresponding to the cell containing the end-of-packet marker. If the identified flag bit corresponds to a multicast packet, the reassembly module 112 may store the new packet header directly in a newly allocated storage address.
In one embodiment, the second part of the new packet header may be longer than the free portion of the storage address corresponding to the cell containing the end-of-packet marker. Therefore, referring to FIG. 7, a flag bit may be set in the original packet header of the packet to indicate whether the current packet is a unicast packet or a multicast packet, and this flag bit may be retained in the generated new packet header. After receiving the new packet header generated by the forwarding-plane module 113, the reassembly module 112 may first determine, from the flag bit in the new packet header, whether the packet is a unicast packet or a multicast packet. If the identified flag bit corresponds to a unicast packet, the reassembly module 112, when storing the new packet header into the cache module 111, may first replace the original packet header with the first part of the new packet header, and then determine whether the length of the second part of the new packet header exceeds the free portion of the storage address corresponding to the cell containing the end-of-packet marker of the packet. If it does, the reassembly module 112 divides the second part of the new packet header into a first segment and a second segment, stores the first segment as part2 in the free portion of the storage address corresponding to the cell containing the end-of-packet marker, and stores the second segment as part3 in a newly allocated storage address. If the identified flag bit corresponds to a multicast packet, the reassembly module 112 may store the new packet header directly in a newly allocated storage address. In this way, the storage space of the cache module 111 can be utilized more fully.
In one embodiment, although the free portion of the storage address corresponding to the cell containing the end-of-packet marker may be shorter than the second part of the new packet header, continuing to perform the process of "dividing the second part of the new packet header into a first segment and a second segment, storing the first segment as part2 in the free portion of the storage address corresponding to the cell containing the end-of-packet marker, and storing the second segment as part3 in a newly allocated storage address" would add processing steps and reduce storage efficiency. Therefore, referring to FIG. 8, a flag bit may be set in the original packet header of the packet to indicate whether the current packet is a unicast packet or a multicast packet, and this flag bit may be retained in the generated new packet header. After receiving the new packet header generated by the forwarding-plane module 113, the reassembly module 112 may first determine, from the flag bit in the new packet header, whether the packet is a unicast packet or a multicast packet. If the identified flag bit corresponds to a unicast packet, the reassembly module 112, when storing the new packet header into the cache module 111, may further determine whether the length of the second part of the new packet header exceeds the free portion of the storage address corresponding to the cell containing the end-of-packet marker of the packet; if it does, the reassembly module 112 stores the second part of the new packet header directly in a newly allocated storage address. If the identified flag bit corresponds to a multicast packet, the reassembly module 112 may likewise store the new packet header directly in a newly allocated storage address.
All relevant content of the steps involved in the above method embodiments can be cited in the functional descriptions of the corresponding functional modules, and is not repeated here.
In this embodiment, the packet forwarding device may be presented in the form of functional modules divided in an integrated manner. A "module" here may refer to an application-specific integrated circuit (ASIC), a circuit, a processor and memory executing one or more software or firmware programs, an integrated logic circuit, and/or other devices that can provide the above functions. In a simple embodiment, those skilled in the art will appreciate that the packet forwarding device may take the form of the packet forwarding device shown in FIG. 2.
For example, the processor 201 in the packet forwarding device shown in FIG. 2 may invoke the computer-executable instructions stored in the memory 202 to cause the packet forwarding device 100 to perform the packet forwarding method in the above method embodiments.
Specifically, the functions/implementation processes of the network processor 110 and the traffic manager 120 in FIG. 9 may be implemented by the processor 201 in the packet forwarding device 100 shown in FIG. 2 invoking the computer-executable instructions stored in the memory 202. Of course, the network processor 110 and the traffic manager 120 may be integrated into the same processor 201 or arranged in two separate processors 201.
Optionally, an embodiment of this application further provides a packet forwarding device (for example, the packet forwarding device may be a communication chip or a chip system), which includes a processor and an interface, where the processor is configured to read instructions to perform the method in any of the above method embodiments. In one possible design, the packet forwarding device further includes a memory configured to store the necessary program instructions and data; the processor may invoke the program code stored in the memory to instruct the packet forwarding device to perform the method in any of the above method embodiments. Of course, the memory may also be located outside the communication device. When the packet forwarding device is a chip system, it may consist of a chip, or may include a chip and other discrete components, which the embodiments of this application do not specifically limit.
In one embodiment, an embodiment of this application further provides a computer-readable storage medium storing computer-readable program instructions that, when run on a packet forwarding device, implement the method in any of the embodiments of FIG. 3 to FIG. 8. Illustratively, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, a USB flash drive, an optical data storage device, or the like.
In one embodiment, an embodiment of this application further provides a computer program product that, when run on a packet forwarding device, causes the packet forwarding device to perform the method in any of the above embodiments.
From the description of the above implementations, those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed packet forwarding device and packet forwarding method may be implemented in other manners. For example, the packet forwarding device embodiments described above are merely illustrative; the division of modules or units is merely a logical functional division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may be one physical unit or multiple physical units, that is, they may be located in one place or distributed across multiple different places. Some or all of the units may be selected as needed to achieve the objectives of the solutions of the embodiments.
In addition, the functional modules in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (for example, a single-chip microcomputer or a chip) or a processor to perform all or part of the steps of the methods of the embodiments of this application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The above are merely specific implementations of this application, but the protection scope of this application is not limited thereto; any variation or replacement within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (17)

  1. A packet forwarding device, comprising a network processor and a traffic manager connected to the network processor, wherein the network processor comprises a cache module;
    the network processor is configured to send at least one descriptor to the traffic manager, the at least one descriptor being used to indicate a storage address of at least one packet in the cache module;
    the traffic manager is configured to send first information of a to-be-forwarded packet to the network processor according to the at least one descriptor, wherein the to-be-forwarded packet is one of the at least one packet, and the first information is used to indicate a storage address of the to-be-forwarded packet in the cache module;
    the network processor is configured to obtain the to-be-forwarded packet according to the first information and send the to-be-forwarded packet to the traffic manager; and
    the traffic manager is further configured to receive and forward the to-be-forwarded packet.
  2. The packet forwarding device according to claim 1, wherein the at least one descriptor is further used to indicate a packet priority and a packet length of the at least one packet;
    the traffic manager is further configured to determine a first tag according to the packet priority and the packet length, the first tag being used to indicate a forwarding order of the to-be-forwarded packet; and
    the traffic manager being configured to send the first information of the to-be-forwarded packet to the network processor according to the at least one descriptor specifically comprises:
    sending the first information according to the at least one descriptor and the first tag, the first information further carrying the first tag.
  3. The packet forwarding device according to claim 1 or 2, wherein the network processor is further configured to:
    store the received at least one packet into the cache module on a per-cell basis, each cell corresponding to one storage address in the cache module, the at least one packet comprising an original packet header, packet data, and an end-of-packet marker;
    obtain the original packet header from the cache module; and
    generate a new packet header according to the original packet header and a preset forwarding routing table and store the new packet header in the cache module.
  4. The packet forwarding device according to claim 3, wherein the network processor being further configured to generate a new packet header according to the original packet header and a preset forwarding routing table and store the new packet header in the cache module comprises:
    when the packet is a unicast packet and a length of the new packet header exceeds a length of the original packet header, dividing the new packet header into a first part and a second part, replacing the original packet header with the first part, and storing the second part in a free portion of the storage address corresponding to the cell containing the end-of-packet marker.
  5. The packet forwarding device according to claim 4, wherein the network processor being further configured to store the second part in the free portion of the storage address corresponding to the cell containing the end-of-packet marker comprises:
    when a length of the second part exceeds the free portion of the storage address corresponding to the cell containing the end-of-packet marker, dividing the second part into a first segment and a second segment, storing the first segment in the free portion of the storage address corresponding to the cell containing the end-of-packet marker, and storing the second segment in a newly allocated storage address.
  6. The packet forwarding device according to claim 3, wherein the network processor being further configured to generate the new packet header according to the original packet header and a preset forwarding routing table and store the new packet header in the cache module further comprises:
    when the packet is a unicast packet and a length of the new packet header exceeds a length of the original packet header, dividing the new packet header into a first part and a second part, replacing the original packet header with the first part, and storing the second part in a newly allocated storage address.
  7. The packet forwarding device according to any one of claims 3 to 6, wherein the network processor being further configured to generate the new packet header according to the original packet header and a preset forwarding routing table and store the new packet header in the cache module further comprises:
    when the packet is a multicast packet, storing the new packet header in a newly allocated storage address.
  8. A packet forwarding method, applied to a packet forwarding device for packet forwarding, the packet forwarding device comprising a network processor and a traffic manager, wherein the method comprises:
    the network processor sending at least one descriptor to the traffic manager, the at least one descriptor being used to indicate a storage address of at least one packet in the cache module;
    the traffic manager sending first information of a to-be-forwarded packet to the network processor according to the at least one descriptor, wherein the to-be-forwarded packet is one of the at least one packet, and the first information is used to indicate a storage address of the to-be-forwarded packet in the cache module;
    the network processor obtaining the to-be-forwarded packet according to the first information and sending the to-be-forwarded packet to the traffic manager; and
    the traffic manager receiving and forwarding the to-be-forwarded packet.
  9. The method according to claim 8, wherein the at least one descriptor is further used to indicate a packet priority and a packet length of the at least one packet; the method further comprises:
    the traffic manager determining a first tag according to the packet priority and the packet length, the first tag being used to indicate a forwarding order of the to-be-forwarded packet; and
    the traffic manager sending the first information of the to-be-forwarded packet to the network processor according to the at least one descriptor specifically comprises:
    the traffic manager sending the first information according to the at least one descriptor and the first tag, the first information further carrying the first tag.
  10. The method according to claim 8 or 9, wherein before the network processor sends the at least one descriptor to the traffic manager, the method further comprises:
    storing the received at least one packet into the cache module on a per-cell basis, each cell corresponding to one storage address in the cache module, the at least one packet comprising an original packet header, packet data, and an end-of-packet marker;
    obtaining the original packet header from the cache module; and
    generating the new packet header according to the original packet header and a preset forwarding routing table and storing the new packet header in the cache module.
  11. The method according to claim 10, wherein generating the new packet header according to the original packet header and a preset forwarding routing table and storing the new packet header in the cache module comprises:
    when the packet is a unicast packet and a length of the new packet header exceeds a length of the original packet header, dividing the new packet header into a first part and a second part, replacing the original packet header with the first part, and storing the second part in a free portion of the storage address corresponding to the cell containing the end-of-packet marker.
  12. The method according to claim 11, wherein storing the second part in the free portion of the storage address corresponding to the cell containing the end-of-packet marker further comprises:
    when a length of the second part exceeds the free portion of the storage address corresponding to the cell containing the end-of-packet marker, dividing the second part into a first segment and a second segment, storing the first segment in the free portion of the storage address corresponding to the cell containing the end-of-packet marker, and storing the second segment in a newly allocated storage address.
  13. The method according to claim 10, wherein generating the new packet header according to the original packet header and a preset forwarding routing table and storing the new packet header in the cache module further comprises:
    when the packet is a unicast packet and a length of the new packet header exceeds a length of the original packet header, dividing the new packet header into a first part and a second part, replacing the original packet header with the first part, and storing the second part in a newly allocated storage address.
  14. The method according to any one of claims 8 to 13, wherein generating the new packet header according to the original packet header and a preset forwarding routing table and storing the new packet header in the cache module further comprises:
    when the packet is a multicast packet, storing the new packet header in a newly allocated storage address.
  15. A communication chip, comprising a processor and a memory, wherein the memory is configured to store program instructions, and the processor is configured to execute the program instructions in the memory to implement the method according to any one of claims 8 to 14.
  16. A network device, comprising the packet forwarding device according to any one of claims 1 to 7.
  17. A computer-readable storage medium storing computer-readable program instructions which, when run on a packet forwarding device, implement the method according to any one of claims 8 to 14.
PCT/CN2023/095571 2022-07-26 2023-05-22 Packet forwarding device and method, communication chip and network device WO2024021801A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210886409.2 2022-07-26
CN202210886409.2A CN117499351A (zh) 2022-07-26 Packet forwarding device and method, communication chip and network device

Publications (1)

Publication Number Publication Date
WO2024021801A1 true WO2024021801A1 (zh) 2024-02-01

Family

ID=89681545

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/095571 WO2024021801A1 (zh) 2022-07-26 2023-05-22 报文转发装置及方法、通信芯片及网络设备

Country Status (2)

Country Link
CN (1) CN117499351A (zh)
WO (1) WO2024021801A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117729274A (zh) 2024-02-07 2024-03-19 之江实验室 Packet processing method, apparatus, device and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018149102A1 (zh) * 2017-02-20 2018-08-23 深圳市中兴微电子技术有限公司 Method and device for reducing transmission latency of high-priority data, and storage medium
CN108874688A (zh) * 2018-06-29 2018-11-23 深圳市风云实业有限公司 Packet data caching method and device
CN109995658A (zh) * 2017-12-29 2019-07-09 华为技术有限公司 Method and apparatus for sending, receiving and forwarding packets
CN114157619A (zh) * 2021-11-30 2022-03-08 新华三半导体技术有限公司 Packet buffer management method and apparatus, and network processor



Also Published As

Publication number Publication date
CN117499351A (zh) 2024-02-02

Similar Documents

Publication Publication Date Title
US10204070B2 (en) Method, device, system and storage medium for implementing packet transmission in PCIE switching network
US6577542B2 (en) Scratchpad memory
CN108270676B Network data processing method and device based on Intel DPDK
JP4974078B2 Data processing device
JP6881861B2 Packet processing method and device
ES2684559T3 Message processing method and device
WO2024021801A1 (zh) Packet forwarding device and method, communication chip and network device
EP3657744B1 (en) Message processing
CN112118167B Fast cross-network tunnel data transmission method
CN106789734B Control system and method for jumbo frames in a switching control circuit
CN111290979B Data transmission method, apparatus and system
CN106850440B Router, routing method and chip for multi-address shared-data routing packets
WO2016202158A1 (zh) Packet transmission method, apparatus and computer-readable storage medium
CN112491715B Routing apparatus and routing device for a network on chip
TWI223747B (en) Increasing memory access efficiency for packet applications
WO2022143678A1 (zh) Packet storage method, packet enqueue/dequeue method and storage scheduling apparatus
JP2005210606A Communication apparatus performing packet priority control, priority control method and program
CN113297117A Data transmission method, device, network system and storage medium
WO2019095942A1 (zh) Data transmission method and communication device
JP3044653B2 Gateway device
WO2023130997A1 (zh) Method for managing traffic management (TM) control information, TM module and network forwarding device
JP2001202345A Parallel processor
JP3896829B2 Network relay device
JP2017212487A IP fragmentation device, IP defragmentation device, IP fragment packet communication system, IP fragment packet transmission method, IP fragment packet defragmentation method, IP fragment packet communication method, and program
KR950009428B1 Traffic load measurement method for a distributed-queue dual-bus communication system with an adaptive erasure node function

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23845016

Country of ref document: EP

Kind code of ref document: A1