CN113542152A - Method for processing message in network equipment and related equipment - Google Patents

Method for processing message in network equipment and related equipment

Info

Publication number
CN113542152A
CN113542152A
Authority
CN
China
Prior art keywords
entity
cache
message
policy
packet
Prior art date
Legal status
Pending
Application number
CN202010307569.8A
Other languages
Chinese (zh)
Inventor
张镇星
李楠
黄超
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010307569.8A priority Critical patent/CN113542152A/en
Priority to PCT/CN2021/087575 priority patent/WO2021209016A1/en
Publication of CN113542152A publication Critical patent/CN113542152A/en
Pending legal-status Critical Current


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 — Traffic control in data switching networks
    • H04L 47/10 — Flow control; Congestion control
    • H04L 47/30 — Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes

Abstract

The application provides a method for controlling congestion and related equipment. In the method, a policy management entity in the network device determines a message processing policy according to the state of a cache entity and sends the determined policy to a message processing entity; the message processing entity then processes newly received messages according to that policy. Because the message processing entity executes a processing policy that matches the current state of the cache entity, the cache entity can be managed more effectively, and cache-management anomalies in the cache entity are reduced.

Description

Method for processing message in network equipment and related equipment
Technical Field
The present application relates to the field of information technology, and in particular, to a method for processing a packet in a network device and a related device.
Background
Currently, some network devices adopt the following strategy to avoid congestion: if congestion is detected at an egress port of the network device, the network device requests the upstream device to stop sending the messages that are forwarded through that egress port. It takes a certain amount of time for the network device to send this notification to the upstream device, and the upstream device also takes a certain amount of time to respond to it. During this interval, the upstream device continues to send messages to the network device. To avoid congestion, a buffer provided in the network device may be used to buffer the messages received from the upstream device during this interval.
However, in some cases, the upstream device may fail to respond to the request sent by the network device and may continue sending messages to the network device. In that case, the network device may be unable to cache the newly received messages.
In addition, the network device typically manages the cache space according to preconfigured rules. However, if the preconfigured rules are wrong, abnormal conditions may occur in the cache space.
Disclosure of Invention
The application provides a method for processing a message in a network device and related equipment, which can reduce cache-space anomalies.
In a first aspect, an embodiment of the present application provides a method for processing a packet in a network device, where the network device includes a policy management entity, a cache entity, and a packet processing entity. The method includes: the policy management entity acquires first cache state information, where the first cache state information indicates a first state of the cache entity at a first moment; the policy management entity determines a first message processing policy according to the first cache state information and sends the first message processing policy to the message processing entity; and the message processing entity processes a newly received message according to the first message processing policy. In this technical solution, the message processing entity executes a message processing policy that corresponds to the state of the cache entity. The cache entity can therefore be managed more effectively, and cache-management anomalies in the cache entity are reduced.
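As an illustration of this first-aspect flow, the sketch below models the three entities in Python. All class names, method names, and the full-buffer policy rule are hypothetical illustrations, not taken from the application:

```python
# Minimal sketch: the policy management entity reads the cache state,
# derives a message processing policy, and hands it to the message
# processing entity, which applies it to newly received messages.

DROP_ALL = "drop_all"   # illustrative policy identifiers
FORWARD = "forward"

class CacheEntity:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0

    def state(self):
        # "First cache state information": occupancy at the current moment.
        return {"used": self.used, "available": self.capacity - self.used}

class PolicyManagementEntity:
    def determine_policy(self, cache_state):
        # Determine the message processing policy from the cache state.
        if cache_state["available"] <= 0:
            return DROP_ALL
        return FORWARD

class MessageProcessingEntity:
    def __init__(self):
        self.policy = FORWARD

    def apply_policy(self, policy):
        self.policy = policy

    def process(self, message):
        # Process a newly received message per the current policy.
        return None if self.policy == DROP_ALL else message

cache = CacheEntity(capacity=2)
pm = PolicyManagementEntity()
pp = MessageProcessingEntity()

cache.used = 2                      # cache entity is full
pp.apply_policy(pm.determine_policy(cache.state()))
print(pp.process("pkt-1"))          # dropped -> None
```

When the cache entity later reports available capacity again, the policy management entity would issue a new policy (the "second message processing policy" of the later designs) and the entity resumes forwarding.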
In a specific design, before the policy management entity obtains the first cache state information, the method further includes: the policy management entity obtains at least one of the following items of information about the cache entity at the first moment: the length of a sending queue, the buffer occupancy of the sending queue, the delay of the sending queue, the available capacity of the cache entity, the used capacity of the cache entity, and the rate of an egress port of the cache entity; the policy management entity then determines the first cache state information according to the at least one item of information.
In one particular design, the method further includes: the cache entity determines the first cache state information; the policy management entity obtaining first cache state information includes: the policy management entity obtains the first cache state information from the cache entity.
In a specific design, the obtaining, by the policy management entity, the first cache state information from the cache entity includes: the policy management entity receives the first cache state information sent by the cache entity. In the above embodiment, the caching entity may send the caching status information to the policy management entity by itself. This may reduce the workload of the policy management entity.
In a specific design, before the caching entity sends the first cache state information to the policy management entity, the method further includes: the caching entity determines that the first state meets a first preset condition. In the above technical solution, the cache entity sends the cache state information to the policy management entity only when the state satisfies the preset condition, so that the occupation of the output interface of the cache entity can be reduced, and the workload of the policy management entity can be reduced.
In one particular design, the first state includes normal or abnormal.
In one particular design, the first state includes at least one of: the length of the sending queue, the buffer occupancy of the sending queue, the time delay of the sending queue, the available capacity of the buffer entity, the used capacity of the buffer entity, and the rate of the egress port of the buffer entity.
In a specific design, before the policy management entity determines the first packet processing policy according to the first cache state information, the method further includes: the policy management entity determines that the first cache state information satisfies a second preset condition.
In a specific design, the cache entity includes a target cache space, and the first state includes a usage of the target cache space at the first time; the policy management entity determines a first packet processing policy according to the first cache state information, including: and the policy management entity determines the first message processing policy according to the use condition of the target cache space at the first moment.
In one particular design, the method further includes: the policy management entity obtains second cache state information, wherein the second cache state information indicates a second state of the cache entity at a second moment; the policy management entity determines a second message processing policy according to the second cache state information and sends the second message processing policy to the message processing entity; and the message processing entity processes the newly received message according to the second message processing strategy.
In a specific design, the second message handling policy is used to indicate that the message handling entity is not to execute the first message handling policy any more.
In a specific design, the second message processing policy is further configured to instruct the message processing entity to process a newly received message using a specified message processing policy.
In a specific design, the specified message handling policy is a default message handling policy or a message handling policy used before the first message handling policy is used.
In one particular design, the newly received message comprises a lossless traffic message.
In a specific design, the processing of the newly received packet by the packet processing entity according to the first packet processing policy includes: the message processing entity adopts one or more of the following processing modes: discarding all newly received messages; discarding part of newly received messages; modifying message information of all newly received messages; or modifying the message information of part of the newly received messages.
In a specific design, modifying the message information of the newly received message includes modifying one or more of the following information of the newly received message: the priority of the newly received message; a discard enable bit of the newly received message; the discard priority of the newly received message; the port number of the newly received message; or an explicit congestion notification flag of the newly received message.
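The field modifications listed above can be illustrated with a short sketch. The dictionary-based message representation and all field names are hypothetical, and the congestion flag is modeled as a simple ECN-style bit:

```python
# Hypothetical message representation; field names are illustrative only.
def modify_message_info(message, *, priority=None, drop_enable=None,
                        drop_priority=None, ecn_mark=None):
    """Return a copy of the message with one or more header fields modified."""
    updated = dict(message)
    if priority is not None:
        updated["priority"] = priority
    if drop_enable is not None:
        updated["drop_enable"] = drop_enable        # discard enable bit
    if drop_priority is not None:
        updated["drop_priority"] = drop_priority    # discard priority
    if ecn_mark is not None:
        updated["ecn"] = ecn_mark                   # congestion notification flag
    return updated

msg = {"priority": 5, "drop_enable": 0, "drop_priority": 0, "ecn": 0}
# Instead of dropping, mark the message for possible discard downstream
# and signal congestion to the receiver.
marked = modify_message_info(msg, drop_enable=1, ecn_mark=1)
print(marked["drop_enable"], marked["ecn"])  # 1 1
```

A caching entity that later receives a message whose discard enable bit is set can then drop it itself, as described for the message processing entity in fig. 2.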
In a second aspect, an embodiment of the present application provides a network device, where the network device includes a policy management entity, a cache entity, and a packet processing entity. The policy management entity, the caching entity and the message processing entity are configured to implement the first aspect or any design of the first aspect.
In a third aspect, an embodiment of the present application provides a network device that includes a first processor, a second processor, a first memory, and a second memory, where the first memory is configured to store program code executed by the first processor and the second processor, and the second memory is configured to cache messages from an upstream device. The first processor executes the program code stored in the first memory to implement the function of the policy management entity in the first aspect or any design of the first aspect, and the second processor executes the program code stored in the first memory to implement the function of the message processing entity in the first aspect or any design of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium storing instructions for implementing the method according to the first aspect or any one of the possible implementation manners of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect or any of the possible implementations of the first aspect.
Drawings
Fig. 1 is a schematic diagram of a network.
Fig. 2 is a schematic structural block diagram of a network device.
Fig. 3 is a schematic flowchart of a method for processing a packet in a network device according to an embodiment of the present application.
Fig. 4 is a schematic diagram of reserved buffer space, shared buffer space and headroom buffer space.
Fig. 5 is another illustration of reserved buffer space, shared buffer space and headroom buffer space.
Fig. 6 is a schematic flowchart of a method for processing a packet in a network device according to an embodiment of the present application.
Fig. 7 is a block diagram of a network device according to an embodiment of the present application.
Fig. 8 is a block diagram of a network device according to an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
The network device in the embodiment of the present application may be a network device having a routing function (e.g., a router) or a network device having a switching function (e.g., a switch). The network device in the embodiment of the present application may be a network device in a wired communication network, and may also be a core network device in a wireless communication network (for example, a Global System for Mobile Communications (GSM) system, a Code Division Multiple Access (CDMA) system, a Long Term Evolution (LTE) system, a future 5G network, and the like).
This application presents various aspects, embodiments, and features around a system that may include a number of devices, components, modules, and the like. It is to be understood that each such system may include additional devices, components, modules, etc., and/or may not include all of the devices, components, modules, etc. discussed in connection with the figures. Furthermore, combinations of these schemes may also be used.
In addition, in the embodiments of the present application, words such as "exemplary" and "for example" are used to mean serving as an example, illustration, or explanation. Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of these words is intended to present concepts in a concrete fashion.
In the embodiments of the present application, "corresponding" and "correspond" are sometimes used interchangeably; it should be noted that their intended meanings are consistent when the distinction is not emphasized.
In the embodiments of the present application, a subscripted form such as W₁ may sometimes be written in a non-subscripted form such as W1; the intended meanings are consistent when the distinction is not emphasized.
The network architecture and the service scenarios described in the embodiments of the present application are intended to illustrate the technical solutions of the embodiments more clearly and do not limit them. A person of ordinary skill in the art will appreciate that, as network architectures evolve and new service scenarios emerge, the technical solutions provided in the embodiments of the present application remain applicable to similar technical problems.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
Fig. 1 is a schematic diagram of a network. As shown in fig. 1, network 100 includes network device 110, upstream device 121, upstream device 122, upstream device 123, upstream device 124, upstream device 125, downstream device 131, downstream device 132, downstream device 133, downstream device 134, and downstream device 135. The upstream device may be a terminal device, such as a computer, a cell phone, a tablet, etc., or a network device. Similarly, the downstream device may be a terminal device or a network device.
As shown in fig. 1, network device 110 may receive messages (packets) from upstream device 121, upstream device 122, and upstream device 123 through port 111. Network device 110 may receive messages from upstream device 124 through port 112. Network device 110 may receive messages from upstream device 125 through port 113.
Network device 110 may send the message to downstream device 131 and downstream device 132 through port 114. Network device 110 may send messages to downstream device 133, downstream device 134, and downstream device 135 through port 115.
It is to be understood that fig. 1 is only a schematic diagram of a network for helping those skilled in the art understand the method of the present application, and is not a limitation to the network to which the technical solution of the present application can be applied. For example, in some embodiments, network device 110 may also receive, through port 111, messages sent from one or more upstream devices other than upstream device 121 and upstream device 122. For another example, network device 110 may also receive messages sent from one or more upstream devices through another port. As another example, network device 110 may also send messages to one or more downstream devices other than downstream device 131 and downstream device 132 via port 114. As another example, network device 110 may also send a message to one or more downstream devices through another port.
The ports (e.g., port 111, port 112, and port 113) of the network device for receiving messages from upstream devices may be referred to as ingress ports (or simply ingress ports) of the network device. Ports (e.g., port 114 and port 115) of the network device for sending messages to downstream devices may be referred to as egress ports (or simply egress ports) of the network device.
Fig. 2 is a schematic structural block diagram of a network device provided according to an embodiment of the present application. Fig. 2 is a schematic block diagram of the network device 110 shown in fig. 1.
As shown in fig. 2, the network device 110 includes: policy management entity 141, caching entity 142 and message processing entity 143. In addition, network device 110 may also include ports 111, 112 and 113, and ports 114 and 115.
Ports 111 to 113 are ingress ports. The ports 111 to 113 may be configured to receive messages from the upstream device and send the received messages to the message processing entity 143.
The message processing entity 143 is configured to process messages from an ingress port. For example, the message processing entity 143 may determine whether to allow a message to pass through. If the message is allowed to pass, it may be sent to the caching entity 142. If the message is not allowed to pass, it may be deleted, or a discard enable bit may be added to its header information and the modified message sent to the caching entity 142; the caching entity 142 may then discard a received message that carries the discard enable bit. For another example, the message processing entity 143 may also modify information of the message such as its priority and egress port.
After acquiring the message from the message processing entity 143, if it is determined that the message does not need to be discarded, the caching entity 142 may cache the message and send the cached message to the downstream device through the corresponding egress port (e.g., the port 114 or the port 115).
Policy management entity 141 may determine the packet processing policy based on the status of caching entity 142. The message processing entity 143 may obtain the message processing policy determined by the policy management entity 141 and execute the message processing policy.
The specific functions and advantageous effects of the policy management entity 141, the caching entity 142 and the message processing entity 143 will be described in detail with reference to the following embodiments.
The method for processing a packet in a network device provided by the present application is described below with reference to fig. 1 to 5.
A network device (e.g., network device 110) receives messages sent by upstream devices through its ingress ports and sends them to the corresponding downstream devices through its egress ports. In some cases, the network device cannot forward a message from an upstream device to the downstream device in time (for example, because congestion occurs at the egress port, or because the traffic received at the ingress port exceeds the traffic that the egress port can send). In such cases, the network device may buffer the messages that cannot be sent in time in a caching entity of the network device (e.g., caching entity 142) and, once the egress port is able to send, forward the buffered messages to the downstream device.
According to different functions, the cache spaces for caching messages in the cache entity can be divided into two types, which can be referred to as a first cache space and a second cache space respectively.
The first cache space may be shared by different types of packets. In other words, the first cache space may hold any one or more types of packets: as long as the first cache space has available capacity, a packet of any type may be cached in it. For example, the first cache space may store type 1 packets together with packets of any other type.
The second cache space may include a plurality of sub-cache spaces. The plurality of sub-cache spaces correspond to the plurality of types one to one, and each sub-cache space is used for storing packets of the corresponding type. For example, assuming that the total capacity of the second cache space is 512 MB and the second cache space includes 8 sub-cache spaces, the capacity of each of the 8 sub-cache spaces may be 64 MB. The 8 sub-cache spaces correspond to the 8 types one to one. The 8 types may be referred to as type 0, type 1, …, and type 7, respectively, and the 8 sub-cache spaces as sub-cache space 0, sub-cache space 1, …, and sub-cache space 7, respectively. Sub-cache space 0 is used for storing messages of type 0, sub-cache space 1 for messages of type 1, sub-cache space 2 for messages of type 2, and so on. In other words, the second cache space can store at most 64 MB of type 0 packets, 64 MB of type 1 packets, 64 MB of type 2 packets, and so on.
In the above embodiment, the sizes of the sub-buffer spaces of different types of packets are the same. In other embodiments, the size of the different sub-cache spaces may be different. A second buffer space with a total size of 512MB is also taken as an example. It is assumed that the second buffer space includes 6 sub-buffer spaces, which are sub-buffer space 0 to sub-buffer space 5. The sub-buffer spaces 0 to 5 are used for buffering the messages of types 0 to 5, respectively. The size of the sub-buffer space 0 and the sub-buffer space 1 may be 128 MB; the size of each of the sub-buffer spaces 2 to 5 is 64 MB.
In the examples above, the sub-cache spaces are divided according to capacity. In other embodiments, the sub-cache spaces may instead be divided according to the maximum number of packets that can be buffered. Sub-cache spaces 0 to 7 are again taken as an example. In some embodiments, each sub-cache space can cache the same number of packets; for example, each of the 8 sub-cache spaces may cache 100 packets. In other embodiments, different sub-cache spaces may cache different numbers of packets; for example, each of sub-cache spaces 0 to 3 may cache 200 packets, and each of sub-cache spaces 4 to 7 may cache 100 packets.
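Both ways of dimensioning the sub-cache spaces amount to a per-type admission check. The sketch below uses the capacity-based example from the text (8 sub-cache spaces of 64 MB each); the class and method names are illustrative, not from the application:

```python
MB = 1024 * 1024

class SecondCacheSpace:
    """Second cache space with one sub-cache space per packet type."""
    def __init__(self, sub_space_limits):
        # sub_space_limits maps packet type -> sub-cache space capacity in bytes.
        self.limits = dict(sub_space_limits)
        self.used = {t: 0 for t in sub_space_limits}

    def try_cache(self, pkt_type, size):
        # Admit the packet only if its type's sub-cache space has room;
        # other types' sub-cache spaces are never borrowed from.
        if pkt_type not in self.limits:
            return False
        if self.used[pkt_type] + size > self.limits[pkt_type]:
            return False
        self.used[pkt_type] += size
        return True

# 512 MB total, 8 sub-cache spaces of 64 MB each (types 0..7), as in the text.
space = SecondCacheSpace({t: 64 * MB for t in range(8)})
print(space.try_cache(0, 64 * MB))   # True: fills sub-cache space 0 exactly
print(space.try_cache(0, 1))         # False: sub-cache space 0 is full
print(space.try_cache(1, 1))         # True: sub-cache space 1 still has room
```

A count-based division would replace the byte sizes with packet counts (e.g., a limit of 100 packets per sub-cache space) without changing the structure of the check.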
In some embodiments, the type of a message may be determined according to its Class of Service (CoS). CoS is defined in the Institute of Electrical and Electronics Engineers (IEEE) 802.1p standard, which specifies 8 CoS values, 0 to 7. A CoS value of 0 is the lowest priority and a CoS value of 7 is the highest. The network device may determine the CoS of a received message according to the CoS field and the Virtual Local Area Network (VLAN) identifier (ID) in the layer-2 header of the message. For example, if the message carries both a CoS value and a VLAN ID, the CoS of the message is the carried CoS value; if the message carries only a CoS value (in this case the VLAN ID is 0), the CoS of the message is likewise the carried CoS value; if the message carries a label, the CoS of the message defaults to 0, or a new CoS may be designated by remarking.
In some embodiments, the type of the packet may correspond to the CoS of the packet one-to-one. For example, the correspondence between the type of the packet and the CoS of the packet may be as shown in table 1.
TABLE 1
Type    CoS
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
As shown in table 1, if the CoS of the message is 0, the type of the message is 0; if the CoS of the message is 1, the type of the message is 1, and so on.
In other embodiments, the correspondence between the type of the packet and the CoS of the packet may also be that multiple CoS correspond to one type. For example, the correspondence between the type of the packet and the CoS of the packet may be as shown in table 2.
TABLE 2
Type    CoS
0       0, 1
1       2, 3
2       4, 5
3       6, 7
As shown in table 2, the type of the message with CoS of 0 and 1 is 0; the type of the message with CoS of 2 and CoS of 3 is 1; the type of the message with CoS of 4 and CoS of 5 is 2; the type of messages with CoS 6 and 7 is 3.
For another example, CoS is 0, and the types of messages of 1 and 2 are 0; the type of the message with CoS of 3, 4 and 5 is 1; the type of messages with CoS 6 and 7 is 2.
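Both correspondences — the one-to-one mapping of Table 1 and the many-to-one mapping of Table 2 — can be expressed as simple lookup functions. This is purely an illustrative sketch:

```python
def type_one_to_one(cos):
    # Table 1: the packet type equals the CoS value.
    return cos

def type_many_to_one(cos):
    # Table 2: CoS 0-1 -> type 0, 2-3 -> type 1, 4-5 -> type 2, 6-7 -> type 3.
    return cos // 2

print([type_one_to_one(c) for c in range(8)])   # [0, 1, 2, 3, 4, 5, 6, 7]
print([type_many_to_one(c) for c in range(8)])  # [0, 0, 1, 1, 2, 2, 3, 3]
```

An uneven grouping, such as the second example in the text (CoS 0–2 to type 0, CoS 3–5 to type 1, CoS 6–7 to type 2), would simply use an explicit lookup table instead of integer division.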
In other embodiments, the type of a message may be determined based on its Internet Protocol (IP) priority. The network device may determine the IP priority from the Type of Service (ToS) field in the IP packet header. As with the CoS priority, the correspondence between IP priorities and packet types may be one-to-one, or multiple IP priorities may correspond to one type; for brevity, the details are omitted here.
In addition, the type of the packet may also be determined according to a priority (Pri) field in a VLAN header of the packet or a priority in a Differentiated Services Code Point (DSCP) field in an IP header. The specific determination method is the same as the determination of the type of the packet according to the CoS, and for brevity, the detailed description is omitted here.
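For illustration, the bit positions of these priority fields can be extracted as follows. This is a generic sketch based on the standard 802.1Q and IP header layouts, not a mechanism specific to the application:

```python
def vlan_pcp(tci):
    """Priority Code Point (the VLAN 'Pri' field): top 3 bits of the
    16-bit 802.1Q Tag Control Information field."""
    return (tci >> 13) & 0x7

def ip_precedence(tos_byte):
    """IP precedence: top 3 bits of the IPv4 ToS byte."""
    return (tos_byte >> 5) & 0x7

def dscp(tos_byte):
    """DSCP: top 6 bits of the ToS / Traffic Class byte."""
    return (tos_byte >> 2) & 0x3F

print(vlan_pcp(0xA001))      # TCI with PCP=5 -> 5
print(ip_precedence(0xB8))   # ToS 0xB8 -> precedence 5
print(dscp(0xB8))            # 46
```

Whichever field is used, the extracted value then feeds the same priority-to-type correspondence described for CoS.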
In other embodiments, the type of a message may be determined based on other information carried in the received message. For example, the type may be determined according to whether the message is a lossy-traffic message or a lossless-traffic message. For another example, the type may be determined according to whether the message is an IP version 4 (IPv4) message or an IP version 6 (IPv6) message.
In other embodiments, the type of the message may be determined according to the priority information together with other information (e.g., lossy-traffic versus lossless-traffic messages, or IPv4 versus IPv6 messages).
The number of message types supported may vary between network devices. For example, some network devices may support 8 message types, while other network devices may support 4.
The number of packet types supported by the network device may be the same as the number of sub-cache spaces included in the second cache space. For example, if the network device supports 8 packet types, the second cache space of the network device includes 8 sub-cache spaces, and the 8 sub-cache spaces correspond to the 8 packet types one to one.
In some embodiments, the cache entity may include at least one first cache space and at least one second cache space. In other words, in some embodiments, the cache entity may include one first cache space and one second cache space. In other embodiments, the cache entity may include a plurality of first cache spaces and one second cache space. In still other embodiments, the cache entity may include one first cache space and a plurality of second cache spaces.
In some embodiments, the second cache spaces included in the cache entity may correspond to the ingress ports one to one. In other words, the plurality of second cache spaces correspond to the plurality of ingress ports one to one, and each second cache space is used to cache messages received through the corresponding ingress port.
The network device 110 shown in fig. 2 is again taken as an example. The caching entity 142 of network device 110 may include three second cache spaces, which may be referred to as second cache space 111, second cache space 112, and second cache space 113, respectively. Second cache space 111 corresponds to port 111 and is used for caching the messages received from port 111. Second cache space 112 corresponds to port 112 and is used for caching the messages received from port 112. Second cache space 113 corresponds to port 113 and is used for caching the messages received from port 113.
In other embodiments, the second cache space is in one-to-one correspondence with the egress ports. In other words, the plurality of second buffer spaces correspond to the plurality of output ports one to one. Each of the plurality of second cache spaces is configured to cache a packet sent through a corresponding egress port. Take network device 110 shown in fig. 2 as an example. The caching entity 142 of the network device 110 may include two second cache spaces. These two second cache spaces may be referred to as second cache space 114 and second cache space 115, respectively. The second buffer space 114 corresponds to the port 114. The second buffer space 114 is used for buffering messages sent through the port 114. The second buffer space 115 corresponds to the port 115. The second buffer space 115 is used for buffering messages sent through the port 115.
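The egress-port example can be sketched as a simple port-to-space lookup. The class and method names are hypothetical; only the port numbers (114 and 115) come from the example above:

```python
# Illustrative sketch of a caching entity whose second cache spaces are
# associated one-to-one with egress ports, as in the example for network
# device 110 (egress ports 114 and 115).
class CachingEntity:
    def __init__(self, egress_ports):
        # One second cache space (modeled as a simple queue) per egress port.
        self.spaces = {port: [] for port in egress_ports}

    def cache_for_egress(self, port, message):
        # Each second cache space only holds messages sent via its own port.
        if port not in self.spaces:
            raise ValueError(f"no second cache space for port {port}")
        self.spaces[port].append(message)

entity = CachingEntity([114, 115])
entity.cache_for_egress(114, "pkt-a")
entity.cache_for_egress(115, "pkt-b")
print(len(entity.spaces[114]), len(entity.spaces[115]))  # 1 1
```

An ingress-port-associated layout would be identical in structure, keyed by ingress port (111 to 113) instead.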
In other embodiments, the second cache space in the cache entity may correspond to neither an egress port nor an ingress port. In other words, such a second cache space can cache a message regardless of which ingress port the message was received from or which egress port it will be sent through. For convenience of description, such a second cache space that does not distinguish between ingress and egress ports may be referred to as a port-independent second cache space, and a second cache space corresponding to an egress port or an ingress port may be referred to as a port-related second cache space. Further, a second cache space corresponding to an egress port may be referred to as a second cache space associated with the egress port, and a second cache space corresponding to an ingress port may be referred to as a second cache space associated with the ingress port.
In some embodiments, the plurality of second cache spaces in the cache entity may all be port-dependent second cache spaces. For example, the plurality of second cache spaces in the cache entity are all second cache spaces associated with the egress port. For another example, the plurality of second cache spaces in the cache entity are second cache spaces associated with the ingress port. For another example, a part of the second cache spaces in the plurality of second cache spaces in the cache entity is a second cache space related to an ingress port, and another part of the second cache spaces is a second cache space related to an egress port.
In other embodiments, the second cache space in the cache entity is a port independent second cache space. For example, the cache entity includes a second cache space that is port independent.
In other embodiments, there may be a plurality of port-related second cache spaces and at least one port-independent second cache space in the cache entity. For example, the cache entity may include a plurality of second cache spaces associated with egress ports and one port-independent second cache space. For another example, the cache entity may include a plurality of second cache spaces associated with ingress ports and one port-independent second cache space. For another example, the cache entity may include a plurality of second cache spaces associated with ingress ports, a plurality of second cache spaces associated with egress ports, and one port-independent second cache space.
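The possible layouts described above can be sketched as a small data model. This is only an illustrative sketch; the class and field names below are assumptions for illustration, not terms from this disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SecondCacheSpace:
    capacity: int
    ingress_port: Optional[str] = None  # set when associated with an ingress port
    egress_port: Optional[str] = None   # set when associated with an egress port

    @property
    def port_independent(self) -> bool:
        # A port-independent space is bound to neither an ingress nor an egress port.
        return self.ingress_port is None and self.egress_port is None

@dataclass
class CacheEntity:
    first_space_capacity: int                          # the shared first cache space
    second_spaces: List[SecondCacheSpace] = field(default_factory=list)

# A cache entity mixing all three kinds of second cache space, as in the text.
entity = CacheEntity(
    first_space_capacity=2048,
    second_spaces=[
        SecondCacheSpace(256, ingress_port="port111"),
        SecondCacheSpace(256, ingress_port="port112"),
        SecondCacheSpace(256, egress_port="port114"),
        SecondCacheSpace(512),  # port-independent, e.g. a reserved cache space
    ],
)
port_independent_spaces = [s for s in entity.second_spaces if s.port_independent]
```

In this layout the cache entity holds two ingress-associated spaces, one egress-associated space, and one port-independent space, matching the last example above.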
The cache entity may use the first cache space and the second cache space in a predetermined order.
The usage sequence of the first cache space and the second cache space may be configured by a user or may be set when the network device is shipped from a factory (in other words, the user cannot change the usage sequence).
In some embodiments, if the cache entity includes a first cache space, a port-independent second cache space, and a plurality of port-dependent second cache spaces, the cache entity may use the port-independent second cache space first, then use the first cache space, and finally use the port-dependent second cache space.
In other embodiments, if the cache entity includes a first cache space and a plurality of port-associated second cache spaces, the cache entity may use the first cache space first and then use the port-associated second cache spaces.
In other embodiments, if the cache entity includes a first cache space and a second, port-independent cache space, the cache entity may use the first cache space first and then use the second, port-independent cache space.
It can be understood that the above order applies only while a cache space can still cache the corresponding type of message. If a cache space has no remaining room for the message, that cache space is skipped. For example, if the first cache space has no remaining space to cache the message, the second cache space is used directly to cache the message.
It can be understood that using the second cache space first and then the first cache space means: the sub-cache space in the second cache space corresponding to a given message type is used first to cache messages of that type; when that sub-cache space can no longer cache messages of the corresponding type, the first cache space is used to cache them (assuming the first cache space has available capacity).
Using the first cache space and then using the second cache space means: the message is cached by using the first cache space, and if the first cache space does not have enough available capacity to continue caching the message, the message is cached by using the sub-cache space corresponding to the type of the message in the second cache space.
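The predetermined usage order, including the skip-when-full rule described above, can be sketched as follows. The function and the space names are illustrative assumptions, not names from this disclosure:

```python
def choose_space(order, used, capacity, pkt_size):
    """Return the first cache space in `order` that still has room for a
    message of pkt_size, or None when every space in the order is full."""
    for name in order:
        if used[name] + pkt_size <= capacity[name]:
            return name
    return None

# Example order for lossless traffic: reserved, then shared, then headroom.
order = ["reserved", "shared", "headroom"]
capacity = {"reserved": 100, "shared": 300, "headroom": 80}
used = {"reserved": 100, "shared": 280, "headroom": 0}

# The reserved space has no remaining room, so it is skipped and the
# shared space caches the message, matching the skip rule described above.
assert choose_space(order, used, capacity, 20) == "shared"
```

If every space in the order is already full, the function returns None, which corresponds to the case handled later by the message processing policy.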
Taking the network 100 shown in fig. 1 as an example, it is assumed that the packets in the network 100 shown in fig. 1 may include lossless traffic packets and lossy traffic packets.
Assume that network device 110 includes only three ingress ports and two egress ports as shown in fig. 1. Then the cache entity of the network device 110 may include one first cache space and four second cache spaces. This first cache space may be referred to as a shared cache space. The second cache spaces fall into two kinds: reserved cache space and headroom cache space. The four second cache spaces may include one reserved cache space and three headroom cache spaces. The reserved cache space is a port-independent second cache space. The three headroom cache spaces are second cache spaces associated with ingress ports. In other words, each of the three headroom cache spaces corresponds to one ingress port of the network device 110. The three headroom cache spaces may be referred to as headroom cache space 111, headroom cache space 112, and headroom cache space 113, respectively. The headroom cache space 111 corresponds to the ingress port 111 and is used for caching the messages received from the port 111; the headroom cache space 112 corresponds to the ingress port 112 and is used for caching the messages received from the port 112; the headroom cache space 113 corresponds to the ingress port 113 and is used for caching the messages received from the port 113.
For lossless traffic packets in the network 100, the predetermined usage order of the cache spaces of the cache entity of the network device 110 may be: reserved cache space first, then shared cache space, and finally headroom cache space.
For the lossy traffic packets in the network 100, the network device 110 may use the reserved cache space and the shared cache space in the cache entity for caching.
Fig. 3 is an exemplary flowchart of a method for processing a packet in a network device according to an embodiment of the present disclosure. For convenience of description, assume that the messages received and processed by the network device in the method shown in fig. 3 are all lossless traffic messages.
301, the policy management entity receives the cache state information 1 sent by the cache entity.
The buffer status information 1 includes the used capacity of the target buffer space at time 1.
The target cache space may be used to cache the target packet. The target message is a message having a target type. The target type is any one of a plurality of message types. For example, if the type of the packet includes 0 to 7 as shown in table 1, the target type may be any one of types 0 to 7.
The target cache space includes a first cache space, a first target sub-cache space and a second target sub-cache space.
This first cache space serves the same function as the first cache space described above: it may be shared by different types of packets. In other words, any type of packet can be cached in the first cache space as long as the first cache space has available capacity, and no per-type upper limit applies.
In some embodiments, the first cache space may be a shared cache space.
The target sub-cache space is one of a plurality of sub-cache spaces included in a second cache space. The sub-cache spaces correspond one to one with the plurality of message types supported by the network device. Each sub-cache space is used for caching messages of its corresponding type.
The first target sub-cache space and the second target sub-cache space may be sub-cache spaces in a different second cache space. For example, the first target sub-cache space may be one of a plurality of sub-cache spaces included in the second cache space 1. For example, the second cache space 1 may be a reserved cache space. The second target sub-cache space may be one of a plurality of sub-cache spaces included in the second cache space 2. For example, the second buffer space 2 may be a headroom buffer space.
Take the headroom cache space as an example. Assume the network device supports 4 message types, referred to as type 0, type 1, type 2, and type 3. The headroom cache space in the cache entity of the network device then includes four sub-cache spaces, referred to as sub-cache space 0, sub-cache space 1, sub-cache space 2, and sub-cache space 3, respectively. Sub-cache space 0 corresponds to type 0 and is used for caching type 0 packets. Sub-cache space 1 corresponds to type 1 and is used for caching type 1 packets. Sub-cache space 2 corresponds to type 2 and is used for caching type 2 packets. Sub-cache space 3 corresponds to type 3 and is used for caching type 3 packets. If the target packet is of type 1, the second target sub-cache space is sub-cache space 1; if the target packet is of type 0, the second target sub-cache space is sub-cache space 0.
The cache entity of the network device preferentially uses the second cache space 1 to cache the message. When the second cache space 1 cannot cache the message, the first cache space is used. When the first cache space cannot cache the message, the cache entity uses the second cache space 2. In other words, the cache entity may first use the first target sub-cache space to cache the target packet; when the first target sub-cache space cannot continue caching the target packet, it caches the target packet in the first cache space; and when the first cache space cannot continue caching the target packet, it uses the second target sub-cache space to cache the target packet.
The cache entity of the network device may monitor the target cache space, determine the used capacity of the target cache space at time 1, and send that used capacity to the policy management entity through the cache state information 1. The used capacity of the target cache space carried in cache state information 1 may include the used capacity of the first cache space at time 1, the used capacity of the first target sub-cache space at time 1, and the used capacity of the second target sub-cache space at time 1.
302, the policy management entity determines whether the target cache space meets a preset condition 1 according to the used capacity of the target cache space at time 1.
303, if the target cache space meets the preset condition 1, the policy management entity may determine a message processing policy 1 and send the message processing policy 1 to the message processing entity.
In some embodiments, the target cache space satisfying the preset condition 1 may include: the first target sub-cache space cannot continue to cache the target packet, the first cache space cannot continue to cache the target packet, and the second target sub-cache space cannot continue to cache the target packet.
In other words, the policy management entity may first determine whether the first target sub-cache space can continue to cache the target packet. If the first target sub-cache space can continue to cache the target packet, the policy management entity may determine that the target cache space does not satisfy the preset condition 1 and may continue to use the first target sub-cache space to cache the target packet. If the first target sub-cache space cannot continue to cache the target packet, the policy management entity may continue to determine whether the first cache space can continue to cache the target packet. If the first cache space can continue to cache the target packet, the policy management entity may determine that the target cache space does not satisfy the preset condition 1 and may continue to use the first cache space to cache the target packet. If the first cache space cannot continue to cache the target packet, the policy management entity may determine whether the second target sub-cache space can continue to cache the target packet. If the second target sub-cache space can cache the target packet, the policy management entity may determine that the target cache space does not satisfy the preset condition 1 and the second target sub-cache space can continue to be used for caching the target packet. If the second target sub-cache space cannot cache the target packet, the policy management entity may determine that the target cache space satisfies the preset condition 1.
In other words, the policy management entity determines that the target cache space meets preset condition 1 only when it determines that none of the first target sub-cache space, the first cache space, and the second target sub-cache space can cache the target packet.
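The preset-condition-1 check described above can be sketched as follows. This is a hedged sketch: the field names and the per-space used-capacity upper limits are illustrative assumptions (the limits correspond to the thresholds discussed below):

```python
def cannot_continue(used, cap, upper_limit_ratio):
    # A space can no longer cache once its used capacity reaches its upper limit.
    return used >= cap * upper_limit_ratio

def meets_condition_1(state, limits):
    # Condition 1 holds only when all three spaces are unable to keep caching.
    return (cannot_continue(state["sub1_used"], state["sub1_cap"], limits["sub1"])
            and cannot_continue(state["first_used"], state["first_cap"], limits["first"])
            and cannot_continue(state["sub2_used"], state["sub2_cap"], limits["sub2"]))

# Per-space upper limits may differ, e.g. 85% / 90% / 95% as in the text.
limits = {"first": 0.85, "sub1": 0.90, "sub2": 0.95}
full = {"sub1_used": 95, "sub1_cap": 100,
        "first_used": 1750, "first_cap": 2048,
        "sub2_used": 96, "sub2_cap": 100}
roomy = dict(full, first_used=1000)  # the first cache space still has room

assert meets_condition_1(full, limits)
assert not meets_condition_1(roomy, limits)
```

Because the three checks are chained with `and`, the check short-circuits in the same order the policy management entity evaluates the spaces: first target sub-cache space, then first cache space, then second target sub-cache space.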
In some embodiments, the first cache space failing to cache the target packet may include: the first cache space does not have sufficient available capacity to cache messages. In other words, the entire available capacity of the first buffer space is already occupied by buffered packets.
In other embodiments, the first cache space failing to cache the target packet may include: the used capacity in the first buffer space reaches the used capacity upper limit. In other words, an upper limit of the used capacity may be set for the first buffer space, which may be referred to as a preset threshold 1, for example. If the used capacity in the first cache space reaches the preset threshold 1, the first cache space may not continue to cache the packet. For example, the preset threshold value 1 may be set to 85%. In other words, if the first buffer space already has 85% of the buffer space occupied by the packet, the first buffer space cannot continue to buffer the packet. The remaining 15% of the capacity in the first buffer space after reaching the upper limit of the used capacity may be enabled for special cases. For example, the method can be used for buffering burst messages or messages which cannot be discarded.
It is to be understood that the preset threshold value 1 of 85% in the above embodiment is only an example. The preset threshold value 1 may also be a proportional value, such as 95%, 90%, or 80%. It can be understood that, if the preset threshold 1 is 100%, it means that the first buffer space cannot buffer the target packet unless all the available capacity of the first buffer space is occupied.
In other embodiments, the preset threshold 1 may be a buffer capacity value. For example, assuming that the total size of the first buffer space is 2048MB, the preset threshold 1 may be 1800MB, 1900MB or 1700 MB.
In other embodiments, the preset threshold 1 may also be a number of messages. For example, assuming that the first cache space can cache 1000 messages in total, the preset threshold 1 may be 800, 900 or 950.
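The three forms of preset threshold 1 just described (a proportion of capacity, an absolute capacity value, or a message count) can be normalized into a single check. The helper and its parameter names are assumptions for illustration:

```python
def reached_limit(kind, value, used_bytes, used_packets, capacity_bytes):
    """Return True when the used capacity reaches the configured limit,
    whichever of the three threshold forms is in use."""
    if kind == "ratio":
        return used_bytes >= capacity_bytes * value
    if kind == "bytes":
        return used_bytes >= value
    if kind == "packets":
        return used_packets >= value
    raise ValueError(f"unknown threshold kind: {kind}")

MB = 1024 * 1024
# 85% of a 2048 MB space is 1740.8 MB, so 1800 MB of use reaches the limit.
assert reached_limit("ratio", 0.85, 1800 * MB, 0, 2048 * MB)
# An absolute limit of 1800 MB is not yet reached at 1750 MB used.
assert not reached_limit("bytes", 1800 * MB, 1750 * MB, 0, 2048 * MB)
# A message-count limit of 800 is reached at 801 cached messages.
assert reached_limit("packets", 800, 0, 801, 2048 * MB)
```

The same helper applies unchanged to preset threshold 2 on the sub-cache spaces, since those thresholds take the same three forms.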
As another example, the preset threshold value 1 in the above embodiment is a preset value for the used capacity. In other embodiments, a lower limit of the available capacity may be further set according to the available capacity of the first buffer space. If the available capacity of the first cache space reaches the lower limit of the available capacity, the first cache space may not continue to cache the packet (correspondingly, the available capacity of the target cache space should be reported by the cache entity at this time).
The upper limit of the used capacity and the lower limit of the available capacity of the first buffer space may be configured by a user, or may be set by the network device at the time of factory shipment (in other words, the upper limit of the used capacity and the lower limit of the available capacity cannot be changed by the user).
In other embodiments, the first cache space failing to cache the target packet may further include the first cache space being about to become unable to continue caching the target packet.
Similarly, in some embodiments, the failure of the first target sub-cache space to cache the target packet may include: the first target sub-cache space does not have sufficient available capacity to be used for caching packets. In other words, the entire available capacity of the first target sub-cache space is already occupied by cached messages.
In other embodiments, the first target sub-cache space failing to cache the target packet may include: the used capacity in the first target sub-cache space reaches the used capacity cap. In other words, an upper limit of the used capacity may be set for the first target sub-cache space, which may be referred to as a preset threshold 2, for example. If the used capacity in the first target sub-cache space reaches the preset threshold 2, the first target sub-cache space may not continue to cache the packet. For example, the preset threshold 2 may be set to 85%. In other words, if the first target sub-cache space already has 85% of the cache space occupied by the packet, the first target sub-cache space cannot continue to cache the packet. The remaining 15% of the capacity in the first target sub-cache space after reaching the upper limit of the used capacity may be enabled for special cases. For example, the method can be used for buffering burst messages or messages which cannot be discarded.
Similarly, in some embodiments, the failure of the second target sub-cache space to cache the target packet may include: the second target sub-cache space does not have sufficient available capacity to be used for caching packets. In other words, the entire available capacity of the second target sub-cache space is already occupied by cached messages.
In other embodiments, the second target sub-cache space failing to cache the target packet may include: the used capacity in the second target sub-cache space reaches the used capacity ceiling. In other words, an upper limit of the used capacity may be set for the second target sub-buffer space, for example, also to a preset threshold 2. If the used capacity in the second target sub-cache space reaches the preset threshold 2, the second target sub-cache space may not continue to cache the packet. For example, the preset threshold 2 may be set to 85%. In other words, if the second target sub-cache space already has 85% of the cache space occupied by the packet, the second target sub-cache space cannot continue to cache the packet. The remaining 15% of the capacity in the second target sub-cache space after reaching the used capacity cap may be enabled for special cases. For example, the method can be used for buffering burst messages or messages which cannot be discarded.
It is to be understood that the preset threshold 2 of 85% in the above embodiment is only an example. The preset threshold 2 may also be a proportional value, such as 95%, 90%, or 80%. It can be understood that, if the preset threshold 2 is 100%, it indicates that the target sub-cache space cannot cache the target packet only when all the capacity of the target sub-cache space is occupied.
In other embodiments, the preset threshold 2 may be a buffer capacity value. For example, assuming that the total size of the first target sub-buffer space is 2048MB, the preset threshold 2 may be 1800MB, 1900MB or 1700 MB.
In other embodiments, the preset threshold 2 may also be the number of messages. For example, assuming that the first target sub-cache space can cache 1000 packets in total, the preset threshold 2 may be 800, 900 or 950.
As another example, the preset threshold 2 in the above embodiments is a preset value for the used capacity. In other embodiments, a lower limit may instead be set on the available capacity of the target sub-cache space. If the available capacity of the target sub-cache space reaches the lower limit of the available capacity (correspondingly, the cache entity should then report the available capacity of the target cache space), the target sub-cache space may not continue to cache the packet.
The upper limit of the used capacity and the lower limit of the available capacity of the first target sub-cache space and the second target sub-cache space may be configured by a user, or may be set by the network device at the time of factory shipment (in other words, the user cannot change the upper limit of the used capacity and the lower limit of the available capacity).
In addition, in the above embodiment, the upper limit of the used capacity of the first cache space, the first target sub-cache space and the second target sub-cache space is the same. In other embodiments, the upper limit of the used capacity of the first cache space, the first target sub-cache space and the second target sub-cache space may not be the same. For example, the upper limit of the used capacity of the first cache space is 85%, the upper limit of the used capacity of the first target sub-cache space is 90%, and the upper limit of the used capacity of the second target sub-cache space is 95%.
In other embodiments, the first target sub-cache space failing to cache the target packet may further include the first target sub-cache space being about to become unable to continue caching the target packet. Similarly, the second target sub-cache space failing to cache the target packet may also include the second target sub-cache space being about to become unable to continue caching the target packet.
In other embodiments, the policy management entity may estimate the time until the target cache space is fully occupied, according to the transmission rate of the packets at the ingress port and the available capacity of the target cache space, and start a timer. If the timer reaches the estimated time and the transmission rate of the ingress port has not changed during this period, the policy management entity may determine that the target cache space satisfies preset condition 1. If the transmission rate of the ingress port or of the egress port changes before the timer reaches the estimated time, the policy management entity may adjust the estimated time according to the changed transmission rate, and determine whether the target cache space satisfies preset condition 1 according to the adjusted estimated time.
In other embodiments, the policy management entity may determine whether the first target sub-cache space and the first cache space are capable of caching the target packet according to their used capacities. If neither can cache the target packet, the policy management entity may estimate the time until the second target sub-cache space is fully occupied, according to the transmission rate at which the ingress port receives the target packet and the size of the second target sub-cache space, and start a timer. If the timer reaches the estimated time and the transmission rate at which the ingress port receives the target packet has not changed, the policy management entity may determine that the target cache space satisfies preset condition 1. If the transmission rate of the ingress port receiving the target packet, or of the egress port sending the target packet, changes before the timer reaches the estimated time, the policy management entity may adjust the estimated time according to the changed transmission rate, and decide according to the adjusted estimated time.
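The rate-based estimate above amounts to dividing the remaining capacity by the net rate at which buffered bytes accumulate, and re-deriving the estimate whenever the rate changes. A minimal sketch, with assumed function and parameter names:

```python
def estimate_fill_seconds(free_bytes, arrival_rate_bytes_per_s, drain_rate_bytes_per_s=0):
    """Estimated seconds until the space is fully occupied at the current
    net rate; infinite if the space never fills at that rate."""
    net_rate = arrival_rate_bytes_per_s - drain_rate_bytes_per_s
    if net_rate <= 0:
        return float("inf")
    return free_bytes / net_rate

t1 = estimate_fill_seconds(1_000_000, 2_000_000)   # 1 MB free, 2 MB/s in -> 0.5 s
# If the ingress rate halves before the timer fires, the estimate is redone.
t2 = estimate_fill_seconds(1_000_000, 1_000_000)   # -> 1.0 s
# If the egress keeps pace with the ingress, the space never fills.
t3 = estimate_fill_seconds(1_000_000, 1_000_000, 1_000_000)
```

In the scheme above, the policy management entity would restart its timer with the new estimate (t2 here) whenever the observed ingress or egress rate changes.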
304, the message processing entity processes the newly received message according to message processing policy 1.
In some embodiments, message processing policy 1 may be to delete newly received target messages. For example, the message processing entity may process a newly received target message by modifying its header information to add a discard-enable flag, and then send the modified message to the cache entity; the cache entity then deletes target messages containing the discard-enable flag. As another example, the message processing entity may directly discard the newly received target message.
In other embodiments, message handling policy 1 may be to modify the type of the newly received target message. In this way, the message processing entity can modify the type of the newly received target message. After the type of the newly received target message is modified, the sub-cache space corresponding to the modified type is used for caching. For example, the target packet is type 0, and after the type is modified, the type of the target packet is type 1. In this way, the sub-cache space for caching type 1 in the second cache space can be used to cache the type-modified target packet.
In some embodiments, the modification of the type of the target message may be performed in order of priority: the priority of the target message's type after modification is lower than before modification. Assume that the priority of type 0 is greater than that of type 1, which in turn is greater than that of type 2. If the target message is of type 0, its type may be modified to type 1.
The sub-cache space corresponding to the newly determined type must have available capacity to cache the re-typed message. The policy management entity may first determine the sub-cache spaces that have available capacity, then select the type corresponding to one of them as the type to which the target message is to be modified, and send that type to the message processing entity as the message processing policy. For example, assume that the target message is of type 0, the sub-cache space corresponding to type 1 cannot continue to cache messages, and the sub-cache spaces corresponding to type 2 and type 3 can still cache messages. In this case, the policy 1 determined by the policy management entity may be to modify the type of newly received target messages to type 2 or type 3. After receiving a new target message, the message processing entity may modify its type to type 2 or type 3. In this way, the cache entity may continue to use the type 2 or type 3 sub-cache space to cache the newly received message.
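A hedged sketch of this type-modification policy: the replacement type must have lower priority than the original (lower type number meaning higher priority, as assumed above), and its sub-cache space must still have room. The function name is an illustrative assumption:

```python
def pick_new_type(orig_type, has_room):
    """Return the highest-priority type lower than orig_type whose
    sub-cache space still has room, or None if all candidates are full."""
    for t in range(orig_type + 1, len(has_room)):
        if has_room[t]:
            return t
    return None

# Type 0 target message; type 1's sub-cache space is full, but the
# sub-cache spaces for types 2 and 3 can still cache messages.
has_room = [False, False, True, True]
assert pick_new_type(0, has_room) == 2     # modify the message to type 2
assert pick_new_type(3, has_room) is None  # no lower-priority type exists
```

Returning None corresponds to the case where no lower-priority sub-cache space has room, in which another policy (such as deletion or changing the egress port) would apply.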
In other embodiments, if the cache entity includes a plurality of second cache spaces corresponding one to one with a plurality of egress ports, the exception handling of a newly received target message may be to modify the egress port used to send it. As described above, each second cache space in the cache entity may be used to cache the messages sent through its corresponding egress port. It may therefore happen that the sub-cache space for caching the target message in one second cache space is full while the sub-cache space for caching the target message in one or more other second cache spaces can still continue to cache it. In that case, the egress port for sending the target message may be modified, so that the target message can be cached in the target sub-cache space of another second cache space.
In the above technical solution, the cache management of the network device ensures that, when the target sub-cache space cannot continue caching the target message, the target message does not occupy the sub-cache spaces corresponding to other message types in the same second cache space. If the network device further includes a port-independent second cache space, the target message likewise does not preempt the sub-cache spaces corresponding to other types in that port-independent second cache space. This guarantees that the network device still has remaining cache space available when other types of messages need to be cached.
In some embodiments, the network device may further notify the upstream device that sends the target message to stop sending the target message to the network device, if it determines that the target cache space satisfies preset condition 1.
The message processing entity may also stop performing exception handling on newly received target messages once the cache state returns to normal.
In some embodiments, the policy management entity may receive the cache state information 2 sent by the cache entity. The cache state information 2 carries the used capacity of the target cache space at time 2. The policy management entity may determine whether the target cache space satisfies preset condition 2 according to the used capacity of the target cache space at time 2. If the target cache space meets preset condition 2, the policy management entity may determine a message processing policy 2 and send it to the message processing entity, where message processing policy 2 is different from message processing policy 1; in this case, the message processing entity executes message processing policy 2. If the target cache space does not meet preset condition 2, the policy management entity does not determine a new message processing policy; in this case, the message processing entity continues to execute message processing policy 1.
In some embodiments, the policy management entity may first determine whether the second target sub-cache space has cached target packets. If at least one target packet is cached in the second target sub-cache space, the policy management entity may determine that the target cache space does not satisfy the preset condition 2. For example, if the available capacity of the second target sub-cache space is equal to the total capacity of the second target sub-cache space, the policy management entity may determine that no target packet is cached in the second target sub-cache space. If the available capacity of the second target sub-cache space is smaller than the total capacity of the second target sub-cache space, the policy management entity may determine that at least one target packet is cached in the second target sub-cache space.
In some embodiments, if there is no cached target packet in the second target sub-cache space, the determining, by the policy management entity, whether the target cache space meets preset condition 2 may include: determining whether the used capacity of the first cache space is smaller than a preset threshold value 3; if the used capacity of the first cache space is smaller than the preset threshold 3, it may be determined that the target cache space satisfies the preset condition 2; if the used capacity of the first cache space is greater than or equal to the preset threshold 3, it may be determined that the target cache space does not satisfy the preset condition 2.
In other embodiments, if there is no cached target packet in the second target sub-cache space, the determining, by the policy management entity, whether the target cache space meets preset condition 2 may include: determining whether the used capacity of the first target sub-cache space is less than a preset threshold 4; if the used capacity of the first target sub-cache space is smaller than the preset threshold 4, it may be determined that the target cache space satisfies the preset condition 2; if the used capacity of the first target sub-cache space is greater than or equal to the preset threshold 4, it may be determined that the target cache space does not satisfy the preset condition 2.
In other embodiments, if there is no cached target packet in the second target sub-cache space, the determining, by the policy management entity, whether the target cache space meets preset condition 2 may include: determining whether the used capacity of the first cache space is less than a preset threshold 3 and the used capacity of the first target sub-cache space is less than a preset threshold 4; if the used capacity of the first cache space is smaller than the preset threshold 3 and the used capacity of the first target sub-cache space is smaller than the preset threshold 4, it may be determined that the target cache space satisfies the preset condition 2; otherwise, it may be determined that the target cache space does not satisfy the preset condition 2.
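The three embodiments above can be sketched as a single check; the function name, parameter names, and the `variant` selector are illustrative assumptions, not terminology from this application:

```python
def meets_preset_condition_2(second_sub_used: int,
                             first_cache_used: int,
                             first_sub_used: int,
                             threshold_3: int,
                             threshold_4: int,
                             variant: str = "both") -> bool:
    """Hypothetical check of preset condition 2 for the three embodiments above."""
    # If any target packet is still cached in the second target sub-cache
    # space, condition 2 is not satisfied (first embodiment above).
    if second_sub_used > 0:
        return False
    if variant == "first_cache":   # check the first cache space only
        return first_cache_used < threshold_3
    if variant == "first_sub":     # check the first target sub-cache space only
        return first_sub_used < threshold_4
    # "both": both used capacities must be below their respective thresholds
    return first_cache_used < threshold_3 and first_sub_used < threshold_4
```

In all three variants the check is cheap enough to run on every reported cache state.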
Similar to the preset threshold 1 and the preset threshold 2, the preset threshold 3 and the preset threshold 4 may each be a proportion, a specific capacity value, or a specific number of messages. The preset threshold 3 and the preset threshold 4 may be configured by a user or may be preset before the network device leaves the factory.
When sending the messages cached in the caching entity, the network device sends them in the reverse of the caching order. For example, the second target sub-cache space starts caching target packets only after the first cache space can no longer cache packets. Therefore, the network device may first send out the target packets cached in the second target sub-cache space through the corresponding egress port, then send out the target packets cached in the first cache space, and finally send out the target packets cached in the first target sub-cache space. In this way, the second target sub-cache space regains available capacity to cache new target packets: if a target packet needs to be cached, the second target sub-cache space can be used to cache it.
In other embodiments, the policy management entity may determine whether the used capacity of the second target sub-cache space is less than a preset threshold 5. If the used capacity of the second target sub-cache space is less than the preset threshold 5, the policy management entity may determine that the target cache space satisfies the preset condition 2. If the used capacity of the second target sub-cache space is greater than or equal to a preset threshold 5, the policy management entity may determine that the target cache space does not satisfy the preset condition 2.
In other embodiments, the policy management entity may start a timer after sending message processing policy 1 to the message processing entity. If the timer reaches a preset time and the egress port for the target packets has been sending target packets throughout that time, the policy management entity may determine that the target cache space satisfies preset condition 2. The network device continuously sends target packets through the egress port, so after the message processing entity executes message processing policy 1, the number of target packets cached in the target cache space steadily decreases. After the preset time, the target cache space has available capacity to continue caching target packets. In this case, the policy management entity may determine another message processing policy (e.g., message processing policy 2) and send the newly determined message processing policy to the message processing entity.
In some embodiments, the preset time may be determined according to the sending speed of the target packets and the capacity of the second target sub-cache space. For example, if the sending speed of the target packets does not change, all target packets in the second target sub-cache space will have been sent out by the time the timer reaches the preset time. In other words, the preset time may be long enough for the network device to send out all target packets cached in the second target sub-cache space. As another example, in other embodiments, the preset time may be long enough for the network device to send out the target packets cached in most of the second target sub-cache space (for example, 80%, 90%, or 95% of it).
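Under these assumptions, the preset time follows directly from the capacity to be drained and the sending speed. A minimal sketch (function and parameter names are assumed):

```python
def preset_time_seconds(sub_cache_capacity_bytes: float,
                        send_rate_bps: float,
                        drain_fraction: float = 1.0) -> float:
    """Time for the egress port to drain `drain_fraction` of the second
    target sub-cache space at a constant send rate (bits per second)."""
    bytes_to_drain = sub_cache_capacity_bytes * drain_fraction
    return bytes_to_drain * 8 / send_rate_bps

# Draining a 1 MB sub-cache space at 100 Gbps takes 80 microseconds;
# draining only 80% of it takes 64 microseconds.
```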
In some embodiments, message processing policy 2 may be to stop executing message processing policy 1. In this way, the message processing entity may stop executing message processing policy 1 after receiving message processing policy 2 (e.g., no longer discarding newly received target packets). After receiving message processing policy 2, the message processing entity may process newly received packets with a default message processing policy, or with the message processing policy that was used before message processing policy 1.
The effect of the solution shown in fig. 3 will be described below with reference to fig. 4 and 5.
Fig. 4 is a schematic diagram of reserved buffer space, shared buffer space and headroom buffer space.
Fig. 4 illustrates how the cache space caches target packets when the method shown in fig. 3 is not employed.
As shown in fig. 4, the network device 110 starts receiving type 1 packets from the upstream device 121 through the port 111 at time t0, and the transmission speed of the received type 1 packets is 60 Gbps. The network device 110 also starts receiving type 1 packets from the upstream device 122 through the port 112 at time t0, and the transmission speed of the received type 1 packets is likewise 60 Gbps.
Assume that a type 1 packet needs to be sent through port 114 and the maximum transmission speed of port 114 is 100 Gbps. Then, in this case, the port 114 cannot send out the type 1 messages received by the ports 111 and 112 in time. In this case, the caching entity in the network device 110 may first cache the received packet of type 1 into a sub-cache space (hereinafter, referred to as the reserved cache space of type 1) in the reserved cache space for caching the packet of type 1.
At time t1, the reserved buffer space of type 1 is fully occupied by type 1 packets. The caching entity may determine whether there is still available capacity in the shared cache space. If there is a remaining space in the shared cache space, the cache entity may cache the type 1 packet in the shared cache space.
At time t2, the entire capacity of the shared buffer space is occupied by type 1 messages. In this case, the caching entity may cache the type 1 packet using a sub-cache space for caching the type 1 packet in the headroom cache space (hereinafter referred to as the headroom cache space of type 1).
At time t3, the headroom buffer space of type 1 is fully occupied by type 1 packets. In this case, the caching entity caches type 1 packets using the sub-cache spaces intended for other packet types. As shown in fig. 4, type 1 packets start to occupy the sub-cache spaces for caching other types of packets in the reserved buffer space and the headroom buffer space.
As shown in fig. 4, at time t4, the reserved buffer space, the shared buffer space, and the headroom buffer space are all occupied by type 1 packets. The network device 110 starts receiving type 2 packets from the upstream device 123 through the port 113 at time t4, and the transmission speed of the received type 2 packets is 100 Gbps. Assume that the type 2 packets also need to be sent through port 114. At time t4, ports 111 and 112 are still receiving type 1 packets from the upstream devices 121 and 122 at a transmission speed of 60 Gbps each, so the type 2 packets cannot be sent out from port 114 in time. In this case, the caching entity needs to cache the type 2 packets, but because the sub-cache spaces in the reserved cache space and the headroom cache space that were originally intended for type 2 packets have been preempted by type 1 packets, the caching entity cannot cache the type 2 packets in time. This results in a cache management exception.
Fig. 5 is another illustration of reserved buffer space, shared buffer space and headroom buffer space.
Fig. 5 is a schematic diagram of the reserved buffer space, the shared buffer space and the headroom buffer space after the method shown in fig. 3 is used. It is assumed that the caching entity periodically sends the available capacity of each sub-cache space in the reserved cache space, the available capacity of the shared cache space, and the available capacity of each sub-cache space in the headroom cache space to the policy management entity. The policy management entity determines a message processing policy 1 and sends the message processing policy 1 to the message processing entity. Assume that message processing policy 1 is to discard type 1 packets.
As shown in fig. 5, assume that the message processing entity acquires message processing policy 1 at time t3 and starts executing it. Thus, from time t3 the message processing entity discards newly received type 1 packets. In this case, because the egress port for type 1 packets keeps sending them, the number of type 1 packets held in the headroom buffer space of type 1 decreases continuously. By time t5, no type 1 packets remain in the headroom buffer space of type 1. Further, from time t5, the number of type 1 packets stored in the shared buffer space also decreases. Assume that at time t6 the message processing entity acquires the message processing policy 2 determined by the policy management entity and starts executing it, where message processing policy 2 is to stop executing message processing policy 1. Thus, from time t6 the message processing entity stops discarding newly received type 1 packets. In this case, because the speed at which port 111 receives type 1 packets is unchanged, the number of type 1 packets buffered in the shared buffer space begins to increase again.
Compared with fig. 4, because newly received type 1 packets are discarded according to the state of the caching entity, type 1 packets do not occupy the sub-cache spaces of other packet types. Therefore, the type 2 packets received from time t4 onward can be cached in the sub-cache space of the reserved cache space intended for type 2 packets. In this way, the situation in which type 1 packets occupy the sub-cache spaces of other packet types due to a caching entity configuration error can be avoided.
Fig. 6 is a schematic flowchart of a method for processing a packet in a network device according to an embodiment of the present application. The method shown in fig. 6 may be performed by the network device shown in fig. 2. For example, the policy management entity in the method shown in fig. 6 may be the policy management entity 141 shown in fig. 2. The caching entity in the method shown in fig. 6 may be the caching entity 142 shown in fig. 2. The message processing entity in the method shown in fig. 6 may be the message processing entity 143 shown in fig. 2.
601, the policy management entity obtains first cache state information from the cache entity.
The first cache state information is used for indicating a first state of the cache entity at a first time.
In some embodiments, the first state may be a state of the cached entity. In this case, the first state may be one of a plurality of states. For example, the plurality of states include: an abnormal state and a normal state. In this case, the first state may be a normal state or an abnormal state. As another example, the plurality of states may include: normal state, critical state, and abnormal state. In which case the first state may be a normal state, a critical state or an abnormal state.
In other embodiments, the first state may be information about the cached entity. Such relevant information may reflect the status of the cached entity. For example, the first state may include at least one of the following information: the length of the sending queue, the buffer occupancy of the sending queue, the time delay of the sending queue, the available capacity of the buffer entity, the used capacity of the buffer entity, the rate of the output port of the buffer entity, and the like.
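As an illustration only, the relevant information listed above could be carried in a structure like the following; all field names are assumptions for this sketch, and any subset of the fields may be populated:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class CacheStateInfo:
    """Hypothetical cache state information reported by the caching entity."""
    queue_length: Dict[int, int] = field(default_factory=dict)      # egress port -> send queue length
    queue_occupancy: Dict[int, int] = field(default_factory=dict)   # egress port -> buffer occupancy
    queue_delay_us: Dict[int, float] = field(default_factory=dict)  # egress port -> queuing delay
    available_capacity: int = 0                                     # bytes still free in the caching entity
    used_capacity: int = 0                                          # bytes already used
    egress_rate_bps: Dict[int, float] = field(default_factory=dict) # egress port -> current rate
```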
A plurality of counters are arranged in the cache entity, and each counter corresponds to one output port queue. The counter is used for recording the number of the messages of the corresponding output port. The length of the queue (which may be referred to as a transmission queue) of each egress port can be obtained according to the number of messages recorded by the counter. The length of the transmission queue may be a queue length of each egress port, a queue length of one or more specific egress ports (for example, an egress port with the longest queue or an egress port with the lowest egress port rate, etc.), an average queue length, or a total queue length, which is not limited in this embodiment of the application.
Take the network device shown in fig. 2 as an example. In some embodiments, the first state may include a queue length corresponding to port 114 and a queue length of port 115. In other embodiments, the first state may include the queue length for only the one egress port with the longest length (e.g., may be the queue length for port 114). In other embodiments, the first state may be the sum of the queue length of port 114 and the queue length of port 115.
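The per-port, longest-queue, summed, and averaged variants described above amount to simple aggregations over the per-port counters; a sketch under assumed names:

```python
def longest_queue_length(queue_lengths: dict) -> int:
    """Queue length of the single egress port with the longest queue."""
    return max(queue_lengths.values())

def total_queue_length(queue_lengths: dict) -> int:
    """Sum of the queue lengths of all egress ports."""
    return sum(queue_lengths.values())

def average_queue_length(queue_lengths: dict) -> float:
    """Average queue length across all egress ports."""
    return sum(queue_lengths.values()) / len(queue_lengths)

# Example with the two egress ports of fig. 2 (counts are illustrative):
queues = {114: 30, 115: 10}
```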
The rate of the egress port of the cache entity may be a rate of each egress port, or may be a rate or an average rate of one or more specific egress ports (for example, the egress port with the longest queue or the egress port with the lowest egress port rate, etc.), which is not limited in this embodiment of the present application.
Take the network device shown in fig. 2 as an example. In some embodiments, the first state may include the rate corresponding to port 114 and the rate corresponding to port 115. In other embodiments, the first state may include the rate of only the egress port with the longest queue (for example, the rate of port 114). In other embodiments, the first state may be the average of the rate of port 114 and the rate of port 115.
The available capacity of the cache entity may be the total available capacity of the cache entity or may be the available capacity of a part of the cache space in the cache entity.
In some embodiments, the first state may include an available capacity of the first cache space. In other embodiments, the first state may include an available capacity of each sub-cache space in the second cache space. In other embodiments, the first state may include the available capacity of one or more sub-cache spaces in the second cache space. Such as a sub-cache space for caching packets of the target type. In other embodiments, the first state may include an available capacity of the first cache space and an available capacity of the second cache space.
Similarly, the used capacity of the cache entity may be the total used capacity of the cache entity or may be the used capacity of a part of the cache space in the cache entity.
It is further assumed that the cache space of the cache entity may comprise a first cache space and a second cache space. In some embodiments, the first state may include an utilized capacity of the first cache space. In other embodiments, the first state may include an utilized capacity of each of the sub-cache spaces in the second cache space. In other embodiments, the first state may include an amount of used capacity of one or more sub-cache spaces in the second cache space. Such as a sub-cache space for caching packets of the target type. In other embodiments, the first state may include an utilized capacity of the first cache space and an utilized capacity of the second cache space.
For example, in the embodiment shown in FIG. 3, the first state is the used capacity of the target cache space at time 1.
In some embodiments, the caching entity may determine the first state and send first cache state information indicating the first state to the policy management entity. Correspondingly, the policy management entity receives the first cache state information sent by the cache entity. In other words, the first cache state information is actively determined by the caching entity and sent to the policy management entity.
In some embodiments, the caching entity may periodically determine a status of the caching entity and indicate the determined status of the caching entity to the policy management entity via the caching status information.
In other embodiments, the caching entity may send the first cache state information to the policy management entity if it determines that the first state satisfies a first preset condition. In other words, the caching entity indicates its state to the policy management entity only if that state satisfies the first preset condition. If the state of the caching entity does not satisfy the first preset condition, the caching entity need not indicate its state to the policy management entity.
In some embodiments, the first preset condition is factory configured and cannot be modified by a user. In other embodiments, the user can set the first preset condition as needed.
For example, the first preset condition is that the available capacity of the cache entity is less than or equal to the threshold Th 1. In this case, the first cache state information may be sent to the policy management entity if the caching entity determines that the available capacity of the caching entity is less than or equal to Th 1.
As another example, the first preset condition may be that the rate of the egress port reaches a maximum rate. In this case, if the caching entity determines that the rate of the egress port of the caching entity reaches the maximum rate, the first caching status information may be sent to the policy management entity.
As another example, the first preset condition may be that the available capacity of the target sub-cache space is less than the threshold Th 2. In this case, the first cache state information may be sent to the policy management entity if the caching entity determines that the available capacity of the target sub-cache space of the caching entity is less than Th 2.
As another example, the first preset condition may be that the queue length of the egress port is greater than the threshold Th 3. In this case, the first cache state information may be sent to the policy management entity if the caching entity determines that the queue length of at least one of port 114 or port 115 is greater than a threshold Th 3.
In some embodiments, only one preset condition for triggering reporting of the first buffer status information may be stored in the buffer entity.
In other embodiments, a plurality of preset conditions for triggering the reporting of the first buffer status information may be stored in the buffer entity. The caching entity sends first caching status information to the policy management entity as long as the first status satisfies one of the preset conditions. Among the plurality of preset conditions, the preset condition that the first state satisfies is the first preset condition.
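The trigger logic above can be sketched as checking the stored preset conditions in turn and reporting as soon as one holds. The condition forms follow the Th1-Th3 examples above; all threshold values and field names are assumptions:

```python
TH1, TH2, TH3 = 1024, 256, 1000   # assumed thresholds (bytes / bytes / packets)
MAX_RATE = 100e9                  # assumed maximum egress port rate, bps

# Preset conditions stored in the caching entity, in check order.
PRESET_CONDITIONS = [
    lambda s: s["available_capacity"] <= TH1,          # available capacity low
    lambda s: s["egress_rate_bps"] >= MAX_RATE,        # egress port at maximum rate
    lambda s: s["target_sub_available"] < TH2,         # target sub-cache space low
    lambda s: max(s["queue_lengths"].values()) > TH3,  # some egress queue too long
]

def first_satisfied_condition(state: dict):
    """Return the index of the first preset condition the state satisfies
    (this one plays the role of the "first preset condition"), or None if
    no condition holds, in which case nothing is reported."""
    for i, cond in enumerate(PRESET_CONDITIONS):
        if cond(state):
            return i
    return None
```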
In other embodiments, the policy management entity or the message processing entity may send a status request to the caching entity. The caching entity may determine a first status and send first caching status information indicating the first status to the policy management entity, if a status request is received. The way of sending the status request to the caching entity by the policy management entity or the message processing entity may be periodic or aperiodic. For example, the message processing entity may send the status request to the caching entity if the rate of ingress ports is greater than a threshold. As another example, the message processing entity may send the status request to the caching entity upon receiving a message with a particular label or level.
For example, in the embodiment shown in fig. 3, the first cache state information is sent by the caching entity to the policy management entity.
In other embodiments, the policy management entity may actively read the first cache state information in the cache entity. The cache entity may obtain a state of the cache entity, determine cache state information according to the obtained state of the cache entity, and store the determined cache state information. For example, the cache entity may obtain a first state of the cache entity at a first time, determine first cache state information, and store the first cache state information. The policy management entity may read the first cache state information stored by the cache entity.
In other embodiments, the caching entity may send at least one of the length of the sending queue, the buffer occupancy of the sending queue, the latency of the sending queue, the available capacity of the caching entity, the used capacity of the caching entity, the rate of the egress port of the caching entity, and the like to the policy management entity. The policy management entity may determine the first cache state information based on the received at least one type of information. In this case, the first cache state information is the state of the cache entity.
In other embodiments, the caching entity may obtain at least one of the length of the sending queue, the buffer occupancy of the sending queue, the delay of the sending queue, the available capacity of the caching entity, the used capacity of the caching entity, the rate of the output port of the caching entity, and the like, and store the obtained at least one information. The policy management entity may read at least one type of information stored by the caching entity and determine the first caching status information according to the at least one type of information. In this case, the first cache state information is the state of the cache entity.
602, the policy management entity determines a first message processing policy according to the acquired first cache state information.
Taking the method shown in fig. 3 as an example, in the method shown in fig. 3, if the cache state information 1 satisfies the following preset condition: the first cache space cannot continue to cache the target packet and the target sub-cache space cannot continue to cache the target packet, then the policy management entity may determine a packet processing policy, e.g., discard the newly received target packet.
If the cache state information 1 does not satisfy the preset condition, the policy management entity may not determine a new packet processing policy.
In other embodiments, the policy management entity may maintain multiple message processing policies. Different message processing policies may correspond to different cache state information. For example, packet processing policy 11 corresponds to the egress port queue length being smaller than a threshold Th5; packet processing policy 12 corresponds to the queue length being greater than or equal to Th5 and less than a threshold Th6; packet processing policy 13 corresponds to the queue length being greater than or equal to Th6. In this case, if the egress port queue length included in the first cache state information is smaller than Th5, the policy management entity may determine that the first packet processing policy is packet processing policy 11.
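For the threshold example above, selecting among the maintained policies reduces to a range lookup; the threshold values and policy names below are assumptions:

```python
TH5, TH6 = 1000, 4000   # assumed values for thresholds Th5 and Th6

def select_policy(egress_queue_length: int) -> str:
    """Map the reported egress port queue length to one of the three
    maintained message processing policies."""
    if egress_queue_length < TH5:
        return "policy-11"
    if egress_queue_length < TH6:
        return "policy-12"
    return "policy-13"
```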
603, the policy management entity sends the first message processing policy to the message processing entity.
604, the message processing entity implements the first message processing policy and processes the newly received message according to the first message processing policy.
In some embodiments, the first packet processing policy may be to discard all newly received packets. In this case, the message processing entity may discard all newly received messages.
In other embodiments, the first message processing policy may be to modify the message information of all newly received messages. The modified message information may be one or more of: the priority of the message, the drop priority of the message, the port number of the message, the explicit congestion notification (ECN) bit of the message, or the drop enable bit of the message. In this case, the message processing entity may modify the message information of all newly received messages.
In other embodiments, the first packet processing policy may be to discard a portion of the newly received packet. In this case, the message processing entity may discard a portion of the newly received message. For example, in the embodiment shown in fig. 3, only the newly received target packet may be discarded. As another example, the message processing entity may randomly discard the newly received message.
In other embodiments, the first message handling policy may be to modify message information of a portion of the newly received message. In this case, the message processing entity may modify the message information of a part of the newly received message. For example, in the embodiment shown in fig. 3, only the message information of the target message may be modified. For another example, the message processing entity may randomly select a part of the messages and modify the message information of the selected messages.
In other embodiments, the first packet processing policy may be to discard some newly received packets while modifying the packet information of the others.
In other embodiments, the first packet processing policy may be to discard some newly received packets, modify the packet information of some others, and leave the remaining packets unprocessed.
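The policy variants in this step can be sketched as one dispatch over each newly received message. The policy names, the `ecn` field, and the target type value are assumptions for illustration:

```python
TARGET_TYPE = 1   # assumed type of the target packets

def process_message(msg: dict, policy: str):
    """Apply a first message processing policy to a newly received message.
    Returns the (possibly modified) message, or None if it is discarded."""
    if policy == "drop-all":
        return None                       # discard all newly received messages
    if policy == "drop-target" and msg["type"] == TARGET_TYPE:
        return None                       # discard only target-type messages
    if policy == "mark-target" and msg["type"] == TARGET_TYPE:
        # Modify message information instead of dropping, e.g. set the ECN bit.
        return dict(msg, ecn=1)
    return msg                            # no additional processing
```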
605, the policy management entity obtains second cache state information.
The second buffer status information is used for indicating a second status of the buffer entity at a second time.
In some embodiments, the policy management entity obtains the second cache state information in the same manner as the first cache state information. For example, the first cache state information and the second cache state information are both actively sent to the policy management entity by the caching entity.
In other embodiments, the policy management entity may obtain the second cache state information in a different manner than the first cache state information. For example, the first cache state information is sent to the policy management entity by the cache entity, and the second cache state information is read from the cache entity by the policy management entity.
The specific method for the policy management entity to obtain the second cache state information is similar to the specific method for the policy management entity to obtain the first cache state information. For brevity, further description is omitted here.
The meaning of the second state is similar to that of the first state, and for brevity, the description is omitted here.
606, the policy management entity determines a second message processing policy based on the second cache state information.
607, the policy management entity sends the second message processing policy to the message processing entity.
608, the message processing entity executes the second message processing policy, and processes the newly received message according to the second message processing policy.
In some embodiments, the second message processing policy is used to indicate that the message processing entity is no longer to execute the first message processing policy. For example, assuming that the first message processing policy is to discard packets of a target type, the message processing entity may stop discarding packets of the target type after receiving the second message processing policy.
In some embodiments, the second message handling policy is further operable to instruct the message handling entity to handle the newly received message using a specified message handling policy.
For example, the specified message handling policy may be a default message handling policy. For example, the default message handling policy is to not perform any additional processing (e.g., drop or modify message information) on the target type of message. In other words, the default message processing policy is to process the message of the target type and the message of the non-target type in the same manner.
Also for example, the specified message handling policy can be a message handling policy used prior to using the first message handling policy. For example, the message handling policy used prior to using the first message handling policy is a third message handling policy. The third message processing policy is to modify the priority of the message of the target type. Thus, after receiving the second message processing policy, the message processing entity does not discard the message of the target type, but modifies the priority of the message of the target type.
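The fallback behavior described in this step and the previous ones can be sketched as a small policy stack; the class and policy names are assumed:

```python
class MessageProcessingEntity:
    """Sketch of how receiving message processing policy 2 ("stop executing
    policy 1") can revert either to the previous policy or to the default."""

    def __init__(self, default_policy: str = "default"):
        self.default_policy = default_policy
        self.stack = [default_policy]

    def execute(self, policy: str):
        """Start executing a newly received message processing policy."""
        self.stack.append(policy)

    def stop_current(self, revert_to_previous: bool = True) -> str:
        """Stop the currently executed policy and return the policy used
        from now on: the previous one, or the default policy."""
        if len(self.stack) > 1:
            self.stack.pop()
        if not revert_to_previous:
            self.stack = [self.default_policy]
        return self.stack[-1]
```

For example, after executing a third policy ("modify the priority of target-type packets") and then policy 1 ("discard target-type packets"), stopping policy 1 reverts to the third policy; with `revert_to_previous=False` it falls back to the default policy instead.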
In some embodiments, the messages in the method of fig. 6 are lossless traffic messages. In other embodiments, the message in the method shown in fig. 6 may be a lossless traffic message, or may be a lossy traffic message.
According to the method shown in fig. 6, the message processing entity may execute a corresponding message processing policy according to the state of the cache entity. Therefore, the cache entity can be better managed, and the occurrence of cache management abnormity of the cache entity is reduced.
For example, due to a misconfiguration, the resource management rule in the caching entity is that packets input to the caching entity are not discarded until the queue length of port 115 is greater than 25535. But this queue length is larger than what port 115 can actually handle (assume port 115 can actually handle a queue length of 1023). With the method shown in fig. 6, however, the policy management entity may obtain the queue length of port 115 reported by the caching entity, determine a packet processing policy when the queue length of port 115 is greater than 1000, and send the determined packet processing policy to the message processing entity. Assume that the message processing policy is to modify the egress port of the packets to port 114. Thus, the message processing entity can modify the egress port of packets destined for port 115 to port 114. In this way, a cache management exception caused by a caching entity configuration error can be avoided.
For another example, in a lossless network, for lossless traffic packets, if the network device is congested at an egress port, it requests the upstream device to stop sending the packets that are sent through that egress port. However, the upstream device may fail to respond normally to the request and continue sending packets to the network device. According to current industry practice, because the received packets are lossless traffic packets, the network device cannot discard newly received packets, yet it also cannot send the lossless traffic packets out in time. This may leave insufficient buffer space to cache newly received packets. With the method shown in fig. 6, the policy management entity can obtain the available capacity of the buffer space reported by the caching entity. When the available capacity of the cache space is insufficient, the policy management entity may determine a message processing policy and send it to the message processing entity. Assume that the packet processing policy is to discard all newly received packets. In this way, the message processing entity may discard newly received packets, so the volume of packets that the caching entity needs to cache does not exceed its available capacity.
The caching entity 142 may be implemented by a memory.
In some embodiments, different types of cache spaces (i.e., the first cache space and the second cache space) in the cache entity 142 may be implemented by the same memory.
In other embodiments, different types of cache spaces in cache entity 142 may be implemented by different memories. For example, the cache entity 142 may be implemented by four memories. Memory 1 may be used as a first cache space, memory 2 may be used as a second cache space corresponding to port 111, memory 3 may be used as a second cache space corresponding to port 112, and memory 4 may be used as a second cache space corresponding to port 113.
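The four-memory layout above amounts to a simple cache-space-to-memory mapping. A hypothetical sketch (the lookup structure is illustrative, not part of the patent):

```python
from typing import Optional

# Mapping from the example above: memory 1 backs the shared first cache
# space; memories 2-4 back the second cache spaces of ports 111-113.
CACHE_SPACE_TO_MEMORY = {
    "first": "memory1",
    ("second", 111): "memory2",
    ("second", 112): "memory3",
    ("second", 113): "memory4",
}

def memory_for(space: str, port: Optional[int] = None) -> str:
    """Return which memory implements the given cache space."""
    key = "first" if space == "first" else (space, port)
    return CACHE_SPACE_TO_MEMORY[key]
```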
It will be appreciated that the memory in the embodiments of the present application can be a volatile memory or a nonvolatile memory, or can include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, the policy management entity 141 and the message processing entity 143 may be implemented by the same processor or system on chip (SoC). The processor may be a Central Processing Unit (CPU) or a Network Processor (NP).
In other embodiments, policy management entity 141 and message processing entity 143 may be implemented by different processors. For example, the message processing entity 143 may be a Central Processing Unit (CPU) or a Network Processor (NP), and the policy management entity 141 may be an Application Specific Integrated Circuit (ASIC). As another example, the message processing entity 143 may be implemented by one ASIC and the policy management entity 141 by another ASIC.
Fig. 7 is a block diagram of a network device according to an embodiment of the present application. The network device 700 shown in fig. 7 includes: a processor 701, a memory 702, a receiver 703 and a transmitter 704.
The processor 701, memory 702, receiver 703 and transmitter 704 may communicate over a bus 705.
Processor 701 is the control center of the network device 700 and provides sequencing and processing facilities for executing instructions, performing interrupt actions, providing timing functions, and other functions. Optionally, the processor 701 includes one or more Central Processing Units (CPUs). Optionally, the network device 700 includes multiple processors. The processor 701 may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. The processor 701 may also be an Application Specific Integrated Circuit (ASIC), a system on chip (SoC), a Network Processor (NP), a digital signal processing circuit (DSP), a Micro Controller Unit (MCU), a Programmable Logic Device (PLD), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or another integrated chip.
Program code executed by the processor 701 may be stored in the memory 702. By controlling the execution of other programs or processes and the communication with peripheral devices, the processor 701 controls the operation of the network device 700 and thereby implements the operation steps of the above-described method.
The memory 702 may also be used to store messages from upstream devices.
The receiver 703 is used to receive a message from an upstream device. The transmitter 704 is used to transmit the message stored in the memory to a downstream device.
The bus 705 may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. However, for clarity of illustration, the various buses are labeled in the figure as bus 705.
Fig. 8 is a block diagram of a network device according to an embodiment of the present application. The network device 800 shown in fig. 8 includes: a first processor 801, a second processor 802, a first memory 803, a second memory 804, a receiver 805 and a transmitter 806.
The receiver 805 and the transmitter 806 are the same as the receiver 703 and the transmitter 704 in the network device 700 shown in fig. 7, and are not described herein again for brevity.
The first processor 801 may be used to implement the functions of the policy management entity in the above embodiments.
The second processor 802 may be used to implement the functions of the message processing entity in the above embodiments.
The first memory 803 may be used to store program codes executed by the first processor 801 and the second processor 802.
The second memory 804 may be used to buffer messages from upstream devices.
An embodiment of the present application further provides a chip, which includes a logic circuit and an input/output interface. The logic circuit may be coupled to a memory and execute instructions and/or code in the memory to implement the functions performed by the policy management entity in the above embodiments.
An embodiment of the present application further provides a chip, which includes a logic circuit and an input/output interface. The logic circuit may be coupled to a memory and execute instructions and/or code in the memory, so that the chip performs the functions performed by the message processing entity in the above embodiments.
An embodiment of the present application further provides a chip, which includes a logic circuit and an input/output interface. The logic circuit may be coupled to a memory and execute instructions and/or code in the memory, so that the chip performs the functions performed by the policy management entity and the message processing entity in the above embodiments.
The chip may be a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), a system on chip (SoC), a Central Processing Unit (CPU), a Network Processor (NP), a digital signal processing circuit (DSP), a Micro Controller Unit (MCU), a Programmable Logic Device (PLD), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or another integrated chip.
Embodiments of the present application also provide a computer-readable storage medium, on which instructions are stored, and when executed, the instructions perform the method in the above method embodiments.
Embodiments of the present application further provide a computer-readable storage medium, on which instructions are stored, and when executed, the instructions perform the steps performed by the policy management entity in the foregoing method embodiments.
Embodiments of the present application further provide a computer-readable storage medium, on which instructions are stored, and when executed, the instructions perform the steps performed by the message processing entity in the foregoing method embodiments.
Embodiments of the present application further provide a computer program product containing instructions which, when executed, perform the method in the above method embodiments.
Embodiments of the present application further provide a computer program product containing instructions which, when executed, perform the steps performed by the policy management entity in the above method embodiments.
Embodiments of the present application further provide a computer program product containing instructions which, when executed, perform the steps performed by the message processing entity in the above method embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that contribute to the prior art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (33)

1. A method for processing a message in a network device, characterized in that the network device comprises a policy management entity, a cache entity and a message processing entity, and the method comprises:
the policy management entity acquires first cache state information, wherein the first cache state information indicates a first state of the cache entity at a first moment;
the policy management entity determines a first message processing policy according to the first cache state information and sends the first message processing policy to the message processing entity;
and the message processing entity processes the newly received message according to the first message processing strategy.
2. The method of claim 1, wherein prior to the policy management entity obtaining the first cache state information, the method further comprises:
the policy management entity obtains at least one of the following information of the cache entity at the first time: the length of a sending queue, the buffer occupancy of the sending queue, the time delay of the sending queue, the available capacity of the buffer entity, the used capacity of the buffer entity, and the rate of an output port of the buffer entity;
and the policy management entity determines the first cache state information according to the acquired at least one information.
3. The method of claim 1, wherein the method further comprises:
the cache entity determines the first cache state information;
the policy management entity obtaining first cache state information includes:
and the policy management entity acquires the first cache state information from the cache entity.
4. The method of claim 3, wherein the policy management entity obtaining the first cache state information from the caching entity comprises:
and the policy management entity receives the first cache state information sent by the cache entity.
5. The method of claim 4, wherein prior to the caching entity sending the first cache state information to the policy management entity, the method further comprises:
and the cache entity determines that the first state meets a first preset condition.
6. The method of any of claims 1-5, wherein the first state comprises normal or abnormal.
7. The method of any of claims 1 to 5, wherein the first state comprises at least one of: the length of a sending queue, the buffer occupancy of the sending queue, the time delay of the sending queue, the available capacity of the buffer entity, the used capacity of the buffer entity, and the rate of an output port of the buffer entity.
8. The method according to any of claims 1 to 7, wherein before the policy management entity determines a first message processing policy according to the first cache state information, the method further comprises:
and the policy management entity determines that the first cache state information meets a second preset condition.
10. The method of any of claims 1 to 5, wherein the cache entity comprises a target cache space, and the first state comprises usage of the target cache space at the first time;
the policy management entity determining a first message processing policy according to the first cache state information comprises:
and the policy management entity determines the first message processing policy according to the use condition of the target cache space at the first moment.
10. The method of any of claims 1 to 9, further comprising:
the policy management entity acquires second cache state information, wherein the second cache state information indicates a second state of the cache entity at a second moment;
the policy management entity determines a second message processing policy according to the second cache state information and sends the second message processing policy to the message processing entity;
and the message processing entity processes the newly received message according to the second message processing strategy.
11. The method of claim 10, wherein the second message processing policy is used to indicate that the message processing entity is no longer executing the first message processing policy.
12. The method of claim 11, wherein the second message handling policy is further for instructing the message processing entity to process a newly received message using a specified message handling policy.
13. The method of claim 12, wherein the specified messaging policy is a default messaging policy or a messaging policy used prior to using the first messaging policy.
14. The method of any of claims 1 to 13, wherein the newly received message comprises a lossless traffic message.
15. The method according to any of claims 1 to 14, wherein the message processing entity processing the newly received message according to the first message processing policy comprises:
the message processing entity adopts one or more of the following processing modes:
discarding all the newly received messages;
discarding part of the newly received messages;
modifying the message information of all the newly received messages; or
modifying the message information of part of the newly received messages.
16. The method of claim 15, wherein modifying the message information of the newly received message comprises:
modifying one or more of the following information of the newly received message:
the priority of the newly received message;
a discard enable bit of the newly received packet;
the discarding priority of the newly received message;
the port number of the newly received message; or
an explicit congestion flag bit of the newly received message.
17. A network device, characterized in that the network device comprises a policy management entity, a cache entity and a message processing entity, wherein:
the policy management entity is configured to obtain first cache state information, where the first cache state information indicates a first state of the cache entity at a first time;
the policy management entity is configured to determine a first message processing policy according to the first cache state information, and send the first message processing policy to the message processing entity;
and the message processing entity is used for processing the newly received message according to the first message processing strategy.
18. The network device of claim 17,
the policy management entity is further configured to obtain at least one of the following information of the cache entity at the first time: the length of a sending queue, the buffer occupancy of the sending queue, the time delay of the sending queue, the available capacity of the buffer entity, the used capacity of the buffer entity, and the rate of an output port of the buffer entity;
the policy management entity is specifically configured to determine the first cache state information according to the obtained at least one type of information.
19. The network device of claim 17, wherein the caching entity is further configured to determine the first caching status information;
the policy management entity is specifically configured to obtain the first cache state information from the cache entity.
20. The network device according to claim 19, wherein the policy management entity is specifically configured to receive the first cache state information sent by the cache entity.
21. The network device of claim 20, wherein the caching entity is further configured to determine that the first status satisfies a first predetermined condition before sending the first caching status information to the policy management entity.
22. The network device of any of claims 17 to 21, wherein the first state comprises normal or abnormal.
23. The network device of any one of claims 17 to 21, wherein the first state comprises at least one of: the length of a sending queue, the buffer occupancy of the sending queue, the time delay of the sending queue, the available capacity of the buffer entity, the used capacity of the buffer entity, and the rate of an output port of the buffer entity.
24. The network device according to any of claims 17 to 23, wherein the policy management entity is further configured to determine that the first cache state information satisfies a second preset condition before determining a first message processing policy according to the first cache state information.
25. The network device of any one of claims 17 to 21, wherein the caching entity comprises a target cache space, the first state comprising usage of the target cache space at the first time instance;
the policy management entity is specifically configured to determine the first message processing policy according to the usage of the target cache space at the first time.
26. The network device of any of claims 17 to 25,
the policy management entity is further configured to obtain second cache state information, where the second cache state information indicates a second state of the cache entity at a second time;
the policy management entity is further configured to determine a second message processing policy according to the second cache state information, and send the second message processing policy to the message processing entity;
and the message processing entity is also used for processing the newly received message according to the second message processing strategy.
27. The network device of claim 26, wherein the second message handling policy is to indicate that the message handling entity is no longer to implement the first message handling policy.
28. The network device of claim 27, wherein the second message handling policy is further for instructing the message handling entity to handle newly received messages using a specified message handling policy.
29. The network device of claim 28, wherein the specified messaging policy is a default messaging policy or a messaging policy used prior to using the first messaging policy.
30. The network device of any of claims 17 to 29, wherein the newly received message comprises a lossless traffic message.
31. The network device according to any of claims 17 to 30, wherein the message processing entity is specifically configured to process a newly received message by using one or more of the following processing manners:
discarding all the newly received messages;
discarding part of the newly received messages;
modifying the message information of all the newly received messages; or
modifying the message information of part of the newly received messages.
32. The network device according to claim 31, wherein the message processing entity is specifically configured to modify one or more of the following information of the newly received message:
the priority of the newly received message;
a discard enable bit of the newly received packet;
the discarding priority of the newly received message;
the port number of the newly received message; or
an explicit congestion flag bit of the newly received message.
33. A computer-readable storage medium, characterized in that the computer-readable storage medium stores instructions which, when executed, implement the method of any one of claims 1 to 16.
CN202010307569.8A 2020-04-17 2020-04-17 Method for processing message in network equipment and related equipment Pending CN113542152A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010307569.8A CN113542152A (en) 2020-04-17 2020-04-17 Method for processing message in network equipment and related equipment
PCT/CN2021/087575 WO2021209016A1 (en) 2020-04-17 2021-04-15 Method for processing message in network device, and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010307569.8A CN113542152A (en) 2020-04-17 2020-04-17 Method for processing message in network equipment and related equipment

Publications (1)

Publication Number Publication Date
CN113542152A true CN113542152A (en) 2021-10-22

Family

ID=78084771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010307569.8A Pending CN113542152A (en) 2020-04-17 2020-04-17 Method for processing message in network equipment and related equipment

Country Status (2)

Country Link
CN (1) CN113542152A (en)
WO (1) WO2021209016A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114024923A (en) * 2021-10-30 2022-02-08 江苏信而泰智能装备有限公司 Multithreading message capturing method, electronic equipment and computer storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070076602A1 (en) * 2002-06-03 2007-04-05 Jeffries Clark D Flow Control in Computer Networks
US20090245104A1 (en) * 2008-03-27 2009-10-01 Fujitsu Limited Apparatus and method for controlling buffering of an arrival packet
CN101582842A (en) * 2008-05-16 2009-11-18 华为技术有限公司 Congestion control method and congestion control device
CN102006226A (en) * 2010-11-19 2011-04-06 福建星网锐捷网络有限公司 Message cache management method and device as well as network equipment
CN106603426A (en) * 2015-10-19 2017-04-26 大唐移动通信设备有限公司 Message discarding method and device
WO2017080284A1 (en) * 2015-11-10 2017-05-18 深圳市中兴微电子技术有限公司 Packet discard method and device and storage medium
WO2018076641A1 (en) * 2016-10-28 2018-05-03 深圳市中兴微电子技术有限公司 Method and apparatus for reducing delay and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9088520B2 (en) * 2011-09-15 2015-07-21 Ixia Network impairment unit for concurrent delay and packet queue impairments


Also Published As

Publication number Publication date
WO2021209016A1 (en) 2021-10-21

Similar Documents

Publication Publication Date Title
US10382362B2 (en) Network server having hardware-based virtual router integrated circuit for virtual networking
CN110493145B (en) Caching method and device
US9294304B2 (en) Host network accelerator for data center overlay network
US9703743B2 (en) PCIe-based host network accelerators (HNAS) for data center overlay network
US9225668B2 (en) Priority driven channel allocation for packet transferring
KR100875739B1 (en) Apparatus and method for packet buffer management in IP network system
US9264371B2 (en) Router, method for controlling the router, and computer program
US10193831B2 (en) Device and method for packet processing with memories having different latencies
US7944829B2 (en) Mechanism for managing access to resources in a heterogeneous data redirection device
EP2928132B1 (en) Flow-control within a high-performance, scalable and drop-free data center switch fabric
US20140036680A1 (en) Method to Allocate Packet Buffers in a Packet Transferring System
US20080069138A1 (en) System and method for managing bandwidth
EP3907944A1 (en) Congestion control measures in multi-host network adapter
CN110830382A (en) Message processing method and device, communication equipment and switching circuit
CN113746743A (en) Data message transmission method and device
CN113542152A (en) Method for processing message in network equipment and related equipment
CN112737970A (en) Data transmission method and related equipment
CN117118762B (en) Method and device for processing package receiving of central processing unit, electronic equipment and storage medium
JP2008235988A (en) Frame transfer device
CN116170377A (en) Data processing method and related equipment
CN116567088A (en) Data transmission method, apparatus, computer device, storage medium, and program product
US9325640B2 (en) Wireless network device buffers
CN116545963A (en) Data caching method, device and storage medium
CN117749726A (en) Method and device for mixed scheduling of output port priority queues of TSN switch

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination