WO2022068614A1 - Method and chip for processing packets - Google Patents

Method and chip for processing packets

Info

Publication number
WO2022068614A1
Authority
WO
WIPO (PCT)
Prior art keywords
packet
processing chip
message
control information
information
Prior art date
Application number
PCT/CN2021/119066
Other languages
English (en)
French (fr)
Inventor
韩冰
陶利春
张先富
李楠
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202011410154.XA (published as CN114363273A)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2022068614A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/10 Packet switching elements characterised by the switching fabric construction
    • H04L 49/109 Integrated on microchip, e.g. switch-on-chip
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/15 Interconnection of switching modules
    • H04L 49/1515 Non-blocking multistage, e.g. Clos
    • H04L 49/1523 Parallel switch fabric planes

Definitions

  • the present application relates to the field of information technology, and more particularly, to a method and a chip for processing messages.
  • the processors in current high-speed network chips are generally arranged in a pipeline manner. Each message has a corresponding program state (PS) to save the context information of the message in the forwarding process.
  • each processor on the pipeline processes the message, saves the processing result in the PS, and then sends the PS to the next processor.
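  • As a rough illustration of this pipelined PS model, the following C sketch (purely illustrative: the structure fields, stage names, and values are assumptions, not the chip's actual design) shows a program state being updated by successive stages and handed from one stage to the next.

```c
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

typedef struct {
    uint32_t next_hop;   /* forwarding decision accumulated so far          */
    uint32_t flags;      /* other per-packet context carried between stages */
} program_state;

typedef void (*stage_fn)(program_state *ps);

static void parse_stage(program_state *ps)   { ps->flags |= 0x1; }   /* mark "parsed"       */
static void lookup_stage(program_state *ps)  { ps->next_hop = 42; }  /* dummy lookup result */
static void rewrite_stage(program_state *ps) { ps->flags |= 0x2; }   /* mark "rewritten"    */

int main(void)
{
    stage_fn pipeline[] = { parse_stage, lookup_stage, rewrite_stage };
    program_state ps = { 0, 0 };

    /* each processor saves its result in the PS and hands the PS to the next one */
    for (size_t i = 0; i < sizeof(pipeline) / sizeof(pipeline[0]); i++)
        pipeline[i](&ps);

    printf("next_hop=%u flags=0x%x\n", (unsigned)ps.next_hop, (unsigned)ps.flags);
    return 0;
}
```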
  • the present application provides a method and a chip for processing messages, which can receive and process two or more messages at the same time, thereby improving the performance of the processor.
  • An embodiment of the present application provides a method for processing a packet, including: a processing chip determines that a first packet of at least two packets cannot be processed in parallel, and acquires a program state of the first packet; and the processing chip processes the first packet according to the program state of the first packet.
  • in this way, the processing chip is capable of processing two or more packets in parallel, the performance of the processor can be better utilized, and the processing chip can support complex services.
  • the processing chip includes multiple processors arranged in a pipeline, and the processing chip processing the first packet according to the program state of the first packet includes: a first processor among the multiple processors acquires the program state of the first packet and processes it; the first processor then processes the first packet according to the processed program state.
  • the method further includes: the processing chip acquires first program status information, where the first program status information includes the compressed program state of the first packet; and the processing chip determining that the first packet of the at least two packets cannot be processed in parallel includes: the processing chip processes the at least two packets in parallel according to the first program status information, and determines that the first packet cannot be processed in parallel when the pipelined processors, included in the processing chip, for processing the first packet cannot meet the processing requirements of the first packet.
  • acquiring the program state of the first packet by the processing chip includes: the processing chip obtains the program state of the first packet based on the compressed program state of the first packet included in the first program status information.
  • acquiring the first program status information by the processing chip includes: the processing chip acquires the control information of the first packet; the processing chip determines that the control information of the first packet satisfies a compression rule, and obtains the first program status information based on the program state of the first packet.
  • the processing chip determining that the control information of the first packet satisfies the compression rule includes: the processing chip determines the protocol stack type of the first packet based on the control information of the first packet; when the compression rule includes the protocol stack type of the first packet, the processing chip determines that the control information of the first packet satisfies the compression rule.
  • the compression rule may include some simple protocols, such as the user datagram protocol (UDP), the transmission control protocol (TCP), multi-protocol label switching (MPLS), and the like. If the protocol stack type of a packet is a complex protocol, the packet cannot be compressed and processed in parallel.
  • complex protocols may include segment routing over Internet Protocol version 6 (IPv6), that is, SRv6, generic routing encapsulation (GRE), and the like.
  • the processing chip determining that the control information of the first packet satisfies the compression rule includes: the processing chip determines the port configuration information of the first packet according to the control information of the first packet; when the compression rule includes the port configuration information of the first packet, the processing chip determines that the control information of the first packet satisfies the compression rule.
  • the port configuration information may include the manner of processing the packets that carry the port information. If the port configuration information of the first packet indicates a simple processing manner, such as performing no port-specific processing or unifying all outbound/inbound ports to the same port number, it can be determined that the control information of the first packet satisfies the compression rule. If the port configuration information of the first packet indicates a manner that requires complex processing of the packet, such as processing the packet by using access control lists (ACL), unicast reverse path forwarding (URPF), or dynamic host configuration protocol (DHCP) binding checks, it can be determined that the control information of the first packet does not satisfy the compression rule.
  • an embodiment of the present application provides a method for processing a packet, including: a processing chip obtains control information of a first packet of at least two packets; the processing chip obtains first program status information based on a compression rule and the control information of the first packet, where the first program status information includes the compressed program state of the first packet; and the processing chip processes the first packet of the at least two packets in parallel based on the first program status information.
  • the method further includes: the processing chip acquires control information of a second packet of the at least two packets; the processing chip obtains the program state of the second packet based on the compression rule and the control information of the second packet; and the processing chip sequentially processes the second packet of the at least two packets based on the program state of the second packet.
  • acquiring the first program status information by the processing chip based on the compression rule and the control information of the first packet includes: the processing chip determines the protocol stack type of the first packet based on the control information of the first packet; when the compression rule includes the protocol stack type of the first packet, the processing chip determines that the control information of the first packet satisfies the compression rule; and the processing chip obtains the first program status information based on the program state of the first packet.
  • acquiring the first program status information by the processing chip based on the compression rule and the control information of the first packet includes: the processing chip determines the port configuration information of the first packet according to the control information of the first packet; when the compression rule includes the port configuration information of the first packet, the processing chip determines that the control information of the first packet satisfies the compression rule; and the processing chip obtains the first program status information based on the program state of the first packet.
  • for the specific content of the compression rule and the port configuration information, reference may be made to the corresponding content of the first aspect.
  • acquiring the program state of the second packet by the processing chip based on the compression rule and the control information of the second packet includes: the processing chip determines the protocol stack type of the second packet based on the control information of the second packet; and the processing chip obtains the program status information of the second packet when the compression rule does not include the protocol stack type of the second packet.
  • acquiring the program state of the second packet by the processing chip based on the compression rule and the control information of the second packet includes: the processing chip determines the port configuration information of the second packet according to the control information of the second packet; and the processing chip obtains the program status information of the second packet when the compression rule does not include the port configuration information of the second packet.
  • embodiments of the present application provide a processing chip, where the processing chip includes components for implementing the method of the first aspect or any possible implementation manner of the first aspect.
  • the processing chip may include multiple processors arranged in a pipeline.
  • the processing chip may also include a scheduler.
  • the processing chip may also include a converter.
  • the processing chip may also include a compression identifier.
  • an embodiment of the present application provides a processing chip, where the processing chip includes components for implementing the method of the second aspect or any possible implementation manner of the second aspect.
  • the processing chip may include multiple processors arranged in a pipeline.
  • the processing chip may also include a scheduler.
  • the processing chip may also include a converter.
  • the processing chip may also include a compression identifier.
  • an embodiment of the present application provides a network device, where the network device may include the processing chip of the third aspect.
  • an embodiment of the present application provides a network device, and the network device may include the processing chip of the fourth aspect.
  • an embodiment of the present application provides a computer-readable storage medium, where program codes are stored in the computer-readable storage medium, and when the program codes are run on a computer, the computer is caused to execute the method of the first aspect or any possible implementation of the first aspect.
  • embodiments of the present application provide a computer-readable storage medium, where program codes are stored in the computer-readable storage medium, and when the program codes are run on a computer, the computer is caused to execute the method of the second aspect or any possible implementation of the second aspect.
  • an embodiment of the present application provides a network device, where the network device may include a module for implementing the functions corresponding to the steps included in the first aspect or any possible implementation manner of the first aspect.
  • an embodiment of the present application provides a network device, where the network device may include a module for implementing the functions corresponding to the steps included in the second aspect or any possible implementation manner of the second aspect.
  • FIG. 1 is a schematic diagram of a communication system 100 suitable for an embodiment of the present application.
  • FIG. 2 is a schematic structural block diagram of a network device provided according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a method for processing a packet according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a processor pipeline with loopback units and a converter.
  • FIG. 5 is a schematic diagram of pipeline processing of messages.
  • FIG. 6 is a schematic diagram of another manner of pipeline processing of messages.
  • FIG. 7 is a schematic structural diagram of another network device provided according to an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a method for processing a packet according to an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a method for processing a packet according to an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of another method for processing a packet according to an embodiment of the present application.
  • the network architecture and service scenarios described in the embodiments of the present application are for the purpose of illustrating the technical solutions of the embodiments of the present application more clearly, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application.
  • with the evolution of the network architecture and the emergence of new service scenarios, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
  • references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases “in one embodiment,” “in some embodiments,” or “in other embodiments” in various places in this specification do not necessarily all refer to the same embodiment, but mean “one or more but not all embodiments” unless specifically emphasized otherwise.
  • the terms “including”, “comprising”, “having” and their variants mean “including but not limited to” unless specifically emphasized otherwise.
  • “At least one” means one or more, and “plurality” means two or more.
  • “And/or” describes an association relationship between associated objects and indicates that three kinds of relationships can exist. For example, “A and/or B” can indicate that only A exists, both A and B exist, or only B exists, where A and B can be singular or plural.
  • the character “/” generally indicates that the associated objects are an “or” relationship.
  • “At least one of the following items” or similar expressions refers to any combination of these items, including any combination of a single item or plural items.
  • for example, “at least one item of a, b, or c” can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may be singular or plural.
  • the message referred to in the embodiments of this application may be any transmission unit transmitted between two nodes in computer network communication, for example, a message in the application layer, a data packet (packet) in the network layer, a data segment in the transport layer, or the like.
  • FIG. 1 is a schematic diagram of a communication system 100 suitable for an embodiment of the present application.
  • the communication system 100 includes at least one communication device, such as the communication device 101 , the communication device 102 , the communication device 103 , the communication device 104 , and the communication device 105 shown in FIG. 1 .
  • the communication system 100 also includes a network device 110 .
  • the network device 110 may receive packets sent from any one or more communication devices from the communication device 101 to the communication device 105 , and send the received packets to another one or more communication devices in the communication system 100 .
  • the communication device 101 wishes to send the message M1 to the communication device 103 and the communication device 104 , and the communication device 102 wishes to send the message M2 to the communication device 105 . Then the communication device 101 can send the message M1 to the network device 110, the communication device 102 can send the message M2 to the network device 110, and the network device 110 forwards the received message M1 to the communication device 103 and the communication device 104 , and forward the received message M2 to the communication device 105 .
  • Any of the communication devices 101 to 105 may be computer devices (eg, mobile phones, tablet computers, notebook computers, desktop computers, etc.) or network devices (eg, switches, routers, etc.).
  • FIG. 2 is a schematic structural block diagram of a network device provided according to an embodiment of the present application.
  • the network device 110 includes an input and output circuit 111 and a processing chip 120 .
  • the input-output circuit 111 may be an interface or an interface circuit through which the network device 110 communicates with the outside.
  • the processing chip 120 includes an interface circuit 121, a scheduler 122, a converter 123, and a plurality of processors 125 arranged in a pipeline.
  • the plurality of processors 125 arranged in a pipeline may be collectively referred to as a processor pipeline 124 .
  • if the communication devices 101 to 105 shown in FIG. 1 are also network devices, the structure of the communication devices 101 to 105 may also be the structure shown in FIG. 2.
  • the following describes the packet processing method provided by the embodiment of the present application with reference to the communication system 100 shown in FIG. 1 and the network device 110 shown in FIG. 2.
  • FIG. 3 is a schematic flowchart of a method for processing a packet according to an embodiment of the present application.
  • the input-output circuit 111 in the network device 110 can receive the message M1 from the communication device 101 and the message M2 from the communication device 102 .
  • the input-output circuit 111 sends the received message M1 and the message M2 to the processing chip 120 .
  • the processing chip 120 receives the message M1 and the message M2.
  • the processing chip 120 may receive the message M1 and the message M2 from the input/output circuit 111 through the interface circuit 121 .
  • program status information 1 includes the program status of the message M1 and the program status of the message M2.
  • the program state of the message M1 may be referred to as PS_M1 for short, and the program state of the message M2 may be referred to as PS_M2 for short.
  • PS_M1 and PS_M2 may be determined by scheduler 122 in processing chip 120 .
  • the scheduler 122 may also determine program state information 1 according to PS_M1 and PS_M2.
  • Program status information 1 may include the compressed PS_M1 and the compressed PS_M2. Compressing a PS may be implemented by deleting invalid bits (e.g., trailing consecutive 0s) in the PS.
  • the program status information 1 may include a header field and a payload field.
  • the header field of the program status information 1 may be divided into two parts, such as a first header subfield and a second header subfield.
  • the first header subfield may include the compressed PS_M1 header field.
  • the second header subfield may include the compressed PS_M2 header field.
  • the load field of the program status information 1 may be divided into two parts, such as a first load subfield and a second load subfield.
  • the first load subfield includes the compressed load field of PS_M1.
  • the second load subfield includes the compressed load field of PS_M2.
  • the trailing consecutive 0s in the load field of PS_M1 are removed to obtain the compressed load field of PS_M1.
  • the trailing consecutive 0s in the load field of PS_M2 are removed to obtain the compressed load field of PS_M2.
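  • The compression described above can be sketched in software as follows. This is a hedged illustration: the PS field sizes and the contiguous subfield layout are assumptions rather than the chip's actual PS format. Trailing zero bytes are dropped from each field, and the two compressed PSs are packed side by side into the program status information 1.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PS_HDR_LEN 16   /* assumed header-field size of one PS  */
#define PS_PLD_LEN 48   /* assumed payload-field size of one PS */

typedef struct {
    uint8_t hdr[PS_HDR_LEN];
    uint8_t pld[PS_PLD_LEN];
} program_state;

/* Length of a field once its trailing (invalid) zero bytes are dropped. */
size_t trimmed_len(const uint8_t *buf, size_t len)
{
    while (len > 0 && buf[len - 1] == 0)
        len--;
    return len;
}

/* Pack the compressed PS_M1 and PS_M2 into one PS-sized buffer ("program status
 * information 1"): the first header/load subfields hold PS_M1's trimmed fields and
 * the second subfields hold PS_M2's. Returns 0 on success, -1 if they do not fit. */
int build_program_status_info1(const program_state *ps_m1,
                               const program_state *ps_m2,
                               program_state *info1)
{
    size_t h1 = trimmed_len(ps_m1->hdr, PS_HDR_LEN);
    size_t h2 = trimmed_len(ps_m2->hdr, PS_HDR_LEN);
    size_t p1 = trimmed_len(ps_m1->pld, PS_PLD_LEN);
    size_t p2 = trimmed_len(ps_m2->pld, PS_PLD_LEN);

    if (h1 + h2 > PS_HDR_LEN || p1 + p2 > PS_PLD_LEN)
        return -1;                             /* the two PSs cannot be compressed */

    memset(info1, 0, sizeof(*info1));
    memcpy(info1->hdr,      ps_m1->hdr, h1);   /* first header subfield  */
    memcpy(info1->hdr + h1, ps_m2->hdr, h2);   /* second header subfield */
    memcpy(info1->pld,      ps_m1->pld, p1);   /* first load subfield    */
    memcpy(info1->pld + p1, ps_m2->pld, p2);   /* second load subfield   */
    return 0;
}
```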
  • the processing of the packet M1 and the packet M2 may be a conventional packet processing manner in the packet forwarding process.
  • the outgoing port of the packet may be determined according to information such as a destination port number or a destination Internet Protocol (Internet Protocol, IP) address in the packet.
  • the packet may be modified according to preset rules, such as modifying the source IP address of the packet, adding a multi-protocol label switching (MPLS) label to the packet header, and the like.
  • the processing of the message may be implemented by the processors in the processor pipeline.
  • the scheduler 122 sends the program status information 1 to the processor pipeline 124, and the processors in the processor pipeline 124 are responsible for processing the message.
  • if a processor in the processor pipeline 124 determines, in the process of processing the message M1 and the message M2 in parallel, that it cannot continue to process the message M1 but can process the message M2, it continues to process the message M2. Similarly, if the processor determines that it cannot continue to process the message M2 but can process the message M1, it continues to process the message M1. If the processors in the processor pipeline 124 determine, during the parallel processing, that neither the message M1 nor the message M2 can continue to be processed, they do not continue to process the message M1 and the message M2.
  • if a processor in the processor pipeline 124 determines, during the parallel processing of the message M1 and the message M2, that it cannot continue to process the two messages in parallel but can meet the processing requirements of either one of them, it may select either of the message M1 and the message M2 as the message that cannot be processed further (e.g., the message M1) and continue to process the other message (e.g., the message M2).
  • the inability to continue processing a message includes: the pipeline length allocated to the message in the processor pipeline 124 cannot meet the processing requirements of the message, or the performance of the processors in the processor pipeline 124 cannot meet the processing requirements of the message.
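  • In other words, a packet can keep being processed in parallel only while the pipeline resources allocated to it cover its needs. A minimal sketch of such a check, with hypothetical fields, might look as follows.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t stages_required;   /* pipeline length the packet still needs       */
    uint32_t cycles_per_stage;  /* per-stage processing effort the packet needs */
} packet_needs;

typedef struct {
    uint32_t stages_remaining;  /* processors left downstream for this packet   */
    uint32_t cycles_budget;     /* per-stage budget the pipeline can give it    */
} pipeline_budget;

/* A packet can continue to be processed in parallel only if the pipeline length
 * and the per-stage performance allocated to it both meet its requirements. */
bool can_continue_in_parallel(const packet_needs *p, const pipeline_budget *b)
{
    return p->stages_required <= b->stages_remaining &&
           p->cycles_per_stage <= b->cycles_budget;
}
```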
  • the processor pipeline 124 may include multiple loopback units, and the conversion indication information may be sent to the converter 123 through the loopback units.
  • the processor that is determined to be unable to continue processing the message M1 may be referred to as the target processor.
  • the loopback unit may be one of at least one loopback unit located downstream of the target processor.
  • the loopback unit may be a loopback unit closest to the target processor among at least one loopback unit located downstream of the target processor.
  • FIG. 4 is a schematic diagram of a processor pipeline with loopback units and a converter.
  • the processor pipeline 410 shown in FIG. 4 includes a processor 411 , a processor 412 , a processor 413 , a processor 414 , a processor 415 , a processor 416 , a processor 417 and a processor 418 .
  • the processor 411 is connected to the processor 412 through the bus 12; the processor 412 is connected to the processor 413 through the bus 23; the processor 413 is connected to the processor 414 through the bus 34; the processor 414 is connected to the processor 415 through the bus 45; the processor 415 is connected to the processor 416 through the bus 56; the processor 416 is connected to the processor 417 through the bus 67; and the processor 417 is connected to the processor 418 through the bus 78.
  • the processor 411, the processor 413, the processor 416, and the processor 418 are also connected to the converter 420 through a bus, respectively.
  • each of the buses connecting the processor 413, the processor 416, and the processor 418 to the converter 420 may be considered a loopback unit.
  • if the processor 412 determines that the message M1 and the message M2 cannot continue to be processed in parallel, the conversion indication information can be sent to the converter 420 through the nearest loopback unit (i.e., the bus 30).
  • if the processor 414 determines that it cannot continue to process the message M1 and the message M2 in parallel, the conversion indication information can be sent to the converter 420 through the nearest loopback unit (i.e., the bus 60).
  • if the processor 418 determines that the message M1 and the message M2 cannot be processed in parallel, the conversion indication information can be sent to the converter 420 through the nearest loopback unit (i.e., the bus 80).
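  • The selection of the nearest downstream loopback unit can be modelled as follows, reusing the processor and bus numbers of FIG. 4; the table lookup is a software analogy for illustration only, not the hardware mechanism.

```c
#include <stdio.h>
#include <stddef.h>

typedef struct {
    int after_processor;  /* loopback bus is attached at this processor         */
    int bus_id;           /* bus connecting that processor to the converter 420 */
} loopback_unit;

/* Loopback layout of FIG. 4: buses 30, 60 and 80 follow processors 413, 416, 418. */
static const loopback_unit kLoopbacks[] = { { 413, 30 }, { 416, 60 }, { 418, 80 } };

/* Returns the bus of the nearest loopback unit at or downstream of the target
 * processor, or -1 if none exists. */
int nearest_loopback_bus(int target_processor)
{
    for (size_t i = 0; i < sizeof(kLoopbacks) / sizeof(kLoopbacks[0]); i++)
        if (kLoopbacks[i].after_processor >= target_processor)
            return kLoopbacks[i].bus_id;
    return -1;
}

int main(void)
{
    printf("%d %d %d\n", nearest_loopback_bus(412),   /* 30, as in the example above */
                         nearest_loopback_bus(414),   /* 60 */
                         nearest_loopback_bus(418));  /* 80 */
    return 0;
}
```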
  • the conversion indication information may include the compressed PS, i.e., the program state information 1.
  • the converter 123 can delete the compressed PS_M2 from the program state information 1 to obtain the program state information 2.
  • the converter 123 may clear the second header subfield and the second load subfield in the program status information 1 to zero to obtain the program status information 2 .
  • alternatively (for example, when it is the message M2 that needs to be looped back), the converter 123 may first clear the first header subfield and the first load subfield in the program status information 1 to zero, then copy the information of the PS_M2 header field in the second header subfield to the first header subfield and the information of the PS_M2 payload field in the second payload subfield to the first payload subfield, and finally clear the second header subfield and the second payload subfield to zero.
  • the conversion indication information may include compressed PS_M1.
  • the converter 123 can directly restore the compressed PS_M1 to the complete PS_M1.
  • for example, if the compressed PS_M1 is obtained by removing the fields of the PS_M1 header and payload whose values are 0, then after obtaining the compressed PS_M1, the converter 123 fills 0s back into these fields to obtain the complete PS_M1.
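  • A corresponding sketch of this restoration step, under the same assumed PS layout as in the compression sketch above (the compressed PS is taken to carry only the kept prefix of each field plus its length), is shown below.

```c
#include <stdint.h>
#include <string.h>

#define PS_HDR_LEN 16
#define PS_PLD_LEN 48

typedef struct {
    uint8_t hdr[PS_HDR_LEN];
    uint8_t pld[PS_PLD_LEN];
} program_state;

typedef struct {
    uint8_t hdr[PS_HDR_LEN];
    uint8_t pld[PS_PLD_LEN];
    uint8_t hdr_len;          /* header bytes kept after trimming trailing zeros  */
    uint8_t pld_len;          /* payload bytes kept after trimming trailing zeros */
} compressed_ps;

/* Rebuild the complete PS from a compressed PS: copy the kept prefixes and fill
 * the removed positions with 0s, as the converter 123 does for PS_M1. */
void restore_full_ps(const compressed_ps *in, program_state *out)
{
    memset(out, 0, sizeof(*out));
    memcpy(out->hdr, in->hdr, in->hdr_len);
    memcpy(out->pld, in->pld, in->pld_len);
}
```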
  • before PS_M1 is compressed to obtain the program state information 1, PS_M1 may be stored in a storage unit (not shown) in the processing chip 120 and an identifier may be assigned to PS_M1. If the message M1 cannot be processed further, the conversion indication information sent to the converter 123 carries the identifier of PS_M1. In this way, the converter 123 obtains the PS_M1 stored in the storage unit according to the identifier of PS_M1, and determines the program state information 2 according to the obtained PS_M1.
  • the PS_M1 stored in the storage unit may be the complete PS_M1, the compressed PS_M1, or the program status information 1.
  • after the converter 420 in FIG. 4 determines the program state information 2, it can send the program state information 2 to the first processor (i.e., the processor 411) in the processor pipeline through the bus 10.
  • the processors in the processor pipeline process the program state information 2 in sequence.
  • the processor 411 first processes the program status information 2 and sends the processed program status information 2 to the processor 412; the processor 412 continues to process the program status information 2 processed by the processor 411 and sends it to the processor 413, and so on.
  • the specific implementation of processing the message M1 according to the program status information 2 is the same as the existing manner of processing a message according to its program state. For the sake of brevity, it is not repeated here.
  • if the processing chip can process the two packets in parallel, the processing chip can directly process the two packets in parallel.
  • the method shown in FIG. 3 provides a solution for the case where the processing chip cannot process one or both of the two packets in parallel. In this way, the processing chip is capable of receiving and processing two packets at the same time, the performance of the processor can be better utilized, and the processing chip can support complex services.
  • FIG. 5 is a schematic diagram of pipeline processing of messages.
  • the pipeline shown in FIG. 5 includes six processors, which are referred to as processor 501 , processor 502 , processor 503 , processor 504 , processor 505 and processor 506 respectively.
  • after the program state information including the PS of the message M1 and the PS of the message M2 enters the pipeline, half of the processors in the pipeline are responsible for processing the PS of the message M1, and the other half of the processors are responsible for processing the PS of the message M2.
  • the processor 501, the processor 503 and the processor 505 are responsible for processing the PS of the message M1; the processor 502, the processor 504 and the processor 506 are responsible for processing the PS of the message M2.
  • FIG. 6 is a schematic diagram of another pipeline processing message.
  • the pipeline shown in FIG. 6 includes six processors, which are referred to as processor 601 , processor 602 , processor 603 , processor 604 , processor 605 and processor 606 respectively.
  • the program state information including the PS of the message M1 and the PS of the message M2 enters the pipeline, and all processors in the pipeline can be used to process the PS of the message M1 and the PS of the message M2.
  • half of the resources in each of the processor 601, the processor 602, the processor 603, the processor 604, the processor 605, and the processor 606 are used to process the PS of the message M1, and the other half of the resources are used to process the PS of the message M2.
  • if the processor pipeline receives the message M1 and the message M2 within one clock cycle, the processor pipeline can process the messages in the following three cases:
  • the message M1 and the message M2 can be processed in parallel, and the processor pipeline can process the two messages in one clock cycle and send them out in one pipeline processing.
  • the processor pipeline processes two packets in one clock cycle. In this way, double-speed packet forwarding can be achieved.
  • in one pipeline pass, the processed message M2 can be sent out first, and the message M1 needs to be looped back; that is, after the converter converts the PS of the message M1, the processor pipeline processes the message M1 again and then sends out the processed message M1.
  • in this case the processor pipeline still takes two clock cycles to process the two packets, so the overall performance of the processor pipeline falls to about two-thirds of the double packet rate.
  • the PSs of any two simultaneously input messages may be compressed, and the two messages may then be processed in parallel based on the program state information obtained after compression.
  • the packet that cannot be processed in parallel is processed according to the method shown in FIG. 3. However, as mentioned above, in this case the performance of the processor pipeline drops to two-thirds of the double packet rate.
  • a compression identifier can be set in the network device, and the compression identifier can be used to judge whether the PSs of two packets can be compressed; if they can be compressed, the program status information containing the PSs of the two packets (which may also be referred to as the compressed PS) is determined according to the PSs of the two packets, and the program status information is sent to the processing chip.
  • the processing chip can process the two messages in parallel according to the program status information. If the processing chip determines during the parallel processing of the two packets that one or both of the packets cannot continue to be processed in parallel, the method shown in FIG. 3 is used to loop back the packets that cannot be processed in parallel. It can be seen that, through the above technical solution, packets that obviously cannot be processed in parallel can be excluded in advance, which reduces the situations in which the pipeline performance of the processor is degraded.
  • FIG. 7 is a schematic structural diagram of another network device provided according to an embodiment of the present application.
  • the network device 110 shown in FIG. 7 adds a compression identifier 126 to the network device 110 shown in FIG. 2 .
  • the following describes the packet processing method provided by the embodiment of the present application with reference to the communication system 100 shown in FIG. 1 and the network device 110 shown in FIG. 7.
  • FIG. 8 is a schematic flowchart of a method for processing a packet according to an embodiment of the present application.
  • the input-output circuit 111 in the network device 110 can receive the message M1 from the communication device 101 and the message M2 from the communication device 102 .
  • the input-output circuit 111 sends the received message M1 and the message M2 to the processing chip 120 .
  • the processing chip 120 receives the message M1 and the message M2.
  • the processing chip 120 may receive the message M1 and the message M2 from the input/output circuit 111 through the interface circuit 121 .
  • PS_M1 denotes the program state of the message M1, and PS_M2 denotes the program state of the message M2.
  • the compression identifier 126 can obtain the message M1 and the message M2 from the interface circuit 121, extract the protocol stack types of the message M1 and the message M2 from the headers of the message M1 and the message M2, and then determine, according to the protocol stack types of the message M1 and the message M2, whether PS_M1 and PS_M2 can be compressed.
  • the compression identifier 126 may maintain a compression rule, which may include a protocol type whitelist that includes multiple protocol stack types.
  • the multiple protocol stack types included in the protocol type whitelist are protocol stack types of packets whose PSs can be compressed. If the protocol stack type of the message M1 and the protocol stack type of the message M2 are both in the protocol type whitelist, it is determined that PS_M1 and PS_M2 can be compressed to obtain the program status information 1; if the protocol stack type of the message M1 or the protocol stack type of the message M2 is not in the protocol type whitelist, it is determined that PS_M1 and PS_M2 cannot be compressed. In this case, the message M1 and the message M2 can be processed separately.
  • alternatively, the compression rules stored by the compression identifier 126 may include a protocol type blacklist, and the protocol type blacklist includes multiple protocol stack types. The protocol stack types included in the protocol type blacklist are protocol stack types of packets whose PSs cannot be compressed. If neither the protocol stack type of the message M1 nor the protocol stack type of the message M2 is in the protocol type blacklist, it is determined that PS_M1 and PS_M2 can be compressed to obtain the program status information 1; if the protocol stack type of the message M1 or the protocol stack type of the message M2 is in the protocol type blacklist, it is determined that PS_M1 and PS_M2 cannot be compressed. In this case, the message M1 and the message M2 can be processed separately.
  • alternatively, the compression identifier 126 may store a protocol type whitelist and a protocol type blacklist at the same time, where the protocol stack types included in the protocol type whitelist are protocol stack types of packets whose PSs can be compressed, and the protocol stack types included in the protocol type blacklist are protocol stack types of packets whose PSs cannot be compressed. If the protocol stack type of the message M1 and the protocol stack type of the message M2 are both in the protocol type whitelist, it is determined that PS_M1 and PS_M2 can be compressed to obtain the program status information 1; if the protocol stack type of the message M1 or the protocol stack type of the message M2 is in the protocol type blacklist, it is determined that PS_M1 and PS_M2 cannot be compressed, and the message M1 and the message M2 can be processed separately; if the protocol stack type of the message M1 or the protocol stack type of the message M2 is neither in the protocol type blacklist nor in the protocol type whitelist, it can also be determined that PS_M1 and PS_M2 cannot be compressed, and in this case the message M1 and the message M2 can be processed separately.
  • the protocol stack types included in the protocol type whitelist may include: the user datagram protocol (UDP), the transmission control protocol (TCP), multi-protocol label switching (MPLS), and the like.
  • the protocol stack types included in the protocol type blacklist are generally protocol stack types involving complex tunnels, and may include, for example, SRv6, generic routing encapsulation (GRE), and the like.
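  • The protocol-stack check performed by the compression identifier 126 can be sketched as follows. The protocol encodings and list contents are illustrative, and a real implementation may use only a whitelist, only a blacklist, or both, as described above.

```c
#include <stdbool.h>
#include <stddef.h>

typedef enum { PROTO_UDP, PROTO_TCP, PROTO_MPLS, PROTO_SRV6, PROTO_GRE } proto_t;

#define LEN(a) (sizeof(a) / sizeof((a)[0]))

/* Whitelist of "simple" protocol stack types and blacklist of complex tunnels. */
static const proto_t kWhitelist[] = { PROTO_UDP, PROTO_TCP, PROTO_MPLS };
static const proto_t kBlacklist[] = { PROTO_SRV6, PROTO_GRE };

static bool in_list(proto_t p, const proto_t *list, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (list[i] == p)
            return true;
    return false;
}

/* Returns true when PS_M1 and PS_M2 may be compressed into one program status info:
 * both protocol stack types must be whitelisted and neither may be blacklisted. */
bool can_compress(proto_t proto_m1, proto_t proto_m2)
{
    if (in_list(proto_m1, kBlacklist, LEN(kBlacklist)) ||
        in_list(proto_m2, kBlacklist, LEN(kBlacklist)))
        return false;
    return in_list(proto_m1, kWhitelist, LEN(kWhitelist)) &&
           in_list(proto_m2, kWhitelist, LEN(kWhitelist));
}
```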
  • the compression identifier 126 may also determine whether to compress the PS of the packet according to other contents in the control information of the packet. For example, whether to compress the PS of the packet may be determined according to the port information of the packet.
  • the port information of the packet may be the ingress port number and/or the egress port number of the packet.
  • the compression identifier 126 may also maintain a whitelist of port configuration information.
  • the compression identifier 126 may determine the port configuration information of the message M1 (hereinafter referred to as the port configuration information M1) according to the port information of the message M1, determine the port configuration information of the message M2 (hereinafter referred to as the port configuration information M2) according to the port information of the message M2, and then determine whether the port configuration information whitelist includes the port configuration information M1 and the port configuration information M2. If the port configuration information whitelist includes the port configuration information M1 and the port configuration information M2, it is determined that PS_M1 and PS_M2 can be compressed; if the port configuration information whitelist does not include the port configuration information M1 or the port configuration information M2, it is determined that PS_M1 and PS_M2 cannot be compressed.
  • the compression identifier 126 may maintain a blacklist of port configuration information. After determining the port configuration information M1 and the port configuration information M2, the compression identifier 126 may determine whether the port configuration information blacklist includes the port configuration information M1 and the port configuration information M2; if the port configuration information blacklist includes the port configuration information M1 and/or port configuration information M2, then it is determined that PS_M1 and PS_M2 cannot be compressed; if the port configuration information blacklist does not include port configuration information M1 and port configuration information M2, then it is determined that PS_M1 and PS_M2 can be compressed.
  • the compression identifier 126 may simultaneously store the port configuration information blacklist and the port configuration information whitelist. If the port configuration information M1 and the port configuration information M2 are in the port configuration information whitelist, it is determined that PS_M1 and PS_M2 can be compressed; if at least one of the port configuration information M1 and the port configuration information M2 is in the port configuration information blacklist , then it is determined that PS_M1 and PS_M2 cannot be compressed.
  • the port configuration information may include a manner of how to process the packets carrying the port information.
  • the port configuration information in the port configuration information whitelist may include a way of simply processing the packets carrying the port information, such as not processing the port or unifying all the outbound/inbound ports into the same port number.
  • port configuration information in the port configuration information blacklist may include manners that require complex processing of packets, such as processing the packets by using access control lists (ACL), unicast reverse path forwarding (URPF), or dynamic host configuration protocol (DHCP) binding checks.
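  • A corresponding sketch of the port-configuration check is given below; the configuration table, flag values, and port numbers are made up for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

enum {
    CFG_NONE       = 0,         /* no per-port processing                   */
    CFG_UNIFY_PORT = 1u << 0,   /* map all in/out ports to one port number  */
    CFG_ACL        = 1u << 1,   /* complex: access control list             */
    CFG_URPF       = 1u << 2,   /* complex: unicast reverse path forwarding */
    CFG_DHCP_CHECK = 1u << 3,   /* complex: DHCP binding check              */
};

#define COMPLEX_MASK (CFG_ACL | CFG_URPF | CFG_DHCP_CHECK)

/* Hypothetical per-port configuration table, indexed by port number (0..3 here). */
static const uint32_t kPortCfg[4] = { CFG_NONE, CFG_UNIFY_PORT, CFG_ACL, CFG_URPF };

/* The PSs of two packets may be compressed only if neither packet's port needs
 * complex processing (callers must pass port numbers below 4 in this sketch). */
bool ports_allow_compression(unsigned port_m1, unsigned port_m2)
{
    return (kPortCfg[port_m1] & COMPLEX_MASK) == 0 &&
           (kPortCfg[port_m2] & COMPLEX_MASK) == 0;
}
```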
  • if it is determined in step 802 that PS_M1 and PS_M2 cannot be compressed, the scheduler 122 determines PS_M1 and sends PS_M1 to the processor pipeline 124.
  • the processor pipeline 124 processes the message M1 according to the PS_M1 ; the scheduler 122 determines the PS_M2 and sends the PS_M2 to the processor pipeline 124 .
  • the processor pipeline 124 processes the message M2 according to PS_M2.
  • if it is determined in step 802 that PS_M1 and PS_M2 can be compressed, step 803 and step 804 can be executed.
  • the scheduler 122 may determine PS_M1 and PS_M2, and perform compression processing on PS_M1 and PS_M2 to obtain the program status information 1, where the program status information 1 includes the compressed PS_M1 and the compressed PS_M2.
  • the processor pipeline 124 obtains the program state information 1 from the scheduler 122, and processes the message M1 and the message M2 in parallel according to the program state information 1. If the processor pipeline 124 determines, during the parallel processing of the message M1 and the message M2 according to the program state information 1, that at least one of the messages M1 and M2 cannot continue to be processed in parallel, the message M1 and the message M2 may be processed with reference to the method shown in FIG. 3.
  • the processor pipeline 124 processes the message M1 and the message M2 in parallel, and sends the processed message M1 and message M2 to the next node through the interface circuit 121.
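  • Putting the pieces of FIG. 8 together, the overall flow can be summarised by the following C sketch, in which every function is a placeholder standing in for the compression identifier 126, the scheduler 122, the processor pipeline 124, and the converter 123; it is a software analogy, not the chip's implementation.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct { int id; } packet;   /* stand-in for a real packet descriptor */

/* Stubs standing in for the compression identifier 126, scheduler 122,
 * processor pipeline 124 and converter 123 described in the text. */
bool compression_rule_satisfied(const packet *m1, const packet *m2)
{ (void)m1; (void)m2; return true; }                 /* protocol/port checks go here */

void schedule_compressed(const packet *m1, const packet *m2)
{ printf("compress PS of %d and %d into info 1\n", m1->id, m2->id); }

bool pipeline_process_parallel(void)
{ return false; }                                    /* pretend one packet stalls */

void loop_back_and_reprocess(void)
{ printf("loop back the stalled packet via the converter (FIG. 3 method)\n"); }

void process_sequentially(const packet *m)
{ printf("process packet %d alone\n", m->id); }

void handle_packet_pair(const packet *m1, const packet *m2)
{
    if (compression_rule_satisfied(m1, m2)) {
        schedule_compressed(m1, m2);          /* scheduler builds program status info 1 */
        if (!pipeline_process_parallel())     /* one parallel pass over both packets    */
            loop_back_and_reprocess();        /* fall back for the packet that stalled  */
    } else {
        process_sequentially(m1);             /* ordinary one-packet-at-a-time path     */
        process_sequentially(m2);
    }
}

int main(void)
{
    packet m1 = { 1 }, m2 = { 2 };
    handle_packet_pair(&m1, &m2);
    return 0;
}
```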
  • the foregoing description takes the case where the processing chip processes two packets in parallel as an example.
  • the processing chip may also acquire more packets (for example, three or more packets).
  • similarly, the processing chip can also judge whether the PSs of the three packets can be compressed; if so, after compressing the PSs, the three packets are processed in parallel according to the compressed PS; if not, the three packets are processed separately in the existing manner.
  • if some of the packets cannot be processed in parallel, the embodiment shown in FIG. 3 can also be used to restore the PSs of the packets that cannot be processed in parallel to the uncompressed state, and those packets are then processed in the existing packet processing manner.
  • FIG. 9 is a schematic flowchart of a method for processing a packet according to an embodiment of the present application.
  • the processing chip determines that a first packet of the at least two packets cannot be processed in parallel, and acquires a program state of the first packet.
  • the processing chip processes the first packet according to the program state of the first packet.
  • the first packet may be the packet M1 in the embodiment shown in FIG. 3, and the second packet may be the packet M2 in the embodiment shown in FIG. 3.
  • the processing chip includes a plurality of processors arranged in a pipeline, and processing the first packet by the processing chip according to the program state of the first packet includes: a first processor among the plurality of processors acquires the program state of the first packet and processes it; the first processor then processes the first packet according to the processed program state.
  • a plurality of processors arranged in a pipeline may be collectively referred to as a processor pipeline.
  • the processors 411 to 418 are arranged in a pipeline, and these 8 processors may be collectively referred to as a processor pipeline.
  • Processor 411 is the first of eight processors.
  • the program state of the first packet can be processed through the processor pipeline, and then the first packet is processed according to the processed program state.
  • the processor 411 processes the program state of the first packet; the processor 411 processes the first packet according to the processed program state.
  • the processor 411 can send the processed program state to the processor 412, the processor 412 continues to process the program state, and then sends the processed program state to the processor 413, and so on, and finally the processor 418 completes the process.
  • the first message is processed according to the processed program state.
  • before the processing chip determines that the first packet of the at least two packets cannot be processed in parallel, the method further includes: the processing chip acquires first program status information, where the first program status information includes the compressed program state of the first packet; and the processing chip determining that the first packet of the at least two packets cannot be processed in parallel includes: the processing chip processes the at least two packets in parallel according to the first program status information, and determines that the first packet cannot be processed in parallel when the pipeline-arranged processors, included in the processing chip, for processing the first packet cannot meet the processing requirements of the first packet.
  • the first packet is the packet M1 in FIG. 5 .
  • the first packet is the packet M1 in FIG. 6 .
  • in the processor pipeline shown in FIG. 6, although all six processors in the pipeline can be used to process the PS of the message M1, only half of the processing resources of each processor can be used to process the PS of the message M1. In this case, it may also happen that the processor pipeline cannot process the message M1 in parallel, and it can then be determined that the message M1 cannot be processed in parallel.
  • acquiring, by the processing chip, the program state of the first packet includes: the processing chip acquires the program state of the first packet based on the compressed program state of the first packet included in the first program status information.
  • the first program status information may be program status information 1 in the embodiment shown in FIG. 3 .
  • For the determination method of the first program status information, reference may be made to the embodiment shown in FIG. 3, which is not repeated here for brevity.
  • acquiring, by the processing chip, the first program status information includes: acquiring, by the processing chip, control information of the first packet; and determining, by the processing chip, that the control information of the first packet satisfies a compression rule, and obtaining the first program status information based on the program state of the first packet.
  • the first packet may be the packet M1 in the embodiment shown in FIG. 8 .
  • the processing chip determining that the control information of the first packet satisfies the compression rule includes: the processing chip determines the protocol stack type of the first packet based on the control information of the first packet; and when the compression rule includes the protocol stack type of the first packet, the processing chip determines that the control information of the first packet satisfies the compression rule.
  • the compression rule may include a protocol stack type whitelist, and if the protocol stack type of the first packet is in the whitelist, it may be determined that the control information of the first packet satisfies the compression rule. If the at least two packets satisfy the compression rule, the program states of the at least two packets are compressed to obtain first program state information.
  • the processing chip determining that the control information of the first packet satisfies the compression rule includes: the processing chip determines the port configuration information of the first packet according to the control information of the first packet; and when the compression rule includes the port configuration information of the first packet, the processing chip determines that the control information of the first packet satisfies the compression rule.
  • the compression rule may include a whitelist of port configuration information, and if the port configuration information of the first packet is in the whitelist, it may be determined that the control information of the first packet satisfies the compression rule. If the at least two packets satisfy the compression rule, the program states of the at least two packets are compressed to obtain first program state information.
  • FIG. 10 is a schematic flowchart of another method for processing a packet according to an embodiment of the present application.
  • the processing chip acquires control information of a first packet among the at least two packets.
  • the processing chip acquires first program status information based on the compression rule and the control information of the first packet, where the first program status information includes the compressed program status of the first packet.
  • based on the first program status information, the processing chip processes the first packet of the at least two packets in parallel.
  • the first packet may be the packet M1 in the embodiment shown in FIG. 8 .
  • the method further includes: the processing chip acquires control information of a second packet of the at least two packets; the processing chip acquires the program state of the second packet based on the compression rule and the control information of the second packet; and the processing chip sequentially processes the second packet of the at least two packets based on the program state of the second packet.
  • the processing chip may first process the first message according to the program state information of the first message; and then process the second message according to the program state information of the second message.
  • acquiring, by the processing chip, the first program status information based on the compression rule and the control information of the first packet includes: the processing chip determines the protocol stack type of the first packet based on the control information of the first packet; when the compression rule includes the protocol stack type of the first packet, the processing chip determines that the control information of the first packet satisfies the compression rule; and the processing chip obtains the first program status information based on the program state of the first packet.
  • acquiring, by the processing chip, the first program status information based on the compression rule and the control information of the first packet includes: the processing chip determines the port configuration information of the first packet according to the control information of the first packet; when the compression rule includes the port configuration information of the first packet, the processing chip determines that the control information of the first packet satisfies the compression rule; and the processing chip obtains the first program status information based on the program state of the first packet.
  • acquiring, by the processing chip, the program state of the second packet based on the compression rule and the control information of the second packet includes: the processing chip determines the protocol stack type of the second packet based on the control information of the second packet; and the processing chip obtains the program status information of the second packet when the compression rule does not include the protocol stack type of the second packet.
  • acquiring, by the processing chip, the program state of the second packet based on the compression rule and the control information of the second packet includes: the processing chip determines the port configuration information of the second packet according to the control information of the second packet; and the processing chip obtains the program status information of the second packet when the compression rule does not include the port configuration information of the second packet.
  • An embodiment of the present application further provides a processing chip, where the processing chip includes a plurality of processors arranged in a pipeline: a processor, among the plurality of processors, for processing a first packet is configured to determine that the first packet of at least two packets cannot be processed in parallel and to acquire the program state of the first packet; a first processor among the plurality of processors is further configured to process the first packet according to the program state of the first packet.
  • the first processor is specifically configured to: acquire and process the program status of the first packet; and process the first packet according to the processed program status.
  • the processing chip further includes a scheduler 122, and the scheduler 122 is configured to acquire first program status information, where the first program status information includes the compressed program state of the first packet; the processor, among the plurality of processors, for processing the first packet is specifically configured to process the at least two packets in parallel according to the first program status information, and to determine that the first packet cannot be processed in parallel when the plurality of processors arranged in the pipeline cannot meet the processing requirements of the first packet.
  • the processing chip further includes a converter 123, and the converter 123 is configured to obtain the program state of the first packet based on the compressed program state of the first packet included in the first program status information; the first processor is further configured to obtain the program state of the first packet from the converter 123.
  • the processing chip further includes a compression identifier 126, and the compression identifier 126 is configured to obtain the control information of the first packet, and determine that the control information of the first packet satisfies the compression rule; the scheduler 122 is further configured to obtain the first program status information based on the program status of the first packet when the compression identifier 126 determines that the control information of the first packet satisfies the compression rule.
  • the compression identifier 126 is specifically configured to: determine the protocol stack type of the first packet based on the control information of the first packet; and determine that the control information of the first packet satisfies the compression rule when the compression rule includes the protocol stack type of the first packet.
  • the compression identifier 126 is specifically configured to: determine the port configuration information of the first packet according to the control information of the first packet; and determine that the control information of the first packet satisfies the compression rule when the compression rule includes the port configuration information of the first packet.
  • An embodiment of the present application further provides a processing chip. The processing chip includes a compression identifier 126, a scheduler 122, and a plurality of processors arranged in a pipeline; the compression identifier 126 is configured to obtain control information of a first packet of at least two packets and to determine, based on a compression rule and the control information of the first packet, that the control information of the first packet satisfies the compression rule; the scheduler 122 is configured to acquire first program status information when the compression identifier 126 determines that the control information of the first packet satisfies the compression rule, where the first program status information includes the compressed program status of the first packet; and the processor, among the plurality of processors, for processing the first packet is configured to perform parallel processing on the first packet of the at least two packets based on the first program status information.
  • the compression identifier 126 is further configured to acquire control information of a second packet of the at least two packets, and is further configured to determine, based on the compression rule and the control information of the second packet, that the control information of the second packet does not satisfy the compression rule; the scheduler 122 is further configured to determine the program state of the second packet when the control information of the second packet does not satisfy the compression rule; and the processor among the plurality of processors is configured to sequentially process the second packet of the at least two packets based on the program state of the second packet.
  • the compression identifier 126 is specifically configured to: determine the protocol stack type of the first packet based on the control information of the first packet; and determine that the control information of the first packet satisfies the compression rule when the compression rule includes the protocol stack type of the first packet.
  • the compression identifier 126 is specifically configured to: determine the port configuration information of the first packet according to the control information of the first packet; and determine that the control information of the first packet satisfies the compression rule when the compression rule includes the port configuration information of the first packet.
  • the compression identifier 126 is specifically configured to: determine the protocol stack type of the second packet based on the control information of the second packet; and obtain the program status information of the second packet when the compression rule does not include the protocol stack type of the second packet.
  • the compression identifier 126 is specifically configured to: determine the port configuration information of the second packet according to the control information of the second packet; and obtain the program status information of the second packet when the compression rule does not include the port configuration information of the second packet (a sketch of this whitelist-style rule check is given after this summary list).
  • the processing chip provided by the embodiment of the present application may further include a module for implementing functions corresponding to the steps performed by the processing chip in the method provided by the embodiment of the present application.
  • the function of the plurality of processors 125 arranged in a pipeline included in the processing chip shown in FIG. 2 or FIG. 7 may be implemented by a plurality of processing modules arranged in a pipeline.
  • the function of the converter 123 included in the processing chip shown in FIG. 2 or FIG. 7 may be implemented by a conversion module.
  • the function of the scheduler 122 included in the processing chip shown in FIG. 2 or FIG. 7 may be implemented by a scheduling module.
  • the function of the interface circuit 121 included in the processing chip shown in FIG. 2 or FIG. 7 may be implemented by a transceiver module.
  • the function of the compression identifier 126 included in the processing chip shown in FIG. 7 can be implemented by an identification module.
  • the chip in this embodiment of the present application may be a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processing circuit (DSP), a microcontroller unit (MCU), a programmable logic device (PLD), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or another integrated chip.
  • each step of the above-mentioned method can be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • the steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware. To avoid repetition, detailed description is omitted here.
  • the processor in this embodiment of the present application may be an integrated circuit chip, which has a signal processing capability.
  • each step of the above method embodiments may be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • the memory in this embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or a flash memory.
  • Volatile memory may be random access memory (RAM), which acts as an external cache.
  • by way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM); the memory of the systems and methods described herein is intended to include, but is not limited to, these and any other suitable types of memory.
  • the present application also provides a computer program product, the computer program product including computer program code, which, when run on a computer, causes the computer to execute the method of any one of the embodiments shown in FIG. 3 and FIG. 8 to FIG. 10.
  • the present application further provides a computer-readable medium, where program code is stored in the computer-readable medium, and when the program code is run on a computer, the computer is caused to execute the method of any one of the embodiments shown in FIG. 3 and FIG. 8 to FIG. 10.
  • the present application further provides a system, which includes the aforementioned one or more terminal devices and one or more network devices.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling, direct coupling, or communication connection may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the technical solution of the present application essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
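
The following is a minimal C sketch of the whitelist-style compression-rule check summarized in the bullets on the compression identifier 126 above. It is only an illustration under stated assumptions: the structure layout, enumerator names, and whitelist contents are invented for this sketch and are not taken from the patent. The idea shown is that the program states of a batch of packets may be compressed for parallel processing only if every packet's protocol stack type and port configuration appear on the corresponding whitelists; otherwise the packets are processed sequentially.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical view of a packet's control information (names are assumptions). */
    typedef struct {
        int proto_stack;   /* e.g. PROTO_UDP, PROTO_TCP, PROTO_MPLS, PROTO_SRV6, ... */
        int port_profile;  /* identifies how the ingress/egress port is configured   */
    } pkt_ctrl_info;

    enum { PROTO_UDP = 1, PROTO_TCP, PROTO_MPLS, PROTO_SRV6, PROTO_GRE };
    enum { PORT_PLAIN = 1, PORT_UNIFIED, PORT_ACL, PORT_URPF };

    /* Whitelists standing in for the "compression rule": simple protocol stacks and
       simple port configurations may be compressed; complex ones may not.          */
    static const int proto_whitelist[] = { PROTO_UDP, PROTO_TCP, PROTO_MPLS };
    static const int port_whitelist[]  = { PORT_PLAIN, PORT_UNIFIED };

    static bool in_list(int v, const int *list, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (list[i] == v)
                return true;
        return false;
    }

    /* Compression identifier: may the program states of all packets in the batch
       be compressed into one program-status word and processed in parallel?      */
    bool may_compress(const pkt_ctrl_info *pkts, size_t n_pkts)
    {
        for (size_t i = 0; i < n_pkts; i++) {
            if (!in_list(pkts[i].proto_stack, proto_whitelist,
                         sizeof proto_whitelist / sizeof proto_whitelist[0]))
                return false;   /* complex tunnel protocol: process sequentially */
            if (!in_list(pkts[i].port_profile, port_whitelist,
                         sizeof port_whitelist / sizeof port_whitelist[0]))
                return false;   /* port needs complex handling, e.g. ACL or URPF */
        }
        return true;
    }

A blacklist variant is symmetric: the check fails as soon as a protocol stack type such as SRv6 or GRE, or a port configuration requiring complex handling such as ACL or URPF checks, appears on the blacklist.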

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

This application provides a packet processing method and chip. In the method, when a processing chip finds that one packet among multiple packets cannot be processed in parallel, the chip loops that packet back and processes it again according to the packet's program state. In this way, the processing chip is able to process two or more packets in parallel, making better use of processor performance so that the chip can support complex services.

Description

处理报文的方法和芯片
本申请要求于2020年09月30日提交中国国家知识产权局、申请号为202011056273.X、申请名称为“转发架构、设备和方法”的中国专利申请的优先权,以及于2020年12月04日提交中国国家知识产权局、申请号为202011410154.X、申请名称为“处理报文的方法和芯片”的中国专利申请的优先权。上述中国专利申请的全部内容通过引用结合在本申请中。
技术领域
本申请涉及信息技术领域,更具体地,涉及处理报文的方法和芯片。
背景技术
当前的高速网络芯片中的处理器一般采用流水线(pipeline)方式设置。每个报文有一个对应的程序状态(program state,PS)来保存这个报文转发过程中的上下文信息。流水线上的处理器对报文进行处理,并将处理结果保存到PS中再送往下一个处理器。
随着技术的进步,处理器性能越来越强大。因此,如何更好地利用处理器的性能成为业界关注的问题。
发明内容
本申请提供一种处理报文的方法和芯片,能够同时接收并处理两个或两个以上报文,提高处理器的性能。
本申请实施例提供一种处理报文的方法,包括:处理芯片确定无法并行处理至少两个报文中的第一报文,获取该第一报文的程序状态;该处理芯片根据该第一报文的程序状态,处理该第一报文。
上述技术方案可以在发现第一报文无法被并行处理的情况下,将第一报文进行环回处理,重新根据第一报文的程序状态处理该第一报文。这样,可以使得处理芯片有能力并行处理两个或两个以上报文,更好地利用处理器性能,使得处理芯片可以支持复杂的业务。
结合第一方面,在一种可能的实现方式中,该处理芯片包括按照流水线排列的多个处理器,该处理芯片根据该第一报文的程序状态,处理该第一报文包括:该多个处理器中的第一个处理器获取该第一报文的程序状态并处理;该第一个处理器根据处理后的程序状态处理该第一报文。
结合第一方面,在一种可能的实现方式中,该处理芯片确定无法并行处理至少两个报文中的第一报文之前,该方法还包括:该处理芯片获取第一程序状态信息,该第一程序状态信息包括压缩后的该第一报文的程序状态;该处理芯片确定无法并行处理至少两个报文中的第一报文包括:该处理芯片根据该第一程序状态信息并行处理该至少两个报文,在该处理芯片包括的用于处理该第一报文的按流水线排列的处理器无法满足该第一报文的处 理需求时确定无法并行处理该第一报文。
结合第一方面,在一种可能的实现方式中,该处理芯片获取该第一报文的程序状态包括:该处理芯片基于该第一程序状态信息包括的压缩后的该第一报文的程序状态,获得该第一报文的程序状态。
结合第一方面,在一种可能的实现方式中,该处理芯片获取第一程序状态信息包括:该处理芯片获取该第一报文的控制信息;该处理芯片确定该第一报文的控制信息满足压缩规则,基于该第一报文的程序状态获得该第一程序状态信息。
结合第一方面,在一种可能的实现方式中,该处理芯片确定该第一报文的控制信息满足压缩规则包括:该处理芯片基于该第一报文的控制信息,确定该第一报文的协议栈类型;该处理芯片在该压缩规则包括该第一报文的协议栈类型,确定该第一报文的控制信息满足压缩规则。
压缩规则可以包括一些简单的协议,例如用户数据报协议、传输控制协议(transmission control protocol,TCP)、多协议标签交换(multi-protocol label switching,MPLS)等。如果报文的协议栈类型是复杂的协议,那么该报文无法被压缩并行处理。复杂的协议可以包括基于第六版互联网协议(Internet Protocol version 6,Ipv6)的段路由(segment routing over IPv6,SRv6)、通用路由封装(generic routing encapsulation,GRE)等。
结合第一方面,在一种可能的实现方式中,该处理芯片确定该第一报文的控制信息满足压缩规则包括:该处理芯片根据该第一报文的控制信息,确定该第一报文的端口配置信息;该处理芯片在该压缩规则包括该第一报文的端口配置信息,确定该第一报文的控制信息满足压缩规则。
端口配置信息可以包括如何处理携带有该端口信息的报文的方式。如果第一报文的端口配置信息是简单处理携带有该端口信息的报文的方式,例如不对端口进行处理或者将所有的出/入端口都统一为相同的端口号,那么可以确定第一报文的控制信息满足该压缩规则。如果第一报文的端口配置信息包括需要复杂处理报文的方式,例如利用访问控制列表(access control lists,ACL)、单播反向路由转发(unicast reverse path forwarding,URPF)、动态主机配置协议(dynamic host configuration protocol,DHCP)绑定检查等方法处理报文,那么可以确定第一报文的控制信息不满足压缩规则。
第二方面,本申请实施例提供一种处理报文的方法,包括:处理芯片获取至少两个报文中的第一报文的控制信息;该处理芯片基于压缩规则和该第一报文的控制信息,获取第一程序状态信息,该第一程序状态信息包括压缩后的该第一报文的程序状态;该处理芯片基于该第一程序状态信息,对该至少两个报文中的该第一报文进行并行处理。
通过上述技术方案,可以排除明显无法并行处理的报文,这样可以减少降低处理器流水线性能的情况发生。
结合第二方面,在第二方面的一种可能的实现方式中,该方法还包括:该处理芯片获取该至少两个报文中的第二报文的控制信息;该处理芯片基于该压缩规则和该第二报文的控制信息,获取该第二报文的程序状态;该处理芯片基于该第二报文的程序状态,对该至少两个报文中的该第二报文进行顺序处理。
结合第二方面,在第二方面的一种可能的实现方式中,该处理芯片基于压缩规则和该第一报文的控制信息,获取第一程序状态信息包括:该处理芯片基于该第一报文的控制信 息,确定该第一报文的协议栈类型;该处理芯片在该压缩规则包括该第一报文的协议栈类型,确定该第一报文的控制信息满足该压缩规则;该处理芯片基于该第一报文的程序状态,获取该第一程序状态信息。
结合第二方面,在第二方面的一种可能的实现方式中,该处理芯片基于压缩规则和该第一报文的控制信息,获取第一程序状态信息包括:该处理芯片根据该第一报文的控制信息,确定该第一报文的端口配置信息;该处理芯片在该压缩规则包括该第一报文的端口配置信息,确定该第一报文的控制信息满足该压缩规则;该处理芯片基于该第一报文的程序状态,获取该第一程序状态信息。
其中,压缩规则和端口配置的具体内容可参见第一方面的相应内容。
结合第二方面,在第二方面的一种可能的实现方式中,该处理芯片基于该压缩规则和该第二报文的控制信息,获取该第二报文的程序状态包括:该处理芯片基于该第二报文的控制信息,确定该第二报文的协议栈类型;该处理芯片在该压缩规则不包括该第二报文的协议栈类型,获取该第二报文的程序状态信息。
结合第二方面,在第二方面的一种可能的实现方式中,该处理芯片基于该压缩规则和该第二报文的控制信息,获取该第二报文的程序状态包括:该处理芯片根据该第二报文的控制信息,确定该第二报文的端口配置信息;该处理芯片在该压缩规则不包括该第二报文的端口配置信息,获取该第二报文的程序状态信息。
第三方面,本申请实施例提供一种处理芯片,该处理芯片包括用于实现第一方面或第一方面的任一种可能的实现方式的方法的部件。
例如,该处理芯片可以包括按照流水线排列的多个处理器。该处理芯片还可以包括调度器。该处理芯片还可以包括转换器。该处理芯片还可以包括压缩识别器。
第四方面,本申请实施例提供一种处理芯片,该处理芯片包括用于实现第二方面或第二方面的任一种可能的实现方式的方法的部件。
例如,该处理芯片可以包括按照流水线排列的多个处理器。该处理芯片还可以包括调度器。该处理芯片还可以包括转换器。该处理芯片还可以包括压缩识别器。
第五方面,本申请实施例提供一种网络设备,该网络设备可以包括第三方面的处理芯片。
第六方面,本申请实施例提供一种网络设备,该网络设备可以包括第四方面的处理芯片。
第七方面,本申请实施例提供一种计算机可读存储介质,该计算机可读存储介质存储有程序代码,当该计算机存储介质在计算机上运行时,使得计算机执行如第一方面或第一方面的任一种可能的实现方式。
第八方面,本申请实施例提供一种计算机可读存储介质,该计算机可读存储介质存储有程序代码,当该计算机存储介质在计算机上运行时,使得计算机执行如第二方面或第二方面的任一种可能的实现方式。
第九方面,本申请实施例提供一种网络设备,该网络设备可以包括用于实现第一方面或第一方面的任一种可能的实现方式包括的步骤对应功能的模块。
第十方面,本申请实施例提供一种网络设备,该网络设备可以包括用于实现第二方面或第二方面的任一种可能的实现方式包括的步骤对应功能的模块。
附图说明
图1是适用于本申请实施例的通信系统100的示意图。
图2是根据本申请实施例提供的一种网络设备的示意性结构框图。
图3是根据本申请实施例提供的处理报文的方法的示意性流程图。
图4是示出了环回单元的处理器流水线和转换器的示意图。
图5是流水线处理报文的示意图。
图6是另一流水线处理报文的示意图。
图7是根据本申请实施例提供的另一网络设备的示意性结构图。
图8是根据本申请实施例提供的处理报文的方法的示意性流程图。
图9是根据本申请实施例提供的一种处理报文的方法的示意性流程图。
图10是根据本申请实施例提供的另一种处理报文的方法的示意性流程图。
具体实施方式
下面将结合附图,对本申请中的技术方案进行描述。
本申请将围绕可包括多个设备、组件、模块等的系统来呈现各个方面、实施例或特征。应当理解和明白的是,各个系统可以包括另外的设备、组件、模块等,并且/或者可以并不包括结合附图讨论的所有设备、组件、模块等。此外,还可以使用这些方案的组合。
另外,在本申请实施例中,“示例的”、“例如”等词用于表示作例子、例证或说明。本申请中被描述为“示例”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用示例的一词旨在以具体方式呈现概念。
本申请实施例中,“相应的(corresponding,relevant)”和“对应的(corresponding)”有时可以混用,应当指出的是,在不强调其区别时,其所要表达的含义是一致的。
本申请实施例中,有时候下标如W 1可能会笔误为非下标的形式如W1,在不强调其区别时,其所要表达的含义是一致的。
本申请实施例描述的网络架构以及业务场景是为了更加清楚的说明本申请实施例的技术方案,并不构成对于本申请实施例提供的技术方案的限定,本领域普通技术人员可知,随着网络架构的演变和新业务场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达, 是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。
此外,本申请实施例中所称的报文可以指计算机网络通信中两个节点传输的任意传输单元,例如,可以是应用层中的报文(message),网络层中的数据包(packet),传输层中的数据段(segment)等。
图1是适用于本申请实施例的通信系统100的示意图。如图1所示,通信系统100包括至少一个通信设备,例如图1所示的通信设备101、通信设备102、通信设备103、通信设备104和通信设备105。通信系统100还包括网络设备110。网络设备110可以接收来自于通信设备101至通信设备105中的任一个或多个通信设备发送的报文,并将接收到的报文发送至通信系统100中的另外一个或多个通信设备。例如,通信设备101希望将报文M1发送至通信设备103和通信设备104,通信设备102希望将报文M2发送至通信设备105。那么通信设备101可以将该报文M1发送至网络设备110,通信设备102可以将报文M2发送至网络设备110,该网络设备110将接收到的报文M1转发至通信设备103和通信设备104,将接收到报文M2转发至通信设备105。通信设备101至通信设备105中的任一通信设备可以是计算机设备(例如移动电话、平板电脑、笔记本计算机、台式计算机等),也可以是网络设备(例如交换机、路由器等)。
图2是根据本申请实施例提供的一种网络设备的示意性结构框图。如图2所示,网络设备110包括输入输出电路111和处理芯片120。输入输出电路111可以是网络设备110与外部进行通信的接口或接口电路。处理芯片120包括接口电路121、调度器122、转换器123和按照流水线排列的多个处理器125。为了便于描述,可以将按照流水线排列的多个处理器125统称为处理器流水线124。如果如图1所示的通信设备101至通信设备105也是网络设备,那么通信设备101至通信设备105的结构也可以是如图2所示的结构。
下面结合如图1所示的通信系统11和如图2所示的网络设备110对本申请实施例提供的处理报文的方法进行介绍。
图3是根据本申请实施例提供的处理报文的方法的示意性流程图。
301,获取来自于通信设备101的报文M1和来自于通信设备102的报文M2。
举例说明,网络设备110中的输入输出电路111可以接收来自于通信设备101的报文M1和来自于通信设备102的报文M2。输入输出电路111将接收到的报文M1和报文M2发送至处理芯片120。相应的,处理芯片120接收报文M1和报文M2。处理芯片120可以通过接口电路121接收来自于输入输出电路111的报文M1和报文M2。
302,获取程序状态信息1,程序状态信息1包括报文M1的程序状态和报文M2的程序状态。
本申请实施例中,报文M1的程序状态可简称为PS_M1,报文M2的程序状态可简称为PS_M2。PS_M1和PS_M2可以由处理芯片120中的调度器122确定。调度器122还可以根据PS_M1和PS_M2确定程序状态信息1。程序状态信息1可以包括压缩后的PS_M1和压缩后的PS_M2。压缩PS可以是将PS中的无效位(例如连续的多个0)删除。
举例说明,程序状态信息1可以包括头字段和负载字段。程序状态信息1的头字段可以分为两部分,比如第一头子字段和第二头子字段。第一头子字段中可以包括压缩后的 PS_M1的头字段。第二头子字段可以包括压缩后的PS_M2的头字段。对PS_M1的头字段中最后几位连续的0进行清零操作,可以得到压缩后的PS_M1的头字段。对PS_M2的头字段中最后几位连续的0进行清零操作,可以得到压缩后的PS_M2的头字段。程序状态信息1的负载字段可以分为两部分,比如第一负载子字段和第二负载子字段。第一负载子字段中包括压缩后的PS_M1的负载字段。第二负载子字段包括压缩后的PS_M2的子字段。对PS_M1的负载字段中最后几位连续的0进行清零操作,可以得到压缩后的PS_M1的负载字段。对PS_M2的负载字段中最后几位连续的0进行清零操作,可以得到压缩后的PS_M2的负载字段。
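The packing step described in the preceding paragraph can be illustrated with a minimal C sketch. The layout below is an assumption made for the sketch only: the program state is modeled as fixed arrays of 32-bit words split into a header part and a payload part, "compression" simply drops trailing all-zero words, and the field widths and structure names are illustrative rather than taken from the patent.

    #include <stdint.h>
    #include <string.h>

    #define PS_HDR_WORDS 8    /* assumed header width of one program state  */
    #define PS_PLD_WORDS 24   /* assumed payload width of one program state */

    typedef struct {
        uint32_t hdr[PS_HDR_WORDS];
        uint32_t pld[PS_PLD_WORDS];
    } program_state;

    /* Combined "program status information 1": first/second header sub-fields and
       first/second payload sub-fields, each holding one compressed program state. */
    typedef struct {
        uint32_t hdr_sub[2][PS_HDR_WORDS];
        uint32_t pld_sub[2][PS_PLD_WORDS];
        uint8_t  hdr_len[2];   /* number of valid header words per packet  */
        uint8_t  pld_len[2];   /* number of valid payload words per packet */
    } ps_info;

    /* "Compress" one field: keep only the words up to the last non-zero word. */
    static uint8_t trim(const uint32_t *src, uint8_t n, uint32_t *dst)
    {
        uint8_t len = n;
        while (len > 0 && src[len - 1] == 0)
            len--;
        memcpy(dst, src, (size_t)len * sizeof(uint32_t));
        return len;
    }

    /* Build program status information 1 from PS_M1 and PS_M2. */
    void pack_ps_info(const program_state *ps_m1, const program_state *ps_m2,
                      ps_info *out)
    {
        memset(out, 0, sizeof *out);
        out->hdr_len[0] = trim(ps_m1->hdr, PS_HDR_WORDS, out->hdr_sub[0]);
        out->hdr_len[1] = trim(ps_m2->hdr, PS_HDR_WORDS, out->hdr_sub[1]);
        out->pld_len[0] = trim(ps_m1->pld, PS_PLD_WORDS, out->pld_sub[0]);
        out->pld_len[1] = trim(ps_m2->pld, PS_PLD_WORDS, out->pld_sub[1]);
    }

The same routine generalizes to more than two packets by widening the sub-field arrays.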
303,根据程序状态信息1并行处理报文M1和报文M2。
举例说明,对报文M1和报文M2的处理可以是常规的报文转发过程中对报文的处理方式。例如,可以是根据报文中的目的端口号或者目的互联网协议(Internet Protocol,IP)地址等信息确定报文的出端口。又如,可以是根据预设的规则对报文进行修改,例如修改报文的源IP地址,在报文头添加多协议标签交换(multi-protocol label switching,MPLS)标签等。对报文的处理可以由流水线处理器中的处理器实现。调度器122将程序状态信息1发送至处理器流水线124,处理器流水线124中的处理器负责对报文进行处理。
304,在根据程序状态信息1并行处理报文M1和报文M2的过程中,确定报文M1和报文M2中的至少一个无法继续被并行处理。
举例说明,处理器流水线124中的处理器在并行处理报文M1和报文M2的过程中确定无法继续处理报文M1,但是可以处理报文M2,则继续处理报文M2。处理器流水线124中的处理器在并行处理报文M1和报文M2的过程中确定无法继续处理报文M2,但是可以处理报文M1,则继续处理报文M1。处理器流水线124中的处理器在并行处理报文M1和报文M2的过程中确定无法继续处理报文M1和M2,则不再继续处理报文M1和M2。处理器流水线124中的处理器在并行处理报文M1和报文M2的过程中确定无法继续并行处理报文M1和报文M2,但是可以满足其中任一个报文的处理器需求。处理器流水线124中的处理器可以确定报文M1和报文M2中的任一个为无法继续处理的报文(例如报文M1),并继续处理另一个报文(例如报文M2)。其中,无法继续处理报文包括:处理器流水线124中分配给报文的流水线长度不能满足该报文的处理需求,或者处理器流水线124中的处理器的性能不能满足报文处理需求。
本申请实施例以无法并行处理报文M1为例进行说明。如果报文M2也无法继续进行并行处理,报文M2的处理方式可刹那间处理器流水线124=对报文M1的处理方式,本申请实施例不再赘述。
305,向转换器123发送转换指示信息,该转换指示信息用于指示报文M1无法继续处理。
举例说明,处理器流水线124中可以包括多个环回单元,可以通过环回单元将该转换指示信息发送给转换器123。为了便于描述,可以将确定无法继续处理报文M1的处理器称为目标处理器。该环回单元可以是位于目标处理器下游的至少一个环回单元中的一个。在一些实施例中,该环回单元可以是位于目标处理器下游的至少一个环回单元中距离该目标处理器最近的一个环回单元。
图2中的处理器流水线124中并未示出环回单元。图4是示出了环回单元的处理器流 水线和转换器的示意图。如图4所示的处理器流水线410中包括处理器411、处理器412、处理器413、处理器414、处理器415、处理器416、处理器417和处理器418。处理器411通过总线12与处理器412相连;处理器412通过总线23与处理器413相连;处理器413通过总线34与处理器414相连;处理器414通过总线45与处理器415相连;处理器415通过总线56与处理器416相连;处理器416通过总线67与处理器417相连;处理器417通过总线78与处理器418相连。此外,处理器411、处理器413、处理器416和处理器418也分别通过总线与转换器420相连。用于连接处理器413、处理器416、处理器418和转换器420的总线可以认为是环回单元。例如,处理器412确定无法继续并行处理报文M1和报文M2。在此情况下,可以通过最近的环回单元(即总线30)将该转换指示信息发送至转换器420。又如,如果是处理器414确定无法继续并行处理报文M1和报文M2,那么可以通过最近的环回单元(即总线60)将该转换指示信息发送至转换器420。又如,如果是处理器418确定无法继续并行处理报文M1和报文M2,那么可以通过最近的环回单元(即总线80)将该转换指示信息发送至转换器420。
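The choice of loopback unit in FIG. 4 can be reduced to a small lookup: given the index of the pipeline stage that found it cannot continue, take the nearest loopback point at or after that stage. The C sketch below only illustrates that selection; the stage indices (3, 6 and 8, corresponding to processors 413, 416 and 418) follow FIG. 4, and everything else is an assumption.

    /* Pipeline stages wired back to the converter in FIG. 4
       (processors 413, 416 and 418, i.e. stages 3, 6 and 8). */
    static const int loopback_stage[] = { 3, 6, 8 };

    /* Return the nearest loopback stage at or after failed_stage,
       or -1 if the failing stage lies beyond the last loopback point. */
    int pick_loopback(int failed_stage)
    {
        for (int i = 0; i < (int)(sizeof loopback_stage / sizeof loopback_stage[0]); i++)
            if (loopback_stage[i] >= failed_stage)
                return loopback_stage[i];
        return -1;
    }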
306,确定程序状态信息2,该程序状态信息2中可以包括PS_M1。
在一些实施例中,该转换指示信息可以包括压缩的PS,即程序状态信息1。在此情况下,转换器123可以删除程序状态信息1中的PS_M1,得到程序状态信息2。例如,转换器123可以将程序状态信息1中的第二头子字段和第二负载子字段清零,得到程序状态信息2。如果无法继续处理的报文是报文M2,那么转换器123可以先将程序状态信息1中的第一头子字段和第一负载子字段清零,然后将第二头子字段中的PS_M2的头字段的信息复制到第一头子字段中,将第二负载子字段中的PS_M2的负载字段的信息复制到第一负载子字段中,最后将第二头子字段和第二负载子字段清零。
在另一些实施例中,该转换指示信息可以包括压缩后PS_M1。在此情况下,转换器123可以直接将压缩后的PS_M1恢复为完整的PS_M1。例如,对PS_M1头和负载字段为0的值进行清零操作得到压缩后的PS_M1。转换器123在得到压缩后的PS_M1后,将这些字段中的0补齐,得到完整的PS_M1。
在另一些实施例中,对PS_M1进行压缩得到程序状态信息1之前,可以将PS_M1保存在处理芯片120中的存储单元(未示出)并未PS_M1分配一个标识符。若报文1无法被继续处理,则向转换器123发送的转换指示信息中携带PS_M1的标识符。这样,转换器123根据该PS_M1的标识符获取保存在存储单元中的PS_M1,根据获取的PS_M1确定程序状态信息2。可选的,保存在存储单元的PS_M1可以是完整的PS_M1,也可以是压缩后的PS_M1,也可以是程序状态信息1。
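For the second alternative above, in which the converter restores a complete program state directly from its compressed form, a matching C sketch is given below. It reuses the illustrative program_state and ps_info definitions from the previous sketch (again an assumed layout, not the patented one): the converter copies back the valid words and zero-fills whatever was trimmed, and the restored program state is what step 307 then feeds to the first processor of the pipeline.

    /* Converter side: rebuild the complete program state of packet idx (0 or 1)
       from its compressed sub-fields, zero-filling the words that were trimmed. */
    void restore_ps(const ps_info *info, int idx, program_state *out)
    {
        memset(out, 0, sizeof *out);
        memcpy(out->hdr, info->hdr_sub[idx],
               (size_t)info->hdr_len[idx] * sizeof(uint32_t));
        memcpy(out->pld, info->pld_sub[idx],
               (size_t)info->pld_len[idx] * sizeof(uint32_t));
    }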
307,根据程序状态信息2处理报文M1。
举例说明,图4中的转换器420在确定了程序状态信息2之后,可以通过总线10将程序状态信息2发送至处理器流水线中的第一个处理器(即处理器411)。处理器流水线中的处理器按照顺序对程序状态信息2进行处理。例如,处理器411先处理程序状态信息2,将处理后的程序状态信息2发送至处理器412;处理器412继续处理由处理器411处理后的程序状态信息2,将处理后的程序状态信息2发送至处理器413,以此类推。
根据程序状态信息M2处理报文M1的具体实现方式和现有的根据程序状态处理一个报文的处理方式相同,为了简洁,在此就不再赘述。
如果处理芯片可以并行处理两个报文,那么该处理芯片可以直接并行处理这两个报文。图3所示的方法提出了处理芯片无法并行处理两个报文中的一个或多个的情况下的解决方案。这样,可以使得处理芯片有能力同时接收并处理两个报文,更好地利用处理器性能,使得处理芯片可以支持复杂的业务。
图5是流水线处理报文的示意图。如图5所示的流水线中包括六个处理器,分别称为处理器501、处理器502、处理器503、处理器504、处理器505和处理器506。如图5所示,包含报文M1的PS和报文M2的PS的程序状态信息进入该流水线,该流水线中的一半处理器负责处理报文M1的PS,另一半处理器负责处理报文M2的PS。例如,处理器501、处理器503和处理器505负责处理报文M1的PS;处理器502、处理器504和处理器506负责处理报文M2的PS。
图6是另一流水线处理报文的示意图。如图6所示的流水线中包括六个处理器,分别称为处理器601、处理器602、处理器603、处理器604、处理器605和处理器606。如图6所示,包含报文M1的PS和报文M2的PS的程序状态信息进入该流水线,该流水线中的全部处理器都可以用于处理报文M1的PS和报文M2的PS。例如,处理单元601、处理单元602、处理单元603、处理单元604、处理单元605和处理单元606中的一半资源用于处理报文M1的PS,另一半资源用于处理报文M2的PS。
如果处理器流水线在一个时钟周期内接收到报文M1和报文M2,那么该处理器流水线处理报文可以有以下三种情况:
情况1,报文M1和报文M2可以并行处理,处理器流水线可以在一个时钟周期处理这两个报文,并在一次流水线处理中发送出去。对于上述情况1,处理器流水线在一个时钟周期内处理了两个报文。这样,可以实现倍速转发报文。
情况2,报文M1和报文M2中的一个报文无法并行处理(例如报文M1),那么一次流水线处理可以先将处理后的报文M2发出,报文M1需要进行环回操作,即转换器转换报文M1的PS后再由处理器流水线处理一遍报文M1,然后将处理后的报文M1发送出去。对于上述情况2,虽然两个报文在一个时钟周期内输入到处理器流水线,但是处理器流水线还是用了两个时钟周期处理这两个报文。这样处理器流水线的整体性能是一倍包速率。
情况3,报文M1和报文M2都无法并行处理。在此情况下,报文M1和报文M2都需要进行环回操作。报文M1的PS转换后仍按照原线程执行,报文M1经过一遍处理器流水线处理后发送出去。同时,激活一个新线程运行报文M2的PS,报文M2经过一遍处理器流水线处理后发送出去。对于上述情况3,虽然两个报文子一个时钟周期内输入到处理器流水线,但是处理器流水线在三个时钟周期内才完成了两个报文的处理。换句话说,需要三个时钟周期处理两个报文,处理器流水线的性能变为三分之二包速率。在此情况下,处理器流水线的性能比分别处理两个报文时的性能还低。
在一些实施例中,可以对任何同时输入的报文都进行压缩处理,然后基于压缩后得到的程序状态信息对两个报文进行并行处理。在并行处理的过程中如果确定其中一个报文无法继续并行处理,则按照图3所示的方法处理这个无法并行处理的报文。但是如上所述,在这种情况下,处理器流水线性能变为三分之二包速率。
为了减少处理器流水线性能降低的情况发生,可以在网络设备中设置一个压缩识别器,该压缩识别器可以用于判断能否对两个报文进行压缩处理;如果可以进行压缩处理, 则根据两个报文的PS确定包含有这两个报文的PS的程序状态信息(也可以称为压缩PS),并将该程序状态信息发送至处理芯片。处理芯片可以根据该程序状态信息并行处理这两个报文。如果处理芯片在并行处理这两个报文的过程中确定其中一个或多个报文没有办法继续并行处理,那么再利用图3所示的方法对无法并行处理的报文进行环回操作。可见,通过上述技术方案,可以排除明显无法并行处理的报文,这样可以减少降低处理器流水线性能的情况发生。
图7是根据本申请实施例提供的另一网络设备的示意性结构图。如图7所示的网络设备110在如图2所示的网络设备110的基础上增加了压缩识别器126。下面结合如图1所示的通信系统11和如图7所示的网络设备110对本申请实施例提供的处理报文的方法进行介绍。
图8是根据本申请实施例提供的处理报文的方法的示意性流程图。
801,获取来自于通信设备101的报文M1和来自于通信设备102的报文M2。
举例说明,网络设备110中的输入输出电路111可以接收来自于通信设备101的报文M1和来自于通信设备102的报文M2。输入输出电路111将接收到的报文M1和报文M2发送至处理芯片120。相应的,处理芯片120接收报文M1和报文M2。处理芯片120可以通过接口电路121接收来自于输入输出电路111的报文M1和报文M2。
802,根据报文M1的控制信息和报文M2的控制信息,确定是否可以将PS_M1(即报文M1的程序状态)和PS_M2(即报文M2的程序状态)进行压缩。
举例说明,压缩识别器126可以从接口电路121获取报文M1和报文M2,从报文M1和报文M2的报文头中提取报文M1和报文M2的协议栈类型,然后根据报文M1和报文M2的协议栈类型确定是否可以对PS_M1和PS_M2进行压缩。
在一些实施例中,压缩识别器126可以保存一个压缩规则,该压缩规则可以包括一个协议栈类型白名单,该协议类型白名单中包括多个协议类型。该协议类型白名单中包括的多个协议栈类型是能够被压缩的报文的协议栈类型。如果报文M1的协议栈类型和报文M2的协议栈类型都在该协议类型白名单内,则确定可以对PS_M1和PS_M2进行压缩,得到程序状态信息1;如果报文M1的协议栈类型或报文M2的协议栈类型不在该协议类型白名单内,那么则确定无法对PS_M1和PS_M2进行压缩。在此情况下,可以分别处理报文M1和报文M2。
又如,在一些实施例中,压缩识别器126保存的压缩规则可以包括一个协议栈类型黑名单,该协议类型黑名单中包括多个协议类型。该协议类型黑名单中包括的多个协议栈类型是不能够被压缩的报文的协议栈类型。如果报文M1的协议栈类型和报文M2的协议栈类型都不在该协议类型黑名单内,则确定可以对PS_M1和PS_M2进行压缩,得到程序状态信息1;如果报文M1的协议栈类型或报文M2的协议栈类型在该协议类型黑名单内,那么则确定无法对PS_M1和PS_M2进行压缩。在此情况下,可以分别处理报文M1和报文M2。
又如,在另一些实施例中,压缩识别器126可以同时保存协议类型白名单和协议类型黑名单,该协议类型白名单中包括的多个协议栈类型是能够被压缩的报文的协议栈类型,该协议类型黑名单中包括的多个协议栈类型是不能够被压缩的报文的协议栈类型。如果报文M1的协议栈类型和报文M2的协议栈类型都在该协议类型白名单内,则确定可以对 PS_M1和PS_M2进行压缩,得到程序状态信息1;如果报文M1的协议栈类型或报文M2的协议栈类型在该协议类型黑名单内,那么则确定无法对PS_M1和PS_M2进行压缩。在此情况下,可以分别处理报文M1和报文M2;如果报文M1的协议栈类型或报文M2的协议栈类型既不在该协议类型黑名单内,也不在该协议类型白名单内,那么也可以确定无法对PS_M1和PS_M2进行压缩。在此情况下,可以分别处理报文M1和报文M2。
在一些实施例中,该协议类型白名单中包括的协议栈类型可以包括:用户数据报协议(user datagram protocol,UDP)、传输控制协议(transmission control protocol,TCP)、多协议标签交换(multi-protocol label switching,MPLS)等。协议类型黑名单中包括的协议栈类型一般是由复杂的隧道的协议栈类型,例如,可以包括SRv6、通用路由封装(generic routing encapsulation,GRE)等。
除了利用协议类型判断是否可以压缩两个报文的PS之外,压缩识别器126还可以根据报文的控制信息中的其他内容来确定是否压缩报文的PS。例如,可以根据报文的端口信息来确定是否压缩报文的PS。报文的端口信息可以是报文的入端口号和/或出端口号。
在一些实施例中,压缩识别器126还可以保存端口配置信息白名单。压缩识别器126可以根据报文M1的端口信息确定报文M1的端口配置信息(以下简称端口配置信息M1),根据报文M2的端口信息确定报文M2的端口配置信息(以下简称端口配置信息M2);确定端口配置信息白名单中是否包括端口配置信息M1和端口配置信息M2;如果端口配置信息白名单中包括端口配置信息M1和端口配置信息M2,那么确定可以对PS_M1和PS_M2进行压缩;如果端口配置信息白名单中不包括端口配置信息M1或端口配置信息M2,则确定不可以对PS_M1和PS_M2进行压缩。
在另一些实施例中,压缩识别器126可以保存端口配置信息黑名单。压缩识别器126可以在确定了端口配置信息M1和端口配置信息M2后,可以确定该端口配置信息黑名单中是否包括端口配置信息M1和端口配置信息M2;如果端口配置信息黑名单中包括端口配置信息M1和/或端口配置信息M2,那么确定不可以对PS_M1和PS_M2进行压缩;如果端口配置信息黑名单中不包括端口配置信息M1和端口配置信息M2,那么确定可以对PS_M1和PS_M2进行压缩。
在另一些实施例中,压缩识别器126可以同时保存端口配置信息黑名单和端口配置信息白名单。如果端口配置信息M1和端口配置信息M2在该端口配置信息白名单中,那么确定可以对PS_M1和PS_M2进行压缩;如果端口配置信息M1和端口配置信息M2中的至少一个在端口配置信息黑名单中,那么确定不可以对PS_M1和PS_M2进行压缩。
端口配置信息可以包括如何处理携带有该端口信息的报文的方式。例如,端口配置信息白名单中的端口配置信息可以包括简单处理携带有该端口信息的报文的方式,例如不对端口进行处理或者将所有的出/入端口都统一为相同的端口号。端口配置信息黑名单中的端口配置信息可以包括需要复杂处理报文的方式,例如利用访问控制列表(access control lists,ACL)、URPF、动态主机配置协议(dynamic host configuration protocol,DHCP)绑定检查等方法处理报文。
如果步骤802的确定结果为否(即不能对PS_M1和PS_M2进行压缩),那么可以分别处理报文M1和报文M2。换句话说,如果步骤802的确定结果为否,那么可以按照现有的一个时钟周期处理一个报文的方式分别处理报文M1和报文M2。例如,调度器122 确定PS_M1,将PS_M1发送至处理器流水线124。处理器流水线124根据PS_M1处理报文M1;调度器122确定PS_M2,将PS_M2发送至处理器流水线124。处理器流水线124根据PS_M2处理报文M2。
如果步骤802的确定结果为是(即可以对PS_M1和PS_M2进行压缩处理),则可以执行步骤803和步骤804。
803,对PS_M1和PS_M2进行压缩处理,得到程序程序状态信息1。
举例说明,在步骤802的确定结果为是的情况下,调度器122可以确定PS_M1和PS_M2,并PS_M1和PS_M2进行压缩处理,得到程序程序状态信息1,该程序状态信息1包括压缩后的PS_M1和压缩后的PS_M2。程序状态信息1的具体内容和结构可以参考图3所示实施例中的描述,为了简洁,在此就不再赘述。
804,根据程序状态信息1并行处理报文M1和报文M2。
举例说明,处理器流水线124从调度器122获取程序状态信息1,并根据程序状态信息1并行处理报文M1和报文M2。如果处理器流水线124在根据程序状态1并行处理报文M1和报文M2的过程中,确定报文M1和报文M2中的至少一个无法继续被并行处理,那么可以参考图3所示的方法处理报文M1和报文M2。
举例说明,如果处理器流水线124可以并行处理报文M1和报文M2,那么处理器流水线124就并行处理报文M1和报文M2,并将处理后的报文M1和报文M2通过接口电路122发送至下一节点。
举例说明,在图3和图8所示的实施例中处理芯片只并行处理了两个报文。在另一些实施例中,处理芯片也可以获取更多的报文(例如三个或三个以上)。该处理芯片也可以判断是否能够对这三个报文的PS进行压缩;如果可以,则对PS进行压缩后,根据压缩后的PS并行处理这三个报文;如果不可以,则按照现有的方式分别处理这三个报文。此外,如果处理芯片在并行处理这三个报文的过程中发现其中一个或多个报文无法并行处理,那么也可以采用图3所示的实施例,将无法并行处理的报文的PS恢复到未压缩状态,然后按照现有处理报文的方式处理无法并行处理的报文。
图9是根据本申请实施例提供的一种处理报文的方法的示意性流程图。
901,处理芯片确定无法并行处理至少两个报文中的第一报文,获取该第一报文的程序状态。
902,处理芯片根据该第一报文的程序状态,处理该第一报文。
以图3所示的实施例为例,如果该至少两个报文只包括两个报文,那么该第一报文可以是图3所示的实施例中的报文M1,第二报文可以是图3所示实施例中的报文M2。
在一些实施例中,该处理芯片包括按照流水线排列的多个处理器,该处理芯片根据该第一报文的程序状态,处理该第一报文包括:该多个处理器中的第一个处理器获取该第一报文的程序状态并处理;该第一个处理器根据处理后的程序状态处理该第一报文。
举例说明,按照流水线排列的多个处理器可以统称为处理器流水线。以图4为例,处理器411至处理器418按流水线排列,这8个处理器可以通称为处理器流水线。处理器411是八个处理器中的第一个处理器。第一报文的程序状态可以通过该处理器流水线处理,然后根据处理后的程序状态对第一报文进行处理。例如,处理器411处理第一报文的程序状态;处理器411根据处理后的程序状态处理第一报文。例如处理器411可以将处理后的 程序状态发送给处理器412,处理器412继续对该程序状态进行处理,然后将后的程序状态发送至处理器413,以此类推,最后处理器418在完成程序状态处理后,根据处理后的程序状态,对第一报文进行处理。
在一些实施例中,该处理芯片确定无法并行处理至少两个报文中的第一报文之前,该方法还包括:该处理芯片获取第一程序状态信息,该第一程序状态信息包括压缩后的该第一报文的程序状态;该处理芯片确定无法并行处理至少两个报文中的第一报文包括:该处理芯片根据该第一程序状态信息并行处理该至少两个报文,在该处理芯片包括的用于处理该第一报文的按流水线排列的处理器无法满足该第一报文的处理需求时确定无法并行处理该第一报文。
以图5为例,假设第一报文是图5中的报文M1。流水线中共有三个处理器能够用于处理报文M1。如果处理芯片确定用于处理器报文M1的流水线长度不够(换句话说,三个处理器(即处理器501、处理器503和处理器505)无法处理报文M1的PS),那么可以确定无法并行处理报文M1。
以图6为例,假设第一报文是图6中的报文M1。虽然图6所示的流水线中有六个处理器都可以用于处理报文M1的PS,但是每个处理器只有一半的处理资源能用于处理报文M1的PS。在此情况下,也可能出现处理器流水线无法并行处理报文M1的情况。在此清下,可以确定无法并行处理报文M1。
在一些实施例中,该处理芯片获取该第一报文的程序状态包括:该处理芯片基于该第一程序状态信息包括的压缩后的该第一报文的程序状态,获得该第一报文的程序状态。该第一程序状态信息可以是如图3所示实施例中的程序状态信息1。该第一程序状态信息的确定方式可以参考图3所示实施例,为了简洁,在此就不再赘述。
在一些实施例中,该处理芯片获取第一程序状态信息包括:该处理芯片获取该第一报文的控制信息;该处理芯片确定该第一报文的控制信息满足压缩规则,基于该第一报文的程序状态获得该第一程序状态信息。
以图8所示的实施例为例,如果该至少两个报文只包括两个报文,那么该第一报文可以是图8所示的实施例中的报文M1。
在一些实施例中,该处理芯片确定该第一报文的控制信息满足压缩规则包括:该处理芯片基于该第一报文的控制信息,确定该第一报文的协议栈类型;该处理芯片在该压缩规则包括该第一报文的协议栈类型,确定该第一报文的控制信息满足压缩规则。
换句话说,该压缩规则可以包括一个协议栈类型白名单,如果第一报文的协议栈类型在该白名单内,则可以确定第一报文的控制信息满足压缩规则。如果至少两个报文都满足压缩规则,则对至少两个报文的程序状态进行压缩,得到第一程序状态信息。
在一些实施例中,该处理芯片确定该第一报文的控制信息满足压缩规则包括:该处理芯片根据该第一报文的控制信息,确定该第一报文的端口配置信息;该处理芯片在该压缩规则包括该第一报文的端口配置信息,确定该第一报文的控制信息满足压缩规则。
换句话说,该压缩规则可以包括一个端口配置信息白名单,如果第一报文的端口配置信息在该白名单内,则可以确定第一报文的控制信息满足压缩规则。如果至少两个报文都满足压缩规则,则对至少两个报文的程序状态进行压缩,得到第一程序状态信息。
图10是根据本申请实施例提供的另一种处理报文的方法的示意性流程图。
1001,处理芯片获取至少两个报文中的第一报文的控制信息。
1002,该处理芯片基于压缩规则和该第一报文的控制信息,获取第一程序状态信息,该第一程序状态信息包括压缩后的该第一报文的程序状态。
1003,该处理芯片基于该第一程序状态信息,对该至少两个报文中的该第一报文进行并行处理。
以图8所示的实施例为例,如果该至少两个报文只包括两个报文,那么该第一报文可以是图8所示的实施例中的报文M1。
在一些实施例中,该方法还包括:该处理芯片获取该至少两个报文中的第二报文的控制信息;该处理芯片基于该压缩规则和该第二报文的控制信息,获取该第二报文的程序状态;该处理芯片基于该第二报文的程序状态,对该至少两个报文中的该第二报文进行顺序处理。例如,该处理芯片可以先根据第一报文的程序状态信息,对第一报文进行处理;然后根据第二报文的程序状态信息,对第二报文进行处理。
在一些实施例中,该处理芯片基于压缩规则和该第一报文的控制信息,获取第一程序状态信息包括:该处理芯片基于该第一报文的控制信息,确定该第一报文的协议栈类型;该处理芯片在该压缩规则包括该第一报文的协议栈类型,确定该第一报文的控制信息满足压缩规则;该处理芯片基于该第一报文的程序状态,获取该第一程序状态信息。
在一些实施例中,该处理芯片基于压缩规则和该第一报文的控制信息,获取第一程序状态信息包括:该处理芯片根据该第一报文的控制信息,确定该第一报文的端口配置信息;该处理芯片在该压缩规则包括该第一报文的端口配置信息,确定该第一报文的控制信息满足压缩规则;该处理芯片基于该第一报文的程序状态,获取该第一程序状态信息。
在一些实施例中,该处理芯片基于该压缩规则和该第二报文的控制信息,获取该第二报文的程序状态包括:该处理芯片基于该第二报文的控制信息,确定该第二报文的协议栈类型;该处理芯片在该压缩规则不包括该第二报文的协议栈类型,获取该第二报文的程序状态信息。
在一些实施例中,该处理芯片基于该压缩规则和该第二报文的控制信息,获取该第二报文的程序状态包括:该处理芯片根据该第二报文的控制信息,确定该第二报文的端口配置信息;该处理芯片在该压缩规则不包括该第二报文的端口配置信息,获取该第二报文的程序状态信息。
本申请实施例还提供一种处理芯片,该处理芯片包括按照流水线排列的多个处理器:该多个处理器中的用于处理第一报文的处理器,用于确定无法并行处理至少两个报文中的该第一报文,获取该第一报文的程序状态;该多个处理器中的第一个处理器,还用于根据该第一报文的程序状态,处理该第一报文。
在一些实施例中,该第一个处理器,具体用于:获取该第一报文的程序状态并处理;根据处理后的程序状态处理该第一报文。
在一些实施例中,该处理芯片还包括调度器122,该调度器122用于获取第一程序状态信息,该第一程序状态信息包括压缩后的该第一报文的程序状态;该该多个处理器中的用于处理第一报文的处理器,具体用于根据该第一程序状态信息并行处理该至少两个报文,在该多个流水线排列的多个处理器中不包括用于处理该第一报文处理器时确定无法并行处理该第一报文。
在一些实施例中,该处理芯片,还包括转换器123,该转换器123用于基于该第一程序状态信息包括的压缩后的该第一报文的程序状态,获得该第一报文的程序状态;该第一个处理器还用于从该转换器123获取该第一报文的程序状态。
在一些实施例中,该处理芯片,还包括压缩识别器126,该压缩识别器126用于获取该第一报文的控制信息,确定该第一报文的控制信息满足压缩规则;该调度器122,还用于在该压缩识别器126确定该第一报文的控制信息满足压缩规则的情况下,基于该第一报文的程序状态获得该第一程序状态信息。
在一些实施例中,该压缩识别器126具体用于:基于该第一报文的控制信息,确定该第一报文的协议栈类型;在该压缩规则包括该第一报文的协议栈类型,确定该第一报文的控制信息满足压缩规则。
在一些实施例中,该压缩识别器126具体用于根:据该第一报文的控制信息,确定该第一报文的端口配置信息;在该压缩规则包括该第一报文的端口配置信息,确定该第一报文的控制信息满足压缩规则。
本申请实施例还提供一种处理芯片,该处理芯片包括压缩识别器126、调度器122和按照流水线排列的多个处理器;该压缩识别器126,用于获取至少两个报文中的第一报文的控制信息,基于压缩规则和该第一报文的控制信息,确定该第一报文的控制信息满足该压缩规则;该调度器122,用于在该压缩识别器126确定该第一报文的控制信息满足该压缩规则的情况下,获取第一程序状态信息,该第一程序状态信息包括压缩后的该第一报文的程序状态;该多个处理器中用于处理所述第一报文的处理器,用于基于该第一程序状态信息,对该至少两个报文中的该第一报文进行并行处理。
在一些实施例中,该压缩识别器126还用于获取该至少两个报文中的第二报文的控制信息,还用于基于该压缩规则和该第二报文的控制信息,确定该第二报文的控制信息不满足该压缩规则;该调度器122,还用于在该第二报文的控制信息不满足该压缩规则的情况下,确定该第二报文的程序状态;该多个处理器中的处理器用于基于该第二报文的程序状态,对该至少两个报文中的该第二报文进行顺序处理。
在一些实施例中,该压缩识别器126具体用于:基于该第一报文的控制信息,确定该第一报文的协议栈类型;在该压缩规则包括该第一报文的协议栈类型,确定该第一报文的控制信息满足压缩规则。
在一些实施例中,该压缩识别器126具体用于:根据该第一报文的控制信息,确定该第一报文的端口配置信息;在该压缩规则包括该第一报文的端口配置信息,确定该第一报文的控制信息满足压缩规则。
在一些实施例中,该压缩识别器126具体用于:基于该第二报文的控制信息,确定该第二报文的协议栈类型;在该压缩规则不包括该第二报文的协议栈类型,获取该第二报文的程序状态信息。
在一些实施例中,该压缩识别器126具体用于:根据该第二报文的控制信息,确定该第二报文的端口配置信息;在该压缩规则不包括该第二报文的端口配置信息,获取该第二报文的程序状态信息。
本申请实施例提供的处理芯片还可包括用于实现本申请实施例提供的方法中处理芯片所执行的步骤对应功能的模块。在一种可能的设计中,图2或图7所示的处理芯片包括 按照流水线排列的多个处理器125的功能可以由按照流水线排列的多个处理模块实现。图2或图7所示的处理芯片包括的转换器123的功能可由转换模块实现。图2或图7所示的处理芯片包括的调度器122的功能可由调度模块实现。图2或图7所示的处理芯片包括的接口电路122的功能可由收发模块实现。图7所示的处理芯片包括的压缩识别器126的功能可由识别模块实现。
本申请实施例中的芯片可以是编程门阵列(field programmable gate array,FPGA),可以是专用集成芯片(application specific integrated circuit,ASIC),还可以是系统芯片(system on chip,SoC),还可以是中央处理器(central processor unit,CPU),还可以是网络处理器(network processor,NP),还可以是数字信号处理电路(digital signal processor,DSP),还可以是微控制器(micro controller unit,MCU),还可以是可编程控制器(programmable logic device,PLD)、其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,或其他集成芯片。
在实现过程中,上述方法的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
应注意,本申请实施例中的处理器可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。
可以理解,本申请实施例中的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。应注意,本文描述的系统和方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
根据本申请实施例提供的方法,本申请还提供一种计算机程序产品,该计算机程序产 品包括:计算机程序代码,当该计算机程序代码在计算机上运行时,使得该计算机执行图3、图8至图10所示实施例中任意一个实施例的方法。
根据本申请实施例提供的方法,本申请还提供一种计算机可读介质,该计算机可读介质存储有程序代码,当该程序代码在计算机上运行时,使得该计算机执行图3、图8至图10所示实施例中任意一个实施例的方法。
根据本申请实施例提供的方法,本申请还提供一种系统,其包括前述的一个或多个终端设备以及一个或多个网络设备。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (26)

  1. 一种处理报文的方法,其特征在于,包括:
    处理芯片确定无法并行处理至少两个报文中的第一报文,获取所述第一报文的程序状态;
    所述处理芯片根据所述第一报文的程序状态,处理所述第一报文。
  2. 如权利要求1所述的方法,其特征在于,所述处理芯片包括按照流水线排列的多个处理器,所述处理芯片根据所述第一报文的程序状态,处理所述第一报文包括:
    所述多个处理器中的第一个处理器获取所述第一报文的程序状态并处理;
    所述第一个处理器根据处理后的程序状态处理所述第一报文。
  3. 如权利要求1或2所述的方法,其特征在于,所述处理芯片确定无法并行处理至少两个报文中的第一报文之前,所述方法还包括:所述处理芯片获取第一程序状态信息,所述第一程序状态信息包括压缩后的所述第一报文的程序状态;
    所述处理芯片确定无法并行处理至少两个报文中的第一报文包括:所述处理芯片根据所述第一程序状态信息并行处理所述至少两个报文,在所述处理芯片包括的用于处理所述第一报文的按流水线排列的处理器无法满足所述第一报文的处理需求时确定无法并行处理所述第一报文。
  4. 根据权利要求3所述的方法,其特征在于,所述处理芯片获取所述第一报文的程序状态包括:
    所述处理芯片基于所述第一程序状态信息包括的压缩后的所述第一报文的程序状态,获得所述第一报文的程序状态。
  5. 如权利要求3或4所述的方法,其特征在于,所述处理芯片获取第一程序状态信息包括:
    所述处理芯片获取所述第一报文的控制信息;
    所述处理芯片确定所述第一报文的控制信息满足压缩规则,基于所述第一报文的程序状态获得所述第一程序状态信息。
  6. 如权利要求5所述的方法,其特征在于,所述处理芯片确定所述第一报文的控制信息满足压缩规则包括:
    所述处理芯片基于所述第一报文的控制信息,确定所述第一报文的协议栈类型;
    所述处理芯片在所述压缩规则包括所述第一报文的协议栈类型,确定所述第一报文的控制信息满足压缩规则。
  7. 如权利要求5所述的方法,其特征在于,所述处理芯片确定所述第一报文的控制信息满足压缩规则包括:
    所述处理芯片根据所述第一报文的控制信息,确定所述第一报文的端口配置信息;
    所述处理芯片在所述压缩规则包括所述第一报文的端口配置信息,确定所述第一报文的控制信息满足压缩规则。
  8. 一种处理报文的方法,其特征在于,包括:
    处理芯片获取至少两个报文中的第一报文的控制信息;
    所述处理芯片基于压缩规则和所述第一报文的控制信息,获取第一程序状态信息,所述第一程序状态信息包括压缩后的所述第一报文的程序状态;
    所述处理芯片基于所述第一程序状态信息,对所述至少两个报文中的所述第一报文进行并行处理。
  9. 根据权利要求8所述的方法,其特征在于,所述方法还包括:
    所述处理芯片获取所述至少两个报文中的第二报文的控制信息;
    所述处理芯片基于所述压缩规则和所述第二报文的控制信息,获取所述第二报文的程序状态;
    所述处理芯片基于所述第二报文的程序状态,对所述至少两个报文中的所述第二报文进行顺序处理。
  10. 如权利要求8或9所述的方法,其特征在于,所述处理芯片基于压缩规则和所述第一报文的控制信息,获取第一程序状态信息包括:
    所述处理芯片基于所述第一报文的控制信息,确定所述第一报文的协议栈类型;
    所述处理芯片在所述压缩规则包括所述第一报文的协议栈类型,确定所述第一报文的控制信息满足所述压缩规则;
    所述处理芯片基于所述第一报文的程序状态,获取所述第一程序状态信息。
  11. 如权利要求8或9所述的方法,其特征在于,所述处理芯片基于压缩规则和所述第一报文的控制信息,获取第一程序状态信息包括:
    所述处理芯片根据所述第一报文的控制信息,确定所述第一报文的端口配置信息;
    所述处理芯片在所述压缩规则包括所述第一报文的端口配置信息,确定所述第一报文的控制信息满足所述压缩规则;
    所述处理芯片基于所述第一报文的程序状态,获取所述第一程序状态信息。
  12. 如权利要求9所述的方法,其特征在于,所述处理芯片基于所述压缩规则和所述第二报文的控制信息,获取所述第二报文的程序状态包括:
    所述处理芯片基于所述第二报文的控制信息,确定所述第二报文的协议栈类型;
    所述处理芯片在所述压缩规则不包括所述第二报文的协议栈类型,获取所述第二报文的程序状态信息。
  13. 如权利要求9所述的方法,其特征在于,所述处理芯片基于所述压缩规则和所述第二报文的控制信息,获取所述第二报文的程序状态包括:
    所述处理芯片根据所述第二报文的控制信息,确定所述第二报文的端口配置信息;
    所述处理芯片在所述压缩规则不包括所述第二报文的端口配置信息,获取所述第二报文的程序状态信息。
  14. 一种处理芯片,其特征在于,所述处理芯片包括按照流水线排列的多个处理器:
    所述多个处理器中的用于处理第一报文的处理器用于确定无法并行处理至少两个报文中的所述第一报文,获取所述第一报文的程序状态;
    所述多个处理器中的第一个处理器用于根据所述第一报文的程序状态,处理所述第一报文。
  15. 如权利要求14所述的处理芯片,其特征在于,所述第一个处理器具体用于:
    获取所述第一报文的程序状态并处理;
    根据处理后的程序状态处理所述第一报文。
  16. 如权利要求14或15所述的处理芯片,其特征在于,所述处理芯片还包括调度器,所述调度器用于获取第一程序状态信息,所述第一程序状态信息包括压缩后的所述第一报文的程序状态;
    所述多个处理器中用于处理所述第一报文的处理器具体用于根据所述第一程序状态信息并行处理所述至少两个报文,在所述多个流水线排列的多个处理器中不包括用于处理所述第一报文处理器时确定无法并行处理所述第一报文。
  17. 根据权利要求16所述的处理芯片,其特征在于,所述处理芯片还包括:
    转换器,用于基于所述第一程序状态信息包括的压缩后的所述第一报文的程序状态,获得所述第一报文的程序状态;
    所述第一个处理器还用于从所述转换器获取所述第一报文的程序状态。
  18. 如权利要求16或17所述的处理芯片,其特征在于,所述处理芯片还包括压缩识别器,所述压缩识别器用于获取所述第一报文的控制信息,确定所述第一报文的控制信息满足压缩规则;
    所述调度器还用于在所述压缩识别器确定所述第一报文的控制信息满足压缩规则的情况下,基于所述第一报文的程序状态获得所述第一程序状态信息。
  19. 如权利要求18所述的处理芯片,其特征在于,所述压缩识别器具体用于:
    基于所述第一报文的控制信息,确定所述第一报文的协议栈类型;
    在所述压缩规则包括所述第一报文的协议栈类型,确定所述第一报文的控制信息满足压缩规则。
  20. 如权利要求18所述的处理芯片,其特征在于,所述压缩识别器具体用于:
    根据所述第一报文的控制信息,确定所述第一报文的端口配置信息;
    在所述压缩规则包括所述第一报文的端口配置信息,确定所述第一报文的控制信息满足压缩规则。
  21. 一种处理芯片,其特征在于,所述处理芯片包括压缩识别器、调度器和按照流水线排列的多个处理器;
    所述压缩识别器用于获取至少两个报文中的第一报文的控制信息,基于压缩规则和所述第一报文的控制信息,确定所述第一报文的控制信息满足所述压缩规则;
    所述调度器用于在所述压缩识别器确定所述第一报文的控制信息满足所述压缩规则的情况下,获取第一程序状态信息,所述第一程序状态信息包括压缩后的所述第一报文的程序状态;
    所述多个处理器中用于处理所述第一报文的处理器用于基于所述第一程序状态信息,对所述至少两个报文中的所述第一报文进行并行处理。
  22. 根据权利要求21所述的处理芯片,其特征在于,
    所述压缩识别器还用于获取所述至少两个报文中的第二报文的控制信息,基于所述压缩规则和所述第二报文的控制信息,确定所述第二报文的控制信息不满足所述压缩规则;
    所述调度器还用于在所述第二报文的控制信息不满足所述压缩规则的情况下,确定所述第二报文的程序状态;
    所述多个处理器中的处理器用于基于所述第二报文的程序状态,对所述至少两个报文 中的所述第二报文进行顺序处理。
  23. 如权利要求21或22所述的处理芯片,其特征在于,所述压缩识别器具体用于:
    基于所述第一报文的控制信息,确定所述第一报文的协议栈类型;
    在所述压缩规则包括所述第一报文的协议栈类型,确定所述第一报文的控制信息满足压缩规则。
  24. 如权利要求21或22所述的处理芯片,其特征在于,所述压缩识别器具体用于:
    根据所述第一报文的控制信息,确定所述第一报文的端口配置信息;
    在所述压缩规则包括所述第一报文的端口配置信息,确定所述第一报文的控制信息满足压缩规则。
  25. 如权利要求22所述的处理芯片,其特征在于,所述压缩识别器具体用于:
    基于所述第二报文的控制信息,确定所述第二报文的协议栈类型;
    在所述压缩规则不包括所述第二报文的协议栈类型,获取所述第二报文的程序状态信息。
  26. 如权利要求22所述的处理芯片,其特征在于,所述压缩识别器具体用于:
    根据所述第二报文的控制信息,确定所述第二报文的端口配置信息;
    在所述压缩规则不包括所述第二报文的端口配置信息,获取所述第二报文的程序状态信息。
PCT/CN2021/119066 2020-09-30 2021-09-17 处理报文的方法和芯片 WO2022068614A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202011056273 2020-09-30
CN202011056273.X 2020-09-30
CN202011410154.X 2020-12-04
CN202011410154.XA CN114363273A (zh) 2020-09-30 2020-12-04 处理报文的方法和芯片

Publications (1)

Publication Number Publication Date
WO2022068614A1 true WO2022068614A1 (zh) 2022-04-07

Family

ID=80949577

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/119066 WO2022068614A1 (zh) 2020-09-30 2021-09-17 处理报文的方法和芯片

Country Status (1)

Country Link
WO (1) WO2022068614A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070067608A1 (en) * 2005-09-19 2007-03-22 Lofgren John D Method and apparatus for removing a pipeline bubble
CN105072047A (zh) * 2015-09-22 2015-11-18 浪潮(北京)电子信息产业有限公司 一种报文传输及处理方法
CN107770090A (zh) * 2017-10-20 2018-03-06 深圳市楠菲微电子有限公司 用于控制流水线中寄存器的方法和装置
CN110808924A (zh) * 2019-11-12 2020-02-18 迈普通信技术股份有限公司 芯片环回报文处理方法、装置及存储介质

Similar Documents

Publication Publication Date Title
US9178831B2 (en) Methods and apparatus for RBridge hop-by-hop compression and frame aggregation
US10848442B2 (en) Heterogeneous packet-based transport
US9893984B2 (en) Path maximum transmission unit discovery
US6172990B1 (en) Media access control micro-RISC stream processor and method for implementing the same
WO2020224503A1 (zh) SRv6网络生成段列表、报文转发的方法、设备和系统
CN110022264B (zh) 控制网络拥塞的方法、接入设备和计算机可读存储介质
RU2487483C2 (ru) Способ и фильтрующее устройство для фильтрации сообщений, поступающих абоненту коммуникационной сети по последовательной шине данных этой сети
JP4890613B2 (ja) パケットスイッチ装置
US9154586B2 (en) Method for parsing network packets having future defined tags
WO2017000593A1 (zh) 报文处理方法及装置
US20090232137A1 (en) System and Method for Enhancing TCP Large Send and Large Receive Offload Performance
JP2006325054A (ja) Tcp/ip受信処理回路及びそれを具備する半導体集積回路
EP3813318B1 (en) Packet transmission method, communication device, and system
US5864553A (en) Multiport frame exchange system
US11936759B2 (en) Systems and methods for compressing a SID list
WO2019179161A1 (zh) 一种数据流量处理方法、设备及系统
WO2022068614A1 (zh) 处理报文的方法和芯片
US20060062229A1 (en) Terminal adapter device capable of performing IEEE1394-to-Ethernet conversion
CN114363273A (zh) 处理报文的方法和芯片
US9559857B2 (en) Preprocessing unit for network data
WO2020200113A1 (zh) 网络设备
US20050044261A1 (en) Method of operating a network switch
KR20120038196A (ko) 라우팅 장치 및 네트워크 장치
US10805436B2 (en) Deliver an ingress packet to a queue at a gateway device
WO2022012073A1 (zh) 报文转发的方法、设备以及系统

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21874275

Country of ref document: EP

Kind code of ref document: A1