WO2023284590A1 - Method, system and storage medium for processing large-traffic protocol packets - Google Patents

Method, system and storage medium for processing large-traffic protocol packets

Info

Publication number: WO2023284590A1
Authority: WIPO (PCT)
Prior art keywords: protocol, processing queue, bandwidth, flow, traffic
Application number: PCT/CN2022/103936
Other languages: English (en), French (fr)
Inventors: 程兵旺, 向奇敏, 林开强
Original Assignee: 中兴通讯股份有限公司

Classifications

    • H04L43/0876 — Network utilisation, e.g. volume of load or congestion level
    • H04L43/0894 — Packet rate
    • H04L63/1416 — Event detection, e.g. attack signature detection
    • H04L63/1458 — Countermeasures against malicious traffic: Denial of Service
    • H04L69/08 — Protocols for interworking; Protocol conversion
    • H04L9/40 — Network security protocols

Definitions

  • The present application relates to, but is not limited to, the field of packet transport networks, and in particular to a method, system and storage medium for processing large-traffic protocol packets.
  • Network security faces a variety of challenges, the most common of which are illegal intrusion into network systems and DoS (Denial of Service) attacks that paralyze network device services by sending large-traffic protocol packets.
  • Network attacks and sudden bursts of large-traffic protocol packets pose serious challenges to the security of operational networks; effectively preventing and handling such problems has long been a major issue for equipment manufacturers.
  • ACL (Access Control List)
  • The embodiments of the present application provide a method, system and storage medium for processing large-flow protocol packets, which can, at least to a certain extent, solve both the problem of multiple ports being attacked by large-flow packets simultaneously and the problem of congestion caused by normal input streams arriving on multiple ports at once.
  • In a first aspect, an embodiment of the present application provides a method for processing large-flow protocol packets. The method includes: obtaining and counting the average output flow rate of each protocol packet at each port per unit time; calculating the total traffic bandwidth of the protocol packets received by a first protocol packet processing queue; when the total traffic bandwidth reaches or exceeds a warning threshold, creating a second protocol packet processing queue; and switching the protocol packet with the highest average output flow rate in the first protocol packet processing queue to be received by the second protocol packet processing queue.
  • In a second aspect, an embodiment of the present application also provides a system for processing large-flow protocol packets.
  • The system includes a monitoring module, a scheduling control module, and a learning prediction module. The monitoring module is configured to monitor the protocol packet flow of each port, count the average output flow rate of each kind of protocol packet on each port per unit time, calculate the bandwidth of the traffic entering the protocol packet processing queue per unit time, and feed traffic parameter information back to the scheduling control module and the learning prediction module. The scheduling control module is responsible for creating protocol packet processing queues, switching protocol packets, and expanding and deleting protocol packet processing queues. The learning prediction module determines the required bandwidth of a newly created protocol packet processing queue based on the real-time traffic parameter information and available system resources, and updates the experience value according to the latest bandwidth parameter of the protocol packet processing queue.
  • In a third aspect, an embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions for executing the method described in the first aspect above.
  • FIG. 1 is a flow chart of a method for processing large-traffic protocol packets according to an embodiment of the present application.
  • FIG. 2 is a schematic module diagram of a system for processing large-traffic protocol packets according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of processing a protocol packet according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of processing a protocol packet according to another embodiment of the present application.
  • FIG. 5 is a schematic diagram of processing a protocol packet according to another embodiment of the present application.
  • FIG. 6 is a schematic diagram of processing protocol packets according to another embodiment of the present application.
  • Configuring ACL policies means extracting certain characteristic parameters of attack packets (such as IP or MAC address) and filtering matching traffic so that the device can continue to operate normally.
  • However, when the attack packets are well-formed protocol packets and the traffic on any single port does not exceed the preset bandwidth, this approach is ineffective.
  • For example, an attacker may send large-traffic protocol packets to multiple ports at the same time.
  • The traffic of each individual port does not exceed its limit, and the packets may even satisfy the ACL policy and pass through normally.
  • Nevertheless, the aggregated traffic of all ports exceeds the total bandwidth of the downstream processing modules, leading to congestion and packet discarding, and eventually to abnormal device behavior.
  • This application provides a method, system and storage medium for processing large-flow protocol packets. In the case of multi-port bursts of large-flow protocol packets, it adaptively adjusts queue bandwidth by creating a new protocol packet processing queue and expanding it dynamically, effectively solving the problem in which the traffic of each single port is within limits but the total protocol traffic output by multiple ports is excessive. It can both absorb the impact of legitimate traffic bursts and effectively defend against malicious DoS attacks.
  • The port described in this application is, in a broad sense, a physical port or a logical port.
  • For a service board, the port is a physical port on the board; for the main control board, the port is the protocol channel through which each service board enters the main control.
  • The bandwidth of the protocol packet processing queue can be expressed in bps, Kbps, Mbps or pps.
  • FIG. 1 is a flow chart of a method for processing large-traffic protocol packets according to an embodiment of the present application. As shown in FIG. 1, the method includes at least the following steps:
  • Step S100: acquire and count the average output flow rate of each protocol packet on each port per unit time.
  • Step S200: calculate the total traffic bandwidth of the protocol packets received by the first protocol packet processing queue.
  • Step S300: when the total traffic bandwidth reaches or exceeds the warning threshold, create a second protocol packet processing queue.
  • Step S400: switch the protocol packet with the highest average output flow rate in the first protocol packet processing queue to be received by the second protocol packet processing queue.
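The four steps S100–S400 can be sketched as a single pass in Python. This is an illustrative model only: representing queues as sets of flow identifiers, the rate dictionary, and all names (`process_step`, `port_rates`, `warning_threshold`) are assumptions, not part of the patent.

```python
def process_step(port_rates, q1_flows, warning_threshold):
    """One pass of steps S100-S400.

    port_rates: dict mapping flow id -> average output rate per unit time (Mbps)
    q1_flows:   set of flow ids currently received by the first queue
    Returns (q2_flows, q1_flows); q2_flows is None if no second queue is needed.
    """
    # S100/S200: total traffic bandwidth entering the first processing queue
    total_bw = sum(port_rates[f] for f in q1_flows)
    # S300: only create a second queue once the warning threshold is reached
    if total_bw < warning_threshold:
        return None, q1_flows
    # S400: switch the flow with the highest average rate to the second queue
    top = max(q1_flows, key=lambda f: port_rates[f])
    return {top}, q1_flows - {top}
```

With flows A1/A2/A3 at 0.9, 0.7 and 0.5 Mbps and a 1.8 Mbps warning threshold, the 2.1 Mbps total triggers the switch of the highest-rate flow A1.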
  • It can be considered that the average flow rate of the protocol packets entering the protocol packet processing queue from each port equals the average output flow rate of the protocol packets on that port.
  • In the method of the present application, the average output flow rate of the protocol packets of each port is counted; the higher the average output flow rate, the greater the bandwidth the packet flow occupies.
  • The average output flow rates of the protocol packets output from the ports are summed to obtain the total traffic bandwidth of the protocol packets received by the first protocol packet processing queue.
  • When the warning threshold is reached or exceeded, a new second protocol packet processing queue is created and used to receive the protocol packet with the highest average output flow rate in the first protocol packet processing queue.
  • The warning threshold is a configurable parameter and may initially be set to 90% of the bandwidth of the first protocol packet processing queue; for example, if the bandwidth of the first protocol packet processing queue is 2 Mbps, the warning threshold is 1.8 Mbps.
  • When the total traffic bandwidth of the first protocol packet processing queue reaches or exceeds the warning threshold, the protocol packet with the highest average output flow rate currently in the first protocol packet processing queue is switched to be received by the second protocol packet processing queue.
  • If, after this switch, the total traffic bandwidth in the first protocol packet processing queue is still greater than or equal to the warning threshold, the protocol packet with the now-highest average output flow rate in the first queue is likewise switched to the second queue. Note that this packet is actually the one with the second-highest average output flow rate among the packets initially received by the first queue: once the highest-rate packet has been switched away, the second-highest-rate packet becomes the highest-rate packet among those remaining.
  • This process repeats: if after two or more switches the total traffic bandwidth in the first protocol packet processing queue still reaches or exceeds the warning threshold, switching continues until the total traffic bandwidth in the first queue falls below the warning threshold, at which point switching stops.
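The repeated switching just described amounts to a loop that keeps moving the current highest-rate flow out of the first queue until its total bandwidth drops below the warning threshold. A minimal sketch under the same assumed flow/rate model (names are illustrative):

```python
def drain_to_second_queue(port_rates, q1_flows, warning_threshold):
    """Move flows from the first queue to the second, highest average output
    rate first, until the first queue's total bandwidth is below the
    warning threshold. Returns the final (q1, q2) flow assignments."""
    q1 = set(q1_flows)
    q2 = set()
    while q1 and sum(port_rates[f] for f in q1) >= warning_threshold:
        top = max(q1, key=lambda f: port_rates[f])  # current highest-rate flow
        q1.discard(top)
        q2.add(top)  # received by the second queue from now on
    return q1, q2
```

After the first switch, the previously second-highest flow becomes `top` on the next iteration, matching the description above.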
  • When the sum of the traffic bandwidths of the first and second protocol packet processing queues falls below the warning threshold per unit time, the second protocol packet processing queue stops receiving new protocol packets, which are instead transferred back to the first protocol packet processing queue.
  • For example, if port 1 outputs protocol packets to the first queue and port 2 outputs protocol packets to the second queue, then once the bandwidth sum is below the warning threshold, the second queue no longer receives the new protocol packets output by port 2; those packets are received by the first protocol packet processing queue.
  • When the protocol packets remaining in the second protocol packet processing queue have been processed, the second queue is deleted. The protocol packets in the second queue still need subsequent handling (for example, being forwarded downstream); deleting the queue only after they have all been processed makes reasonable use of system resources and avoids waste.
  • In some embodiments, the bandwidth of the second protocol packet processing queue is expandable: when its initial bandwidth cannot satisfy the bandwidth requirement of the protocol packets it currently receives, the bandwidth is expanded.
  • The expansion is bounded; the bandwidth of the second protocol packet processing queue has an upper limit.
  • When the bandwidth of the second queue has expanded to this upper limit, the queue discards newly received protocol packets. Expanding the bandwidth of the second queue improves the capacity to process large-flow protocol packets, allows the system to accommodate more protocol packets, and helps absorb the impact of large-flow traffic.
  • the bandwidth upper limit of the second protocol packet processing queue is determined by system resources.
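The bounded expansion can be sketched as follows. The step size of one initial bandwidth per expansion and the 5× upper limit follow the embodiments described later; the function and parameter names are assumptions.

```python
def expand_queue_bandwidth(required_bw, initial_bw, limit_factor=5):
    """Grow the second queue's bandwidth one initial-bandwidth step at a time
    until it covers the required bandwidth or hits the upper limit.
    Returns (bandwidth, dropping): dropping=True means the queue has hit its
    cap and newly received protocol packets must be discarded."""
    upper_limit = limit_factor * initial_bw  # e.g. 5 * X_0 in the embodiments
    bw = initial_bw
    while bw < required_bw and bw < upper_limit:
        bw = min(bw + initial_bw, upper_limit)  # "expand X_0 each time"
    dropping = bw >= upper_limit and required_bw > bw
    return bw, dropping
```

With an initial bandwidth of 1.0, a demand of 3.5 settles at 4.0 without dropping, while a demand of 7.0 saturates the 5.0 cap and forces new packets to be discarded.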
  • Obtaining and counting the average output flow rate of each protocol packet on each port per unit time includes: collecting real-time statistics of the average output flow rate of each protocol packet on each port, and sorting the protocol packets by their average output flow rate per unit time.
  • The average output flow rate serves as the sort key, in either ascending or descending order. Sorting in descending order, for example, directly yields the protocol packet with the highest average output flow rate, the one with the second-highest rate, and so on, which is convenient for subsequent switching.
  • In some embodiments, the initial bandwidth of the second protocol packet processing queue is determined according to an experience value.
  • The experience value is determined by system resources and historically effective values.
  • The experience value is adjusted dynamically according to actual conditions.
  • In other words, the initial bandwidth of the second protocol packet processing queue can be set from the experience value.
  • The experience value is the effective bandwidth value learned after several encounters with large-traffic surges or DoS attacks, and it is never lower than the bandwidth value of the first protocol packet processing queue. Note that the initial bandwidth of the second queue is set so as not to exceed the bandwidth upper limit.
  • In some embodiments, the initial bandwidth of the second protocol packet processing queue is greater than or equal to the bandwidth of the first protocol packet processing queue.
  • When there are multiple first protocol packet processing queues, the initial bandwidth of the second queue is greater than or equal to the largest first-queue bandwidth. For example, with two first queues Qa and Qb where the bandwidth of Qa is greater than that of Qb, the initial bandwidth of the second queue is greater than or equal to the bandwidth value of Qa.
  • the priority of the second protocol packet processing queue is less than or equal to the priority of the first protocol packet processing queue.
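The constraints on the second queue's initial bandwidth (not below the experience value, not below the widest first queue, never above the upper limit) can be combined into one helper. This is one way to satisfy them simultaneously; the function and parameter names are assumptions.

```python
def initial_second_queue_bw(first_queue_bws, experience_bw, upper_limit):
    """Pick the initial bandwidth of the second protocol packet processing
    queue: at least the largest first-queue bandwidth, preferably the learned
    experience value, and never above the bandwidth upper limit."""
    floor = max(first_queue_bws)  # >= bandwidth of the widest first queue
    return min(max(experience_bw, floor), upper_limit)
```

For first queues of 2.0 and 1.5 Mbps with an experience value of 3.0 Mbps, the second queue starts at 3.0 Mbps; an experience value below the floor is raised to it, and an oversized one is clamped to the upper limit.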
  • FIG. 2 is a schematic diagram of modules of a system for processing large-traffic protocol packets according to an embodiment of the present application.
  • the system includes a monitoring module, a scheduling control module and a learning prediction module.
  • The monitoring module is configured to monitor the protocol packet flow of each port, count the average output flow rate of each kind of protocol packet on each port per unit time, and calculate the bandwidth of the flow entering the protocol packet processing queues per unit time; for example, it calculates, per unit time, the total traffic bandwidth of the protocol packets received by the first protocol packet processing queue, or the sum of the traffic bandwidths of the packets received by the first and second protocol packet processing queues.
  • The monitoring module feeds traffic parameter information back to the scheduling control module and the learning prediction module. This information includes, but is not limited to, the average output flow rate and bandwidth information of the port protocol packet flows.
  • The scheduling control module is responsible for creating protocol packet processing queues (such as the first and second protocol packet processing queues), for switching protocol packets from the first queue to the second, and for the bandwidth expansion and deletion operations of the second protocol packet processing queue.
  • the scheduling of the first protocol message processing queue and the second protocol message processing queue can adopt the scheduling rules in the QoS (Quality of Service, Quality of Service) queue scheduling algorithm, such as WFQ (Weighted Fair Queuing, weighted fair queue) scheduling rules, etc.
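As an illustration of weight-proportional scheduling between the two queues, here is a crude weighted round-robin sketch. WFQ proper assigns per-packet virtual finish times; this simplification only shows the weighting idea, and all names are assumptions.

```python
def weighted_schedule(queues, weights, rounds):
    """Serve queues in proportion to their weights over the given number of
    rounds; returns the sequence of served queue indices. A simplified
    stand-in for WFQ-style scheduling between the first and second queues."""
    credits = [0.0] * len(queues)
    order = []
    for _ in range(rounds):
        for i, w in enumerate(weights):
            credits[i] += w          # each queue accrues credit by weight
        i = max(range(len(queues)), key=lambda k: credits[k])
        credits[i] -= sum(weights)   # serving a queue spends one full round
        order.append(i)
    return order
```

With weights 2:1, queue 0 is served twice for every service of queue 1, mirroring a higher-priority (or wider) first queue.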
  • The learning prediction module decides the bandwidth of a newly created protocol packet processing queue based on real-time traffic parameter information and system resources, for example the initial bandwidth and bandwidth upper limit of the second protocol packet processing queue.
  • The learning prediction module also updates the experience value according to the bandwidth parameters of the most recent protocol packet processing queue. For example, after the system suffers a large traffic surge or a DoS attack, the experience value is updated so that the queue bandwidth can cope with a similar surge or attack; or, if repeated surges keep the queue bandwidth at a certain value many times, the experience value is updated accordingly so that the queue bandwidth meets demand without repeated expansions.
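The patent only says the experience value is updated "according to the latest bandwidth parameter" without giving a formula. One plausible update rule, shown purely as an assumption, is an exponential moving average toward the latest effective bandwidth:

```python
def update_experience_bw(experience_bw, observed_effective_bw, alpha=0.5):
    """Blend the latest effective queue bandwidth into the stored experience
    value. The EMA form and the alpha parameter are assumptions; the patent
    does not specify the update formula."""
    return (1 - alpha) * experience_bw + alpha * observed_effective_bw
```

Repeated surges that settle at the same bandwidth pull the experience value toward it, so later second queues can start near that bandwidth without multiple expansions.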
  • FIG. 3 is a schematic diagram of protocol packet processing according to an embodiment of the present application, showing the case in which the same kind of protocol packet enters one protocol packet processing queue from multiple ports on the same board.
  • The multiple ports are different physical ports on the same service board, and packet flows of the first protocol are output downstream from different ports at the same time and enter the first protocol packet processing queue. For example, packet flows of protocol A are output downstream simultaneously on Port 1, Port 2, ..., Port n and enter the processing queue Qa of protocol A.
  • The monitoring module counts the average output flow rate of the protocol A packet flow of each port (in this embodiment, the rate is expressed in Mbps) and sorts the rates per unit time, in ascending or descending order; the packet flow with the highest rate is recorded as A_max, the one with the second-highest rate as A_max-1, and so on, with the lowest-rate packet flow marked A_max-n.
  • The total traffic Σ(A_i) arriving at Qa per unit time is calculated to obtain the total bandwidth. The sorted average-rate information and the total bandwidth information of the port protocol packet flows are passed to the scheduling control module.
  • the scheduling control module obtains the sorted output flow average rate information and total bandwidth information of each port in real time.
  • the packet flow of protocol A enters Qa by default.
  • the total bandwidth of Qa reaches or exceeds the warning threshold within a certain period of time, a new protocol packet processing queue Qna is created.
  • The scheduling control module switches the highest-rate packet flow A_max to be received by Qna. If the total bandwidth still reaches or exceeds the warning threshold, it continues by switching the next-highest flow A_max-1 to Qna, and so on, until the total bandwidth of Qa stays below the warning threshold.
  • The initial bandwidth of Qna is X_0, and its bandwidth may be expanded dynamically up to an upper limit. When the expansion reaches this upper limit (in this embodiment, 5 times the initial bandwidth, so that X_new is at most 5 × X_0), the bandwidth of Qna is no longer expanded, Qna no longer receives new protocol packet flows, and newly received protocol packet flows are discarded.
  • The monitoring module keeps real-time statistics of the average output flow rate of the protocol A packet flow on each port and calculates the sum of the total traffic arriving at the protocol packet processing queues per unit time, for example the sum of the traffic bandwidths of the packet flows arriving at Qa and Qna per unit time.
  • When this bandwidth sum stays below the warning threshold per unit time, the scheduling control module lets Qa receive the subsequently arriving protocol packet flows again, and deletes Qna after all packets originally received in Qna have been processed.
  • In other words, once the condition is met, the protocol packets of a port that had been diverted to Qna are received by Qa again, while packets that have already arrived at Qna are still processed in Qna; Qna is deleted after that processing completes.
  • FIG. 4 is a schematic diagram of protocol packet processing according to another embodiment of the present application, showing the case in which multiple kinds of protocol packets enter the same protocol packet processing queue from multiple ports on the same board.
  • As shown in FIG. 4, packet flows of protocol A and protocol X are transmitted into the protocol packet processing queue Qmix through different ports at the same time.
  • The packet flow of protocol A enters Qmix through ports Port 1, Port 2, ..., Port n.
  • The packet flow of protocol X enters Qmix through ports Portx 1, ..., Portx m.
  • The monitoring module counts in real time the average output flow rate of the protocol A and protocol X packet flows output to Qmix from each port and sorts the rates per unit time; the packet flow with the highest rate is marked P_max, the one with the second-highest rate P_max-1, and so on, with the lowest-rate packet flow marked P_max-mn.
  • The scheduling control module obtains the sorted average-rate information and total bandwidth information of each port in real time. While the total bandwidth does not exceed the warning threshold, the packet flows of protocol A and protocol X enter Qmix by default. When the total bandwidth of Qmix reaches or exceeds the warning threshold within a certain period, a new protocol packet processing queue Qnmix is created.
  • The scheduling control module switches the highest-rate packet flow P_max to be received by Qnmix. If the total bandwidth still reaches or exceeds the warning threshold, it continues by switching the next-highest flow P_max-1 to Qnmix, and so on, until the total bandwidth of Qmix stays below the warning threshold.
  • The initial bandwidth of Qnmix is X_mix0. If (P_max + P_max-1) > X_mix0, the bandwidth of Qnmix is dynamically expanded, for example by X_mix0 each time, until (P_max + P_max-1) ≤ X_newmix is satisfied per unit time, where X_newmix is the expanded bandwidth of Qnmix.
  • When the expansion reaches the bandwidth upper limit (in this embodiment, 5 times the initial bandwidth, so that X_newmix is at most 5 × X_mix0), the bandwidth of Qnmix is no longer expanded, Qnmix no longer receives new protocol packet flows, and newly received protocol packet flows are discarded.
  • The monitoring module keeps real-time statistics of the average output flow rate of the protocol A and protocol X packet flows on each port and calculates the sum of the total traffic arriving at the protocol packet processing queues per unit time, for example the total traffic bandwidth of the protocol packet flows arriving at Qmix and Qnmix per unit time.
  • When this bandwidth sum stays below the warning threshold per unit time, the scheduling control module lets Qmix receive the subsequently arriving protocol packet flows again, and deletes Qnmix after the packets originally received in Qnmix have been processed; packets that have already arrived at Qnmix are still processed in Qnmix before it is deleted.
  • FIG. 5 is a schematic diagram of protocol packet processing according to another embodiment of the present application, showing the case in which, under multiple ports on the same board, one kind of protocol packet corresponds to one protocol packet processing queue while multiple other kinds share another protocol packet processing queue.
  • The protocol packet processing queue Qa receives the packet flow of protocol A.
  • The protocol packet processing queue Qmix receives the packet flows of protocol B and protocol X.
  • The monitoring module keeps real-time statistics of the average output flow rate of the protocol A packet flows entering Qa from each port and of the protocol B and protocol X packet flows entering Qmix from each port, and sorts each group separately: among the flows entering Qa, the one with the highest average output rate is recorded as A_max; likewise, among the flows entering Qmix, the one with the highest average output rate is recorded as P_max.
  • The total traffic arriving at Qa and the total traffic arriving at Qmix are calculated per unit time to obtain total bandwidth A and total bandwidth P. The sorted average-rate information and total bandwidth information of the port protocol packet flows are passed to the scheduling control module.
  • The bandwidth of Qnmix can also be dynamically expanded; the bandwidth required by Qnmix is the sum of the bandwidths of the packet flows input into it.
  • Qnmix is created as soon as at least one of Qa and Qmix reaches or exceeds the warning threshold, but there is only ever one Qnmix: after Qnmix has been created, the highest-rate protocol packet flows switched away from either Qa or Qmix upon reaching or exceeding the warning threshold all enter this same queue.
  • Qnmix is deleted only when both the sum of the traffic bandwidths arriving at Qa and Qnmix per unit time and the sum of the traffic bandwidths arriving at Qmix and Qnmix per unit time do not exceed the warning threshold.
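The deletion condition for the shared queue Qnmix can be expressed directly. The queue names follow the figure; the bandwidth-sum model and function name are assumptions.

```python
def can_delete_shared_queue(bw_qa, bw_qmix, bw_qnmix, warning_threshold):
    """Qnmix may be deleted only when, per unit time, both the (Qa + Qnmix)
    and (Qmix + Qnmix) traffic sums stay below the warning threshold."""
    return ((bw_qa + bw_qnmix) < warning_threshold and
            (bw_qmix + bw_qnmix) < warning_threshold)
```

If either first queue would still be pushed over the threshold by taking its diverted traffic back, Qnmix must be kept alive.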
  • The following scheme can also be adopted: different protocol packets are input into different protocol packet processing queues. For example, the packet flow of protocol A enters the processing queue Qa, and the packet flow of protocol B enters the processing queue Qb.
  • When the warning threshold is reached or exceeded, a new protocol packet processing queue Qna is created and the highest-rate packet flow is switched to Qna.
  • Alternatively, the packet flows of protocol A and protocol X enter the processing queue Qamix, while the packet flows of protocol B and protocol X enter the processing queue Qbmix.
  • FIG. 6 is a schematic diagram of protocol packet processing according to another embodiment of the present application, showing the case in which one or more ports on different boards output packet flows of the same protocol at the same time. In this case, the protocol output processing of each service board is regarded as a whole, and its output is regarded as the output of one logical port; within each service board, protocol packets are processed according to the flow of the embodiment shown in FIG. 3.
  • Each logical port outputs a packet flow of the same protocol, for example a packet flow of protocol A, which enters one protocol packet processing queue.
  • The monitoring module, the scheduling control module and the learning prediction module are located on the main control board.
  • The processing for this situation is similar to the flow of the embodiment shown in FIG. 3, except that it operates on logical ports.
  • The packet flow of protocol A is output simultaneously on logical port 1, logical port 2, ..., logical port n and enters the protocol packet processing queue Qa.
  • The monitoring module counts the average output flow rate of each logical port and sorts the rates per unit time; the packet flow with the highest rate is recorded as A_max, the one with the second-highest rate as A_max-1, and so on, with the lowest-rate packet flow marked A_max-n.
  • The total traffic Σ(A_i) arriving at Qa per unit time is calculated to obtain the total bandwidth.
  • The sorted average-rate information and total bandwidth information of the logical port protocol packet flows are delivered to the scheduling control module.
  • the scheduling control module obtains the sorted average output-flow rate information and total bandwidth information of each logical port in real time.
  • the packet flow of protocol A enters Qa by default.
  • when the total bandwidth of Qa reaches or exceeds the warning threshold within a certain period of time, a new protocol packet processing queue Qna is created.
  • the scheduling control module switches the packet flow A_max with the highest rate to be received by Qna; if the total bandwidth still reaches or exceeds the warning threshold, it continues by switching the second-largest flow A_max-1 to Qna, and so on, until the total bandwidth of Qa stays below the warning threshold.
  • if the initial bandwidth of Qna is X_0 and (A_max + A_max-1) > X_0, the bandwidth of Qna is dynamically expanded, for example by X_0 each time, until (A_max + A_max-1) < X_new is satisfied within a unit time, where X_new is the expanded bandwidth of Qna.
  • when the expansion reaches the upper bandwidth limit (in the present embodiment, the upper limit is 5 times the initial bandwidth, that is, X_new is less than or equal to 5 times X_0), the bandwidth of Qna is no longer expanded, Qna no longer receives new protocol packet streams, and newly received protocol packet flows are discarded.
  • the monitoring module counts the average output-flow rate of the protocol A packet flow of each logical port in real time and calculates the sum of the total traffic arriving at the protocol packet processing queues per unit time, such as the bandwidth sum of the protocol packet flows arriving at Qa and Qna per unit time. When that bandwidth sum stays below the warning threshold per unit time, the scheduling control module routes subsequently received protocol packet flows back to Qa, and deletes Qna after all the packets originally received in Qna have been processed. It can be understood that the protocol packets of a certain logical port originally entered Qna for processing; when the bandwidth sum of the protocol packet streams of Qa and Qna is less than the warning threshold per unit time, the protocol packets of this logical port are received by Qa again, the protocol packets that have already arrived at Qna are still processed in Qna, and Qna is deleted after the processing is completed.
  • the protocol packets within a service board are processed according to the flow shown in FIG. 3; the service board is regarded as a whole, acting as one logical port that outputs protocol packets.
  • An embodiment of the present application also provides a computer-readable storage medium storing computer-executable instructions, and the computer-executable instructions are used to execute the method of the above-mentioned embodiment.
  • the embodiments of the present application can, in the case of a multi-port burst of high-traffic protocol packets, adaptively adjust queue bandwidth through a newly created protocol packet processing queue and dynamic expansion, effectively solving the problem that no single port exceeds its traffic limit while the total protocol traffic output by multiple ports does; this copes with the impact of legal traffic bursts and also effectively prevents malicious DoS attacks.
  • memory can be used to store non-transitory software programs and non-transitory computer-executable programs.
  • the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage devices.
  • the memory may include memory located remotely from the processor, which remote memory may be connected to the processor via a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the mobile communication device embodiments described above are only illustrative, and the units described as separate components may or may not be physically separated, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.


Abstract

A method, a system and a storage medium for processing high-traffic protocol packets. In the present application, when the total traffic bandwidth reaches or exceeds a warning threshold, a new protocol packet processing queue is created, and the protocol packet flow occupying the largest bandwidth among the ports is transferred into the new protocol packet processing queue for processing.

Description

Method, system and storage medium for processing high-traffic protocol packets
Cross-reference to related applications
This application is based on, and claims priority to, Chinese patent application No. 202110807384.8 filed on July 16, 2021, the entire content of which is incorporated herein by reference.
Technical Field
This application relates to, but is not limited to, the field of packet transport networks, and in particular to a method, a system and a storage medium for processing high-traffic protocol packets.
Background
With the spread of networks and ever wider network applications, network security problems have become increasingly prominent. The challenges facing network security are of many kinds; the most common are illegal intrusion into a network system, or creating a DoS (Denial of Service) condition by sending a high-traffic stream of some protocol packet so as to paralyze the services of a network device. Network attacks and bursts of high-traffic protocol packets pose a severe challenge to the security of operating networks, and how to effectively guard against and respond to such problems has long been a major issue for equipment manufacturers. To cope with the impact of high-traffic protocol packets, the most common current method is to configure an ACL (Access Control List) policy. However, when legal flows (normal input flows) are input on multiple ports simultaneously, an ACL policy cannot respond effectively, so the board-level and main-control-level CPUs become busy until DoS occurs; the problem that downstream system resources are over-consumed and impacted when legal flows are input on multiple ports at the same time remains unsolved.
Summary
Embodiments of the present application provide a method, a system and a storage medium for processing high-traffic protocol packets, which can, at least to some extent, effectively solve the problem of multiple ports being attacked by high-traffic packets at the same time, and can also solve the congestion caused by normal input flows appearing on multiple ports simultaneously.
In a first aspect, an embodiment of the present application provides a method for processing high-traffic protocol packets, the method comprising: obtaining and computing the average output-flow rate of each kind of protocol packet on each port per unit time; calculating the total traffic bandwidth of the protocol packets received by a first protocol packet processing queue; creating a second protocol packet processing queue when the total traffic bandwidth reaches or exceeds a warning threshold; and switching the protocol packets with the largest average output-flow rate in the first protocol packet processing queue to be received by the second protocol packet processing queue.
In a second aspect, an embodiment of the present application further provides a system for processing high-traffic protocol packets, the system comprising a monitoring module, a scheduling control module and a learning prediction module. The monitoring module is configured to monitor the protocol packet traffic of each port, compute the average output-flow rate of each kind of protocol packet on each port per unit time, calculate the bandwidth of the traffic entering the protocol packet processing queues per unit time, and feed traffic parameter information back to the scheduling control module and the learning prediction module. The scheduling control module is responsible for creating protocol packet processing queues, switching protocol packets, and expanding and deleting protocol packet processing queues. The learning prediction module decides the bandwidth required by a newly created protocol packet processing queue according to the real-time traffic parameter information and the system resource situation, and updates an empirical value according to the bandwidth parameters of the latest protocol packet processing queue.
In a third aspect, an embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the method described in the first aspect.
Brief Description of the Drawings
Fig. 1 is a flowchart of a method for processing high-traffic protocol packets according to an embodiment of the present application;
Fig. 2 is a module diagram of a system for processing high-traffic protocol packets according to an embodiment of the present application;
Fig. 3 is a schematic diagram of protocol packet processing according to an embodiment of the present application;
Fig. 4 is a schematic diagram of protocol packet processing according to another embodiment of the present application;
Fig. 5 is a schematic diagram of protocol packet processing according to another embodiment of the present application;
Fig. 6 is a schematic diagram of protocol packet processing according to another embodiment of the present application.
Detailed Description
To make the purpose, technical solutions and advantages of the present application clearer, the present application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it.
Although a logical order is shown in the drawings, in some cases the steps shown or described may be executed in an order different from that shown. The terms "first", "second" and the like in the specification, the claims and the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence.
In some situations in the art, configuring an ACL policy means extracting certain characteristic parameters of input packets (such as IP or MAC addresses) and matching them against the ACL policy, so as to identify abnormal protocol traffic and take intervention measures, guaranteeing normal operation of the device as far as possible. ACL-based control generally targets a single access port. This approach has a weakness: when the attack packets are normal protocol packets and the traffic of a single port does not exceed the preset bandwidth, it is powerless. For example, an attacker sends high-traffic protocol packets to multiple ports simultaneously; the traffic of each individual port does not exceed its limit, and the packets sent may even comply with the ACL policy and pass normally, yet the aggregated traffic of all the ports exceeds the total bandwidth of the downstream processing module, causing congestion and packet discarding, which can eventually lead to abnormal device behavior.
The above problem cannot be solved by ACL precautions on a single port, because for a single port the input traffic is legal and normal. When multiple ports act simultaneously, however, the board-level and main-control-level CPUs can become busy until DoS occurs.
The present application provides a method, a system and a storage medium for processing high-traffic protocol packets. In the case of a multi-port burst of high-traffic protocol packets, the queue bandwidth is adaptively adjusted through a newly created protocol packet processing queue and dynamic expansion, effectively solving the problem that the traffic of no single port exceeds its limit while the total protocol traffic output by multiple ports does. This copes with the impact of legal traffic bursts and also effectively guards against malicious DoS attacks.
A port in the present application is a port in the broad sense, or a logical port. For a board that handles access services (referred to as a service board), a port is a physical port on the service board; for the main control board, a port is the protocol channel through which each service board enters the main control.
The average flow rate in the present application can be computed in two ways: first, by counting bits, with units of bps, Kbps or Mbps; second, by counting received packets, with units of pps. Correspondingly, the bandwidth of a protocol packet processing queue may be expressed in bps, Kbps, Mbps or pps.
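The two counting modes above can be sketched as follows (a minimal illustration; the function name and the one-second unit time are assumptions of this sketch, not part of the application):

```python
def average_rates(packet_sizes_bits, unit_time_s=1.0):
    """Compute the average output-flow rate of one protocol flow over a unit time.

    packet_sizes_bits: sizes (in bits) of the packets seen during the unit time.
    Returns (rate_bps, rate_pps): the bit-based and the packet-count-based rate.
    """
    total_bits = sum(packet_sizes_bits)
    rate_bps = total_bits / unit_time_s              # first mode: bits per second
    rate_pps = len(packet_sizes_bits) / unit_time_s  # second mode: packets per second
    return rate_bps, rate_pps

# Example: five 1500-byte packets observed during one second
bps, pps = average_rates([1500 * 8] * 5)  # -> (60000.0, 5.0)
```

Either rate can then be compared against a queue bandwidth expressed in the matching unit.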
The embodiments of the present application are further described below with reference to the drawings.
Fig. 1 is a flowchart of a method for processing high-traffic protocol packets according to an embodiment of the present application. As shown in Fig. 1, the method includes at least the following steps:
Step S100: obtain and compute the average output-flow rate of each kind of protocol packet on each port per unit time.
Step S200: calculate the total traffic bandwidth of the protocol packets received by the first protocol packet processing queue.
Step S300: when the total traffic bandwidth reaches or exceeds a warning threshold, create a second protocol packet processing queue.
Step S400: switch the protocol packets with the largest average output-flow rate in the first protocol packet processing queue to be received by the second protocol packet processing queue.
For a protocol packet processing queue, the average rate at which the protocol packets of each port enter the queue equals the average output-flow rate of those protocol packets at the port.
For the processing of high-traffic protocol packets, the method of the present application computes statistics on the average output-flow rate of the protocol packets of each port; the larger the average output-flow rate, the more bandwidth the flow occupies. The average output-flow rates of the protocol packets output from the ports are summed to obtain the total traffic bandwidth of the protocol packets received by the first protocol packet processing queue. When the total traffic bandwidth is greater than or equal to the warning threshold, a second protocol packet processing queue is created to receive the protocol packets with the largest average output-flow rate from the first queue, solving the downstream congestion caused by the bandwidth limit being exceeded.
It should be noted that the warning threshold is a configurable parameter and may initially be set to 90% of the bandwidth of the first protocol packet processing queue; for example, if the bandwidth of the first queue is 2 Mbps, the warning threshold is 1.8 Mbps.
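Steps S100 to S400, together with the 90% default warning threshold, can be sketched as follows (an illustrative sketch only; the function name, data structures and example numbers are assumptions, not part of the application):

```python
def process(port_rates, q1_bandwidth, warn_ratio=0.9):
    """port_rates: {port: average output-flow rate} for one protocol entering queue Q1.

    Implements S100-S400: sum the rates into a total traffic bandwidth and, when it
    reaches or exceeds the warning threshold, create a second queue Q2 and switch
    the largest-rate flows to it until Q1 stays below the threshold.
    Returns (q1_ports, q2_ports); q2_ports is empty if no second queue was needed.
    """
    threshold = warn_ratio * q1_bandwidth   # warning threshold, e.g. 90% of Q1 bandwidth
    q1 = dict(port_rates)
    q2 = {}
    # S400: repeatedly move the currently largest flow while Q1 is over the threshold
    while q1 and sum(q1.values()) >= threshold:
        biggest = max(q1, key=q1.get)
        q2[biggest] = q1.pop(biggest)
    return q1, q2

# 2 Mbps queue, 1.8 Mbps threshold: total 1.9 Mbps trips the threshold
q1, q2 = process({"Port1": 0.9, "Port2": 0.6, "Port3": 0.4}, q1_bandwidth=2.0)
# Port1 (the largest flow) is switched out; the remaining 1.0 Mbps stays in Q1
```

Note that no single port exceeds the queue bandwidth here; only the aggregate does, which is exactly the case an ACL policy cannot handle.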
In some embodiments of the present application, if, after a protocol packet flow has been switched, the total traffic bandwidth of the first protocol packet processing queue still reaches or exceeds the warning threshold, the protocol packets with the currently largest average output-flow rate in the first queue are switched to be received by the second protocol packet processing queue.
It can be understood that if, after a switch (that is, after the protocol packets with the largest average output-flow rate in the first queue have been switched to the second queue), the total traffic bandwidth of the first queue is still greater than or equal to the warning threshold, then the protocol packets with the currently largest average output-flow rate in the first queue are switched to be received by the second queue. Note that the flow with the currently largest rate is in fact the flow whose rate was second largest among the packets originally received by the first queue: once the largest flow has been switched away, the second-largest flow becomes the largest among the remaining flows. This process repeats; if after two or more switches the total traffic bandwidth of the first queue is still greater than or equal to the warning threshold, switching continues until the total traffic bandwidth of the first queue falls below the warning threshold.
In some embodiments of the present application, when the sum of the traffic bandwidths of the protocol packets received per unit time by the first and second protocol packet processing queues is less than the warning threshold, the second queue stops receiving new protocol packets, and new protocol packets are redirected to be received by the first queue.
It can be understood that once the traffic bandwidth sum is less than the warning threshold, the second queue no longer receives new protocol packets and all protocol packets are again received by the first queue. For example, port 1 outputs protocol packets to the first queue and port 2 outputs protocol packets to the second queue; when the traffic bandwidth sum falls below the warning threshold, the second queue no longer receives the new protocol packets output by port 2, and those packets are received by the first queue instead.
In some embodiments of the present application, the second protocol packet processing queue is deleted once the protocol packets in it have been processed. It can be understood that the protocol packets in the second queue still require subsequent processing, such as being forwarded downstream; the second queue is deleted only after all of its protocol packets have been processed, which helps use system resources sensibly and avoids wasting them.
In some embodiments of the present application, the bandwidth of the second protocol packet processing queue is expandable: when its initial bandwidth cannot meet the bandwidth required by the protocol packets it currently receives, its bandwidth is expanded. The expansion is limited, however: the second queue has an upper bandwidth limit. When its bandwidth has been expanded to the upper limit, the second queue discards newly received protocol packets. Expanding the bandwidth of the second queue improves the capacity to process high-traffic protocol packets, lets the system accommodate more protocol packets, and helps absorb the impact of traffic bursts.
In some examples, the upper bandwidth limit of the second protocol packet processing queue is determined by system resources.
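The bounded expansion policy can be sketched as follows (the expansion step equal to the initial bandwidth and the 5x cap follow the Qna embodiment described later; the class itself and its names are illustrative assumptions):

```python
class OverflowQueue:
    """Second protocol packet processing queue with bounded, stepwise expansion."""

    def __init__(self, initial_bw, cap_factor=5):
        self.initial_bw = initial_bw          # X_0
        self.bw = initial_bw                  # current bandwidth X_new
        self.cap = cap_factor * initial_bw    # upper bandwidth limit
        self.flows = {}                       # flow name -> average rate

    def offered_load(self):
        return sum(self.flows.values())

    def accept(self, name, rate):
        """Accept a switched-in flow, expanding by X_0 steps while needed.

        Returns False (flow discarded) once even the cap cannot cover the load."""
        if self.offered_load() + rate > self.cap:
            return False                      # at the limit: discard new flows
        self.flows[name] = rate
        while self.offered_load() > self.bw and self.bw < self.cap:
            self.bw += self.initial_bw        # expand by X_0 each time
        return True

q = OverflowQueue(1.0)
q.accept("A_max", 1.5)      # load 1.5 > 1.0, so bw expands to 2.0
q.accept("A_max-1", 2.0)    # load 3.5, bw expands to 4.0
q.accept("extra", 2.0)      # 5.5 would exceed the 5.0 cap: discarded
```

The cap keeps a flood from consuming unbounded system resources, which is the DoS-containment point of the embodiment.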
In some embodiments of the present application, obtaining and computing the average output-flow rate of each kind of protocol packet on each port per unit time includes: computing the average output-flow rate of each kind of protocol packet on each port in real time, and sorting the protocol packets within each unit time.
The sorting uses the average output-flow rate as its key, in ascending or descending order. For example, sorting in descending order yields, in turn, the protocol packets with the largest average output-flow rate, those with the second-largest rate, and so on, which facilitates the subsequent switching.
In some embodiments of the present application, the initial bandwidth of the second protocol packet processing queue is determined from an empirical value; the empirical value is determined by system resources and historically effective values, and is dynamically adjusted according to actual conditions. The empirical value is an effective bandwidth value learned after several high-traffic impacts or DoS attacks, but it is never lower than the bandwidth value of the first protocol packet processing queue. Note that the initial bandwidth of the second queue is set so as not to exceed the upper bandwidth limit.
That is, the initial bandwidth of the second protocol packet processing queue is greater than or equal to that of the first. When there are multiple first queues, that is, multiple first protocol packet processing queues corresponding to multiple different protocols, the initial bandwidth of the second queue is greater than or equal to the bandwidth of the largest of the first queues. For example, with two first queues Qa and Qb where Qa's bandwidth is larger than Qb's, the initial bandwidth of the second queue is greater than or equal to Qa's bandwidth value. In addition, in queue scheduling, the priority of the second queue is less than or equal to the priority of the first queue.
Fig. 2 is a module diagram of a system for processing high-traffic protocol packets according to an embodiment of the present application. As shown in Fig. 2, the system includes a monitoring module, a scheduling control module and a learning prediction module. The monitoring module is configured to monitor the protocol packet traffic of each port, compute the average output-flow rate of each kind of protocol packet on each port per unit time, and calculate the bandwidth of the traffic entering the protocol packet processing queues per unit time, for example the total traffic bandwidth of the protocol packets received by the first queue per unit time, or the sum of the traffic bandwidths of the protocol packets received by the first and second queues. The monitoring module sends traffic feedback parameter information to the scheduling control module and the learning prediction module; this information includes, but is not limited to, the average output-flow rate information and bandwidth information of the port protocol packet flows.
The scheduling control module is responsible for creating protocol packet processing queues, switching protocol packets, and expanding and deleting protocol packet processing queues, for example creating the first and second protocol packet processing queues. Switching protocol packets from the first queue to be received by the second queue is likewise the responsibility of the scheduling control module, as are the bandwidth expansion and deletion of the second queue.
For scheduling the protocol packet processing queues, for example scheduling the first and second queues, scheduling rules from QoS (Quality of Service) queue scheduling algorithms may be used, such as the WFQ (Weighted Fair Queuing) scheduling rule.
The learning prediction module decides the bandwidth of a newly created protocol packet processing queue, such as the initial bandwidth and upper bandwidth limit of the second queue, according to the real-time traffic parameter information and system resources. The learning prediction module can also update the empirical value according to the bandwidth parameters of the latest protocol packet processing queues. For example, after the system suffers a high-traffic impact or a DoS attack, it updates the empirical value so that the queue bandwidth can cope with such impacts or attacks; or, if repeated high-traffic impacts keep the queue bandwidth at a certain value, the empirical value is updated correspondingly so that the queue bandwidth meets the requirement without repeated expansion.
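The application does not give a formula for updating the empirical value; one plausible sketch, using exponential smoothing (an assumption of this sketch) together with the floor at the first queue's bandwidth that the description does require, is:

```python
def update_empirical_bw(empirical_bw, observed_effective_bw, q1_bw, alpha=0.5):
    """Blend the stored empirical value with the bandwidth that proved effective
    during the latest high-traffic impact or DoS attack.

    The result is never allowed to drop below the first queue's bandwidth,
    since the description requires the empirical value >= Q1's bandwidth.
    The smoothing factor alpha is purely illustrative."""
    blended = (1 - alpha) * empirical_bw + alpha * observed_effective_bw
    return max(blended, q1_bw)

# After an impact where 4 Mbps proved effective, starting from an empirical 2 Mbps:
new_bw = update_empirical_bw(2.0, 4.0, q1_bw=2.0)  # -> 3.0
```

Repeated impacts at a similar level pull the empirical value toward the bandwidth that keeps working, so a future second queue can start there without stepwise expansion.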
Fig. 3 is a schematic diagram of protocol packet processing according to an embodiment of the present application, showing the case where the same kind of protocol packet enters one protocol packet processing queue from multiple ports on the same board. As shown in Fig. 3, the multiple ports are different physical ports on the same service board; the packet flows of the first protocol are output downstream from different ports simultaneously and enter the first protocol packet processing queue. For example, the packet flows of protocol A are output downstream on ports Port 1, Port 2, ..., Port n simultaneously and enter protocol A's protocol packet processing queue Qa.
With further reference to Fig. 2, the monitoring module computes the average output-flow rate of the protocol A packet flow of each port (in this embodiment the average output-flow rate is expressed in Mbps) and sorts the average output-flow rates within each unit time, in ascending or descending order. The packet flow with the largest rate is denoted A_max, the flow with the second-largest rate A_max-1, and so on; the flow with the smallest average rate is denoted A_max-n. At the same time, the total traffic Σ(A_i) arriving at Qa per unit time is calculated to obtain the total bandwidth. The sorted average output-flow rate information of the port protocol packet flows and the total bandwidth information are passed to the scheduling control module.
The scheduling control module obtains the sorted per-port average output-flow rate information and total bandwidth information in real time. While the total bandwidth does not exceed the warning threshold, protocol A's packet flows enter Qa by default; when Qa's total bandwidth reaches or exceeds the warning threshold for a certain period, a new protocol packet processing queue Qna is created. When, within a unit time, the total traffic entering Qa, (A_max-n + ... + A_max-1 + A_max), is greater than the warning threshold X_1mt, the scheduling control module switches the largest-rate packet flow A_max to be received by Qna; if the total bandwidth still reaches or exceeds the warning threshold, the second-largest flow A_max-1 is also switched to Qna, and so on, until Qa's total bandwidth stays below the warning threshold.
For example, once A_max-1 is received by Qna, Qa's total bandwidth stays below the threshold, that is, (A_max-n + ... + A_max-2) is less than the warning threshold X_1mt; the packet flows A_max and A_max-1 are then received by Qna while the remaining packet flows are still received by Qa.
When the total bandwidth of the protocol packet flows entering Qna exceeds Qna's initial bandwidth (for example, the initial bandwidth is X_0 and (A_max + A_max-1) > X_0), Qna's bandwidth is dynamically expanded, for example by X_0 each time, that is, X_new = X_new + X_0, until (A_max + A_max-1) < X_new is satisfied within a unit time, where X_new is the expanded bandwidth of Qna. When the expansion reaches the upper bandwidth limit (in this embodiment the upper limit is 5 times the initial bandwidth, that is, X_new is less than or equal to 5 times X_0), the bandwidth of Qna is no longer expanded, Qna no longer accepts new protocol packet flows, and newly received protocol packet flows are discarded.
The monitoring module computes in real time the average output-flow rate of the protocol A packet flow of each port and calculates the sum of the total traffic arriving at the protocol packet processing queues per unit time, such as the bandwidth sum of the protocol packet flows arriving at Qa and Qna per unit time. When that bandwidth sum stays below the warning threshold per unit time, the scheduling control module routes subsequently received protocol packet flows back to Qa, and deletes Qna after all the packets originally received in Qna have been processed. It can be understood that the protocol packets of a certain port originally entered Qna for processing; once the bandwidth sum of the Qa and Qna protocol packet flows is below the warning threshold per unit time, that port's protocol packets are again received by Qa, packets that have already arrived at Qna are still processed in Qna, and Qna is deleted after the processing is completed.
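The merge-back condition at the end of this flow can be sketched as follows (names and structures are illustrative assumptions):

```python
def maybe_merge_back(qa_rates, qna_rates, threshold):
    """If the bandwidth sum of the flows arriving at Qa and Qna stays below the
    warning threshold, route every flow back to Qa; Qna is then drained and
    deleted once its already-queued packets have been processed.

    Returns (qa_rates, qna_rates, delete_qna_after_drain)."""
    total = sum(qa_rates.values()) + sum(qna_rates.values())
    if total < threshold:
        merged = dict(qa_rates)
        merged.update(qna_rates)   # subsequent packets of these flows go to Qa
        return merged, {}, True    # Qna is deleted after its backlog is processed
    return qa_rates, qna_rates, False

# 0.5 + 0.9 = 1.4 Mbps < 1.8 Mbps threshold: Port1's flow returns to Qa
qa, qna, drop = maybe_merge_back({"Port2": 0.5}, {"Port1": 0.9}, threshold=1.8)
```

Packets already sitting in Qna are not re-queued; only the routing decision for subsequent packets changes, matching the drain-then-delete behavior above.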
Fig. 4 is a schematic diagram of protocol packet processing according to another embodiment of the present application, showing the case where multiple kinds of protocol packets enter the same protocol packet processing queue from multiple ports on the same board. As shown in Fig. 4, the packet flows of protocol A and the packet flows of protocol X are transmitted into the protocol packet processing queue Qmix from multiple different ports simultaneously: protocol A's packet flows enter Qmix from ports Port 1, Port 2, ..., Port n, while protocol X's packet flows enter Qmix from ports Portx 1, ..., Portx m.
With further reference to Fig. 2, the monitoring module computes in real time the average output-flow rates of the protocol A and protocol X packet flows output from each port to Qmix, and sorts the average output-flow rates within each unit time. The packet flow with the largest rate is denoted P_max, the flow with the second-largest rate P_max-1, and so on; the flow with the smallest average output-flow rate is denoted P_max-m-n. At the same time, the total traffic Σ(P_i) arriving at Qmix per unit time is calculated to obtain the total bandwidth. The sorted average output-flow rate information of the port protocol packet flows and the total bandwidth information are passed to the scheduling control module.
The scheduling control module obtains the sorted per-port average output-flow rate information and total bandwidth information in real time. While the total bandwidth does not exceed the warning threshold, the packet flows of protocol A and protocol X enter Qmix by default; when Qmix's total bandwidth reaches or exceeds the warning threshold for a certain period, a new protocol packet processing queue Qnmix is created. When, within a unit time, the total traffic entering Qmix, (P_max-n + ... + P_max-1 + P_max), is greater than the warning threshold X_mix1mt, the scheduling control module switches the largest-rate packet flow P_max to be received by Qnmix; if the total bandwidth still reaches or exceeds the warning threshold, the second-largest flow P_max-1 is also switched to Qnmix, and so on, until Qmix's total bandwidth stays below the warning threshold. It can be understood that once P_max-1 is received by Qnmix and Qmix's total bandwidth stays below the threshold, that is, (P_max-n + ... + P_max-2) is less than the warning threshold X_mix1mt, the packet flows P_max and P_max-1 are received by Qnmix while the remaining flows are still received by Qmix.
When the total bandwidth of the protocol packet flows entering Qnmix exceeds Qnmix's initial bandwidth (for example, the initial bandwidth is X_mix0 and (P_max + P_max-1) > X_mix0), Qnmix's bandwidth is dynamically expanded, for example by X_mix0 each time, until (P_max + P_max-1) < X_newmix is satisfied within a unit time, where X_newmix is the expanded bandwidth of Qnmix. When the expansion reaches the upper bandwidth limit (in this embodiment the upper limit is 5 times the initial bandwidth, that is, X_newmix is less than or equal to 5 times X_mix0), the bandwidth of Qnmix is no longer expanded, Qnmix no longer accepts new protocol packet flows, and newly received protocol packet flows are discarded.
The monitoring module computes in real time the average output-flow rates of the protocol A and protocol X packet flows of each port and calculates the sum of the total traffic arriving at the protocol packet processing queues per unit time, such as the bandwidth sum of the protocol packet flows arriving at Qmix and Qnmix per unit time. When that bandwidth sum stays below the warning threshold per unit time, the scheduling control module routes subsequently received protocol packet flows back to Qmix, and deletes Qnmix after all the packets originally received in Qnmix have been processed. It can be understood that the protocol packets of a certain port originally entered Qnmix for processing; once the bandwidth sum of the Qmix and Qnmix protocol packet flows is below the warning threshold per unit time, that port's protocol packets are again received by Qmix, packets that have already arrived at Qnmix are still processed in Qnmix, and Qnmix is deleted after the processing is completed.
Fig. 5 is a schematic diagram of protocol packet processing according to another embodiment of the present application, showing the case where, with multiple ports on the same board, one protocol corresponds to one protocol packet processing queue while several protocols correspond to another shared processing queue at the same time. As shown in Fig. 5, different ports output packet flows of different protocols, or one port outputs packet flows of several different protocols: some ports, such as Port 1 and Port 2, output the packet flows of protocol A and protocol B, while other ports, such as Port n, output the packet flows of protocol A and protocol X. Correspondingly, there are two protocol packet processing queues: protocol A's processing queue Qa, and the processing queue Qmix for the mixed packet flows containing protocol B, protocol X and so on. Qa receives protocol A's packet flows, and Qmix receives the packet flows of protocol B and protocol X.
With further reference to Fig. 2, the monitoring module computes in real time the average output-flow rates of the protocol A packet flows input from each port to Qa and of the protocol B and protocol X packet flows input from each port to Qmix. Correspondingly, the flows input to Qa are sorted, the flow with the largest average output-flow rate being denoted A_max; likewise, the flows input to Qmix are sorted, the flow with the largest average output-flow rate being denoted P_max. At the same time, the total traffic arriving at Qa and the total traffic arriving at Qmix per unit time are calculated, giving total bandwidth A and total bandwidth P. The sorted average output-flow rate information of the port protocol packet flows and the total bandwidth information are passed to the scheduling control module.
When total bandwidth A or total bandwidth P reaches or exceeds the warning threshold within a unit time, a new protocol packet processing queue Qnmix is created. When total bandwidth A reaches or exceeds the warning threshold, the packet flow A_max is switched to Qnmix; if total bandwidth A still reaches or exceeds the warning threshold, the next-largest packet flow is switched to Qnmix, until total bandwidth A stays below the warning threshold. When total bandwidth P reaches or exceeds the warning threshold, the packet flow P_max is switched to Qnmix; if total bandwidth P still reaches or exceeds the warning threshold, the next-largest packet flow is switched to Qnmix, until total bandwidth P stays below the warning threshold.
Likewise, Qnmix's bandwidth is dynamically expandable: when the bandwidth of the packet flows received by Qnmix exceeds Qnmix's initial bandwidth, Qnmix's bandwidth is dynamically expanded. It can be understood that the bandwidth of the packet flows received by Qnmix is the sum of the bandwidths of the packet flows input to Qnmix.
It should be noted that Qnmix is newly created when at least one of Qa and Qmix reaches or exceeds the warning threshold, but there is only one Qnmix: once Qnmix has been created, the largest-rate protocol packet flow at the moment the threshold is reached enters this same processing queue, whether it comes from Qa or from Qmix. Qnmix is deleted only when the bandwidth sum of the flows arriving at Qa and Qnmix per unit time and the bandwidth sum of the flows arriving at Qmix and Qnmix per unit time are both below the warning threshold.
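The create/keep/delete life cycle of the single shared queue Qnmix can be sketched as follows (an illustrative assumption; the application states only the conditions, not this structure):

```python
def qnmix_state(qa_rates, qmix_rates, qnmix_rates, thr_a, thr_p):
    """Decide whether the shared overflow queue Qnmix is needed.

    Qnmix is created when either primary queue reaches its warning threshold,
    and deleted only when BOTH (Qa + Qnmix) and (Qmix + Qnmix) bandwidth sums
    are below their thresholds. Returns one of 'create', 'keep', 'delete'."""
    qnmix_bw = sum(qnmix_rates.values())
    if not qnmix_rates:  # no overflow queue yet (or it is empty)
        over_a = sum(qa_rates.values()) >= thr_a
        over_p = sum(qmix_rates.values()) >= thr_p
        return "create" if (over_a or over_p) else "keep"
    ok_a = sum(qa_rates.values()) + qnmix_bw < thr_a
    ok_p = sum(qmix_rates.values()) + qnmix_bw < thr_p
    return "delete" if (ok_a and ok_p) else "keep"

# Qa over threshold, no Qnmix yet -> create the shared queue
state = qnmix_state({"p1": 2.0}, {"p2": 0.5}, {}, 1.8, 1.8)  # -> "create"
```

The both-sums condition prevents deleting Qnmix while either primary queue would immediately overflow again if its flows were merged back.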
In another embodiment of the present application, for the case where multiple protocols are output simultaneously on multiple ports of the same board, the following scheme can also be used: different protocol packets are input to different protocol packet processing queues. For example, protocol A's packet flows are input to the processing queue Qa, and protocol B's packet flows to the processing queue Qb. When the bandwidth of the protocol A packet flows received by Qa reaches or exceeds the warning threshold, a new processing queue Qna is created and the largest-rate packet flow is switched to Qna; likewise, when the bandwidth of the protocol B packet flows received by Qb reaches or exceeds the warning threshold, a new processing queue Qnb is created and the largest-rate packet flow is switched to Qnb. The processing follows the flow of the embodiment shown in Fig. 3.
In another embodiment of the present application, for the case where multiple protocols are output simultaneously on multiple ports of the same board, for example where the packet flows of protocol A and protocol X are input to the processing queue Qamix and the packet flows of protocol B and protocol X to the processing queue Qbmix: when the bandwidth of the protocol A and protocol X packet flows received by Qamix reaches or exceeds the warning threshold, a new processing queue Qnamix is created and the largest-rate packet flow is switched to Qnamix; likewise, when the bandwidth of the protocol B and protocol X packet flows received by Qbmix reaches or exceeds the warning threshold, a new processing queue Qnbmix is created and the largest-rate packet flow is switched to Qnbmix. The processing follows the flow of the embodiment shown in Fig. 4.
Fig. 6 is a schematic diagram of protocol packet processing according to another embodiment of the present application, showing the case where one or more ports on different boards output packet flows of the same protocol simultaneously. In this case, the protocol output processing of each service board is regarded as a whole, its output is treated as the output of one logical port, and within each service board the protocol packets are processed according to the flow of the embodiment shown in Fig. 3. Each logical port outputs packet flows of the same protocol, such as the packet flows of protocol A, which enter one protocol packet processing queue. The monitoring module, the scheduling control module and the learning prediction module are then located on the main control board.
The processing for this situation is therefore similar to the flow of the embodiment shown in Fig. 3, but operates on logical ports.
The protocol A packet flows are output simultaneously on logical port 1, logical port 2, ..., logical port n, and enter the protocol packet processing queue Qa. The monitoring module computes the average output-flow rate of each logical port and sorts the average output-flow rates within each unit time, denoting the largest-rate packet flow A_max, the second-largest A_max-1, and so on, with the flow with the smallest average rate denoted A_max-n. At the same time, the total traffic Σ(A_i) arriving at Qa per unit time is calculated to obtain the total bandwidth. The sorted average output-flow rate information of the logical-port protocol packet flows and the total bandwidth information are passed to the scheduling control module.
The scheduling control module obtains the sorted average output-flow rate information and total bandwidth information of each logical port in real time. While the total bandwidth does not exceed the warning threshold, protocol A's packet flows enter Qa by default; when Qa's total bandwidth reaches or exceeds the warning threshold for a certain period, a new protocol packet processing queue Qna is created. When, within a unit time, the total traffic entering Qa, (A_max-n + ... + A_max-1 + A_max), is greater than the warning threshold X_1mt, the scheduling control module switches the largest-rate packet flow A_max to be received by Qna; if the total bandwidth still reaches or exceeds the warning threshold, the second-largest flow A_max-1 is also switched to Qna, and so on, until Qa's total bandwidth stays below the warning threshold. It can be understood that once A_max-1 is received by Qna and Qa's total bandwidth stays below the threshold, that is, (A_max-n + ... + A_max-2) is less than the warning threshold X_1mt, the packet flows A_max and A_max-1 are received by Qna while the remaining flows are still received by Qa.
When the total bandwidth of the protocol packet flows entering Qna exceeds Qna's initial bandwidth (for example, the initial bandwidth is X_0 and (A_max + A_max-1) > X_0), Qna's bandwidth is dynamically expanded, for example by X_0 each time, until (A_max + A_max-1) < X_new is satisfied within a unit time, where X_new is the expanded bandwidth of Qna. When the expansion reaches the upper bandwidth limit (in this embodiment the upper limit is 5 times the initial bandwidth, that is, X_new is less than or equal to 5 times X_0), the bandwidth of Qna is no longer expanded, Qna no longer accepts new protocol packet flows, and newly received protocol packet flows are discarded.
The monitoring module computes in real time the average output-flow rate of the protocol A packet flow of each logical port and calculates the sum of the total traffic arriving at the protocol packet processing queues per unit time, such as the bandwidth sum of the protocol packet flows arriving at Qa and Qna per unit time. When that bandwidth sum stays below the warning threshold per unit time, the scheduling control module routes subsequently received protocol packet flows back to Qa, and deletes Qna after all the packets originally received in Qna have been processed. It can be understood that the protocol packets of a certain logical port originally entered Qna for processing; once the bandwidth sum of the Qa and Qna protocol packet flows is below the warning threshold per unit time, that logical port's protocol packets are again received by Qa, packets that have already arrived at Qna are still processed in Qna, and Qna is deleted after the processing is completed.
It can be understood that in this embodiment, for packet flows of the same protocol output on different service boards, the protocol packets within a service board are processed according to the flow shown in Fig. 3, while a service board is regarded as a whole, acting as one logical port that outputs protocol packets.
An embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the method of the above embodiments.
In the case of a multi-port burst of high-traffic protocol packets, the embodiments of the present application adaptively adjust queue bandwidth through a newly created protocol packet processing queue and dynamic expansion, effectively solving the problem that no single port exceeds its traffic limit while the total protocol traffic output by multiple ports does. This copes with the impact of legal traffic bursts and also effectively guards against malicious DoS attacks.
As a non-transitory computer-readable storage medium, the memory can be used to store non-transitory software programs and non-transitory computer-executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices. In some implementations, the memory may include memory located remotely from the processor, and such remote memory may be connected to the processor via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The mobile communication device embodiments described above are only illustrative; units described as separate components may or may not be physically separated, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will understand that all or some of the steps and systems in the methods disclosed above may be implemented as software, firmware, hardware, and appropriate combinations thereof. Some or all physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor or a microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media.
The preferred implementations of the present application have been described in detail above, but the present application is not limited to the above embodiments. Those familiar with the art may make various equivalent modifications or substitutions without departing from the solution of the present application, and these equivalent modifications or substitutions are all included within the scope defined by the claims of the present application.

Claims (10)

  1. A method for processing high-traffic protocol packets, comprising:
    obtaining and computing the average output-flow rate of each kind of protocol packet on each port per unit time;
    calculating the total traffic bandwidth of the protocol packets received by a first protocol packet processing queue;
    creating a second protocol packet processing queue when the total traffic bandwidth reaches or exceeds a warning threshold; and
    switching the protocol packets with the largest average output-flow rate in the first protocol packet processing queue to be received by the second protocol packet processing queue.
  2. The method for processing high-traffic protocol packets according to claim 1, further comprising:
    when, after a protocol packet switch, the total traffic bandwidth of the first protocol packet processing queue reaches or exceeds the warning threshold, switching the protocol packets with the currently largest average output-flow rate in the first protocol packet processing queue to be received by the second protocol packet processing queue, until the total traffic bandwidth is less than the warning threshold.
  3. The method for processing high-traffic protocol packets according to claim 1 or 2, further comprising:
    when the sum of the traffic bandwidths of the protocol packets received per unit time by the first protocol packet processing queue and the second protocol packet processing queue is less than the warning threshold, causing the second protocol packet processing queue to stop receiving new protocol packets, and redirecting new protocol packets to be received by the first protocol packet processing queue.
  4. The method for processing high-traffic protocol packets according to claim 3, wherein the second protocol packet processing queue is deleted when the protocol packets in the second protocol packet processing queue have been processed.
  5. The method for processing high-traffic protocol packets according to claim 1, further comprising:
    the bandwidth of the second protocol packet processing queue being dynamically expandable, wherein when the bandwidth of the second protocol packet processing queue is expanded to an upper bandwidth limit, the second protocol packet processing queue discards newly received protocol packets.
  6. The method for processing high-traffic protocol packets according to claim 5, wherein the upper bandwidth limit of the second protocol packet processing queue is determined according to system resources.
  7. The method for processing high-traffic protocol packets according to claim 1, wherein obtaining and computing the average output-flow rate of each kind of protocol packet on each port per unit time comprises:
    computing the average output-flow rate of each kind of protocol packet on each port in real time, and sorting the protocol packets within each unit time using the average output-flow rate as the key.
  8. The method for processing high-traffic protocol packets according to claim 1, wherein the initial bandwidth of the second protocol packet processing queue is determined according to an empirical value, the initial bandwidth of the second protocol packet processing queue is greater than or equal to the bandwidth of the first protocol packet processing queue, and the priority of the second protocol packet processing queue is less than or equal to the priority of the first protocol packet processing queue.
  9. A system for processing high-traffic protocol packets, comprising a monitoring module, a scheduling control module and a learning prediction module, wherein:
    the monitoring module is configured to monitor the protocol packet traffic of each port, compute the average output-flow rate of each kind of protocol packet on each port per unit time, calculate the bandwidth of the traffic entering the protocol packet processing queues per unit time, and feed traffic parameter information back to the scheduling control module and the learning prediction module;
    the scheduling control module is responsible for creating protocol packet processing queues, switching protocol packets, and expanding and deleting protocol packet processing queues; and
    the learning prediction module decides the bandwidth required by a newly created protocol packet processing queue according to the real-time traffic parameter information and the system resource situation, and updates an empirical value according to the bandwidth parameters of the latest protocol packet processing queue.
  10. A computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions are used to execute the method for processing high-traffic protocol packets according to any one of claims 1 to 8.
PCT/CN2022/103936 2021-07-16 2022-07-05 Method, system and storage medium for processing high-traffic protocol packets WO2023284590A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110807384.8A CN115622922A (zh) 2021-07-16 2021-07-16 Method, system and storage medium for processing high-traffic protocol packets
CN202110807384.8 2021-07-16

Publications (1)

Publication Number Publication Date
WO2023284590A1 true WO2023284590A1 (zh) 2023-01-19

Family

ID=84854738

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/103936 WO2023284590A1 (zh) Method, system and storage medium for processing high-traffic protocol packets

Country Status (2)

Country Link
CN (1) CN115622922A (zh)
WO (1) WO2023284590A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1953421A (zh) * 2006-11-21 2007-04-25 Huawei Technologies Co., Ltd. Method and apparatus for bandwidth reservation based on a network device
CN101447929A (zh) * 2008-12-26 2009-06-03 Huawei Technologies Co., Ltd. Traffic routing method, router and communication system
CN103457881A (zh) * 2012-06-01 2013-12-18 Broadcom Corporation System for performing cut-through forwarding of data
CN105227481A (zh) * 2015-09-02 2016-01-06 Chongqing University of Posts and Telecommunications SDN congestion control routing method based on minimization of path cost and flow scheduling cost


Also Published As

Publication number Publication date
CN115622922A (zh) 2023-01-17

Similar Documents

Publication Publication Date Title
Feng et al. BLUE: A new class of active queue management algorithms
US7616572B2 (en) Call admission control/session management based on N source to destination severity levels for IP networks
Ahammed et al. Anakyzing the performance of active queue management algorithms
US7274666B2 (en) Method and system for managing traffic within a data communication network
US7916718B2 (en) Flow and congestion control in switch architectures for multi-hop, memory efficient fabrics
CN111788803B (zh) 网络中的流管理
US8665892B2 (en) Method and system for adaptive queue and buffer control based on monitoring in a packet network switch
US8443444B2 (en) Mitigating low-rate denial-of-service attacks in packet-switched networks
US11388114B2 (en) Packet processing method and apparatus, communications device, and switching circuit
US7286552B1 (en) Method and apparatus for providing quality of service across a switched backplane for multicast packets
CN115150334A (zh) 基于时间敏感网络的数据传输方法、装置及通信设备
CN112104564A (zh) 一种负载分担方法及设备
WO2017000861A1 (zh) 交换机虚拟局域网中mac地址的学习方法及装置
WO2023284590A1 (zh) 处理大流量协议报文的方法、系统及存储介质
Ceco et al. Performance comparison of active queue management algorithms
US20050223056A1 (en) Method and system for controlling dataflow to a central system from distributed systems
Zhu et al. A novel frame aggregation scheduler to solve the head-of-line blocking problem for real-time udp traffic in aggregation-enabled WLANs
Turner et al. An approach for congestion control in InfiniBand
Su et al. QoS guarantee for IPTV using low latency queuing with various dropping schemes
CN110300069B (zh) 数据传输方法、优化装置及系统
KR100603570B1 (ko) 네트워크 혼잡 제어 장치 및 방법
سميرة حمد محمود حمد Simulation Modeling and Performance Comparison of RED and ERED Algorithms using Congestion Indicators
Teijeiro-Ruiz et al. On fair bandwidth sharing with RED
Montaser et al. RED-Based Technique for Detecting and Avoiding Anomaly Network Congestion
Bommisetti et al. Extended ECN mechanism to mitigate ECN-based attacks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22841227

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE