WO2016206520A1 - Method and apparatus for implementing flow table traversal service - Google Patents

Method and apparatus for implementing flow table traversal service Download PDF

Info

Publication number
WO2016206520A1
Authority
WO
WIPO (PCT)
Prior art keywords
service
flow table
module
processing
control module
Prior art date
Application number
PCT/CN2016/083703
Other languages
French (fr)
Chinese (zh)
Inventor
路鹏
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2016206520A1 publication Critical patent/WO2016206520A1/en

Links

Images

Definitions

  • This document relates to, but is not limited to, the field of data communication technology, and in particular, to a method and device for implementing a flow table traversal service.
  • in the traditional solution, the CPU directly accesses the storage device that stores the flow table; constrained by the CPU's processing capability and the speed of the PCI (Peripheral Component Interconnect) bus, efficiency is low when there are many entries.
  • This document provides a method and device for implementing a flow table traversal service, so as to improve the processing efficiency of the flow table traversal service.
  • the method for implementing the flow table traversal service includes: a control module obtains the service parameters of the flow table traversal service and sends them to a co-processing module; the co-processing module controls each network processor within it to process the flow table traversal service according to the service parameters.
  • the method further includes: the control module acquiring a processing result of the coprocessing module.
  • the processing result of the control module acquiring the coprocessing module includes: the coprocessing module sends the processing result to the storage module, and the control module obtains the processing result from the storage module.
  • the method further includes: the control module performs statistics according to the processing result, and displays the statistical result to the user.
  • the co-processing module controlling each network processor to process the flow table traversal service according to the service parameters includes: the scheduling core in the network processor schedules the flow table to each service core according to the service parameters, and each service core processes the flow table scheduled to it.
  • this document further provides a device for implementing a flow table traversal service, including: a control module, configured to acquire the service parameters of the flow table traversal service and send them to a co-processing module; and the co-processing module, configured to control each network processor within it to process the flow table traversal service according to the service parameters.
  • the control module is further configured to obtain a processing result of the co-processing module.
  • the device further includes a storage module
  • the coprocessing module is further configured to send the processing result to the storage module;
  • the control module is further configured to obtain a processing result from the storage module.
  • the control module is further configured to perform statistics according to the processing result, and display the statistical result to the user.
  • the co-processing module is configured to control each network processor within it to process the flow table traversal service according to the service parameters in the following manner: the scheduling core in the network processor schedules the flow table to each service core according to the service parameters, and each service core processes the flow table scheduled to it.
  • the embodiment of the invention further provides a computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the above method.
  • the embodiment of the invention provides a method for implementing a flow table traversal service that uses a network processor to perform the flow table traversal service. Various algorithms can be implemented in hardware, and extremely high lookup and forwarding performance ("hard forwarding") can be achieved while complex congestion management, queue scheduling, flow classification, and QoS functions are implemented. Compared with pure hardware chips, the network processor fully supports programming and the programming model is simple; once new technologies or requirements appear, they can be conveniently implemented by microcode programming. In addition, the network processor is scalable: multiple network processors can be interconnected to form a network processor cluster to support larger-scale, higher-speed network processing. This effectively solves the low-efficiency problem of the related-art solutions.
  • FIG. 1 is a schematic structural diagram of an apparatus for implementing a flow table traversal service according to a first embodiment of the present invention.
  • FIG. 2 is a flowchart of a method for implementing a flow table traversal service according to a second embodiment of the present invention.
  • FIG. 3 is a flowchart of a method for implementing a flow table traversal service according to a third embodiment of the present invention.
  • the apparatus 1 for implementing a flow table traversal service includes:
  • the control module 11 is configured to acquire the service parameters of the flow table traversal service and send them to the co-processing module 12;
  • the co-processing module 12 is configured to control each network processor within it to process the flow table traversal service according to the service parameters.
  • the control module 11 in the above embodiment is further configured to obtain the processing result of the co-processing module 12.
  • the implementation apparatus 1 in the above embodiment further includes a storage module 13; the co-processing module 12 is further configured to send the processing result to the storage module 13, and the control module 11 is further configured to obtain the processing result from the storage module 13.
  • the control module 11 in the above embodiment is further configured to perform statistics according to the processing result and display the statistical result to the user.
  • the co-processing module 12 in the above embodiment is configured to control each network processor within it to process the flow table traversal service according to the service parameters in the following manner: the scheduling core in the network processor schedules the flow table to each service core according to the service parameters, and each service core processes the flow table scheduled to it.
  • FIG. 2 is a flowchart of an implementation method according to a second embodiment of the present invention. As shown in FIG. 2, in this embodiment, a method for implementing a flow table traversal service includes the following steps:
  • the control module obtains the service parameters of the flow table traversal service and sends them to the co-processing module;
  • the co-processing module controls each network processor within it to process the flow table traversal service according to the service parameters.
  • the above embodiment further includes: the control module acquiring a processing result of the co-processing module.
  • the obtaining, by the control module in the foregoing embodiment, the processing result of the co-processing module includes: the co-processing module sends the processing result to the storage module, and the control module acquires the processing result from the storage module.
  • the above embodiment further includes: the control module performs statistics according to the processing result, and displays the statistical result to the user.
  • the co-processing module in the foregoing embodiment controlling each network processor to process the flow table traversal service according to the service parameters includes: the scheduling core in the network processor schedules the flow table to each service core according to the service parameters, and each service core processes the flow table scheduled to it.
  • FIG. 3 is a flowchart of an implementation method according to a third embodiment of the present invention. As shown in FIG. 3, in the embodiment, the method for implementing a flow table traversal service includes the following steps:
  • the control module obtains the service parameters and sends them to the co-processing module.
  • the user issues a service request; the control module receives the service request sent by the user, sends the obtained parameters to the underlying co-processing module in the form of a message, and at the same time monitors the processing progress of the co-processing module.
  • the coprocessing module processes the flow table traversal service.
  • the two network processors of the co-processing module each receive the statistics message sent by the control module, parse out the parameters, and perform the corresponding service processing according to the user's requirements.
  • compared with direct traversal by a traditional CPU, the flow table service processing here uses multi-core parallel processing: one core is responsible for command scheduling, and the other cores traverse the flow table to handle the corresponding service; instead of one core traversing all flow tables, the whole flow table is divided among the service-processing cores to improve efficiency. Finally, the result is either written to a peripheral storage device or sent directly to the control module in the form of a message.
  • the flow table traversal process is handled by the co-processing module. With the total number of flow table entries set to M, the design uses multiple cores to process the entire flow table traversal in parallel: one core is used for command scheduling, and the remaining L cores are used for specific service processing.
  • when the command-scheduling core receives the service message sent by the upper-layer control module, it first performs initialization, including setting a global control variable index_cnt; the value of index_cnt indicates the service processing progress. When index_cnt equals L, the other cores are notified to start traversing the flow table.
  • to improve efficiency, the cores that do service processing do not each traverse all the flow tables; instead, the entire flow table is divided into L blocks. The division is approximately, not exactly, equal: the principle is that the flow table blocks handled by the cores do not overlap, all flow tables are covered, and the workload of each service core is kept as even as possible to optimize overall efficiency.
  • the L service cores process the flow table blocks they are respectively responsible for in parallel; when they are done, the scheduling core integrates the processing results of all service cores and either reports the final result directly to the control module or writes it to the peripheral storage device. If the result is written to the peripheral storage device, the scheduling core also needs to notify the upper-layer control module that service processing has been completed.
  • when the control module detects that the co-processing module has completed its statistics, it obtains the results of the two network processors, reprocesses them, and feeds the final result back to the upper-layer user.
  • TOP-N statistics are taken as a specific example. TOP-N statistics is a more complex embodiment based on the flow table traversal described above; it can quickly count traffic ranking information based on IP addresses, protocols, and so on, and display the top 10, top 20, and top 50 respectively.
  • the TOP-N statistics include the following steps:
  • the user sends the flow table service to the control module through command line parameter configuration.
  • when the control module receives the user's statistics request, it parses out the relevant parameters, constructs them into messages, and sends them in message form to the two network processors (NPs) of the co-processing module.
  • in each NP, the core responsible for command scheduling obtains the TOP-N statistics message sent by the upper layer and performs the initialization process: the global variable index_cnt is set to L (the number of service cores), and the whole flow table is divided into L parts.
  • the service cores then start to traverse the flow table blocks they are each responsible for.
  • a hash linked list is built based on each flow table entry and the delivered parameters, and all flow table entries are counted against those parameters. Before a HASH table is created, its specification is specified, for example 2 to the power of M entries. According to the parameters obtained from the message, the corresponding flow table information is taken out of the flow table as a keyword, a HASH operation produces the hash value H, and bits 0 to (M-1) of H are used as the index IDX of the INDEX table corresponding to the HASH entry. If there is no matching INDEX entry, a new INDEX table and the corresponding ENTRY table need to be created; if there is a matching INDEX table, the value in the INDEX table maps to its corresponding ENTRY table. Because HASH conflicts may occur, after the ENTRY table is fetched the keyword is also compared for an exact match to confirm whether it is actually the entry to be indexed.
  • when the core responsible for command scheduling detects that index_cnt is 0, the service cores have completed the statistics service. Because the HASH tables are relatively large after the statistics are completed, the result cannot be fed back to the upper-layer control module immediately; instead, all the HASH tables are saved in the peripheral storage device, and the upper-layer control module is notified that the underlying statistics service is complete.
  • the respective HASH statistics linked lists are read directly from the peripheral storage devices attached to the NPs. Because the statistical results of the two network processors must be merged into one, the HASH chain established by one network processor in the peripheral storage device can be read in one block, and the HASH statistics chain of the other network processor is then traversed on this basis: using the same HASH construction method as the lower layer, entries are either accumulated or newly created, so that the statistical results of the two network processors are eventually merged into a single HASH chain stored in dynamically allocated memory. Finally, this HASH chain is traversed, and the TOP-N statistical results, obtained by binary sorting, are fed back to the upper-layer user.
  • an embodiment of the present invention further provides a computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the above method.
  • the network processor is used to perform the flow table traversal service, and various algorithms can be implemented in hardware. While implementing complex congestion management, queue scheduling, flow classification, and QoS functions, extremely high lookup and forwarding performance ("hard forwarding") can still be achieved. Compared with pure hardware chips, the network processor fully supports programming and the programming model is simple; once new technologies or requirements appear, they can be easily implemented by microcode programming. In addition, the network processor is scalable: multiple network processors can be interconnected to form a network processor cluster to support larger-scale, higher-speed network processing, which improves the processing efficiency of the flow table traversal service.
  • each module/unit in the above embodiments may be implemented in the form of hardware, for example by an integrated circuit that implements its corresponding function, or in the form of a software functional module, for example by a processor executing program instructions stored in a memory to implement the corresponding function. This application is not limited to any specific combination of hardware and software.
  • the technical solution provided by the embodiments of the present invention uses a network processor to perform the flow table traversal service, and various algorithms can be implemented in hardware.
  • while implementing complex congestion management, queue scheduling, flow classification, and QoS functions, the network processor can still achieve extremely high lookup and forwarding performance ("hard forwarding"). Compared with pure hardware chips, the network processor fully supports programming and the programming model is simple; once new technologies or requirements appear, they can be easily implemented by microcode programming.
  • the network processor is also scalable: multiple network processors can be interconnected to form a network processor cluster to support larger-scale, higher-speed network processing, which improves the processing efficiency of the flow table traversal service.

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Provided is a method for implementing a flow table traversal service. The method comprises: a control module acquires a service parameter of a flow table traversal service, and delivers the service parameter to a coprocessor module; and the coprocessor module controls network processors in the coprocessor module to process the flow table traversal service according to the service parameter.

Description

Method and Device for Implementing Flow Table Traversal Service
Technical Field
This document relates to, but is not limited to, the field of data communication technology, and in particular to a method and device for implementing a flow table traversal service.
Background Art
With the explosive growth of the Internet, network traffic has increased dramatically and new services keep emerging. Data communication devices that describe the context of each data packet by establishing a flow table create a large number of session entries, usually on the order of ten million or more, and how to manage this large number of flow entries efficiently has become a problem that communication equipment must solve.
In the traditional solution, the CPU directly accesses the storage device that stores the flow table. Constrained by the CPU's processing capability and the speed of the PCI (Peripheral Component Interconnect) bus, efficiency is low when there are many entries.
Therefore, how to provide an implementation of the flow table traversal service that can improve efficiency is a technical problem to be solved urgently by those skilled in the art.
Summary of the Invention
The following is an overview of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
This document provides a method and device for implementing a flow table traversal service, so as to improve the processing efficiency of the flow table traversal service.
This document provides a method for implementing a flow table traversal service, including: a control module obtains service parameters of the flow table traversal service and sends them to a co-processing module; and the co-processing module controls each network processor within it to process the flow table traversal service according to the service parameters.
Optionally, the method further includes: the control module acquiring a processing result of the co-processing module.
Optionally, the control module acquiring the processing result of the co-processing module includes: the co-processing module sends the processing result to a storage module, and the control module acquires the processing result from the storage module.
Optionally, the method further includes: the control module performing statistics according to the processing result and presenting the statistical result to the user.
Optionally, the co-processing module controlling each network processor within it to process the flow table traversal service according to the service parameters includes: a scheduling core in the network processor schedules the flow table to each service core according to the service parameters, and each service core processes the flow table scheduled to it.
This document also provides a device for implementing a flow table traversal service, including: a control module, configured to acquire service parameters of the flow table traversal service and send them to a co-processing module; and the co-processing module, configured to control each network processor within it to process the flow table traversal service according to the service parameters.
Optionally, the control module is further configured to acquire a processing result of the co-processing module.
Optionally, the device further includes a storage module;
the co-processing module is further configured to send the processing result to the storage module; and
the control module is further configured to acquire the processing result from the storage module.
Optionally, the control module is further configured to perform statistics according to the processing result and present the statistical result to the user.
Optionally, the co-processing module is configured to control each network processor within it to process the flow table traversal service according to the service parameters in the following manner: a scheduling core in the network processor schedules the flow table to each service core according to the service parameters, and each service core processes the flow table scheduled to it.
An embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the above method.
An embodiment of the present invention provides a method for implementing a flow table traversal service that uses a network processor to perform the flow table traversal service. Various algorithms can be implemented in hardware, and extremely high lookup and forwarding performance ("hard forwarding") can be achieved while complex congestion management, queue scheduling, flow classification, and QoS functions are implemented. Compared with pure hardware chips, the network processor fully supports programming and the programming model is simple; once new technologies or requirements appear, they can be conveniently implemented by microcode programming. In addition, the network processor is scalable: multiple network processors can be interconnected to form a network processor cluster to support larger-scale, higher-speed network processing. This effectively solves the low-efficiency problem of the related-art solutions.
Other aspects will be apparent upon reading and understanding the drawings and the detailed description.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a device for implementing a flow table traversal service according to a first embodiment of the present invention.
FIG. 2 is a flowchart of a method for implementing a flow table traversal service according to a second embodiment of the present invention.
FIG. 3 is a flowchart of a method for implementing a flow table traversal service according to a third embodiment of the present invention.
Embodiments of the Invention
Embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be noted that, where there is no conflict, the embodiments in this application and the features in the embodiments may be combined with each other arbitrarily.
The present invention is further explained below through specific embodiments in conjunction with the accompanying drawings.
First Embodiment:
FIG. 1 is a schematic structural diagram of the implementation device according to the first embodiment of the present invention. As shown in FIG. 1, in this embodiment, the device 1 for implementing a flow table traversal service includes:
a control module 11, configured to acquire service parameters of the flow table traversal service and send them to a co-processing module 12; and
the co-processing module 12, configured to control each network processor within it to process the flow table traversal service according to the service parameters.
In some embodiments, the control module 11 in the above embodiment is further configured to acquire a processing result of the co-processing module 12.
In some embodiments, as shown in FIG. 1, the implementation device 1 in the above embodiment further includes a storage module 13; the co-processing module 12 is further configured to send the processing result to the storage module 13, and the control module 11 is further configured to acquire the processing result from the storage module 13.
In some embodiments, the control module 11 in the above embodiment is further configured to perform statistics according to the processing result and present the statistical result to the user.
In some embodiments, the co-processing module 12 in the above embodiment is configured to control each network processor within it to process the flow table traversal service according to the service parameters in the following manner: a scheduling core in the network processor schedules the flow table to each service core according to the service parameters, and each service core processes the flow table scheduled to it.
Second Embodiment:
FIG. 2 is a flowchart of the implementation method according to the second embodiment of the present invention. As shown in FIG. 2, in this embodiment, the method for implementing a flow table traversal service includes the following steps:
S201: a control module acquires service parameters of the flow table traversal service and sends them to a co-processing module;
S202: the co-processing module controls each network processor within it to process the flow table traversal service according to the service parameters.
In some embodiments, the above embodiment further includes: the control module acquiring a processing result of the co-processing module.
In some embodiments, the control module acquiring the processing result of the co-processing module in the above embodiment includes: the co-processing module sends the processing result to a storage module, and the control module acquires the processing result from the storage module.
In some embodiments, the above embodiment further includes: the control module performing statistics according to the processing result and presenting the statistical result to the user.
In some embodiments, the co-processing module controlling each network processor within it to process the flow table traversal service according to the service parameters in the above embodiment includes: a scheduling core in the network processor schedules the flow table to each service core according to the service parameters, and each service core processes the flow table scheduled to it.
The present invention is further explained below in conjunction with a specific application example.
Third Embodiment:
FIG. 3 is a flowchart of the implementation method according to the third embodiment of the present invention. As shown in FIG. 3, in this embodiment, the method for implementing a flow table traversal service includes the following steps:
S301: the control module acquires the service parameters and sends them to the co-processing module.
The user issues a service request; the control module receives the service request issued by the user, sends the obtained parameters to the underlying co-processing module in the form of a message, and at the same time monitors the processing progress of the co-processing module.
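For illustration only, the sketch below shows one possible shape of the parameter message the control module could construct and deliver to both network processors in step S301. The message fields, the np_send() helper, and its stub body are assumptions made for this example; the patent does not specify a concrete message format.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical operations carried by the service-parameter message. */
enum traversal_op {
    OP_FULL_TRAVERSAL = 1,  /* plain traversal of all flow entries */
    OP_TOP_N          = 2,  /* TOP-N traffic ranking statistics    */
};

/* Hypothetical layout of the message sent by the control module. */
struct traversal_msg {
    uint16_t op;        /* which traversal service to run          */
    uint16_t key_type;  /* e.g. group by IP address or by protocol */
    uint32_t top_n;     /* 10, 20 or 50 for TOP-N statistics       */
};

/* Stub transport: in a real system this would deliver the message to
 * one network processor of the co-processing module. */
static int np_send(int np_id, const void *buf, uint32_t len)
{
    printf("sending %u bytes to NP%d\n", len, np_id);
    (void)buf;
    return 0;
}

/* Build the message from the parsed user parameters and send a copy to
 * both network processors; the control module then monitors progress. */
static void send_service_request(uint16_t op, uint16_t key_type, uint32_t top_n)
{
    struct traversal_msg msg;

    memset(&msg, 0, sizeof(msg));
    msg.op = op;
    msg.key_type = key_type;
    msg.top_n = top_n;

    np_send(0, &msg, sizeof(msg));  /* first NP  */
    np_send(1, &msg, sizeof(msg));  /* second NP */
}

int main(void)
{
    send_service_request(OP_TOP_N, /* key_type = */ 0, /* top_n = */ 10);
    return 0;
}
```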
S302: the co-processing module processes the flow table traversal service.
The two network processors of the co-processing module each receive the statistics message sent by the control module, parse out the parameters, and perform the corresponding service processing according to the user's requirements. Compared with direct traversal by a traditional CPU, this application uses multi-core parallel processing for the flow table service: one core is responsible for command scheduling, and the other cores traverse the flow table to handle the corresponding service; instead of one core traversing all flow tables, the whole flow table is divided among the service-processing cores to improve efficiency. Finally, the result is either written to a peripheral storage device or sent directly to the control module in the form of a message.
In this application, the flow table traversal process is handled by the co-processing module. With the total number of flow table entries set to M, multiple cores are used to process the entire flow table traversal in parallel: one core is used for command scheduling, and the remaining L cores are used for specific service processing. When the command-scheduling core receives the service message sent by the upper-layer control module, it first performs initialization, including setting a global control variable index_cnt whose value indicates the service processing progress. When index_cnt equals L, the other cores are notified to start traversing the flow table. At the same time, to improve efficiency, the cores that do service processing do not each traverse all the flow tables; instead, the entire flow table is divided into L blocks. The division is approximately, not exactly, equal: the principle is that the flow table blocks handled by the cores do not overlap, all flow tables are covered, and the workload of each service core is kept as even as possible to optimize overall efficiency. As shown in FIG. 3, assume there are M flow table entries and a = M/L, so that the i-th service core is responsible only for the flow table entries in the range (i-1)*a+1 to i*a. The L service cores process the flow table blocks they are respectively responsible for in parallel; when each service core finishes its block, it decrements the global control variable index_cnt by 1, and when index_cnt reaches 0 the entire service processing is complete. The scheduling core then integrates the processing results of all service cores and either reports the final result directly to the control module or writes it to the peripheral storage device; if the result is written to the peripheral storage device, the scheduling core also needs to notify the upper-layer control module that service processing has been completed.
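The following minimal, single-file sketch mirrors the parallel scheme just described, with ordinary POSIX threads standing in for the command-scheduling core and the L service cores, and index_cnt tracking how many service cores are still running. The values of M and L, the flow_entry layout, and process_entry() are assumptions for illustration; on a real network processor this logic would run on the NP's own cores.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define M 1000000   /* total number of flow entries (example value) */
#define L 7         /* number of service cores (example value)     */

struct flow_entry { unsigned long bytes; };

static struct flow_entry flow_table[M];
static atomic_int index_cnt;          /* service cores still running */
static unsigned long partial_sum[L];  /* per-core partial results    */

struct block { int core; long first, last; };  /* [first, last) */

static void process_entry(int core, const struct flow_entry *e)
{
    partial_sum[core] += e->bytes;    /* stand-in for real service work */
}

static void *service_core(void *arg)
{
    struct block *b = arg;

    for (long i = b->first; i < b->last; i++)
        process_entry(b->core, &flow_table[i]);

    atomic_fetch_sub(&index_cnt, 1);  /* tell the scheduler this core is done */
    return NULL;
}

int main(void)
{
    pthread_t tid[L];
    struct block blk[L];
    long a = M / L;                   /* approximately equal split */

    atomic_store(&index_cnt, L);      /* initialization by the scheduling core */

    for (int i = 0; i < L; i++) {
        blk[i].core  = i;
        blk[i].first = (long)i * a;
        blk[i].last  = (i == L - 1) ? M : (long)(i + 1) * a; /* no overlap, full coverage */
        pthread_create(&tid[i], NULL, service_core, &blk[i]);
    }

    for (int i = 0; i < L; i++)
        pthread_join(tid[i], NULL);

    /* index_cnt == 0: integrate the per-core results, as the scheduling
     * core does before reporting to the control module or storage. */
    unsigned long total = 0;
    for (int i = 0; i < L; i++)
        total += partial_sum[i];
    printf("index_cnt=%d, total=%lu\n", atomic_load(&index_cnt), total);
    return 0;
}
```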
S303: the control module performs statistics on the processing results.
When the control module detects that the co-processing module has completed its statistics, it obtains the results of the two network processors, reprocesses them, and feeds the final result back to the upper-layer user. TOP-N statistics are described below as a specific example. TOP-N statistics is a more complex embodiment based on the flow table traversal described above; it can quickly count traffic ranking information based on IP addresses, protocols, and so on, and display the top 10, top 20, and top 50 respectively.
The TOP-N statistics include the following steps:
The user delivers the flow table service to the control module through command-line parameter configuration.
When the control module receives the user's statistics request, it parses out the relevant parameters, constructs them into messages, and sends them in message form to the two network processors (NPs) of the co-processing module.
In each NP, the core responsible for command scheduling obtains the TOP-N statistics message sent by the upper layer and performs the initialization process: the global variable index_cnt is set to L (the number of service cores), and the whole flow table is divided into L parts. The service cores then start to traverse the flow table blocks they are each responsible for; based on each flow table entry and the delivered parameters, a hash linked list is built, so that all flow table entries are counted against the delivered parameters. Before a HASH table is created, its specification is specified, for example 2 to the power of M entries. According to the parameters obtained from the message, the corresponding flow table information is taken out of the flow table as a keyword, a HASH operation produces the hash value H, and bits 0 to (M-1) of H are used as the index IDX of the INDEX table corresponding to the HASH entry. If there is no matching INDEX entry, a new INDEX table and the corresponding ENTRY table need to be created. If there is a matching INDEX table, the value in the INDEX table maps to its corresponding ENTRY table; here, because HASH conflicts may occur, after the ENTRY table is fetched the keyword is also compared for an exact match to confirm whether it is actually the entry to be indexed. If it is, the statistics in the table are accumulated; if not, a new ENTRY table needs to be created, and its index value is saved in the previous ENTRY table that conflicts with it in the hash, so as to establish the link. After each service core finishes the service it is responsible for, it decrements index_cnt by 1. When the core responsible for command scheduling detects that index_cnt is 0, the service cores have completed the statistics service. Because the HASH tables are relatively large after the statistics are completed, the result cannot be fed back to the upper-layer control module immediately; instead, all the HASH tables are saved in the peripheral storage device, and the upper-layer control module is notified that the underlying statistics service is complete.
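As a sketch of the INDEX/ENTRY bookkeeping in this step, the code below accumulates a count per keyword in a hash table whose slot is selected by the low bits of the hash value, with colliding keywords linked into a chain of ENTRY records and an exact key comparison before accumulating. The key type, hash function, and table sizes are illustrative assumptions, and the conflict chain is linked at the head of the slot rather than through the previous conflicting ENTRY, which slightly simplifies the scheme described above.

```c
#include <stdint.h>
#include <string.h>

#define HASH_BITS  16                       /* "2 to the power of M" slots  */
#define INDEX_SIZE (1u << HASH_BITS)
#define MAX_ENTRY  (1u << 20)

struct entry {
    uint32_t key;       /* e.g. source IP address or protocol number      */
    uint64_t count;     /* accumulated statistic for this key             */
    int32_t  next;      /* next ENTRY on hash conflict, -1 = end of chain */
};

static int32_t      index_table[INDEX_SIZE]; /* INDEX table: -1 = empty slot */
static struct entry entry_table[MAX_ENTRY];  /* ENTRY table                  */
static uint32_t     entry_used;

static uint32_t hash32(uint32_t key)         /* stand-in HASH operation      */
{
    key ^= key >> 16;
    key *= 0x7feb352dU;
    key ^= key >> 15;
    return key;
}

void hash_tables_init(void)
{
    memset(index_table, 0xff, sizeof(index_table)); /* mark all slots empty */
    entry_used = 0;
}

/* Account one flow table entry: find (or create) the ENTRY for its keyword
 * and accumulate the statistic, matching the key exactly on conflicts. */
void hash_account(uint32_t key, uint64_t value)
{
    uint32_t idx = hash32(key) & (INDEX_SIZE - 1); /* bits 0..(M-1) as IDX */
    int32_t  e   = index_table[idx];

    while (e >= 0 && entry_table[e].key != key)    /* walk the conflict chain */
        e = entry_table[e].next;

    if (e >= 0) {                                  /* exact match: accumulate */
        entry_table[e].count += value;
        return;
    }
    if (entry_used >= MAX_ENTRY)                   /* table full: drop (sketch only) */
        return;
    e = (int32_t)entry_used++;                     /* create a new ENTRY */
    entry_table[e].key   = key;
    entry_table[e].count = value;
    entry_table[e].next  = index_table[idx];       /* link into the chain */
    index_table[idx]     = e;
}
```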
When the control module detects that both NP statistics are complete, it reads the respective HASH statistics linked lists directly from the peripheral storage devices attached to the NPs. Because the statistical results of the two network processors have to be merged into one, the implementation can read in one block the HASH chain established by one of the network processors in the peripheral storage device, and then traverse the HASH statistics chain of the other network processor on this basis: using the same HASH construction method as the lower layer, entries are either accumulated or newly created, so that the statistical results of the two network processors are finally merged into a single HASH chain stored in a dynamically allocated block of memory. Finally, this HASH chain is traversed, and the TOP-N statistical results, obtained by binary sorting, are fed back to the upper-layer user.
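The rough sketch below illustrates this final aggregation: the record lists read back from the two NPs are merged, counts for matching keys are accumulated, and the TOP-N records are reported. The record layout is an assumption for this example, and a plain qsort() stands in for the binary sorting mentioned above.

```c
#include <stdio.h>
#include <stdlib.h>

struct record { unsigned key; unsigned long count; };

/* Sort records by descending count. */
static int cmp_desc(const void *a, const void *b)
{
    const struct record *ra = a, *rb = b;
    if (ra->count == rb->count) return 0;
    return ra->count < rb->count ? 1 : -1;
}

/* Merge the list from NP1 into the list from NP0 (accumulating counts for
 * matching keys), sort, and print the top N records. */
static void report_top_n(const struct record *np0, size_t n0,
                         const struct record *np1, size_t n1,
                         size_t top_n)
{
    struct record *merged = malloc((n0 + n1) * sizeof(*merged));
    size_t m = n0;

    if (merged == NULL)
        return;
    for (size_t i = 0; i < n0; i++)
        merged[i] = np0[i];
    for (size_t j = 0; j < n1; j++) {
        size_t i;
        for (i = 0; i < m; i++) {
            if (merged[i].key == np1[j].key) {   /* same key: accumulate */
                merged[i].count += np1[j].count;
                break;
            }
        }
        if (i == m)                              /* new key: append */
            merged[m++] = np1[j];
    }

    qsort(merged, m, sizeof(*merged), cmp_desc);
    for (size_t i = 0; i < top_n && i < m; i++)
        printf("#%zu key=%u count=%lu\n", i + 1, merged[i].key, merged[i].count);
    free(merged);
}

int main(void)
{
    struct record np0[] = { {10, 500}, {20, 300}, {30, 100} };
    struct record np1[] = { {20, 250}, {40, 400} };

    /* Expected top 3: key 20 (550), key 10 (500), key 40 (400). */
    report_top_n(np0, 3, np1, 2, 3);
    return 0;
}
```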
In addition, an embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the above method.
In summary, the above embodiments of the present invention have at least the following beneficial effects:
A network processor is used to perform the flow table traversal service, and various algorithms can be implemented in hardware. While implementing complex congestion management, queue scheduling, flow classification, and QoS functions, extremely high lookup and forwarding performance ("hard forwarding") can still be achieved. Compared with pure hardware chips, the network processor fully supports programming and the programming model is simple; once new technologies or requirements appear, they can be easily implemented by microcode programming. In addition, the network processor is scalable: multiple network processors can be interconnected to form a network processor cluster to support larger-scale, higher-speed network processing, which improves the processing efficiency of the flow table traversal service.
Those of ordinary skill in the art will appreciate that all or part of the steps in the above method may be implemented by a program instructing the relevant hardware (for example, a processor), and the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc. Optionally, all or part of the steps of the above embodiments may also be implemented using one or more integrated circuits. Correspondingly, each module/unit in the above embodiments may be implemented in the form of hardware, for example by an integrated circuit that implements its corresponding function, or in the form of a software functional module, for example by a processor executing program instructions stored in a memory to implement the corresponding function. This application is not limited to any specific combination of hardware and software.
It should be noted that this application may also have various other embodiments. Without departing from the spirit and essence of this application, those skilled in the art can make various corresponding changes and modifications according to this application, but these corresponding changes and modifications shall all fall within the protection scope of the claims appended to this application.
Industrial Applicability
The technical solution provided by the embodiments of the present invention uses a network processor to perform the flow table traversal service, and various algorithms can be implemented in hardware. While implementing complex congestion management, queue scheduling, flow classification, and QoS functions, extremely high lookup and forwarding performance ("hard forwarding") can still be achieved. Compared with pure hardware chips, the network processor fully supports programming and the programming model is simple; once new technologies or requirements appear, they can be easily implemented by microcode programming. In addition, the network processor is scalable: multiple network processors can be interconnected to form a network processor cluster to support larger-scale, higher-speed network processing, which improves the processing efficiency of the flow table traversal service.

Claims (10)

  1. A method for implementing a flow table traversal service, comprising:
    acquiring, by a control module, service parameters of the flow table traversal service, and sending the service parameters to a co-processing module; and
    controlling, by the co-processing module, each network processor within the co-processing module to process the flow table traversal service according to the service parameters.
  2. The implementation method according to claim 1, wherein:
    the method further comprises: the control module acquiring a processing result of the co-processing module.
  3. The implementation method according to claim 2, wherein:
    the control module acquiring the processing result of the co-processing module comprises:
    the co-processing module sending the processing result to a storage module, and the control module acquiring the processing result from the storage module.
  4. The implementation method according to claim 2, wherein:
    the method further comprises: the control module performing statistics according to the processing result, and presenting a statistical result to a user.
  5. The implementation method according to any one of claims 1 to 4, wherein:
    the co-processing module controlling each network processor within the co-processing module to process the flow table traversal service according to the service parameters comprises:
    a scheduling core in the network processor scheduling the flow table to each service core according to the service parameters, and the service core processing the flow table scheduled to it.
  6. A device for implementing a flow table traversal service, comprising:
    a control module, configured to acquire service parameters of the flow table traversal service and send the service parameters to a co-processing module; and
    the co-processing module, configured to control each network processor within the co-processing module to process the flow table traversal service according to the service parameters.
  7. The implementation device according to claim 6, wherein:
    the control module is further configured to acquire a processing result of the co-processing module.
  8. The implementation device according to claim 7, wherein:
    the implementation device further comprises a storage module;
    the co-processing module is further configured to send the processing result to the storage module; and
    the control module is further configured to acquire the processing result from the storage module.
  9. The implementation device according to claim 7, wherein:
    the control module is further configured to perform statistics according to the processing result, and present a statistical result to a user.
  10. The implementation device according to any one of claims 6 to 9, wherein:
    the co-processing module is configured to control each network processor within the co-processing module to process the flow table traversal service according to the service parameters in the following manner: a scheduling core in the network processor schedules the flow table to each service core according to the service parameters, and the service core processes the flow table scheduled to it.
PCT/CN2016/083703 2015-06-26 2016-05-27 Method and apparatus for implementing flow table traversal service WO2016206520A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510364059.3 2015-06-26
CN201510364059.3A CN106330694A (en) 2015-06-26 2015-06-26 Method and device for realizing flow table traversal business

Publications (1)

Publication Number Publication Date
WO2016206520A1 true WO2016206520A1 (en) 2016-12-29

Family

ID=57584488

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/083703 WO2016206520A1 (en) 2015-06-26 2016-05-27 Method and apparatus for implementing flow table traversal service

Country Status (2)

Country Link
CN (1) CN106330694A (en)
WO (1) WO2016206520A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030214948A1 (en) * 2002-05-18 2003-11-20 Jin Seung-Eui Router providing differentiated quality of service (QoS) and fast internet protocol packet classifying method for the router
CN1937574A (en) * 2005-09-19 2007-03-28 北京大学 Network flow classifying, state tracking and message processing device and method
CN101282303A (en) * 2008-05-19 2008-10-08 杭州华三通信技术有限公司 Method and apparatus for processing service packet
CN102938000A (en) * 2012-12-06 2013-02-20 武汉烽火网络有限责任公司 Unlocked flow table routing lookup algorithm adopting high-speed parallel execution manner
CN103401777A (en) * 2013-08-21 2013-11-20 中国人民解放军国防科学技术大学 Parallel search method and system of Openflow
US20140241356A1 (en) * 2013-02-25 2014-08-28 Telefonaktiebolaget L M Ericsson (Publ) Method and system for flow table lookup parallelization in a software defined networking (sdn) system


Also Published As

Publication number Publication date
CN106330694A (en) 2017-01-11

Similar Documents

Publication Publication Date Title
US20200195740A1 (en) Subscribe and publish method and server
JP7463544B2 (en) Blockchain message processing method, apparatus, computer device, and computer program
US20230281041A1 (en) File operation task optimization
TWI430102B (en) Network adapter resources allocating method,storage medium,and computer
US9699276B2 (en) Data distribution method and system and data receiving apparatus
JP2021511588A (en) Data query methods, devices and devices
US20190196875A1 (en) Method, system and computer program product for processing computing task
CN107135268B (en) Distributed task computing method based on information center network
US9712612B2 (en) Method for improving mobile network performance via ad-hoc peer-to-peer request partitioning
CN109726004B (en) Data processing method and device
JP2004172917A (en) Packet retrieving device, packet process retrieving method, and program
CN111158909B (en) Cluster resource allocation processing method, device, equipment and storage medium
DE112017003294T5 (en) Technologies for scalable sending and receiving of packets
CN105472291A (en) Digital video recorder with multiprocessor cluster and realization method of digital video recorder
US20230275976A1 (en) Data processing method and apparatus, and computer-readable storage medium
US11947534B2 (en) Connection pools for parallel processing applications accessing distributed databases
Tseng et al. Accelerating open vSwitch with integrated GPU
Gao et al. OVS-CAB: Efficient rule-caching for Open vSwitch hardware offloading
Xu et al. Building a high-performance key–value cache as an energy-efficient appliance
WO2021212965A1 (en) Resource scheduling method and related device
US11381630B2 (en) Transmitting data over a network in representational state transfer (REST) applications
EP2622499B1 (en) Techniques to support large numbers of subscribers to a real-time event
US9705698B1 (en) Apparatus and method for network traffic classification and policy enforcement
WO2016206520A1 (en) Method and apparatus for implementing flow table traversal service
CN102902593A (en) Protocol distribution processing system based on cache mechanism

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16813636

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16813636

Country of ref document: EP

Kind code of ref document: A1