WO2016206520A1 - Method and apparatus for implementing a flow table traversal service - Google Patents

Method and apparatus for implementing a flow table traversal service

Info

Publication number
WO2016206520A1
Authority
WO
WIPO (PCT)
Prior art keywords
service
flow table
module
processing
control module
Application number
PCT/CN2016/083703
Other languages
English (en)
Chinese (zh)
Inventor
路鹏
Original Assignee
中兴通讯股份有限公司
Application filed by 中兴通讯股份有限公司
Publication of WO2016206520A1

Description

  • This document relates to, but is not limited to, the field of data communication technology, and in particular, to a method and device for implementing a flow table traversal service.
  • In the related art, directly accessing, through the CPU, the storage device that stores the flow table is inefficient when there are many entries, owing to limits on CPU processing capability and PCI (Peripheral Component Interconnect) bus speed.
  • This paper provides a method and device for implementing a flow table traversal service to improve the processing efficiency of the flow table traversal service.
  • the method for implementing the flow table traversal service includes: the control module obtains the service parameters of the flow table traversal service and sends the service parameters to the co-processing module; and the co-processing module controls each network processor to process the flow table traversal service according to the service parameters.
  • the method further includes: the control module acquiring a processing result of the coprocessing module.
  • the control module acquiring the processing result of the co-processing module includes: the co-processing module sends the processing result to the storage module, and the control module obtains the processing result from the storage module.
  • the method further includes: the control module performs statistics according to the processing result, and displays the statistical result to the user.
  • the co-processing module controlling each network processor to process the flow table traversal service according to the service parameters includes: the scheduling core in the network processor schedules the flow table to each service core according to the service parameters, and each service core processes the flow table scheduled to it.
  • the present invention provides a device for implementing a flow table traversal service, comprising: a control module, configured to acquire service parameters of a flow table traversal service and send the service parameters to a co-processing module; and the co-processing module, configured to control each network processor therein to process the flow table traversal service according to the service parameters.
  • control module is further configured to obtain a processing result of the coprocessing module.
  • the device further includes a storage module
  • the coprocessing module is further configured to send the processing result to the storage module;
  • the control module is further configured to obtain a processing result from the storage module.
  • control module is further configured to perform statistics according to the processing result, and display the statistical result to the user.
  • the co-processing module is configured to control, in the following manner, each network processor to process the flow table traversal service according to the service parameters: the scheduling core in the network processor schedules the flow table to each service core according to the service parameters, and each service core processes the flow table scheduled to it.
  • the embodiment of the invention further provides a computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the above method.
  • the embodiment of the invention provides a method for implementing a flow table traversal service that uses a network processor to perform the flow table traversal service. Various algorithms can be implemented in hardware, so that, while complex congestion management, queue scheduling, flow classification, and QoS functions are provided, extremely high search and forwarding performance ("hard forwarding") is also achieved. Compared with pure hardware chips, the network processor fully supports programming, and the programming mode is simple: once new technologies or requirements appear, they can conveniently be implemented by microcode programming. In addition, the network processor is scalable: multiple network processors can be interconnected to form a network processor cluster that supports larger-scale, high-speed network processing. This effectively solves the efficiency problem of related-art solutions.
  • FIG. 1 is a schematic structural diagram of an apparatus for implementing a flow table traversal service according to a first embodiment of the present invention.
  • FIG. 2 is a flowchart of a method for implementing a flow table traversal service according to a second embodiment of the present invention.
  • FIG. 3 is a flowchart of a method for implementing a flow table traversal service according to a third embodiment of the present invention.
  • the apparatus 1 for implementing a flow table traversal service includes:
  • the control module 11 is configured to acquire the service parameters of the flow table traversal service and send them to the co-processing module 12;
  • the co-processing module 12 is configured to control each of the network processors therein to process the flow table traversal service according to the service parameters.
  • control module 11 in the above embodiment is further configured to obtain the processing result of the coprocessing module 12.
  • the implementation apparatus 1 in the above embodiment further includes a storage module 13; the co-processing module 12 is further configured to send the processing result to the storage module 13, and the control module 11 is further configured to obtain the processing result from the storage module 13.
  • control module 11 in the above embodiment is further configured to perform statistics according to the processing result and display the statistical result to the user.
  • the co-processing module 12 in the above embodiment is configured to control each of the network processors in the following manner to process the flow table traversal service according to the service parameters: the scheduling core in the network processor schedules the flow table to each service core according to the service parameters, and each service core processes the flow table scheduled to it.
  • FIG. 2 is a flowchart of an implementation method according to a second embodiment of the present invention. As shown in FIG. 2, in this embodiment, a method for implementing a flow table traversal service includes the following steps:
  • control module obtains the service parameter of the flow table traversal service, and sends the service parameter to the co-processing module;
  • the co-processing module controls each network processor to process the flow table traversal service according to the service parameters.
  • the above embodiment further includes: the control module acquiring a processing result of the co-processing module.
  • the obtaining, by the control module in the foregoing embodiment, the processing result of the co-processing module includes: the co-processing module sends the processing result to the storage module, and the control module acquires the processing result from the storage module.
  • the above embodiment further includes: the control module performs statistics according to the processing result, and displays the statistical result to the user.
  • the co-processing module in the foregoing embodiment controlling each network processor to process the flow table traversal service according to the service parameters includes: the scheduling core in the network processor schedules the flow table to each service core according to the service parameters, and each service core processes the flow table scheduled to it.
  • FIG. 3 is a flowchart of an implementation method according to a third embodiment of the present invention. As shown in FIG. 3, in the embodiment, the method for implementing a flow table traversal service includes the following steps:
  • control module obtains the service parameter and sends it to the coprocessing module.
  • the user sends a service request
  • the control module receives the service request sent by the user, sends the obtained parameters to the underlying co-processing module in the form of a message, and simultaneously monitors the processing progress of the co-processing module.
  • the coprocessing module processes the flow table traversal service.
  • the two network processors of the co-processing module respectively receive the statistics messages sent by the control module, parse the parameters, and perform the corresponding service processing according to the user's requirements.
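The message format between the control module and the network processors is not specified in this document. As a minimal sketch, assume the parameters travel as simple key-value text; the names `build_stats_message` and `parse_stats_message` are illustrative, not from the patent:

```python
# Hypothetical message encoding between control module and NPs.
# The patent only says parameters are "configured into messages";
# a key-value string is assumed here purely for illustration.

def build_stats_message(field, top_n):
    """Control module side: encode the user's service parameters."""
    return f"field={field};top_n={top_n}"

def parse_stats_message(msg):
    """Network processor side: recover the service parameters."""
    params = dict(item.split("=", 1) for item in msg.split(";"))
    params["top_n"] = int(params["top_n"])
    return params

msg = build_stats_message("ip", 10)
params = parse_stats_message(msg)   # {'field': 'ip', 'top_n': 10}
```

Each NP would then run its statistics service using the recovered `field` and `top_n` values.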
  • Compared with the traditional approach in which the CPU directly traverses the flow table for service processing, multi-core parallel processing is used: one core is responsible for command scheduling, and the other cores are responsible for traversing the flow table and handling the corresponding service. Rather than one core traversing all flow tables, the whole task is divided among the service-processing cores to improve efficiency. Finally, the result is either written to the peripheral storage device or sent directly to the control module in the form of a message.
  • the flow table traversal process is placed in the co-processing module, and the total number of flow table entries is denoted M. The design uses multiple cores to process the entire flow table traversal in parallel: one core is used for command scheduling, and the remaining L cores are used for specific service processing.
  • when the core responsible for command scheduling receives the service message sent by the upper-layer control module, it first performs initialization, including setting a global control variable index_cnt. The value of index_cnt indicates the progress of service processing. When index_cnt is equal to L, the other cores are notified to start traversing the flow table.
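The index_cnt coordination described above can be sketched as follows, with threads standing in for cores and a condition variable standing in for the inter-core notification mechanism the hardware actually provides; all names besides index_cnt are assumptions:

```python
import threading

L = 4                      # number of service cores
index_cnt = L              # global control variable from the description above
lock = threading.Condition()
results = []

def service_core(core_id, block):
    """One service core: process its block, then report completion."""
    global index_cnt
    results.append((core_id, sum(block)))  # stand-in for real service work
    with lock:
        index_cnt -= 1                     # this core has finished
        if index_cnt == 0:
            lock.notify()                  # wake the scheduling core

def scheduling_core(blocks):
    """Scheduling core: dispatch blocks, wait for index_cnt to reach 0."""
    workers = [threading.Thread(target=service_core, args=(i, b))
               for i, b in enumerate(blocks)]
    for w in workers:
        w.start()
    with lock:
        while index_cnt > 0:               # all L cores must report in
            lock.wait()
    for w in workers:
        w.join()
    return sorted(results)                 # integrate per-core results

blocks = [[1, 2], [3, 4], [5, 6], [7, 8]]
out = scheduling_core(blocks)              # [(0, 3), (1, 7), (2, 11), (3, 15)]
```

The while-loop predicate check under the lock makes the wait safe even if a worker finishes before the scheduling core starts waiting.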
  • the cores used for service processing do not each traverse all the flow tables; instead, the entire flow table is divided into L blocks. The division is approximately, rather than exactly, even. The principles are that the flow table blocks for which the cores are responsible do not overlap, that together they cover all flow tables, and that the workload processed by each service core is as even as possible, so that overall efficiency is optimal.
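The non-overlapping, covering, approximately even division described above can be modelled as follows; the patent gives no concrete algorithm, so this contiguous-range scheme is an assumption for illustration:

```python
# Split M flow table entries into L contiguous half-open ranges that do
# not overlap, cover every entry, and differ in size by at most one.

def split_flow_table(m, l):
    base, extra = divmod(m, l)
    blocks, start = [], 0
    for i in range(l):
        size = base + (1 if i < extra else 0)  # first `extra` blocks get one more
        blocks.append((start, start + size))
        start += size
    return blocks

blocks = split_flow_table(10, 4)   # [(0, 3), (3, 6), (6, 8), (8, 10)]
```

Each service core i would then traverse only the entries in `blocks[i]`.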
  • the L service cores process the flow table blocks for which they are respectively responsible.
  • the scheduling core integrates the processing results of all service cores and either directly reports the final result to the control module or writes it to the peripheral storage device. If the result is written to the peripheral storage device, the scheduling core also needs to notify the upper-layer control module that service processing has been completed.
  • when the control module observes that the co-processing module has completed the statistics, it obtains the results of the two network processors respectively for further processing and feeds the final result back to the upper-layer user.
  • take the specific TOP-N statistics as an example. The TOP-N statistic is a more complicated embodiment based on the flow table traversal described above; it can quickly compute traffic ranking information based on IP addresses, protocols, and so on, and display, for example, the top 10, top 20, or top 50.
  • the TOP-N statistics include the following steps:
  • the user sends the flow table service request to the control module through command-line parameter configuration.
  • when the control module receives the user's statistics request, it parses the relevant parameters, assembles them into messages, and sends the messages respectively to the two network processors (NPs) of the co-processing module.
  • the TOP-N statistics message sent by the upper layer is obtained by the core for command scheduling in each NP, and the initialization process is performed.
  • the global variable index_cnt is set to L (the number of service cores), and the entire flow table is divided into L shares.
  • the service core starts to traverse the flow table blocks that are responsible for each.
  • a HASH linked list is created based on each flow table and the delivered parameters, and all flow tables are counted based on the delivered parameters. Before a HASH table is created, its specification is set to 2 to the power M entries. The corresponding flow table information is taken out as a keyword according to the parameters obtained from the message, and a HASH value H is obtained by a HASH operation. Bits 0 to (M-1) of the value H are then used as the index IDX into the INDEX table corresponding to the HASH entry. If there is no matching INDEX entry, a new INDEX table and the corresponding ENTRY table need to be created; if there is a matching INDEX entry, the corresponding ENTRY table is located through the value in the INDEX table. The keyword is also compared for an exact match to confirm that the indexed entry is really the one sought.
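A minimal model of this lookup is sketched below: a table of 2**M buckets indexed by the low M bits of the keyword's hash, with an exact keyword comparison to reject collisions. The value of M, the bucket layout, and the name `count_flow` are assumptions for illustration, not the patent's actual structures:

```python
M = 4                                 # table specification: 2**M buckets
INDEX_MASK = (1 << M) - 1

# Each bucket holds (keyword, count) pairs, standing in for the
# INDEX table entry and its associated ENTRY table.
table = [[] for _ in range(1 << M)]

def count_flow(keyword):
    idx = hash(keyword) & INDEX_MASK          # bits 0..M-1 of the hash value
    bucket = table[idx]
    for i, (key, count) in enumerate(bucket):
        if key == keyword:                    # exact match confirms the entry
            bucket[i] = (key, count + 1)
            return
    bucket.append((keyword, 1))               # no match: create a new entry

for kw in ["10.0.0.1", "10.0.0.2", "10.0.0.1"]:
    count_flow(kw)
```

Masking with `(1 << M) - 1` is exactly the "bits 0 to (M-1)" indexing described above; the linear scan of the bucket plays the role of the exact-match confirmation.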
  • when the core responsible for command scheduling observes that index_cnt is 0, the service cores have completed the statistics service. Because the HASH table is relatively large after the statistics are completed, the result cannot be fed back to the upper-layer control module immediately; instead, all the HASH tables are saved in the peripheral storage device, and the upper-layer control module is notified that the underlying statistics service is complete.
  • the HASH statistical linked lists are read directly from the peripheral storage devices attached to the NPs. Because the statistical results of the two network processors must be integrated into one, the HASH chain established by one of the network processors in the peripheral storage device is first read in as a whole; on this basis, the HASH statistics chain of the other network processor is traversed and, using the same HASH construction method as the bottom layer, entries are either accumulated or newly created. The statistical results of the two network processors are thus integrated into one HASH chain stored in dynamically allocated memory. Finally, this HASH chain is traversed, and the TOP-N statistical results, obtained by a binary sorting method, are reported to the upper-layer user.
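The merge-and-rank step can be modelled as follows. The per-NP results are shown as plain dicts mapping a keyword (e.g. an IP address) to a traffic count, `heapq.nlargest` stands in for the binary sorting method mentioned above, and the addresses and counts are invented for illustration:

```python
import heapq

def merge_and_top_n(np0_stats, np1_stats, n):
    """Accumulate two per-NP result tables, then rank the top n."""
    merged = dict(np0_stats)
    for key, count in np1_stats.items():
        merged[key] = merged.get(key, 0) + count   # accumulate or create
    # Extract the n largest by count (the "binary sorting" role).
    return heapq.nlargest(n, merged.items(), key=lambda kv: kv[1])

np0 = {"10.0.0.1": 500, "10.0.0.2": 300}
np1 = {"10.0.0.1": 200, "10.0.0.3": 900}
top2 = merge_and_top_n(np0, np1, 2)   # [('10.0.0.3', 900), ('10.0.0.1', 700)]
```

Keys present in only one NP's table are created fresh; keys present in both are accumulated, matching the "cumulative or new" behaviour described above.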
  • an embodiment of the present invention further provides a computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the above method.
  • the network processor is used to perform the flow table traversal service, and various algorithms can be implemented in hardware. While complex congestion management, queue scheduling, flow classification, and QoS functions are provided, extremely high search and forwarding performance ("hard forwarding") can also be achieved. Compared with pure hardware chips, the network processor fully supports programming, and the programming mode is simple: once new technologies or requirements appear, they can easily be implemented by microcode programming. In addition, the network processor is scalable: multiple network processors can be interconnected to form a network processor cluster to support larger-scale, high-speed network processing, which can improve the processing efficiency of the flow table traversal service.
  • each module/unit in the above embodiments may be implemented in the form of hardware, for example, by an integrated circuit that implements its corresponding function, or in the form of a software function module, for example, by a processor executing program instructions stored in a memory to achieve its corresponding function. This application is not limited to any specific combination of hardware and software.
  • the technical solution provided by the embodiments of the present invention uses a network processor to perform the flow table traversal service, and various algorithms can be implemented in hardware. While implementing complex congestion management, queue scheduling, flow classification, and QoS functions, the network processor also achieves high search and forwarding performance ("hard forwarding"). Compared with pure hardware chips, the network processor fully supports programming, and the programming mode is simple: once new technologies or requirements appear, they can easily be implemented by microcode programming. The network processor is also scalable: multiple network processors can be interconnected to form a network processor cluster to support larger-scale, high-speed network processing, which can improve the processing efficiency of the flow table traversal service.

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method for implementing a flow table traversal service is disclosed. The method comprises the following operations: a control module acquires a service parameter of a flow table traversal service and delivers the service parameter to a co-processor module; and the co-processor module causes network processors in the co-processor module to process the flow table traversal service according to the service parameter.
PCT/CN2016/083703 2015-06-26 2016-05-27 Procédé et appareil pour mettre en œuvre un service de traversée de table de flux WO2016206520A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510364059.3 2015-06-26
CN201510364059.3A CN106330694A (zh) 2015-06-26 2015-06-26 一种流表遍历业务的实现方法及装置

Publications (1)

Publication Number Publication Date
WO2016206520A1 true WO2016206520A1 (fr) 2016-12-29

Family

ID=57584488

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/083703 WO2016206520A1 (fr) 2015-06-26 2016-05-27 Procédé et appareil pour mettre en œuvre un service de traversée de table de flux

Country Status (2)

Country Link
CN (1) CN106330694A (fr)
WO (1) WO2016206520A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030214948A1 (en) * 2002-05-18 2003-11-20 Jin Seung-Eui Router providing differentiated quality of service (QoS) and fast internet protocol packet classifying method for the router
CN1937574A (zh) * 2005-09-19 2007-03-28 北京大学 对网络流进行分类、状态跟踪和报文处理的装置和方法
CN101282303A (zh) * 2008-05-19 2008-10-08 杭州华三通信技术有限公司 业务报文处理方法和装置
CN102938000A (zh) * 2012-12-06 2013-02-20 武汉烽火网络有限责任公司 一种高速并行的无锁流表路由查找方法
CN103401777A (zh) * 2013-08-21 2013-11-20 中国人民解放军国防科学技术大学 Openflow的并行查找方法和系统
US20140241356A1 (en) * 2013-02-25 2014-08-28 Telefonaktiebolaget L M Ericsson (Publ) Method and system for flow table lookup parallelization in a software defined networking (sdn) system


Also Published As

Publication number Publication date
CN106330694A (zh) 2017-01-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16813636

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16813636

Country of ref document: EP

Kind code of ref document: A1