CN111404818B - Routing protocol optimization method for general multi-core network processor - Google Patents

Routing protocol optimization method for general multi-core network processor

Info

Publication number
CN111404818B
CN111404818B
Authority
CN
China
Prior art keywords
thread
routing
routing protocol
management
protocol
Prior art date
Legal status
Active
Application number
CN202010168876.2A
Other languages
Chinese (zh)
Other versions
CN111404818A (en)
Inventor
刘赫
贾汮
王琼
李振华
Current Assignee
Shenzhen Forward Industrial Co Ltd
Original Assignee
Shenzhen Forward Industrial Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Forward Industrial Co Ltd
Priority to CN202010168876.2A
Publication of CN111404818A
Application granted
Publication of CN111404818B
Legal status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 - Routing or path finding of packets in data switching networks
    • H04L45/302 - Route determination based on requested QoS
    • H04L45/306 - Route determination based on the nature of the carried application
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 - Task transfer initiation or dispatching
    • G06F9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 - Routing or path finding of packets in data switching networks
    • H04L45/02 - Topology update or discovery
    • H04L45/04 - Interdomain routing, e.g. hierarchical routing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2425 - Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
    • H04L47/2433 - Allocation of priorities to traffic types
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/50 - Queue scheduling
    • H04L47/62 - Queue scheduling characterised by scheduling criteria
    • H04L47/625 - Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/6275 - Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority

Abstract

The invention discloses a routing protocol optimization method for a general multi-core network processor, comprising: S1, a network platform service layer provides the basic routing common-service module functions in a parallelized manner and offers unified services to the different protocol modules; S2, the IGP routing protocols perform multithreaded parallelization module division and use the shared interfaces provided by the network platform service layer; S3, the BGP routing protocol performs multi-instance parallel optimization based on partitioning of the neighbor session set and calls the shared interfaces provided by the network platform service layer; and S4, the routing protocol thread scheduling attributes of the general multi-core network processor are dynamically optimized. The invention fully exploits the multi-core advantages of the general multi-core network processor and its network acceleration engine, and effectively improves routing protocol processing efficiency by parallelizing routing protocol threads and dynamically adjusting their scheduling attributes.

Description

Routing protocol optimization method for general multi-core network processor
Technical Field
The invention belongs to the technical field of network communication, and particularly relates to a routing protocol optimization method for a general multi-core network processor.
Background
With the rapid development of Internet technology, network scale keeps growing and the processing overhead of network protocols keeps rising, so a serial protocol design based on a single-core processor architecture can no longer meet the demand. Routing update message processing, as the core function of a routing protocol, easily becomes the bottleneck of program operation, so raising the processing speed of routing update messages is crucial to routing protocol efficiency. The speed gains available to a routing protocol processing model built on a single-core, single-threaded programming model are limited: single-core performance improvement comes mainly from higher frequency and better architecture, frequency scaling of single-core processors has reached a bottleneck, and architectures have stabilized with little room left for improvement.
A general multi-core network processor typically comprises a general multi-core processor and a hardware network acceleration engine. The general multi-core processor integrates two or more complete computing units, based on a standard general RISC instruction set, in one processor. Multi-core technology has become a major topic and research direction; with the arrival of the multi-core era, the multithreaded parallel programming model is gradually replacing the traditional single-threaded serial programming model, greatly improving software performance. The hardware network acceleration engine, usually implemented with an FPGA or an ASIC, compensates for the general multi-core processor's limited efficiency in processing network data packets by providing accelerated packet processing.
With its combination of programmability, packet processing performance, and ease of development, the general multi-core network processor is a research hotspot in current network communication equipment. It provides both a parallel programming model for routing protocol software and accelerated processing of network data packets.
Patent 201510436410.5 provides a routing protocol multi-instance parallel execution system and its parallel execution method. The method splits routing protocol execution units so that they run in parallel and centrally controls the routing table information, ensuring that routing and forwarding keep working when a single routing protocol execution unit fails. It does not address how a routing protocol can use the parallel programming model of a general multi-core network processor to improve its processing efficiency.
Patent 200810181193.X provides a method and device for parallel processing of routing update messages. The method makes the BGP routing protocol multithreaded and improves its efficiency on a multi-core processor, but it only restructures a single routing protocol internally and does not parallelize the overall implementation architecture of the routing protocol software.
Patent 201410764673.4 discloses a message processing method based on a multi-core processor, and the multi-core processor itself. The method divides packet receiving and sending into buffer pools along a pipeline, achieves lock-free packet forwarding, and improves the parallel processing capability of the multi-core processor. However, it only adapts the low-level packet transmit/receive module to the parallel processing of the multi-core processor and does not optimize the routing protocols themselves.
Disclosure of Invention
The invention aims to provide a routing protocol optimization method for a general multi-core network processor that addresses the defects in the prior art, so as to solve the problem that existing parallel programming methods for multi-core processors cannot fully exploit the parallel processing advantages of a general multi-core network processor system and cannot process the various routing protocols efficiently.
To achieve this purpose, the invention adopts the following technical scheme:
A routing protocol optimization method for a general multi-core network processor comprises the following steps:
S1, the network platform service layer provides the basic routing common-service module functions in a parallelized manner and offers unified services to the different protocol modules;
S2, the IGP routing protocols perform multithreaded parallelization module division and use the shared interfaces provided by the network platform service layer;
S3, the BGP routing protocol performs multi-instance parallel optimization based on partitioning of the neighbor session set and calls the shared interfaces provided by the network platform service layer;
and S4, the routing protocol thread scheduling attributes of the general multi-core network processor are dynamically optimized.
Preferably, the basic routing common-service module functions in S1 include: routing management service, route redistribution management, protocol stack function service, message queue management, and interface event management.
Preferably, the specific steps of S1 include:
S1.1, a message management thread receives and sends packets at high speed through the network acceleration engine and, after basic packet parsing, dispatches them to each routing protocol process through a message queue;
S1.2, an interface management thread maintains a global interface information base and triggers each routing protocol process's response to interface events through an interface event queue;
and S1.3, a route management thread manages the global routing table, and each routing protocol process interacts with and updates the global routing table through the route event queue.
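As an illustration of the message queue in S1.1, the following is a minimal sketch of a bounded, mutex- and condition-variable-protected queue through which a message management thread could hand parsed packets to a routing protocol module; the names (msg_queue, msgq_push, msgq_pop) and the in-process design are assumptions for illustration only, and a real multi-process deployment would place such a queue in shared memory or another IPC channel.

```c
/* Minimal sketch (assumed names, not from the patent) of a bounded,
 * mutex/condition-variable protected message queue: the message
 * management thread pushes parsed packets, a routing protocol
 * module pops them. */
#include <pthread.h>
#include <stdlib.h>

#define MSGQ_DEPTH 1024

struct msg {                    /* parsed protocol packet */
    int proto;                  /* e.g. OSPF, ISIS, BGP */
    void *data;
    size_t len;
};

struct msg_queue {
    struct msg slots[MSGQ_DEPTH];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_empty, not_full;
};

void msgq_init(struct msg_queue *q)
{
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
    pthread_cond_init(&q->not_full, NULL);
}

/* Called by the message management thread after basic parsing. */
void msgq_push(struct msg_queue *q, struct msg m)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == MSGQ_DEPTH)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->slots[q->tail] = m;
    q->tail = (q->tail + 1) % MSGQ_DEPTH;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

/* Called by a routing protocol module to fetch its next packet. */
struct msg msgq_pop(struct msg_queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    struct msg m = q->slots[q->head];
    q->head = (q->head + 1) % MSGQ_DEPTH;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return m;
}
```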
Preferably, in S2 the service logic of the routing protocol is divided into a neighbor management module, an LSDB management module, and an SPF calculation module, which are executed as multiple threads, while the message queue management, interface event management, and route management modules reduce thread overhead by using the common interfaces provided by the network platform service layer.
Preferably, the specific steps of S2 include:
S2.1, the main thread completes process start-up and initialization and spawns the specific slave threads according to the running conditions;
S2.2, the neighbor management slave thread establishes and maintains neighbor state, and at the same time extracts LSAs and announces them to the LSDB management slave thread;
S2.3, the LSDB management slave thread manages the LSAs collected from the neighbor management slave thread in a unified way and floods the LSA information so as to synchronize the LSDB across the whole OSPF or ISIS network;
and S2.4, the SPF calculation slave thread uses the data maintained by the LSDB management slave thread to compute a shortest path tree according to the SPF algorithm, thereby forming a routing table, and notifies the route update and storage module to update the synchronized global routing table.
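As a sketch of the thread decomposition in S2.1 to S2.4, an IGP process's main thread could spawn the three slave threads roughly as follows; the function names are assumptions and the thread bodies are placeholders for the real protocol logic.

```c
/* Illustrative sketch only: an IGP (e.g. OSPF) process main thread
 * spawning the neighbor management, LSDB management and SPF
 * calculation slave threads described in S2.1-S2.4. */
#include <pthread.h>

static void *neighbor_mgmt_thread(void *arg)  /* S2.2 */
{
    /* establish/maintain neighbor state, extract LSAs,
     * announce them to the LSDB management thread */
    return NULL;
}

static void *lsdb_mgmt_thread(void *arg)      /* S2.3 */
{
    /* unified LSA management and flooding for network-wide
     * LSDB synchronization */
    return NULL;
}

static void *spf_calc_thread(void *arg)       /* S2.4 */
{
    /* run SPF over the LSDB, build the routing table and notify
     * the route update/storage module */
    return NULL;
}

int main(void)                                /* S2.1: main thread */
{
    pthread_t nbr, lsdb, spf;

    /* process start-up and initialization would happen here */
    pthread_create(&nbr,  NULL, neighbor_mgmt_thread, NULL);
    pthread_create(&lsdb, NULL, lsdb_mgmt_thread,     NULL);
    pthread_create(&spf,  NULL, spf_calc_thread,      NULL);

    pthread_join(nbr, NULL);
    pthread_join(lsdb, NULL);
    pthread_join(spf, NULL);
    return 0;
}
```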
Preferably, the specific steps of S3 include:
S3.1, the main thread completes process start-up and initialization, spawns the slave threads, and completes the response to and distribution of neighbor sessions through the slave-thread communication interface;
S3.2, the slave threads are divided based on neighbor session sets; each instance thread is equivalent to a single traditional BGP instance process and is responsible for executing all protocol functions for the neighbor set attached to it, so the route update and calculation work of the whole BGP protocol is naturally distributed over multiple threads;
and S3.3, each slave thread accesses the global policy library through the policy library module to maintain the import and export policies of the neighbor sessions it is responsible for, while the route update and storage module maintains data consistency when the threads access the global routing table, which completes the mutual notification of updated routes among the slave threads.
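One minimal way to realize the neighbor-session-set division of S3.2 is a stable hash of the neighbor address onto a fixed pool of instance threads, as sketched below; the hash-based assignment and the names used are assumptions for illustration, since the patent only requires that each slave thread own a disjoint neighbor set.

```c
/* Illustrative sketch: partition BGP neighbor sessions across N
 * instance threads so each thread owns a disjoint neighbor set.
 * The hash-based assignment is an assumption for illustration. */
#include <stdint.h>

#define BGP_INSTANCE_THREADS 4

/* Map a neighbor (identified here by its IPv4 address) to the index
 * of the instance thread that runs all protocol functions for it. */
static unsigned bgp_neighbor_to_thread(uint32_t neighbor_addr)
{
    /* simple multiplicative hash; any stable hash works, because the
     * only requirement is a fixed owner per neighbor */
    return (neighbor_addr * 2654435761u) % BGP_INSTANCE_THREADS;
}
```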
Preferably, the specific steps of S4 include:
S4.1, the network service platform process starts and maintains a routing protocol thread scheduling attribute optimization module, and starts routing protocol thread scheduling attribute optimization after the main transactions of the network service platform process have been initialized;
S4.2, the service threads common to all routing protocols, namely the message management thread, interface management thread, and route management thread of the network service process, are affinity-bound to designated CPU cores;
S4.3, if the service load of a routing protocol process exceeds the set threshold, the scheduling priority attribute of the corresponding routing protocol is adjusted so that the routing protocol with the heavier service load obtains high-priority scheduling;
and S4.4, when a routing protocol process determines that its service load has fallen back below the set threshold, the scheduling priority of the corresponding routing protocol is lowered.
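The threshold logic of S4.3 and S4.4 amounts to a simple hysteresis loop, sketched below; the load metrics, threshold parameters, and the set_thread_priority() helper are assumptions, not values or interfaces taken from the patent.

```c
/* Sketch of the S4.3/S4.4 decision: raise a protocol's scheduling
 * priority while its load (neighbors + routes maintained) is above
 * a threshold, lower it again once the load falls back below it. */
#include <stdbool.h>

struct proto_load {
    unsigned neighbors;      /* currently configured neighbors */
    unsigned routes;         /* routes currently maintained */
    bool boosted;            /* currently running at high priority */
};

/* assumed helper that applies a scheduling priority to a thread */
extern void set_thread_priority(int tid, int prio);

void adjust_priority(int tid, struct proto_load *p,
                     unsigned nbr_thresh, unsigned route_thresh)
{
    bool over = (p->neighbors > nbr_thresh) || (p->routes > route_thresh);

    if (over && !p->boosted) {          /* S4.3: load exceeded threshold */
        set_thread_priority(tid, -10);  /* illustrative "high" value */
        p->boosted = true;
    } else if (!over && p->boosted) {   /* S4.4: load fell back */
        set_thread_priority(tid, 0);    /* illustrative "normal" value */
        p->boosted = false;
    }
}
```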
The routing protocol optimization method for the general multi-core network processor has the following beneficial effects:
the invention can give full play to the multi-core advantages of the general multi-core network processor and the network acceleration engine, and effectively improve the processing efficiency of the routing protocol by parallelizing the routing protocol thread and dynamically adjusting the scheduling attribute of the routing protocol thread.
Drawings
FIG. 1 is a diagram of a general multi-core network processor system architecture.
Fig. 2 is a routing protocol software architecture diagram in an embodiment.
FIG. 3 is a block diagram of the OSPF protocol process in an embodiment.
Fig. 4 is a diagram of the internal module division of the BGP routing protocol process in an embodiment.
FIG. 5 is a flowchart of routing protocol thread scheduling attribute optimization in an embodiment.
Detailed Description
The following description of the embodiments of the present invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments. To those of ordinary skill in the art, various changes are possible as long as they stay within the spirit and scope of the invention as defined by the appended claims, and everything produced using the inventive concept is protected.
According to an embodiment of the present application, referring to fig. 1, a method for optimizing a routing protocol for a general multi-core network processor according to the present solution includes:
S1, the network platform service layer provides the basic routing common-service module functions in a parallelized manner and offers unified services to the different protocol modules;
S2, the IGP routing protocols perform multithreaded parallelization module division and reduce thread overhead through the shared interfaces provided by the network platform service layer;
S3, the BGP routing protocol performs multi-instance parallel optimization based on partitioning of the neighbor session set and calls the shared interfaces provided by the network platform service layer to reduce thread overhead;
and S4, the routing protocol thread scheduling attributes of the general multi-core network processor are dynamically optimized, improving the routing protocol operating efficiency of the general multi-core network processor.
The above steps will be described in detail below according to one embodiment of the present application.
Referring to fig. 1, the present embodiment includes a routing service board (102) built around a general multi-core network processor, a service daughter card (104), and a routing protocol program (101) running on the general multi-core network processor.
In this embodiment, the key component is the routing service board, on which the routing protocol program and the network platform service program run. The routing service board is a general multi-core network processor system in which multiple CPU cores (103) participate in executing the routing service programs at the same time. The routing service board can be connected to different types of service daughter cards and interfaces, and it receives routing protocol packets from the different interfaces through the network acceleration engine.
Step S1, referring to fig. 2, the network platform service layer (NPSM) provides basic functions such as routing management service, route redistribution management, protocol stack function service, message queue management, and interface event management, and provides unified services to the different protocol modules.
These functions are modularized and implemented as multiple threads, divided according to the principle of keeping thread functions independent, which reduces resource contention between threads and fully exploits the characteristics of the multi-core processor to optimize the parallel service capability of the NPSM.
The method comprises the following specific steps:
S1.1, all functional modules execute in parallel as multiple threads; the message management thread (201) receives and sends packets at high speed through the network acceleration engine (202) and, after basic packet parsing, dispatches them to each routing protocol process through the message queue (203);
S1.2, the interface management thread (204) maintains the global interface information base (205) and triggers each routing protocol process's response to interface events through the interface event queue (206);
S1.3, the route management thread (207) manages the global routing table (209), and each routing protocol process interacts with and updates the global routing table (209) through the route event queue (208).
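To illustrate S1.2, the following sketch shows one way the interface management thread (204) could guard the global interface information base (205) with a read-write lock and fan interface events out to per-protocol event queues (206); the structures and the ifevq_push() helper are assumptions for illustration, not interfaces defined by the patent.

```c
/* Illustrative sketch of S1.2: the interface management thread owns a
 * global interface information base protected by a read-write lock and
 * pushes interface events onto each routing protocol's event queue. */
#include <pthread.h>

#define MAX_IFACES 256
#define MAX_PROTOS 8

struct iface_info {
    char name[32];
    int  up;                       /* link state */
    unsigned mtu;
};

struct iface_event {
    int ifindex;
    int up;
};

/* per-protocol interface event queue; enqueue helper assumed to exist */
extern void ifevq_push(int proto, const struct iface_event *ev);

static struct iface_info g_ifdb[MAX_IFACES];     /* global interface base */
static pthread_rwlock_t  g_ifdb_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Called by the interface management thread when link state changes. */
void iface_update(int ifindex, int up)
{
    pthread_rwlock_wrlock(&g_ifdb_lock);
    g_ifdb[ifindex].up = up;
    pthread_rwlock_unlock(&g_ifdb_lock);

    struct iface_event ev = { .ifindex = ifindex, .up = up };
    for (int proto = 0; proto < MAX_PROTOS; proto++)
        ifevq_push(proto, &ev);      /* trigger each protocol's response */
}

/* Called by protocol threads that only need to read interface state. */
int iface_is_up(int ifindex)
{
    pthread_rwlock_rdlock(&g_ifdb_lock);
    int up = g_ifdb[ifindex].up;
    pthread_rwlock_unlock(&g_ifdb_lock);
    return up;
}
```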
Each routing protocol process divides its own service logic into modules and executes them as multiple threads in parallel. For example, the OSPF routing protocol process (210) divides its main service logic into modules such as neighbor management, LSDB management, and SPF calculation and runs them as multiple threads, while using the shared interfaces provided by the network platform service layer for functions such as packet receiving/sending and route updating, thereby reducing thread overhead.
Step S2, referring to fig. 3, the IGP routing protocols, such as the commonly used OSPF and ISIS, perform multithreaded parallelization module division. OSPF divides its main service logic into modules such as neighbor management, LSDB management, and SPF calculation and runs them as multiple threads, fully exploiting the characteristics of the multi-core processor to optimize OSPF's parallel processing capability, while modules such as message management, interface management, and route management reduce thread overhead by using the shared interfaces provided by the network platform service layer.
The method comprises the following specific steps:
S2.1, the main thread (301) completes command processing (303) and, according to the global policy base (302), performs functions such as spawning and allocating the slave threads (304) and determining the conditions under which they start running.
S2.2, the neighbor management thread (305) receives protocol packets through the packet transceiving interface (306) to maintain neighbor state information, extracts LSA information, and notifies the LSDB management thread (307).
S2.3, the LSDB management thread (307) manages the LSAs collected from the neighbor management thread in a unified way and floods the LSA information so as to synchronize the LSDB across the whole OSPF or ISIS network.
S2.4, the SPF calculation thread (308) uses the data maintained by the LSDB management thread to compute a shortest path tree according to the SPF algorithm, forms a routing table from it, and updates the synchronized global routing table (310) through the route management module (309).
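For S2.4, the shortest path tree computation is in essence Dijkstra's algorithm over the link-state database; the compact sketch below runs a plain O(V^2) Dijkstra over an adjacency-cost matrix standing in for the LSDB, with the understanding that a real implementation would build the graph from LSAs and hand the result to the route management module (309).

```c
/* Illustrative SPF sketch for S2.4: O(V^2) Dijkstra over an
 * adjacency-cost matrix that stands in for the LSDB. dist[] holds the
 * shortest costs and parent[] encodes the shortest path tree. */
#include <limits.h>
#include <stdbool.h>

#define MAX_NODES 64
#define INF UINT_MAX            /* cost[u][v] == INF means no link u->v */

void spf_compute(unsigned cost[MAX_NODES][MAX_NODES], int n, int root,
                 unsigned dist[MAX_NODES], int parent[MAX_NODES])
{
    bool done[MAX_NODES] = { false };

    for (int v = 0; v < n; v++) {
        dist[v] = INF;
        parent[v] = -1;
    }
    dist[root] = 0;

    for (int i = 0; i < n; i++) {
        /* pick the closest node not yet finalized */
        int u = -1;
        for (int v = 0; v < n; v++)
            if (!done[v] && (u < 0 || dist[v] < dist[u]))
                u = v;
        if (u < 0 || dist[u] == INF)
            break;              /* remaining nodes are unreachable */
        done[u] = true;

        /* relax edges out of u, recording the shortest path tree */
        for (int v = 0; v < n; v++) {
            if (cost[u][v] == INF || done[v])
                continue;
            if (dist[u] + cost[u][v] < dist[v]) {
                dist[v] = dist[u] + cost[u][v];
                parent[v] = u;
            }
        }
    }
}
```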
Step S3, referring to fig. 4, the BGP routing protocol performs multi-instance parallel optimization based on partitioning of the neighbor session set: neighbor management and route management for different instances are partitioned onto different threads, each instance set runs as its own thread to fully exploit the parallel processing capability of the multi-core processor, and the shared interfaces provided by the network platform service layer are called to reduce thread overhead.
S3.1, the main thread (401) completes process start-up and initialization, spawns the slave threads, and completes the response to and distribution of neighbor sessions through the thread communication interface (402).
S3.2, through the neighbor mapping management module (403), the main thread distributes neighbor sessions to the slave threads and is responsible for managing and operating the neighbor-to-slave-thread mapping table; it also scans the routing table periodically, checking for prefix-count overflow and the reachability of each route entry's next hop.
The slave threads (404) are divided based on neighbor session sets; each instance thread is equivalent to a single traditional BGP instance process and is responsible for executing all protocol functions for the neighbor set attached to it, so the route update and calculation work of the whole BGP protocol is naturally distributed over multiple threads, realizing parallel route calculation.
S3.3, each slave thread (404) accesses the global policy library (406) through the thread communication interface (402) to maintain the import and export policies of the neighbor sessions it is responsible for, while the route processing module (407) maintains data consistency when the threads access the global routing table (408), thereby completing the mutual notification of updated routes among the slave threads.
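In contrast with the static hash sketch given earlier, the neighbor mapping management module (403) of S3.1/S3.2 can keep an explicit neighbor-to-slave-thread table; the sketch below assigns each new session to the currently least-loaded instance thread. The least-loaded policy and all names are assumptions for illustration, since the patent only specifies that the main thread distributes sessions and manages the mapping table.

```c
/* Illustrative sketch of the BGP main thread's neighbor mapping
 * management (S3.1/S3.2): a neighbor -> slave-thread table with a
 * least-loaded assignment policy (the policy itself is an assumption). */
#include <stdint.h>

#define BGP_SLAVE_THREADS 4
#define MAX_NEIGHBORS     1024

struct neighbor_map {
    uint32_t addr[MAX_NEIGHBORS];     /* neighbor address */
    int      owner[MAX_NEIGHBORS];    /* owning slave thread index */
    int      count;
    int      load[BGP_SLAVE_THREADS]; /* sessions per slave thread */
};

/* Assign a newly configured neighbor session to the least-loaded
 * slave thread and record it in the mapping table. */
int neighbor_assign(struct neighbor_map *m, uint32_t addr)
{
    int best = 0;
    for (int t = 1; t < BGP_SLAVE_THREADS; t++)
        if (m->load[t] < m->load[best])
            best = t;

    m->addr[m->count]  = addr;
    m->owner[m->count] = best;
    m->count++;
    m->load[best]++;
    return best;                      /* slave thread that owns the session */
}

/* Look up which slave thread handles a given neighbor. */
int neighbor_owner(const struct neighbor_map *m, uint32_t addr)
{
    for (int i = 0; i < m->count; i++)
        if (m->addr[i] == addr)
            return m->owner[i];
    return -1;                        /* not yet assigned */
}
```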
It should be noted that other routing protocols, and new ones added later, can be extended by following methods similar to steps S2 and S3: each divides its own service logic into modules according to its characteristics and runs them as multiple threads, so that the parallel processing capability of the protocol is optimized by fully exploiting the characteristics of the multi-core processor.
Step S4, referring to fig. 5, the routing protocol thread scheduling attributes of the general multi-core network processor are dynamically optimized to improve its routing protocol operating efficiency. The common module threads of the network platform service layer (NPSM) need a raised scheduling priority because they provide common services to all protocols, while the threads of each protocol module dynamically adjust their scheduling priority according to the number of currently configured neighbors and the number of routes they maintain. The network acceleration engine of the general multi-core network processor improves packet receiving and sending capability.
The method comprises the following specific steps:
S4.1, the network service platform process starts and maintains the routing protocol thread scheduling attribute optimization module, and in 501 starts routing protocol thread scheduling attribute optimization after the main transactions of the network service platform process have been initialized;
S4.2, in 502, the service threads common to all routing protocols, such as the message management thread, interface management thread, and route management thread of the network service process, are affinity-bound to designated CPU cores, ensuring that these threads run on their designated cores as long as possible without being migrated to other cores;
S4.3, 503 judges whether the number of neighbors and routes maintained by the OSPF process exceeds the set threshold, 504 judges whether the number of neighbors and routes maintained by the BGP process exceeds the set threshold, and 505 judges whether the service load of other protocols exceeds the set threshold; if a routing protocol process reaches its set threshold, the flow proceeds to 506 to adjust the scheduling priority attribute of the corresponding routing protocol, guaranteeing that the routing protocol with the heavier service load obtains high-priority scheduling;
S4.4, if the service load of a routing protocol process is judged to have fallen back below the set threshold, 507 lowers the scheduling priority of the corresponding routing protocol.
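Assuming a Linux user-space implementation (the patent does not name an operating system or API), the affinity binding of S4.2 and the priority changes of S4.3/S4.4 could be realized with the standard pthread and scheduler calls sketched below.

```c
/* Linux-flavored sketch of S4.2-S4.4 (the patent does not prescribe an
 * OS or API): pin a common service thread to a fixed CPU core and
 * raise/lower a protocol thread's scheduling priority. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <sys/resource.h>
#include <sys/syscall.h>
#include <unistd.h>

/* S4.2: pin the calling service thread (e.g. message/interface/route
 * management) to the designated CPU core so it is not migrated. */
int bind_self_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

/* S4.3/S4.4: adjust the nice value of a protocol thread identified by
 * its kernel thread id; a lower nice value means higher priority
 * (negative values require the appropriate privilege). */
int set_protocol_thread_priority(pid_t tid, int nice_value)
{
    return setpriority(PRIO_PROCESS, tid, nice_value);
}

/* Example usage inside a protocol thread: */
void example_usage(void)
{
    pid_t tid = (pid_t)syscall(SYS_gettid);

    bind_self_to_core(2);                     /* S4.2: affinity binding      */
    set_protocol_thread_priority(tid, -10);   /* S4.3: load above threshold  */
    set_protocol_thread_priority(tid, 0);     /* S4.4: load fell back        */
}
```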
The routing protocol optimization method provided by the invention is suited to general multi-core network processor systems. In network communication equipment, the routing protocol module usually runs on the main control board, and general multi-core network processors are widely used in main control boards.
The method fully exploits the parallel processing advantages of the general multi-core network processor. The common service modules of the various routing protocols are abstracted and then executed in parallel as multiple threads, and CPU core affinity binding provides stable and efficient service interfaces to each routing protocol. The routing protocol modules, such as the OSPF and BGP routing protocols, divide their service logic into modules according to their own characteristics and run them as multiple threads, fully exploiting the characteristics of the multi-core processor to optimize each protocol's parallel processing capability. The scheduling priority attribute of each routing protocol module is dynamically adjusted according to its service load, so that a busy routing protocol module obtains high-priority scheduling and the overall efficiency of the routing protocols is improved.
While the embodiments of the invention have been described in detail in connection with the accompanying drawings, it is not intended to limit the scope of the invention. Various modifications and changes may be made by those skilled in the art without inventive step within the scope of the appended claims.

Claims (5)

1. A routing protocol optimization method for a general multi-core network processor, characterized by comprising the following steps:
S1, the network platform service layer provides the basic routing common-service module functions in a parallelized manner, comprising routing management service, route redistribution management, protocol stack function service, message queue management, and interface event management, and offers unified services to the different protocol modules;
S2, the IGP routing protocols perform multithreaded parallelization module division and use the common interfaces provided by the network platform service layer, which specifically comprises: the IGP routing protocols, comprising the OSPF and ISIS routing protocols, perform multithreaded parallelization module division; OSPF divides its service logic into a neighbor management module, an LSDB management module, and an SPF calculation module and runs them as multiple threads, exploiting the characteristics of the multi-core processor to optimize OSPF's parallel processing capability, while the message management module, interface management module, and route management module reduce thread overhead by using the shared interfaces provided by the network platform service layer;
S3, the BGP routing protocol performs multi-instance parallel optimization based on partitioning of the neighbor session set and calls the shared interfaces provided by the network platform service layer;
S4, the routing protocol thread scheduling attributes of the general multi-core network processor are dynamically optimized, comprising: dynamically optimizing the routing protocol thread scheduling attributes of the general multi-core network processor to improve its routing protocol operating efficiency, wherein the common module threads of the network platform service layer (NPSM) need a raised scheduling priority because they provide common services to all protocols, and the threads of each protocol module dynamically adjust their scheduling priority according to the number of currently configured neighbors and the number of routes they maintain; the network acceleration engine of the general multi-core network processor improves packet receiving and sending capability.
2. The routing protocol optimization method for a general multi-core network processor according to claim 1, wherein the specific steps of S1 comprise:
S1.1, a message management thread receives and sends packets at high speed through the network acceleration engine and, after basic packet parsing, dispatches them to each routing protocol process through a message queue;
S1.2, an interface management thread maintains a global interface information base and triggers each routing protocol process's response to interface events through an interface event queue;
and S1.3, a route management thread manages the global routing table, and each routing protocol process interacts with and updates the global routing table through the route event queue.
3. The routing protocol optimization method for a general multi-core network processor according to claim 1, wherein the specific steps of S2 comprise:
S2.1, the main thread completes process start-up and initialization and spawns the specific slave threads according to the running conditions;
S2.2, the neighbor management slave thread establishes and maintains neighbor state, and at the same time extracts LSAs and announces them to the LSDB management slave thread;
S2.3, the LSDB management slave thread manages the LSAs collected from the neighbor management slave thread in a unified way and floods the LSA information so as to synchronize the LSDB across the whole OSPF or ISIS network;
and S2.4, the SPF calculation slave thread uses the data maintained by the LSDB management slave thread to compute a shortest path tree according to the SPF algorithm, thereby forming a routing table, and notifies the route update and storage module to update the synchronized global routing table.
4. The routing protocol optimization method for a general multi-core network processor according to claim 1, wherein the specific steps of S3 comprise:
S3.1, the main thread completes process start-up and initialization, spawns the slave threads, and completes the response to and distribution of neighbor sessions through the slave-thread communication interface;
S3.2, the slave threads are divided based on neighbor session sets; each instance thread is equivalent to a single traditional BGP instance process and is responsible for executing all protocol functions for the neighbor set attached to it, so the route update and calculation work of the whole BGP protocol is naturally distributed over multiple threads;
and S3.3, each slave thread accesses the global policy library through the policy library module to maintain the import and export policies of the neighbor sessions it is responsible for, while the route update and storage module maintains data consistency when the threads access the global routing table, which completes the mutual notification of updated routes among the slave threads.
5. The routing protocol optimization method for a general multi-core network processor according to claim 1, wherein the specific steps of S4 comprise:
S4.1, the network service platform process starts and maintains a routing protocol thread scheduling attribute optimization module, and starts routing protocol thread scheduling attribute optimization after the main transactions of the network service platform process have been initialized;
S4.2, the service threads common to all routing protocols, namely the message management thread, interface management thread, and route management thread of the network service process, are affinity-bound to designated CPU cores;
S4.3, if the service load of a routing protocol process reaches or exceeds the set threshold, the scheduling priority attribute of the corresponding routing protocol is adjusted so that the routing protocol with the heavier service load obtains high-priority scheduling;
and S4.4, when a routing protocol process determines that its service load has fallen back below the set threshold, the scheduling priority of the corresponding routing protocol is lowered.
CN202010168876.2A 2020-03-12 2020-03-12 Routing protocol optimization method for general multi-core network processor Active CN111404818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010168876.2A CN111404818B (en) 2020-03-12 2020-03-12 Routing protocol optimization method for general multi-core network processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010168876.2A CN111404818B (en) 2020-03-12 2020-03-12 Routing protocol optimization method for general multi-core network processor

Publications (2)

Publication Number Publication Date
CN111404818A CN111404818A (en) 2020-07-10
CN111404818B true CN111404818B (en) 2022-04-15

Family

ID=71432348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010168876.2A Active CN111404818B (en) 2020-03-12 2020-03-12 Routing protocol optimization method for general multi-core network processor

Country Status (1)

Country Link
CN (1) CN111404818B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113114641B (en) * 2021-03-30 2022-10-14 烽火通信科技股份有限公司 Method and system for realizing protocol NSR (non-volatile random Access) by single CPU (Central processing Unit)
CN114201427B (en) * 2022-02-18 2022-05-17 之江实验室 Parallel deterministic data processing device and method
CN116346953B (en) * 2023-03-02 2024-02-13 杭州又拍云科技有限公司 Acceleration method and device for real-time data transmission

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101127691A (en) * 2006-08-17 2008-02-20 王玉鹏 A method for implementing stream-based policy routing on network processor
CN108268328B (en) * 2013-05-09 2022-04-22 华为技术有限公司 Data processing device and computer
CN103927225B (en) * 2014-04-22 2018-04-10 浪潮电子信息产业股份有限公司 A kind of internet information processing optimization method of multi-core framework
CN108037994B (en) * 2017-11-15 2020-12-22 中国电子科技集团公司第三十二研究所 Scheduling mechanism supporting multi-core parallel processing in heterogeneous environment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102394809A (en) * 2011-10-13 2012-03-28 中国人民解放军国防科学技术大学 Multithreading parallel processing method of border gateway protocol
CN104836733A (en) * 2015-04-14 2015-08-12 中国人民解放军国防科学技术大学 Method for achieving optimal link state routing protocol
CN105119820A (en) * 2015-07-23 2015-12-02 中国人民解放军信息工程大学 Routing protocol multi-instance parallel execution system and parallel execution method thereof
CN106713131A (en) * 2016-11-18 2017-05-24 上海红阵信息科技有限公司 Multi-BGP routing instance parallel execution device
CN109921990A (en) * 2017-12-13 2019-06-21 丛林网络公司 Multithreading route processing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on High-Performance Parallel Processing Technology for Network Data Packets; Huang Yibin et al.; Computer and Modernization; 2016-12-31 (No. 12); 60-64 *

Also Published As

Publication number Publication date
CN111404818A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
CN111404818B (en) Routing protocol optimization method for general multi-core network processor
CN110413392B (en) Method for formulating single task migration strategy in mobile edge computing scene
CN110619595B (en) Graph calculation optimization method based on interconnection of multiple FPGA accelerators
CN107087019B (en) Task scheduling method and device based on end cloud cooperative computing architecture
Castilhos et al. Distributed resource management in NoC-based MPSoCs with dynamic cluster sizes
WO2015096656A1 (en) Thread creation method, service request processing method and related device
CN112328378B (en) Task scheduling method, computer device and storage medium
CN107357661A (en) A kind of fine granularity GPU resource management method for mixed load
CN106293950A (en) A kind of resource optimization management method towards group system
US11689646B2 (en) Network packet processing method and apparatus and network server
CN110427270B (en) Dynamic load balancing method for distributed connection operator in RDMA (remote direct memory Access) network
CN110990154B (en) Big data application optimization method, device and storage medium
CN108737268B (en) Software-defined industrial Internet of things resource scheduling method
Liu et al. Service resource management in edge computing based on microservices
CN108170417B (en) Method and device for integrating high-performance job scheduling framework in MESOS cluster
CN113672391B (en) Parallel computing task scheduling method and system based on Kubernetes
CN114465899A (en) Network acceleration method, system and device under complex cloud computing environment
Lu et al. An efficient load balancing algorithm for heterogeneous grid systems considering desirability of grid sites
CN113360245A (en) Internet of things equipment task downloading method based on mobile cloud computing deep reinforcement learning
CN115374949A (en) Distributed quantum computing system and resource management method
CN113742073B (en) LSB interface-based cluster control method
CN113347430B (en) Distributed scheduling device of hardware transcoding acceleration equipment and use method thereof
Xia et al. Distributed resource management and admission control of stream processing systems with max utility
Bustos-Jimenez et al. Balancing active objects on a peer to peer infrastructure
CN112799811B (en) High concurrency thread pool task scheduling method for edge gateway

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant