WO2016082603A1 - Scheduler and dynamic multiplexing method for scheduler - Google Patents

Scheduler and dynamic multiplexing method for scheduler

Info

Publication number
WO2016082603A1
WO2016082603A1 (PCT/CN2015/089663)
Authority
WO
WIPO (PCT)
Prior art keywords
scheduling
module
algorithm
scheduler
submodule
Prior art date
Application number
PCT/CN2015/089663
Other languages
English (en)
Chinese (zh)
Inventor
晏雷
Original Assignee
深圳市中兴微电子技术有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司 filed Critical 深圳市中兴微电子技术有限公司
Publication of WO2016082603A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/48 - Program initiating; Program switching, e.g. by interrupt

Definitions

  • the present invention relates to scheduling techniques, and in particular to a scheduler and a dynamic multiplexing method for a scheduler.
  • the combination of computer technology and communication technology has produced data communication technology.
  • the rapid development of data communication technology has accelerated the development of access devices, transmission devices, switching devices, routing devices, core network devices, base station devices, bearer network devices, data center equipment, server equipment, and the like in data communication networks.
  • the typical networking structure of the data communication network is shown in Figure 1.
  • various network devices and applications in data communication networks are increasing, and in particular the introduction of functions such as multicast, differentiated services, traffic engineering, and security protection makes the network topology, protocol standards, algorithm structure, and control management of these networks more complicated; this inevitably places higher requirements on network resource utilization, forwarding performance, maintenance cost, service deployment, service integration, and flexibility.
  • the software-defined networking (SDN) architecture has been introduced to reform existing data communication networks, but the schedulers used in current networks cannot adapt to these new demands.
  • the composition of the scheduler in the network is as shown in Figure 2.
  • the scheduling buffer module of a scheduler in the current network receives scheduling requests and caches them; the scheduling information decision module then makes decisions for different traffic types through one or several fixed scheduling algorithms preset in the algorithm processing module, and the scheduling output module outputs the scheduling result. It can be seen that the algorithm type of the current network scheduler is fixed and its scheduling mode is relatively simple, so it cannot adapt to changing traffic types.
  • in addition, when the network topology changes, the parameter configuration and hierarchical relationships of the relevant network schedulers need to be changed to adapt to the new topology, which increases the complexity of operating and maintaining the data communication network.
  • embodiments of the present invention provide a scheduler and a dynamic multiplexing method for a scheduler, which can adapt to the different requirements of different traffic types on scheduling algorithm type, scheduling performance, and scheduling flexibility, thereby realizing flexible sharing and dynamic multiplexing of various scheduling modes and reducing the complexity of network operation and maintenance.
  • An embodiment of the present invention provides a scheduler, where the scheduler includes: a linked list management module, an information storage module, a central control module, and an algorithm initiation module;
  • the linked list management module is configured to cache the received queue scheduling request and determine whether the queue scheduling request needs to be stored in the storage engine resource; if storage is required, to send a write instruction to the information storage module; and if storage is not required, to send a scheduling mode update request to the central control module;
  • the information storage module is configured to, when receiving the write instruction, store the queue scheduling request in the storage engine resource and send a scheduling mode update request to the central control module; and to multiplex to a corresponding scheduling mode according to the calculation result returned by the central control module and store the calculation result in the storage engine resource;
  • the central control module is configured to, when receiving the scheduling mode update request, send an algorithm scheduling request to the algorithm initiating module; and, when receiving the calculation result returned by the algorithm initiating module, complete the scheduling processing operation according to the calculation result and return the calculation result to the information storage module;
  • the algorithm initiating module is configured to, when receiving the algorithm scheduling request sent by the central control module, access the corresponding scheduling algorithm in the algorithm engine resource according to the acquired scheduling information, and return the calculation result obtained according to the scheduling algorithm to the central control module.
  • the information storage module is further configured to read the queue scheduling request in the storage engine resource, and the scheduler scheduling request and token bucket scheduling request in the calculation result, and to cache them to the linked list management module;
  • the central control module is further configured to determine, according to the updated scheduling mode, a queue scheduling request, a scheduler scheduling request, and a token bucket scheduling request cached in the linked list management module, and output a scheduling result.
  • the linked list management module includes a first arbitration submodule, a queue cache submodule, a scheduler cache submodule, and a token bucket cache submodule;
  • the first arbitration submodule is configured to arbitrate output of cache data in one of the queue buffer submodule, the scheduler buffer submodule, or the token bucket cache submodule;
  • the queue buffer submodule is configured to buffer the received queue scheduling request
  • the scheduler cache submodule is configured to cache a scheduler scheduling request in a calculation result
  • the token bucket cache submodule is configured to cache a token bucket scheduling request in the calculation result.
  • the information storage module includes a second arbitration sub-module, a linked list update sub-module, a parameter update sub-module, and a status update sub-module;
  • the second arbitration sub-module is configured to multiplex to a corresponding scheduling mode according to the calculation result, and to instruct the linked list update submodule, the parameter update submodule, and the state update submodule to update the linked list, parameters, and status of the scheduler for the corresponding scheduling mode;
  • the linked list update submodule is configured to update a linked list of the scheduler according to the indication of the second arbitration submodule;
  • the parameter update submodule is configured to update a parameter of the scheduler according to the indication of the second arbitration submodule
  • the status update submodule is configured to update a status of the scheduler according to the indication of the second arbitration submodule.
  • the central control module includes a control submodule, a parameter processing submodule, a linked list processing submodule, and a state processing submodule;
  • the control submodule is configured to, when receiving the scheduling mode update request, send an algorithm scheduling request to the algorithm initiating module; and, when receiving the calculation result returned by the algorithm initiating module, to instruct the linked list processing submodule, the parameter processing submodule, and the state processing submodule to complete the scheduling-related processing operations according to the calculation result, and to return the calculation result to the information storage module;
  • the linked list processing submodule is configured to complete processing of a linked list in the scheduler according to the indication of the control submodule;
  • the parameter processing submodule is configured to complete processing of parameters in the scheduler according to the indication of the control submodule;
  • the state processing submodule is configured to complete generation of a scheduler intermediate state according to the indication of the control submodule.
  • the algorithm initiating module is specifically configured to: generate a scheduling factor according to the acquired scheduling information, send a corresponding scheduling algorithm request to the algorithm engine resource according to the scheduling factor, and return the calculation result obtained by the algorithm engine resource according to the determined scheduling algorithm to the central control module.
  • the algorithm initiating module is further configured to return the scheduling factor to the central control module
  • the central control module is further configured to send the scheduling factor to a scheduler of the next hop node, so that the scheduler of the next hop node is dynamically multiplexed to the corresponding scheduling mode according to the scheduling factor.
  • the scheduling information includes data packet parsing result, traffic configuration parameter, real-time or historical traffic statistics, scheduler configuration parameters, scheduler current status, and scheduler status change information.
  • the parameter update submodule, the state update submodule, the control submodule, the parameter processing submodule, the linked list processing submodule, and the state processing submodule may, when performing processing, be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or a Field-Programmable Gate Array (FPGA).
  • an embodiment of the present invention provides a dynamic multiplexing method of a scheduler, where the method includes:
  • the linked list management module caches the received queue scheduling request and determines whether the queue scheduling request needs to be stored in the storage engine resource; if storage is required, it sends a write instruction to the information storage module, and the information storage module, upon receiving the write instruction, stores the queue scheduling request in the storage engine resource and sends a scheduling mode update request to the central control module; if storage is not required, the linked list management module sends the scheduling mode update request to the central control module directly;
  • when receiving the scheduling mode update request, the central control module sends an algorithm scheduling request to the algorithm initiating module; when receiving the algorithm scheduling request, the algorithm initiating module accesses the corresponding scheduling algorithm in the algorithm engine resource according to the acquired scheduling information and returns the calculation result obtained by the scheduling algorithm to the central control module;
  • when receiving the calculation result, the central control module completes the scheduling processing operation according to the calculation result and returns the calculation result to the information storage module; the information storage module multiplexes to the corresponding scheduling mode according to the returned calculation result and stores the calculation result in the storage engine resource.
  • after the information storage module multiplexes to the corresponding scheduling mode according to the returned calculation result and stores the calculation result in the storage engine resource, the method further includes:
  • the information storage module reads a queue scheduling request in the storage engine resource, and a scheduler scheduling request and a token bucket scheduling request in the calculation result, and caches the result to the linked list management module;
  • the central control module determines a queue scheduling request, a scheduler scheduling request, and a token bucket scheduling request cached in the linked list management module according to the updated scheduling mode, and outputs a scheduling result.
  • the algorithm initiating module accesses the corresponding scheduling algorithm in the algorithm engine resource according to the acquired scheduling information, and returns the calculation result obtained according to the scheduling algorithm to the central control module, including:
  • the algorithm initiating module generates a scheduling factor according to the acquired scheduling information and sends a corresponding scheduling algorithm request to the algorithm engine resource according to the scheduling factor; the algorithm engine resource determines the scheduling algorithm according to the scheduling algorithm request, obtains the calculation result according to the determined scheduling algorithm, and returns it to the algorithm initiating module;
  • the algorithm initiation module returns the calculation result to the central control module.
  • the method further includes:
  • the algorithm initiating module returns the scheduling factor to the central control module
  • the central control module sends the scheduling factor to a scheduler of the next hop node, so that the scheduler of the next hop node is dynamically multiplexed to the corresponding scheduling mode according to the scheduling factor.
  • the scheduling information includes data packet parsing result, traffic configuration parameter, real-time or historical traffic statistics, scheduler configuration parameters, scheduler current status, and scheduler status change information.
  • the linked list management module caches the received queue scheduling request and determines whether the queue scheduling request needs to be stored in the storage engine resource; if storage is required, it sends a write instruction to the information storage module, and when the information storage module receives the write instruction, it stores the queue scheduling request in the storage engine resource and sends a scheduling mode update request to the central control module; if not, the linked list management module sends a scheduling mode update request to the central control module; when receiving the scheduling mode update request, the central control module sends an algorithm scheduling request to the algorithm initiating module; when the algorithm initiating module receives the algorithm scheduling request, it accesses the corresponding scheduling algorithm in the algorithm engine resource according to the acquired scheduling information and returns the obtained calculation result to the central control module; when receiving the calculation result, the central control module completes the scheduling processing operation according to the calculation result and returns the calculation result to the information storage module; and the information storage module multiplexes to the corresponding scheduling mode according to the returned calculation result and stores the calculation result in the storage engine resource.
  • embodiments of the present invention access the corresponding scheduling algorithm in the algorithm engine resource according to the scheduling information, multiplex to a corresponding scheduling mode according to the obtained calculation result, and output the scheduling result according to the corresponding scheduling mode; this adapts to the different requirements of different traffic types on scheduling algorithm type, scheduling performance, and scheduling flexibility, thereby achieving flexible sharing and dynamic multiplexing of various scheduling modes and reducing the complexity of network operation and maintenance, as illustrated by the sketch below.
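  • purely as an illustration, the following minimal Python sketch models the request flow among the four modules described above; the function names, dictionary fields, and the trivial algorithm choice are assumptions of this sketch and not part of the described embodiment:

```python
# Minimal sketch of the request flow: linked list management -> information
# storage -> central control -> algorithm initiation. All names are illustrative.

def algorithm_initiation(scheduling_info):
    """Access a scheduling algorithm in the algorithm engine and return the result."""
    factor = "WFQ" if scheduling_info.get("delay_sensitive") else "RR"
    result = {"mode": factor, "weights": scheduling_info.get("weights", {})}
    return factor, result

def central_control(scheduling_info, storage_engine):
    """On a scheduling mode update request, consult the algorithm initiation module
    and hand the calculation result back to the information storage module."""
    factor, result = algorithm_initiation(scheduling_info)
    storage_engine["mode"] = result["mode"]        # information storage module:
    storage_engine["params"] = result["weights"]   # multiplex to the new scheduling mode
    return factor, result

def linked_list_management(request, scheduling_info, storage_engine):
    """Cache the queue scheduling request and trigger a scheduling mode update."""
    if request.get("needs_storage", True):         # e.g. the request passed a legality check
        storage_engine.setdefault("queue_requests", []).append(request)
    return central_control(scheduling_info, storage_engine)

engine = {}
print(linked_list_management({"queue_id": 7}, {"delay_sensitive": True, "weights": {7: 4}}, engine))
print(engine)
```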
  • FIG. 1 is a schematic structural diagram of a typical networking of an existing data communication network
  • FIG. 2 is a schematic structural diagram of a scheduler in an existing data communication network
  • FIG. 3 is a schematic structural diagram of a scheduler of an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a linked list management module according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an information storage module according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a central control module according to an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of an implementation process of a dynamic multiplexing method of a scheduler according to an embodiment of the present invention.
  • in the embodiments of the present invention, the corresponding scheduling algorithm in the algorithm engine resource is accessed according to the scheduling information, the scheduler multiplexes to a corresponding scheduling mode according to the calculation result obtained by the scheduling algorithm, and the scheduling result is decided and output according to the corresponding scheduling mode; this adapts to the different requirements of different traffic types on scheduling algorithm type, scheduling performance, and scheduling flexibility, thereby achieving flexible sharing and dynamic multiplexing of various scheduling modes and reducing the complexity of network operation and maintenance.
  • the scheduler includes: a linked list management module 300, an information storage module 301, a central control module 302, and an algorithm initiation module 303;
  • the linked list management module 300 is configured to cache the received queue scheduling request and determine whether the queue scheduling request needs to be stored in the storage engine resource; if storage is required, to send a write instruction to the information storage module 301; and if storage is not required, to send a scheduling mode update request to the central control module 302;
  • the information storage module 301 is configured to, when receiving the write command, store the queue scheduling request in the storage engine resource, and send a scheduling mode update request to the central control module 302; and according to the central control module 302 The returned calculation result is multiplexed into a corresponding scheduling mode, and the calculation result is stored in a storage engine resource;
  • the central control module 302 is configured to send an algorithm scheduling request to the algorithm initiating module 303 when receiving the scheduling mode update request, and, when receiving the calculation result returned by the algorithm initiating module 303, to complete the scheduling processing operation according to the calculation result and return the calculation result to the information storage module 301;
  • the algorithm initiating module 303 is configured to: when receiving the algorithm scheduling request sent by the central control module 302, access the corresponding scheduling algorithm in the algorithm engine resource according to the acquired scheduling information, and obtain the calculation result according to the scheduling algorithm. Returned to the central control module 302;
  • the storage engine resource includes different kinds of storage units and storage access modes. The storage units include standardized storage devices such as Random Access Memory (RAM), Read Only Memory (ROM), First In First Out (FIFO) memory, and buffers; the storage access modes include standardized read/write modes such as a single shared read/write port, one read port and one write port, two read ports and one write port, two read ports and two write ports, and one read port and two write ports. First, the storage mode of the queue scheduling request is determined according to the actual read and write bandwidth; since a queue scheduling request involves only one write operation and one read operation, a single shared port, or one read port and one write port, can be used.
  • when storage units supporting the 1W1R mode are insufficient, storage modes such as 1W2R, 2W1R, and 2W2R may also be used, and the queue scheduling request is then stored in the corresponding storage unit according to the determined storage mode;
  • the scheduler scheduling request and the token bucket scheduling request in the calculation result are stored in the storage engine resource according to the storage mode determined by the actual read and write bandwidth and the scheduling algorithm; since the scheduler scheduling request and the token bucket scheduling request involve write operations both before and after the calculation, a storage mode with two read ports and two write ports can be selected.
  • the standardized storage engine resources can meet the differentiated requirements of different devices on the number of queues, the number of schedulers, and the number of token buckets, thereby increasing resource utilization; a minimal sketch of how a storage access mode might be chosen is given below.
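  • the following sketch is illustrative only: it picks a standardized storage access mode from the number of concurrent reads and writes a request type needs per cycle; the mode labels follow the 1W1R/2W2R shorthand above, while the selection rule itself is an assumption:

```python
# Sketch: choose a standardized storage access mode from required per-cycle bandwidth.
def select_access_mode(reads_per_cycle, writes_per_cycle):
    modes = [                       # (max reads, max writes, label)
        (1, 1, "shared single port"),
        (1, 1, "1W1R"),
        (2, 1, "1W2R"),
        (1, 2, "2W1R"),
        (2, 2, "2W2R"),
    ]
    for max_r, max_w, label in modes:
        if reads_per_cycle <= max_r and writes_per_cycle <= max_w:
            return label
    raise ValueError("no standardized mode satisfies the required bandwidth")

# queue scheduling request: one write, one read -> a shared port (or 1W1R) is enough
print(select_access_mode(1, 1))
# scheduler / token bucket request: accesses before and after the calculation -> 2W2R
print(select_access_mode(2, 2))
```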
  • the algorithm engine resources include various standardized scheduling algorithms, for example, the Strict Priority (SP) scheduling algorithm, the Round Robin (RR) scheduling algorithm, the Weighted Round Robin (WRR) scheduling algorithm, the Weighted Deficit Round Robin (WDRR) scheduling algorithm, and the Weighted Fair Queuing (WFQ) scheduling algorithm, as well as standardized algorithm combinations, for example, SP+WRR, SP+WDRR, and SP+WFQ; they also include known standardized leaky bucket algorithms, hash algorithms, matching algorithms, and check algorithms, and other standardized algorithm resources that may be used for scheduling in the future.
  • the algorithm engine resource can be implemented as specific high-performance hard core firmware.
  • the algorithm engine resources can smooth the different requirements of different network traffic types on scheduling algorithms, scheduling performance, and scheduling flexibility, and allow autonomous migration between software resources and hardware resources, which greatly reduces the complexity of the device; a minimal sketch of such an algorithm pool is given below.
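  • as a minimal illustration, the algorithm engine can be thought of as a pool of standardized algorithms selected by name; only SP and WRR are sketched here, and the registry-style interface, queue fields, and example values are assumptions of this sketch:

```python
# Sketch: an "algorithm engine" as a pool of standardized scheduling algorithms.
import itertools

def strict_priority(queues, state=None):
    """Serve the backlogged queue with the numerically lowest priority value."""
    eligible = [q for q, m in queues.items() if m["backlog"] > 0]
    return min(eligible, key=lambda q: queues[q]["priority"]) if eligible else None

def weighted_round_robin(queues, state):
    """Classic WRR: each queue occupies `weight` slots in a repeating service cycle."""
    if "cycle" not in state:
        order = [q for q, m in sorted(queues.items()) for _ in range(m["weight"])]
        state["cycle"] = itertools.cycle(order)
    for _ in range(sum(m["weight"] for m in queues.values())):
        q = next(state["cycle"])
        if queues[q]["backlog"] > 0:
            return q
    return None

ALGORITHM_ENGINE = {"SP": strict_priority, "WRR": weighted_round_robin}

queues = {1: {"priority": 0, "weight": 3, "backlog": 5},
          2: {"priority": 1, "weight": 1, "backlog": 5}}
state = {}
print(ALGORITHM_ENGINE["SP"](queues))                                # -> 1
print([ALGORITHM_ENGINE["WRR"](queues, state) for _ in range(4)])    # -> [1, 1, 1, 2]
```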
  • the information storage module is further configured to read the queue scheduling request in the storage engine resource, and the scheduler scheduling request and the token bucket scheduling request in the calculation result, and to cache them to the linked list management module;
  • the central control module is further configured to determine, according to the updated scheduling mode, a queue scheduling request, a scheduler scheduling request, and a token bucket scheduling request cached in the linked list management module, and output a scheduling result.
  • the linked list management module 300 includes a first arbitration submodule 400, a queue cache submodule 401, a scheduler cache submodule 402, and a token bucket cache submodule 403; wherein,
  • the first arbitration sub-module 400 is configured to arbitrate output of cache data in one of the queue buffer sub-module, the scheduler buffer sub-module or the token bucket cache sub-module;
  • the queue buffer submodule 401 is configured to buffer the received queue scheduling request.
  • the scheduler buffer submodule 402 is configured to cache a scheduler scheduling request in the calculation result
  • the token bucket buffer submodule 403 is configured to cache a token bucket scheduling request in the calculation result.
  • the queue cache submodule 401 caches the received queue scheduling request, and when the first arbitration submodule 400 determines that the storage engine resource needs to be written, a write instruction is sent to the information storage module 301, which writes the queue scheduling request to the storage engine resource; the scheduler scheduling request and the token bucket scheduling request are calculation results obtained according to the scheduling algorithm, so the requests cached by the scheduler cache submodule 402 and the token bucket cache submodule 403 are read from the storage engine resource after the scheduling mode is updated; a minimal sketch of the arbitration among the three caches follows.
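  • the embodiment only requires that the first arbitration submodule let one of the three caches drive the output in a given cycle; the round-robin policy, the class name, and the request fields in the following sketch are assumptions made for illustration:

```python
# Sketch: round-robin arbitration among the queue, scheduler and token bucket caches.
from collections import deque

class LinkedListManagementCaches:
    def __init__(self):
        self.caches = {"queue": deque(), "scheduler": deque(), "token_bucket": deque()}
        self._order = ["queue", "scheduler", "token_bucket"]
        self._last = -1                          # index of the last winning cache

    def push(self, kind, request):
        self.caches[kind].append(request)

    def arbitrate(self):
        """Return (kind, request) from the next non-empty cache, or None if all are empty."""
        for step in range(1, len(self._order) + 1):
            idx = (self._last + step) % len(self._order)
            kind = self._order[idx]
            if self.caches[kind]:
                self._last = idx
                return kind, self.caches[kind].popleft()
        return None

caches = LinkedListManagementCaches()
caches.push("queue", {"qid": 3})
caches.push("token_bucket", {"bucket": 9})
print(caches.arbitrate())    # ('queue', {'qid': 3})
print(caches.arbitrate())    # ('token_bucket', {'bucket': 9})
```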
  • the information storage module is configured as shown in FIG. 5.
  • the information storage module includes a second arbitration submodule 500, a linked list update submodule 501, a parameter update submodule 502, and a status update submodule 503.
  • the second arbitration submodule 500 is configured to multiplex to a corresponding scheduling mode according to the calculation result, and to instruct the linked list update submodule, the parameter update submodule, and the status update submodule to update the linked list, parameters, and status of the scheduler for the corresponding scheduling mode;
  • the linked list update submodule 501 is configured to update a linked list of the scheduler according to the indication of the second arbitration submodule 500;
  • the parameter update submodule 502 is configured to update a parameter of the scheduler according to the indication of the second arbitration submodule 500;
  • the status update submodule 503 is configured to update the status of the scheduler according to the indication of the second arbitration submodule 500;
  • the linked list update submodule 501 includes a tail flag cache submodule, an upper pointer cache submodule, and a lower pointer cache submodule
  • the status update submodule 503 includes a queue empty flag cache submodule and a scheduler empty flag cache submodule;
  • the parameter update submodule 502 includes a counter cache submodule and an active linked list cache submodule;
  • the information storage module 301 adopts generalized scheduling submodules, so that the scheduler can freely combine the scheduling submodules to update the scheduling mode.
  • the combination of the parameter update sub-module 502 and the state update sub-module 503 can complete the update of the SP scheduling mode.
  • the combination of the linked list update submodule 501, the parameter update submodule 502, and the status update submodule 503 can complete the update of the WFQ scheduling mode; the scheduling mode can also be changed at any time, for example, according to the calculation result of the scheduling algorithm, the second arbitration submodule 500 can complete the conversion from the RR to the WRR scheduling mode or from the WRR to the WFQ scheduling mode, and can also change internal parameters of the scheduler, for example the counter value, so that the scheduler starts working from an expected state, which increases the integration of the device; a sketch of composing the update submodules into scheduling modes is given below.
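  • in the following sketch, the SP and WFQ combinations follow the examples above, while the RR and WRR entries, the update payloads, and the function names are assumptions made for illustration:

```python
# Sketch: composing the generalized update submodules into scheduling modes.
SUBMODULES_PER_MODE = {
    "SP":  ("parameter_update", "status_update"),
    "RR":  ("linked_list_update", "status_update"),                      # assumed
    "WRR": ("linked_list_update", "parameter_update", "status_update"),  # assumed
    "WFQ": ("linked_list_update", "parameter_update", "status_update"),
}

def second_arbitration(calculation_result, scheduler_store):
    """Multiplex to the scheduling mode named in the calculation result and run
    only the update submodules that this mode requires."""
    mode = calculation_result["mode"]
    scheduler_store["mode"] = mode
    for submodule in SUBMODULES_PER_MODE[mode]:
        scheduler_store.setdefault(submodule, {}).update(calculation_result.get(submodule, {}))
    return scheduler_store

store = {}
second_arbitration({"mode": "SP", "parameter_update": {"counter": 0}}, store)
# a later calculation result can switch the mode (e.g. to WFQ) and preset the counter value
second_arbitration({"mode": "WFQ", "parameter_update": {"counter": 12}}, store)
print(store["mode"], store["parameter_update"])    # WFQ {'counter': 12}
```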
  • the central control module is configured as shown in FIG. 6.
  • the central control module includes a control submodule 600, a linked list processing submodule 601, a parameter processing submodule 602, and a state processing submodule 603.
  • the control submodule 600 is configured to: when receiving the scheduling mode update request, send an algorithm scheduling request to the algorithm initiating module; and, when receiving the calculation result returned by the algorithm initiating module, instruct the linked list processing submodule 601, the parameter processing submodule 602, and the state processing submodule 603 to complete the scheduling processing operation according to the calculation result, and return the calculation result to the information storage module 301;
  • the linked list processing sub-module 601 is configured to complete processing of the linked list in the scheduler according to the instruction of the control sub-module 600;
  • the parameter processing sub-module 602 is configured to complete processing of parameters in the scheduler according to the indication of the control sub-module 600;
  • the state processing sub-module 603 is configured to complete generation of a scheduler intermediate state according to the indication of the control sub-module 600;
  • the linked list processing submodule 601 includes an in-chain processing submodule, an out-chain processing submodule, and a migration processing submodule; the in-chain processing submodule completes the processing related to linking a node into the list, the out-chain processing submodule completes the processing related to unlinking a node from the list, and the migration processing submodule moves a node to a new position on the linked list, as sketched below.
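  • for illustration only, the three linked list operations can be modelled on a plain Python list of node identifiers; the class and method names are assumptions, and the real embodiment keeps pointers and flags in the storage engine resource:

```python
# Sketch: node in-chain, out-chain and migration operations on the scheduler's list.
class SchedulerLinkedList:
    def __init__(self):
        self.nodes = []                   # ordered node ids, head first

    def in_chain(self, node_id):
        """In-chain: append a newly active node at the tail."""
        self.nodes.append(node_id)

    def out_chain(self):
        """Out-chain: remove and return the head node when it is scheduled out."""
        return self.nodes.pop(0) if self.nodes else None

    def migrate(self, node_id, new_position):
        """Migration: move a node to a new position on the linked list."""
        self.nodes.remove(node_id)
        self.nodes.insert(new_position, node_id)

chain = SchedulerLinkedList()
for q in (3, 5, 8):
    chain.in_chain(q)
chain.migrate(8, 0)          # promote node 8 to the head of the list
print(chain.nodes)           # [8, 3, 5]
print(chain.out_chain())     # 8
```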
  • the algorithm initiating module 303 is specifically configured to: generate a scheduling factor according to the acquired scheduling information, send a corresponding scheduling algorithm request to the algorithm engine resource according to the scheduling factor, and return the calculation result obtained by the algorithm engine resource according to the determined scheduling algorithm to the central control module 302.
  • the scheduling information includes information such as a data packet parsing result, a traffic configuration parameter, and real-time or historical traffic statistics.
  • the data packet parsing result is obtained by parsing the data packet; for example, an Internet Protocol (IP) data packet is parsed, and the obtained parsing result may include the Class of Service (CoS) field determined for the IP data packet, and may also include a judgment flag for each field, for example, whether the packet carries a double tag (TAG), a specific Media Access Control (MAC) address, or is a multicast IP packet, and the like;
  • the traffic configuration parameter is configuration data configured by the Central Processing Unit (CPU) for scheduling data packets, for example, the scheduling time interval, scheduling granularity, shaping rate, shaping bucket depth, port weight allocation, scheduler attachment relationship, and the like;
  • the real-time traffic statistics are the counts of data packets enqueued and dequeued during the current time period; the historical traffic statistics are the total counts of incoming and outgoing data packets since the device started working.
  • by analyzing the data packet parsing result, information such as the traffic type can be obtained; the traffic configuration parameters, real-time or historical traffic statistics, scheduler configuration parameters, scheduler current state, and scheduler state change rules are then comprehensively analyzed to generate the scheduling factor, and the scheduling algorithm is determined according to the scheduling factor.
  • the SP scheduling algorithm may be selected for the high priority communication signaling flow
  • the RR scheduling algorithm may be selected for the large bandwidth point-to-multipoint multicast video stream.
  • the WFQ scheduling algorithm may be selected for delay-sensitive voice streams and bursty network data streams; this adapts to the different requirements of different data traffic types on scheduling algorithm type, scheduling performance, and scheduling flexibility, so that flexible sharing and dynamic multiplexing of the various scheduling modes can effectively reduce the operating cost of the device, as in the sketch below.
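  • the classification rules in the following sketch mirror the examples above (SP for high-priority signaling, RR for large-bandwidth multicast video, WFQ otherwise); the field names and thresholds are assumptions made for illustration:

```python
# Sketch: deriving a scheduling factor (algorithm choice) from scheduling information.
def derive_scheduling_factor(scheduling_info):
    parse = scheduling_info["packet_parse"]        # data packet parsing result
    stats = scheduling_info["traffic_stats"]       # real-time / historical statistics
    if parse.get("is_signaling") and parse.get("cos", 0) >= 6:
        return "SP"                                # high-priority communication signaling
    if parse.get("is_multicast") and stats.get("rate_mbps", 0) > 100:
        return "RR"                                # large-bandwidth point-to-multipoint video
    return "WFQ"                                   # delay-sensitive voice / bursty data

info = {"packet_parse": {"is_multicast": True}, "traffic_stats": {"rate_mbps": 400}}
print(derive_scheduling_factor(info))              # -> RR
```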
  • the scheduling factor is defined so that schedulers in the data communication network can be self-starting, self-exciting, and self-adjusting. Since the scheduling factor needs to be propagated across the entire network of devices, it can be carried in an ETYPE field of the data packet header, using a format consistent with that of the Virtual Local Area Network (VLAN) TAG, and delivered to the scheduler of the next hop node in the data communication network; the scheduler of the next hop node can then, according to the scheduling factor parsed from the data packet, automatically update its scheduling mode even when network management fails, and output scheduling results according to the corresponding scheduling mode, which ensures the normal operation of the network devices and improves reliability during network operation. A sketch of carrying the scheduling factor in such a field is given below.
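  • the following sketch packs a scheduling factor into a four-byte field shaped like a VLAN TAG (a 16-bit type followed by 16 bits of payload); the type value 0x88B5 and the payload encoding are assumptions chosen for illustration, not values specified by the embodiment:

```python
# Sketch: a VLAN-TAG-like field carrying the scheduling factor to the next hop node.
import struct

SCHED_FACTOR_ETYPE = 0x88B5                       # assumed type value for this sketch
FACTOR_CODES = {"SP": 1, "RR": 2, "WRR": 3, "WDRR": 4, "WFQ": 5}

def pack_scheduling_factor(factor):
    return struct.pack("!HH", SCHED_FACTOR_ETYPE, FACTOR_CODES[factor])

def unpack_scheduling_factor(tag_bytes):
    etype, code = struct.unpack("!HH", tag_bytes)
    if etype != SCHED_FACTOR_ETYPE:
        raise ValueError("not a scheduling-factor tag")
    return {v: k for k, v in FACTOR_CODES.items()}[code]

tag = pack_scheduling_factor("WFQ")
print(tag.hex())                        # 88b50005
print(unpack_scheduling_factor(tag))    # WFQ
```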
  • the linked list management module 300, the information storage module 301, the central control module 302, and the algorithm initiation module 303 may all be implemented by a Central Processing Unit (CPU), a Micro Processing Unit (MPU), a Digital Signal Processor (DSP), or a Field Programmable Gate Array (FPGA) located in the scheduler.
  • the embodiment of the invention provides a dynamic multiplexing method of the scheduler. As shown in FIG. 7, the method includes:
  • Step S700 The linked list management module caches the received queue scheduling request, and determines whether the queue scheduling request needs to be stored in the storage engine resource, and if yes, proceeds to step S701; otherwise, proceeds to step S703;
  • Step S701 Send a write instruction to the information storage module
  • Step S702 When the information storage module receives the write instruction, the queue scheduling request is stored in the storage engine resource, and the scheduling mode update request is sent to the central control module.
  • Step S703 Send a scheduling mode update request to the central control module
  • Step S704 When receiving the scheduling mode update request, the central control module sends an algorithm scheduling request to the algorithm initiating module.
  • Step S705 When receiving the algorithm scheduling request, the algorithm initiating module accesses the corresponding scheduling algorithm in the algorithm engine resource according to the acquired scheduling information, and returns the calculation result obtained according to the scheduling algorithm to the central control module;
  • Step S706 When receiving the calculation result, the central control module completes the scheduling processing operation according to the calculation result, and returns the calculation result to the information storage module;
  • Step S707 The information storage module multiplexes to the corresponding scheduling mode according to the returned calculation result, and stores the calculation result in the storage engine resource.
  • the linked list management module performs legality verification on the received queue scheduling request; if the verification succeeds, it is determined that the queue scheduling request needs to be stored in the storage engine resource, and if the verification fails, it is determined that the queue scheduling request does not need to be stored in the storage engine resource.
  • the algorithm initiating module accesses the corresponding scheduling algorithm in the algorithm engine resource according to the acquired scheduling information, and returns the calculation result obtained according to the scheduling algorithm to the central control module, including:
  • the algorithm initiating module generates a scheduling factor according to the acquired scheduling information
  • the algorithm initiating module sends a corresponding scheduling algorithm request to the algorithm engine resource according to the scheduling factor, and the algorithm engine resource determines the scheduling algorithm according to the scheduling algorithm request, and obtains the calculation result according to the determined scheduling algorithm and returns to the Algorithm initiation module;
  • the algorithm initiation module returns the calculation result to the central control module.
  • the scheduling information includes data packet parsing result, traffic configuration parameter, real-time or historical traffic statistics, scheduler configuration parameters, scheduler current state, and scheduler state change information.
  • the method further includes:
  • the algorithm initiating module returns the scheduling factor to the central control module
  • the central control module sends the scheduling factor to a scheduler of the next hop node, so that the scheduler of the next hop node is dynamically multiplexed to the corresponding scheduling mode according to the scheduling factor.
  • the scheduling factor is defined so that schedulers in the data communication network can be self-starting, self-exciting, and self-adjusting. Since the scheduling factor needs to be propagated across the entire network of devices, it can be carried in an ETYPE field of the data packet header, using a format consistent with that of the Virtual Local Area Network (VLAN) TAG, and delivered to the scheduler of the next hop node in the data communication network; the scheduler of the next hop node can then, according to the scheduling factor parsed from the data packet, automatically update its scheduling mode even when network management fails, and output scheduling results according to the corresponding scheduling mode, which ensures the normal operation of the network devices and improves reliability during network operation.
  • after step S707, the method further includes:
  • Step S708 The information storage module 301 reads the queue scheduling request in the storage engine resource, and the scheduler scheduling request and the token bucket scheduling request in the calculation result, and caches the result to the linked list management module 300;
  • Step S709 The central control module 302 determines a queue scheduling request, a scheduler scheduling request, and a token bucket scheduling request buffered in the linked list management module 300 according to the updated scheduling mode, and outputs a scheduling result.
  • after step S709, the process returns to step S700 to implement a looped decision and output process for the scheduling result, so that when a new queue scheduling request is received, the corresponding scheduling algorithm in the algorithm engine resource is accessed according to the acquired scheduling information, the calculation result obtained according to the scheduling algorithm is used to multiplex to a corresponding scheduling mode, and the scheduling result is decided according to the corresponding scheduling mode, thereby implementing the dynamic multiplexing of the scheduler.
  • the scheduler and the dynamic multiplexing method for a scheduler according to the embodiments of the present invention provide the following significant advantages:
  • a generalized scheduling sub-module is used, and combined with standardized storage engine resources and algorithm engine resources, the hardware resources in the existing scheduler can be fully integrated, thereby improving the efficiency of hardware development;
  • the embodiments of the present invention access the corresponding scheduling algorithm in the algorithm engine resource according to the scheduling information, multiplex to a corresponding scheduling mode according to the calculation result obtained by the scheduling algorithm, and output scheduling results according to the corresponding scheduling mode, so as to adapt to the different requirements of different data traffic types on scheduling algorithm type, scheduling performance, and scheduling flexibility, thereby effectively reducing the complexity of network operation and maintenance;
  • the scheduling factor may be delivered to the scheduler of the next hop node in the data communication network, so that the scheduler of the next hop node automatically updates the scheduling mode according to the scheduling factor, and outputs the scheduling result according to the corresponding scheduling mode. It can ensure the normal operation of the network equipment, thereby improving the reliability during the operation of the network.
  • in summary, the linked list management module caches the received queue scheduling request and determines whether the queue scheduling request needs to be stored in the storage engine resource; if necessary, it sends a write instruction to the information storage module, and the information storage module stores the queue scheduling request in the storage engine resource and sends a scheduling mode update request to the central control module; if not, the linked list management module sends a scheduling mode update request to the central control module directly. When receiving the scheduling mode update request, the central control module sends an algorithm scheduling request to the algorithm initiating module; when receiving the algorithm scheduling request, the algorithm initiating module accesses the corresponding scheduling algorithm in the algorithm engine resource according to the acquired scheduling information and returns the obtained calculation result to the central control module; when receiving the calculation result, the central control module completes the scheduling processing operation according to the calculation result and returns the calculation result to the information storage module; and the information storage module multiplexes to the corresponding scheduling mode according to the returned calculation result and stores the calculation result in the storage engine resource.
  • the embodiments of the present invention access the corresponding scheduling algorithm in the algorithm engine resource according to the scheduling information and multiplex to a corresponding scheduling mode according to the obtained calculation result; scheduling results are output according to the corresponding scheduling mode, which adapts to the different requirements of different traffic types on scheduling algorithm type, scheduling performance, and scheduling flexibility, thereby realizing flexible sharing and dynamic multiplexing of various scheduling modes and reducing the complexity of network operation and maintenance.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A scheduler, comprising: a linked list management module (300), configured to buffer a received queue scheduling request and determine whether the queue scheduling request needs to be stored in a storage engine resource, to send a write instruction to an information storage module (301) if the queue scheduling request needs to be stored, and to send a scheduling mode update request to a central control module (302) if it does not; the information storage module (301), configured to, when the write instruction is received, store the scheduling request in the storage engine resource and send the scheduling mode update request to the central control module (302), and to multiplex to a corresponding scheduling mode according to a returned calculation result; and the central control module (302), configured to, when the scheduling mode update request is received, send an algorithm scheduling request to an algorithm initiation module (303). Also provided is a dynamic multiplexing method for a scheduler.
PCT/CN2015/089663 2014-11-25 2015-09-15 Scheduler and dynamic multiplexing method for scheduler WO2016082603A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410691247.2 2014-11-25
CN201410691247.2A CN105700940B (zh) 2014-11-25 2014-11-25 一种调度器及调度器的动态复用方法

Publications (1)

Publication Number Publication Date
WO2016082603A1 true WO2016082603A1 (fr) 2016-06-02

Family

ID=56073553

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/089663 WO2016082603A1 (fr) 2014-11-25 2015-09-15 Scheduler and dynamic multiplexing method for scheduler

Country Status (2)

Country Link
CN (1) CN105700940B (fr)
WO (1) WO2016082603A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110703999A (zh) * 2019-09-30 2020-01-17 盛科网络(苏州)有限公司 存储器的读操作的调度方法和存储器
CN112590880A (zh) * 2020-12-21 2021-04-02 中国铁道科学研究院集团有限公司通信信号研究所 一种ats控制权切换方法
CN115801897A (zh) * 2022-12-20 2023-03-14 南京工程学院 一种边缘代理的报文动态处理方法
CN115801897B (zh) * 2022-12-20 2024-05-24 南京工程学院 一种边缘代理的报文动态处理方法

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107544789B (zh) * 2016-06-23 2021-06-15 中兴通讯股份有限公司 一种拓扑适配方法和装置
CN110535714B (zh) 2018-05-25 2023-04-18 华为技术有限公司 一种仲裁方法及相关装置
CN111126895A (zh) * 2019-11-18 2020-05-08 青岛海信网络科技股份有限公司 一种复杂场景下调度智能分析算法的管理仓库及调度方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101272345A (zh) * 2008-04-29 2008-09-24 杭州华三通信技术有限公司 一种流量控制的方法、系统和装置
CN101710292A (zh) * 2009-12-21 2010-05-19 中国人民解放军信息工程大学 一种可重构任务处理系统、调度器及任务调度方法
CN101958824A (zh) * 2009-07-14 2011-01-26 华为技术有限公司 一种数据交换方法及数据交换结构
CN102362257A (zh) * 2009-03-24 2012-02-22 国际商业机器公司 使用相关矩阵追踪解除分配的加载指令
US8893130B2 (en) * 2007-03-26 2014-11-18 Raytheon Company Task scheduling method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20070005917A (ko) * 2003-09-30 2007-01-10 쟈루나 에스에이 운영체제
US7995597B2 (en) * 2008-10-14 2011-08-09 Nortel Networks Limited Method and system for weighted fair queuing
CN101478703A (zh) * 2008-12-12 2009-07-08 北京邮电大学 一种t-mpls光传送网络多业务节点的实现方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8893130B2 (en) * 2007-03-26 2014-11-18 Raytheon Company Task scheduling method and system
CN101272345A (zh) * 2008-04-29 2008-09-24 杭州华三通信技术有限公司 一种流量控制的方法、系统和装置
CN102362257A (zh) * 2009-03-24 2012-02-22 国际商业机器公司 使用相关矩阵追踪解除分配的加载指令
CN101958824A (zh) * 2009-07-14 2011-01-26 华为技术有限公司 一种数据交换方法及数据交换结构
CN101710292A (zh) * 2009-12-21 2010-05-19 中国人民解放军信息工程大学 一种可重构任务处理系统、调度器及任务调度方法

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110703999A (zh) * 2019-09-30 2020-01-17 盛科网络(苏州)有限公司 存储器的读操作的调度方法和存储器
CN112590880A (zh) * 2020-12-21 2021-04-02 中国铁道科学研究院集团有限公司通信信号研究所 一种ats控制权切换方法
CN112590880B (zh) * 2020-12-21 2023-07-18 中国铁道科学研究院集团有限公司通信信号研究所 一种ats控制权切换方法
CN115801897A (zh) * 2022-12-20 2023-03-14 南京工程学院 一种边缘代理的报文动态处理方法
CN115801897B (zh) * 2022-12-20 2024-05-24 南京工程学院 一种边缘代理的报文动态处理方法

Also Published As

Publication number Publication date
CN105700940B (zh) 2019-05-31
CN105700940A (zh) 2016-06-22

Similar Documents

Publication Publication Date Title
WO2016082603A1 (fr) Programmateur et procédé de multiplexage dynamique pour programmateur
WO2014173367A2 (fr) Procédé de mise en œuvre de qualité de service, système, dispositif et support de stockage informatique
JP2014517571A (ja) 階層型スケジューリングおよびシェーピング
Hua et al. Scheduling design and analysis for end-to-end heterogeneous flows in an avionics network
US20120155271A1 (en) Scalable resource management in distributed environment
Hua et al. Scheduling heterogeneous flows with delay-aware deduplication for avionics applications
Mustafa et al. The effect of queuing mechanisms first in first out (FIFO), priority queuing (PQ) and weighted fair queuing (WFQ) on network’s routers and applications
WO2013025703A1 (fr) Politique de programmation de paquet pouvant être mis à l'échelle pour un grand nombre de sessions
CN108667746B (zh) 一种在深空延时容忍网络中实现业务优先级的方法
Hegde et al. Experiences with a centralized scheduling approach for performance management of IEEE 802.11 wireless LANs
JP2020072336A (ja) パケット転送装置、方法、及びプログラム
JP2020022023A (ja) パケット転送装置、方法、及びプログラム
Tawk et al. Optimal scheduling and delay analysis for AFDX end-systems
Wang et al. Toward statistical QoS guarantees in a differentiated services network
WO2022068617A1 (fr) Procédé et dispositif de mise en forme de trafic
Rukmani ENHANCED LOW LATENCY QUEUING ALGORITHM FOR REAL TIME APPLICATIONS IN WIRELESS NETWORKS.
Jha et al. New Queuing Technique for Improving Computer Networks QoS
Rashid et al. Traffic intensity based efficient packet schedualing
Domżał et al. Efficient congestion control mechanism for flow‐aware networks
Zhang et al. Hard real-time communication over multi-hop switched ethernet
Kulhari et al. Traffic shaping at differentiated services enabled edge router using adaptive packet allocation to router input queue
Domżał Flow-aware networking as an architecture for the IPv6 QoS Parallel Internet
Joung et al. Effect of flow aggregation on the maximum end-to-end delay
F AL-Allaf et al. Simevents/Stateflow base Reconfigurable Scheduler in IP Internet Router
Zakariyya et al. Simulation of Class Based Weighted Fair Queue Algorithm on an IP Router Using OPNET

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15863820

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15863820

Country of ref document: EP

Kind code of ref document: A1