CN116582479A - Service flow routing method, system, storage medium and equipment - Google Patents


Info

Publication number
CN116582479A
Authority
CN
China
Prior art keywords
information
network
traffic
traffic flow
service
Prior art date
Legal status
Pending
Application number
CN202310679913.XA
Other languages
Chinese (zh)
Inventor
杨国民
Current Assignee
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202310679913.XA priority Critical patent/CN116582479A/en
Publication of CN116582479A publication Critical patent/CN116582479A/en
Pending legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/02 Topology update or discovery
    • H04L 45/08 Learning-based routing, e.g. using neural networks or artificial intelligence
    • H04L 45/30 Routing of multiclass traffic
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks


Abstract

The invention discloses a service flow routing method, system, storage medium and device. Network topology information containing network resource information is acquired together with the information of all service flows in a period; a reinforcement learning method then optimizes the policy with respect to both service performance and network resources, and service flows are routed according to the optimized policy. Service performance is thereby improved while the utilization of network resources is further increased.

Description

Service flow routing method, system, storage medium and equipment
Technical Field
The invention relates to a service flow routing method, system, storage medium and device, and belongs to the technical field of communication networks.
Background
With the continuous development of communication technology, emerging services place ever higher demands on networks; for example, the URLLC (ultra-reliable low-latency communication) service class in 5G networks imposes very stringent requirements on latency and packet loss rate. How to make full use of existing network resources while meeting service-specific requirements, guiding each service flow from its source node to its destination node, has become one of the hot research topics in the industry.
The traditional Internet architecture adopts decentralized control: each node manages its own routing, and data packets follow the shortest path with the fewest hops, which leads to unbalanced network traffic, congestion, and service performance that cannot be guaranteed. Against this background, a centrally controlled network architecture that separates the data plane from the control plane has gradually replaced the traditional architecture. Under the new architecture, an SDN controller plans the route of every service flow globally and coordinates the traffic of each link, reducing congestion as much as possible and thereby improving service performance.
To solve the problem of reasonably routing end-to-end traffic under the SDN architecture, many schemes have been proposed in the industry, such as patent applications 201911183909.4, 202211562064.1, 202110118171.4, 201811292342.X and 202211473921.0, but no traffic routing method that optimizes service performance and network resources at the same time is currently known.
Disclosure of Invention
The invention provides a service flow routing method, system, storage medium and device, which solve the problems described in the background art.
In order to solve the technical problems, the invention adopts the following technical scheme:
a traffic flow routing method, comprising:
acquiring network topology information containing network resource information and information of all service flows in a period;
according to the network topology information and the service flow information, a reinforcement learning method is adopted to obtain a preferred strategy; wherein the preferred policy comprises a traffic flow preferred routing policy and a preferred network resource allocation policy for each node and each link in the network;
and if the preferred strategy meets the requirement, carrying out service flow routing according to the preferred strategy.
Acquiring the information of all service flows in a period comprises:
acquiring flow information of all services in a period by adopting a time slice polling mode;
sending the service flow information into different queues according to CoS class; wherein each queue stores service flow information of the same CoS class; in the same queue, service flow information is enqueued in FIFO manner according to arrival time.
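The per-CoS FIFO queues described above can be sketched as follows (an illustrative sketch only; the field names are hypothetical, not taken from the patent):

```python
from collections import deque

NUM_COS_LEVELS = 8  # CoS classes 0-7, with 7 the most demanding

def build_queues():
    # One FIFO queue per CoS class.
    return [deque() for _ in range(NUM_COS_LEVELS)]

def enqueue_flow(queues, flow):
    # Flows are appended in arrival order, giving FIFO within a class.
    queues[flow["cos"]].append(flow)

queues = build_queues()
enqueue_flow(queues, {"id": "f1", "cos": 7})
enqueue_flow(queues, {"id": "f2", "cos": 7})
enqueue_flow(queues, {"id": "f3", "cos": 0})
```

Flows of the same class thus keep their arrival order, while different classes are kept apart for the priority scheduling described later.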
According to the network topology information and the service flow information, a reinforcement learning method is adopted to obtain a preferred strategy, which comprises the following steps:
sequentially scheduling the service flow information in the queue according to the sequence from high to low of the CoS grade, taking the scheduled service flow information and the corresponding network topology information as the state of each step of the reinforcement learning method, and acquiring a preferred strategy by adopting the reinforcement learning method;
wherein, in the network topology information corresponding to the nth schedule, the network resource is equal to the initial network resource minus the allocated network resource; the allocated network resources are the network resources to which all traffic flows before the nth schedule are allocated.
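The resource bookkeeping described above (available resources at the n-th schedule equal the initial resources minus everything allocated before it) can be sketched as follows; the resource names are hypothetical:

```python
def remaining_resources(initial, earlier_allocations):
    # Network resources at the n-th schedule: initial resources minus
    # the resources allocated to all flows scheduled before it.
    remaining = dict(initial)
    for alloc in earlier_allocations:
        for resource, amount in alloc.items():
            remaining[resource] -= amount
    return remaining

initial = {"cpu_node_a": 100, "storage_node_a": 50, "bw_link_ab": 1000}
earlier = [{"cpu_node_a": 10, "bw_link_ab": 200},
           {"bw_link_ab": 100}]
state_resources = remaining_resources(initial, earlier)
```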
The network resources comprise computing resources and storage resources available to each node and bandwidth resources available to each link;
the actions in the reinforcement learning method comprise service flow routing, calculation resources allocated for service flows by all nodes, storage resources allocated for service flows by all nodes and bandwidth resources allocated for service flows by all links.
The service flow information comprises CoS grade of the service flow, maximum time delay allowed by the service flow, maximum jitter allowed by the service flow, maximum packet loss rate allowed by the service flow and minimum bandwidth required by the service flow;
in the reinforcement learning method, the single-step reward function is given by formula (1), where r is the single-step reward value; B_ij is the currently available bandwidth resource of the link between node i and node j; B_ij^k is the bandwidth resource allocated to traffic flow k on the link between node i and node j; D_k, J_k, L_k, B_k and CoS_k are respectively the maximum delay allowed for traffic flow k, its maximum allowed jitter, maximum allowed packet loss rate, required minimum bandwidth and CoS class; and D, J, L and B are respectively the actual delay, jitter, packet loss rate and bandwidth of traffic flow k.
After a service flow's information is scheduled, it is deleted from the queue. If the reinforcement learning of the current period completes while service flow information still remains in the queue, the remaining information, together with all the service flow information of the next period, is used for the service flow routing of the next period.
A traffic flow routing system, comprising:
the acquisition module acquires network topology information containing network resource information and information of all service flows in a period;
the reinforcement learning module is used for acquiring a preferred strategy by adopting a reinforcement learning method according to the network topology information and the service flow information; wherein the preferred policy comprises a traffic flow preferred routing policy and a preferred network resource allocation policy for each node and each link in the network;
and the action module is used for carrying out service flow routing according to the preferred strategy if the preferred strategy meets the requirement.
A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform a traffic flow routing method.
A computer device comprising one or more processors, and one or more memories in which one or more programs are stored and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing a traffic flow routing method.
The invention has the following beneficial effects: by acquiring network topology information containing network resource information together with the service flow information of the period, and using a reinforcement learning method that optimizes the policy with respect to both service performance and network resources, service flows are routed according to the optimized policy, improving service performance while further increasing the utilization of network resources.
Drawings
FIG. 1 is a flow chart of a traffic flow routing method;
FIG. 2 is a flow chart for obtaining traffic flow information;
FIG. 3 is a flow chart of scheduling traffic flow information;
FIG. 4 is a flow chart of reinforcement learning;
fig. 5 is a schematic diagram of an SDN architecture.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
As shown in fig. 1, a service flow routing method includes the following steps:
step 1, obtaining network topology information containing network resource information and information of all service flows in a period; wherein the period can be empirically set within a certain range;
step 2, obtaining a preferred strategy by adopting a reinforcement learning method according to the network topology information and the service flow information; wherein the preferred policy comprises a traffic flow preferred routing policy and a preferred network resource allocation policy for each node and each link in the network;
and step 3, if the preferred strategy meets the requirement, carrying out service flow routing according to the preferred strategy.
The method is implemented in an application server under the SDN architecture. By acquiring network topology information containing network resource information together with the service flow information of the period, and applying a reinforcement learning method that optimizes the policy with respect to both service performance and network resources, service flows are routed according to the optimized policy, improving service performance while further increasing network resource utilization.
In the step 1, the network topology information may be directly obtained from the SDN controller, specifically, by a network detection function of the SDN controller, where the network topology information mainly includes each node and a link configuration, a computing resource and a storage resource available for each node, and a bandwidth resource available for each link.
According to requirements such as delay and reliability, service flows can be divided into 8 CoS classes, numbered 0 to 7: class 7 indicates the most stringent delay and reliability requirements, and class 0 the least stringent.
Routes for a plurality of service flows are generated in each preset period. As shown in fig. 2, the flow information of all services in the period can be acquired by time-slice polling, and the service flow information is placed into different queues according to CoS class; each queue stores the service flow information of one CoS class, so there are 8 queues. Within the same queue, service flow information is enqueued FIFO according to arrival time.
The service flow information in the queues is scheduled in order of CoS class from high to low: only after the queue of a higher class has been fully scheduled can the queue of a lower class be scheduled, so the queue of class 7 is scheduled first, and scheduling within a queue is FIFO, as shown in fig. 3. After a service flow's information is scheduled, it is deleted from its queue.
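The high-to-low CoS scheduling just described can be sketched as follows (an illustrative sketch; the flow identifiers are made up):

```python
from collections import deque

def next_flow(queues):
    # Highest non-empty CoS queue first; FIFO within a queue.
    # popleft() both returns the flow and deletes it from the queue.
    for cos in range(len(queues) - 1, -1, -1):
        if queues[cos]:
            return queues[cos].popleft()
    return None  # all queues empty

queues = [deque() for _ in range(8)]
queues[3].append("f_low")
queues[7].append("f_hi_1")
queues[7].append("f_hi_2")
order = [next_flow(queues) for _ in range(3)]
```

Both class-7 flows are scheduled, in arrival order, before the class-3 flow.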
Taking the scheduled service flow information and the corresponding network topology information as the state of each step of the reinforcement learning method, and acquiring a preferred strategy by adopting the reinforcement learning method; wherein, in the network topology information corresponding to the nth schedule, the network resource is equal to the initial network resource minus the allocated network resource; the allocated network resources are the network resources to which all traffic flows before the nth schedule are allocated.
As shown in fig. 4, the reinforcement learning method adopts the DDPG algorithm, whose Actor network directly outputs the selected action instead of a probability over actions.
The DDPG algorithm consists of an Actor network that determines the policy, a Critic network that estimates the state-action value, and their target networks (the target Actor network and the target Critic network). After these 4 networks are established, the neural-network parameters are initialized and the parameters of the Actor and Critic networks are copied to the target Actor and target Critic networks; meanwhile, the maximum number of training episodes (Episode) and the maximum number of steps (Step) per episode are set, the size of the replay memory is set, and the replay memory is cleared.
All possible routes between the source node and the destination node of a traffic flow are found by traversing adjacent nodes; one of them is selected by the trained Actor network, and an epsilon-greedy strategy is used in the action-selection process.
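Enumerating all loop-free routes by traversing adjacent nodes, plus an epsilon-greedy choice among them, can be sketched as follows (a minimal illustration; the example topology is made up, and the "best" route index would in practice come from the Actor network):

```python
import random

def all_simple_paths(adj, src, dst, path=None):
    # Depth-first traversal of adjacent nodes, collecting every
    # loop-free route from src to dst.
    path = (path or []) + [src]
    if src == dst:
        return [path]
    routes = []
    for nxt in adj.get(src, []):
        if nxt not in path:  # avoid loops
            routes.extend(all_simple_paths(adj, nxt, dst, path))
    return routes

def epsilon_greedy(routes, best_index, epsilon=0.1, rng=random):
    # With probability epsilon explore a random route, else exploit.
    if rng.random() < epsilon:
        return rng.choice(routes)
    return routes[best_index]

adj = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
routes = all_simple_paths(adj, "A", "D")
```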
Besides the selected route, the output of the Actor network includes the computing and storage resources allocated to the traffic flow by each node in the route (except the destination node) and the bandwidth resources allocated by each link; that is, the actions of the DDPG algorithm comprise the traffic flow route, the computing resources allocated by each node, the storage resources allocated by each node, and the bandwidth resources allocated by each link. After these actions are applied, the corresponding resource amounts are subtracted from the current value of each resource, and the updated resource values, together with the traffic flow's source and destination nodes, CoS class, arrival rate, maximum allowed delay, maximum allowed delay jitter, maximum allowed packet loss rate and required minimum bandwidth, form the input of the next state.
The output quality of the Actor network is reflected by a reward function, expressed as formula (1), where r is the single-step reward value; B_ij^k, the bandwidth resource allocated to traffic flow k on the link between node i and node j, is obtained from the output of the Actor network; B_ij, the currently available bandwidth resource of the link between node i and node j, is obtained by subtracting B_ij^k from its initial value at each step; D_k, J_k, L_k, B_k and CoS_k are respectively the maximum delay, maximum jitter and maximum packet loss rate allowed for traffic flow k, its required minimum bandwidth and its CoS class, defined by the user's service flow characteristics; and D, J, L and B are the actual delay, jitter, packet loss rate and bandwidth of traffic flow k after a step is executed, obtained through network measurement.
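Formula (1) itself is not reproduced in this text (it appears only as an image in the original document). Purely as an illustration of a reward with the same inputs, and NOT as the patent's actual formula, a penalty-based single-step reward over the listed variables could look like this:

```python
def single_step_reward(flow, measured, weight=1.0):
    # Illustrative reward (NOT the patent's formula 1): positive when the
    # measured QoS meets flow k's requirements, with violations penalized
    # and the whole reward scaled by the flow's CoS class.
    violations = 0.0
    # Delay, jitter and loss requirements are upper bounds.
    if measured["delay"] > flow["max_delay"]:
        violations += measured["delay"] / flow["max_delay"] - 1
    if measured["jitter"] > flow["max_jitter"]:
        violations += measured["jitter"] / flow["max_jitter"] - 1
    if measured["loss"] > flow["max_loss"]:
        violations += measured["loss"] / flow["max_loss"] - 1
    # The bandwidth requirement is a lower bound.
    if measured["bandwidth"] < flow["min_bw"]:
        violations += 1 - measured["bandwidth"] / flow["min_bw"]
    return weight * (1 + flow["cos"]) * (1.0 - violations)

flow = {"cos": 7, "max_delay": 10.0, "max_jitter": 2.0,
        "max_loss": 0.01, "min_bw": 100.0}
```

The `weight`, the CoS scaling and the proportional penalties are all assumptions made for the sketch.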
Each step produces a state S (network topology information: node and link configuration, the available computing and storage resources of each node, the available bandwidth of each link; and traffic flow information: source node, destination node, CoS class, arrival rate, bandwidth requirement, delay requirement, jitter requirement and packet loss rate requirement), an action A (the route selected for the current traffic flow, the computing and storage resources allocated to it by each node on the route except the destination, and the bandwidth allocated by each link), a reward r given by formula (1), and the next state S' (the updated network topology information and traffic flow information of the same form). These form a quadruple (S, A, r, S'), and the quadruples are stored in the replay memory.
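A fixed-capacity replay memory for the (S, A, r, S') quadruples can be sketched as follows (the capacity and sampling interface are standard DDPG practice, not values specified in the patent):

```python
from collections import deque
import random

class ReplayMemory:
    # Stores (S, A, r, S') quadruples; oldest entries are evicted
    # once capacity is reached.
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))

    def sample(self, batch_size, rng=random):
        # Uniform random minibatch for training.
        return rng.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)
```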
When the replay memory holds enough data, S and A from the replay memory are input to the Critic network to obtain the state-action value Q(S, A); S' is input to the target Actor network to obtain the next action A'; and S' and A' are input together to the target Critic network to obtain the next state-action value Q(S', A'). The Critic network is then updated toward the target Q(S, A) = r + gamma * Q(S', A') (gamma being the discount factor), the Actor network is updated using the policy-gradient algorithm, and the parameters of the target Actor and target Critic networks are updated from the parameters of the Actor and Critic networks.
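The Critic target and the target-network update can be sketched as follows. The TD target follows the update rule stated above; the Polyak (soft) update with coefficient `tau` is the usual DDPG choice and is an assumption here, since the patent does not specify how the target parameters are updated:

```python
def td_target(reward, q_next, gamma=0.99):
    # Critic regression target: r + gamma * Q(S', A'),
    # with Q(S', A') coming from the target networks.
    return reward + gamma * q_next

def soft_update(target_params, online_params, tau=0.005):
    # Polyak averaging of target parameters toward the online
    # networks (tau is an assumed hyperparameter).
    return [tau * p + (1 - tau) * tp
            for tp, p in zip(target_params, online_params)]
```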
Learning stops after the maximum number of steps of each episode is reached, and the next episode begins; when the set maximum number of episodes is reached, the learning process stops, finally yielding the preferred routing strategy of the service flows and the preferred network resource allocation strategy for each node and each link in the network.
If the reinforcement learning of the current period completes while service flow information still remains in the queue, the remaining information, together with all the service flow information of the next period, is used for the service flow routing of the next period.
The end-to-end delay, jitter, packet loss rate and the bandwidth allocated by all nodes along the path of each service flow in the preferred strategy are obtained and checked. If the requirements are not met, a network-resource-insufficiency alarm is raised; otherwise the preferred strategy is notified to the SDN controller, which generates flow tables according to the strategy and issues them to the corresponding nodes (SDN switches) to guide subsequent service flows from source node to destination node, i.e., service flow routing is performed according to the preferred strategy.
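The acceptance check performed before notifying the SDN controller can be sketched as follows (field names are illustrative, not from the patent):

```python
def strategy_meets_requirements(flow, measured):
    # Check the preferred strategy's end-to-end QoS against the flow's
    # requirements; a False result would trigger the resource alarm.
    return (measured["delay"] <= flow["max_delay"]
            and measured["jitter"] <= flow["max_jitter"]
            and measured["loss"] <= flow["max_loss"]
            and measured["bandwidth"] >= flow["min_bw"])
```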
The invention optimizes the service performance in multiple aspects such as time delay, jitter, packet loss rate, network link utilization rate and the like of the end-to-end service, and gives consideration to different performance requirements of various services, such as different time delay requirements of different services, and reasonably distributes node computing resources, storage resources and link bandwidth resources according to the existing resources of the network, thereby improving the service performance and further improving the network resource utilization rate.
Based on the same technical scheme, the invention also discloses a virtual system of the above method: a service flow routing system, loaded in an application server of the SDN architecture shown in fig. 5, comprising:
the acquisition module acquires network topology information containing network resource information and information of all service flows in a period.
The acquisition module is specifically divided into a queue scheduling module and a topology module, wherein the queue scheduling module is used for acquiring information of all service flows in a period and storing the information in a queue; the topology module is used for acquiring network topology information containing network resource information.
The reinforcement learning module is used for acquiring a preferred strategy by adopting a reinforcement learning method according to the network topology information and the service flow information; wherein the preferred policy comprises a traffic flow preferred routing policy and a preferred network resource allocation policy for each node and each link in the network;
the action module is used for carrying out service flow routing according to the preferred strategy if the preferred strategy meets the requirement;
and the alarm module is used for sending an alarm if the preferred strategy does not meet the requirements.
The data processing flow of each module is consistent with the corresponding steps of the method, and the description is not repeated here.
Based on the same technical solution, the present invention also discloses a computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform a traffic flow routing method.
Based on the same technical scheme, the invention also discloses a computer device, which comprises one or more processors and one or more memories, wherein one or more programs are stored in the one or more memories and are configured to be executed by the one or more processors, and the one or more programs comprise instructions for executing the service flow routing method.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely illustrative of the present invention and is not to be construed as limiting it; all modifications, equivalents and improvements of the present invention are intended to be included within the scope defined by the appended claims.

Claims (9)

1. A traffic flow routing method, comprising:
acquiring network topology information containing network resource information and information of all service flows in a period;
according to the network topology information and the service flow information, a reinforcement learning method is adopted to obtain a preferred strategy; wherein the preferred policy comprises a traffic flow preferred routing policy and a preferred network resource allocation policy for each node and each link in the network;
and if the preferred strategy meets the requirement, carrying out service flow routing according to the preferred strategy.
2. The traffic flow routing method according to claim 1, wherein acquiring the information of all service flows in a period comprises:
acquiring flow information of all services in a period by adopting a time slice polling mode;
sending the service flow information into different queues according to CoS class; wherein each queue stores service flow information of the same CoS class; in the same queue, service flow information is enqueued in FIFO manner according to arrival time.
3. The traffic flow routing method according to claim 2, wherein obtaining the preferred policy using the reinforcement learning method based on the network topology information and the traffic flow information comprises:
sequentially scheduling the service flow information in the queue according to the sequence from high to low of the CoS grade, taking the scheduled service flow information and the corresponding network topology information as the state of each step of the reinforcement learning method, and acquiring a preferred strategy by adopting the reinforcement learning method;
wherein, in the network topology information corresponding to the nth schedule, the network resource is equal to the initial network resource minus the allocated network resource; the allocated network resources are the network resources to which all traffic flows before the nth schedule are allocated.
4. A traffic flow routing method according to claim 3, wherein the network resources comprise computing resources and storage resources available to each node, bandwidth resources available to each link;
the actions in the reinforcement learning method comprise service flow routing, calculation resources allocated for service flows by all nodes, storage resources allocated for service flows by all nodes and bandwidth resources allocated for service flows by all links.
5. A traffic flow routing method according to claim 3, wherein the traffic flow information comprises a CoS class of the traffic flow, a maximum delay allowed by the traffic flow, a maximum jitter allowed by the traffic flow, a maximum packet loss rate allowed by the traffic flow, a minimum bandwidth required by the traffic flow;
in the reinforcement learning method, the single-step reward function is given by formula (1), where r is the single-step reward value; B_ij is the currently available bandwidth resource of the link between node i and node j; B_ij^k is the bandwidth resource allocated to traffic flow k on the link between node i and node j; D_k, J_k, L_k, B_k and CoS_k are respectively the maximum delay allowed for traffic flow k, its maximum allowed jitter, maximum allowed packet loss rate, required minimum bandwidth and CoS class; and D, J, L and B are respectively the actual delay, jitter, packet loss rate and bandwidth of traffic flow k.
6. The traffic flow routing method according to claim 3, wherein after a service flow's information is scheduled, it is deleted from the queue; and if the reinforcement learning of the current period completes while service flow information still remains in the queue, the remaining information, together with all the service flow information of the next period, is used for the service flow routing of the next period.
7. A traffic flow routing system, comprising:
an acquisition module, configured to acquire network topology information containing network resource information and the information of all service flows within a period;
a reinforcement learning module, configured to obtain a preferred policy by a reinforcement learning method according to the network topology information and the service flow information, wherein the preferred policy comprises a preferred service flow routing policy and a preferred network resource allocation policy for each node and each link in the network; and
an action module, configured to perform service flow routing according to the preferred policy if the preferred policy meets the requirements.
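The three claimed modules form a simple acquire → learn → act pipeline per period. A minimal composition sketch, with all names and callables being illustrative assumptions:

```python
class TrafficFlowRoutingSystem:
    """Illustrative wiring of the three claimed modules (names hypothetical)."""

    def __init__(self, acquire, learn, act):
        self.acquire = acquire   # () -> (topology, flows): acquisition module
        self.learn = learn       # (topology, flows) -> policy: RL module
        self.act = act           # (policy) -> None: action module

    def run_period(self, meets_requirements):
        """One period: learn a preferred policy and apply it only if it
        satisfies the requirement check. Returns whether routing was applied."""
        topology, flows = self.acquire()
        policy = self.learn(topology, flows)
        if meets_requirements(policy):
            self.act(policy)
            return True
        return False

system = TrafficFlowRoutingSystem(
    acquire=lambda: ({"nodes": 3}, ["f1"]),
    learn=lambda topo, flows: {"route": {"f1": (0, 1, 2)}},
    act=lambda policy: None,
)
print(system.run_period(lambda p: bool(p["route"])))  # True
```

The guard in `run_period` mirrors the claim's condition that routing is carried out only "if the preferred policy meets the requirements."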
8. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the methods of claims 1-6.
9. A computer device, comprising:
one or more processors; and one or more memories storing one or more programs configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods of claims 1-6.
CN202310679913.XA 2023-06-09 2023-06-09 Service flow routing method, system, storage medium and equipment Pending CN116582479A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310679913.XA CN116582479A (en) 2023-06-09 2023-06-09 Service flow routing method, system, storage medium and equipment

Publications (1)

Publication Number Publication Date
CN116582479A true CN116582479A (en) 2023-08-11

Family

ID=87543904

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination