CN117082008B - Virtual elastic network data transmission scheduling method, computer device and storage medium - Google Patents
Virtual elastic network data transmission scheduling method, computer device and storage medium
- Publication number
- CN117082008B (application CN202311343764.6A)
- Authority
- CN
- China
- Prior art keywords
- data
- network
- link
- transmission
- priority
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/625—Queue scheduling characterised by scheduling criteria for service slots or service orders
- H04L47/6275—Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/40—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/12—Avoiding congestion; Recovering from congestion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2425—Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
- H04L47/2433—Allocation of priorities to traffic types
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2441—Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/25—Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The application discloses a virtual elastic network data transmission scheduling method, a computer device and a storage medium. The method comprises the following steps: continuously acquiring transmission data streams in a network link and classifying them according to data type; distributing the data streams in the concurrent node set to different links at data-block granularity; calculating the network state benefit value at the next moment; calculating the priority of the current data block through a Net network module, establishing an experience pool by loop iteration, and scheduling data in real time through a data scheduling module; and evaluating the bandwidth and round trip delay of the network link with an ERDQN congestion control algorithm, monitoring the bearing capacity of the link in real time, and starting the data scheduling module accordingly. The method balances the relation between data-block priority and delivery deadline, ensures that as many data blocks as possible complete concurrent transmission before their deadlines, makes full use of link resources, and improves transmission reliability and effectiveness.
Description
Technical Field
The present application relates to the field of virtual network data transmission technologies, and in particular, to a virtual elastic network data transmission scheduling method, a computer device, and a storage medium.
Background
Network transmission with low delay and high stability is one of the key problems in virtual network game development. As more and more devices are connected to the network, the delay requirements of different devices grow increasingly complex, and online games place strict demands on transmission stability and low delay. Improving transmission stability and reducing network delay have therefore become hot research topics in current network transmission.
The standard configuration of a large server cluster is generally multiple network cards and multiple links. Existing multipath cooperative data transmission algorithms focus mainly on how to complete data distribution and flow allocation; if the data transmission distribution is uneven, the link resources of a large number of network card interfaces are left constrained, reducing link stability.
The existing virtual elastic network data transmission scheduling methods have the following problems: (1) when transmission times out or packet loss occurs on a network link, the sending end adjusts its transmission rate too sharply, which increases delay in the transmission process, destabilizes the transmission rate, and causes further step packet loss, ultimately leaving the average transmission rate of the link too low and the transmission quality poor; (2) existing scheduling methods prioritize a data block on a link by only one key index, so the link bandwidth cannot be maximally utilized and fewer data blocks complete scheduled transmission before the delivery deadline.
Disclosure of Invention
The application aims to provide a virtual elastic network data transmission scheduling method, a computer device and a storage medium, so as to solve the technical problems in the prior art that periodic packet loss during transmission leaves the average transmission rate of a network link too low and the network transmission quality poor, and that the link bandwidth cannot be utilized to the maximum.
In order to solve the technical problems, the application specifically provides the following technical scheme:
in a first aspect of the present application, a virtual elastic network data transmission scheduling method is provided, which includes the following steps:
continuously acquiring transmission data streams in a network link, classifying the transmission data streams according to data types, calculating corresponding data scheduling emergency degrees through the sending and receiving time nodes of each section of the transmission data streams, and establishing a concurrent node set based on the transmission data streams according to the data scheduling emergency degrees;
distributing the data flow in the concurrent node set to different links by taking the data block as granularity, adopting a network link sensing module to take the network state of the current transmitting end and the network states of different network links as feature vectors, calculating the network quality of the network links at the next moment, and taking the network quality as a network state profit value at the next moment;
sorting the data blocks on the same link according to the network state benefit value at the next moment, calculating the priority of the current data block through a Net network module, storing the network state benefit value at the next moment and the corresponding priority of the data block into an array, establishing an experience pool by adopting a cyclic iteration mode, and scheduling data in real time through a data scheduling module;
and evaluating the bandwidth and round trip delay of the network link by adopting an ERDQN congestion control algorithm, monitoring the bearing capacity of the network link in real time, starting the data scheduling module in real time according to the bearing capacity, and executing transmission link allocation one by one according to the data block priority and the network quality.
As a preferred scheme of the application, training the transmission data stream by adopting an SAE neural network, obtaining data streams with different data types, calculating the emergency degree of the data streams to be sent at the current moment, and establishing a concurrent node set, wherein the method comprises the following steps:
inputting the transmission data stream as basic data into an SAE neural network for training, establishing a memory bank in the training process, and storing the transmission data stream sending and receiving time nodes, network states, actions and the next time state at the current moment;
dividing time slots for the current time of the transmission data stream, setting a plurality of transmission nodes in each time slot, acquiring most urgent transmission data through the SAE neural network mapping relation by adopting a sectional concurrence mode, and taking the node where the most urgent transmission data is located as the most urgent node;
dynamically acquiring all the most urgent nodes in different time slots, determining the most urgent node number according to the concurrence quantity of the transmission data streams, and constructing an urgent node matrix;
comparing the emergency degrees of other data in the network link at the same moment, adding corresponding nodes into a concurrent node set if the node with the largest emergency degree is in the emergency node matrix, and deleting the node in the emergency node matrix;
if the node where the data with the maximum emergency degree is located is not in the emergency node matrix, the data corresponding to that node is not considered during the occupied time slot.
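The concurrent-node-set construction described in the steps above can be sketched as follows. This is an illustrative Python sketch under assumed names (`StreamSegment`, a slack-based `urgency` measure, `matrix_size`); the patent does not specify these details:

```python
from dataclasses import dataclass

@dataclass
class StreamSegment:
    node_id: int
    send_time: float   # time node at which the segment is ready to send
    deadline: float    # receiving time node, i.e. the delivery deadline

def urgency(seg: StreamSegment, now: float) -> float:
    """Higher value = more urgent: less slack left before the deadline."""
    slack = seg.deadline - now
    return 1.0 / slack if slack > 0 else float("inf")

def build_concurrent_set(segments, now, matrix_size):
    """Rank nodes by urgency; the top `matrix_size` nodes form the urgent-node
    matrix, and nodes found in the matrix are moved into the concurrent set
    (and deleted from the matrix, as the steps above describe)."""
    ranked = sorted(segments, key=lambda s: urgency(s, now), reverse=True)
    urgent_matrix = [s.node_id for s in ranked[:matrix_size]]
    concurrent_set = []
    for seg in ranked:
        if seg.node_id in urgent_matrix:
            concurrent_set.append(seg.node_id)
            urgent_matrix.remove(seg.node_id)
    return concurrent_set
```

A node outside the urgent-node matrix is simply skipped, matching the last step above.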
As a preferred scheme of the application, the network link sensing module is adopted to sense the current network state, and the gain value of the network state at the next moment is obtained, which comprises the following steps:
acquiring state information of an available link by calling a run function according to the emergency degree of the current moment of the data flow in the concurrent node set, wherein the state information comprises a detection period, the number of links and the state of the links;
calculating the average time delay and the average bandwidth of the data flow for completing transmission in the whole network link, defining the states of the data blocks in the buffer area of the sending end and the buffer area of the receiving end, and completing the network state monitoring of the network link at fixed time by calling a schedule function;
and generating a feature vector from the monitored parameters of the network state in real time synchronously, dynamically adapting to network link change, and calculating a network state profit value at the next moment by adopting a Net network module through the feature vector.
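The feature-vector generation above can be illustrated with a small sketch. The field names and the linear stand-in for the Net module's output are assumptions (the patent only names delay, bandwidth, buffer states and the detection period as monitored quantities, and uses a neural network rather than fixed weights):

```python
def link_feature_vector(link):
    """Assemble the per-link feature vector handed to the Net network module.
    Field names are illustrative assumptions."""
    return [
        link["avg_delay_ms"],
        link["avg_bandwidth_mbps"],
        link["send_buffer_blocks"],
        link["recv_buffer_blocks"],
        link["probe_period_s"],
    ]

def benefit_value(features, weights):
    """Stand-in for the Net module's output: a weighted score of the predicted
    next-moment network quality (the real module is a trained neural network)."""
    return sum(f * w for f, w in zip(features, weights))
```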
As a preferred solution of the present application, the obtaining the priority of the current data block according to the network status benefit value at the next time includes:
dividing the Net network module into a MainNet network structure and a TargetNet network structure, utilizing the MainNet network structure to update the data flow in real time, simulating the network state in real time according to the current network state and combining training data of the SAE neural network, and calculating a benefit value according to different network actions;
storing network states of the data streams at different moments by using the TargetNet network structure, updating network state data once in one detection period by taking the detection period of the corresponding link of the data stream as an operation interval, and calculating corresponding network benefit values;
calculating errors of network gain values under different network structures, determining an optimal network link of corresponding data according to the errors, acquiring most urgent data in the same time slot, and taking the most urgent data as an execution action of a current time slot;
determining a plurality of data which can simultaneously execute transmission tasks in the current time slot based on the execution action of the most urgent data and the network state of the optimal network link, and adding the data into a waiting queue;
and selecting data which are not in conflict with all data transmission in the waiting queue from the rest data according to the emergency degree of the data, repeatedly selecting until the transmission data quantity of the waiting queue which is not in conflict with each other is maximum, and sequencing the waiting queue according to time to obtain the priority of the data block.
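The repeated non-conflicting selection above can be sketched as a greedy loop. `conflicts` is an assumed predicate telling whether two transmissions cannot run concurrently; the record fields are illustrative:

```python
def build_waiting_queue(candidates, conflicts):
    """Greedy sketch of the selection loop: repeatedly admit the most urgent
    remaining item that conflicts with nothing already in the queue, then
    order the final queue by time to obtain the data-block priority order."""
    queue = []
    for item in sorted(candidates, key=lambda c: c["urgency"], reverse=True):
        if all(not conflicts(item, q) for q in queue):
            queue.append(item)
    queue.sort(key=lambda c: c["ready_time"])  # final ordering by time
    return [c["id"] for c in queue]
```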
As a preferred solution of the present application, storing the next-moment network state benefit value of the corresponding network link indexed by data block priority, constructing an experience pool with a Sumtree structure, and continuously updating network parameters through loop iteration to schedule data in real time, including:
taking the training data of the SAE neural network as a root node of a Sumtree structure, dividing the training data into a plurality of intervals according to the quantity of the training data, selecting one sample in each interval, and adding the current network state, the transmission action and the network state at the next moment into each sample;
storing the priority and index of training data in leaf nodes of the Sumtree structure, and searching relevant memory samples in the array according to the index corresponding to the sequence number on the leaf nodes;
when the data of the training samples reaches the Sumtree structure capacity, sequentially replacing the data of the leaf nodes from left to right, and updating the data of the parent node after each replacement is completed until all the root node data are updated;
and acquiring the training sample priority of the Sumtree structure in each interval according to the updated network parameters of the node data, and scheduling data in real time through a data scheduling module according to the training sample priority.
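The Sumtree experience pool described above follows the standard sum-tree structure used for prioritized experience replay: leaves hold sample priorities, internal nodes hold partial sums, writes replace leaves left to right, and updates propagate to the root. A minimal generic sketch (not the patented code):

```python
class SumTree:
    """Leaves store sample priorities; each internal node stores the sum of
    its children, so priority-proportional sampling is O(log n)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity - 1)   # internal nodes + leaves
        self.data = [None] * capacity            # samples parallel to leaves
        self.write = 0                           # next leaf to (re)fill

    def add(self, priority, sample):
        leaf = self.write + self.capacity - 1
        self.data[self.write] = sample
        self._update(leaf, priority)
        self.write = (self.write + 1) % self.capacity  # left-to-right replacement

    def _update(self, leaf, priority):
        change = priority - self.tree[leaf]
        self.tree[leaf] = priority
        while leaf != 0:                 # propagate the change up to the root
            leaf = (leaf - 1) // 2
            self.tree[leaf] += change

    def total(self):
        return self.tree[0]

    def sample(self, value):
        """Descend from the root for a value in [0, total()); returns the
        (priority, sample) pair of the leaf whose interval contains it."""
        idx = 0
        while idx < self.capacity - 1:   # stop at a leaf
            left = 2 * idx + 1
            if value <= self.tree[left]:
                idx = left
            else:
                value -= self.tree[left]
                idx = left + 1
        return self.tree[idx], self.data[idx - self.capacity + 1]
```

Dividing `[0, total())` into equal intervals and drawing one value per interval gives the per-interval sample selection described in the steps above.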
As a preferred solution of the present application, the data scheduling module adopts multipath real-time scheduling data, including:
on the premise of the priority of the current training data, the scheduling sequence is comprehensively updated according to the granularity of the deadline time point of the data flow corresponding to the priority, and the comprehensive priority of all the current data flows is calculated;
allocating the data blocks to corresponding network links according to the comprehensive priority, and switching the next link to continue to allocate when no available bandwidth or congestion buffer exists on the corresponding links until all the data blocks to be transmitted are allocated to the links;
and sequencing the data blocks on the same link according to the comprehensive priority, and sequencing and transmitting the data blocks with the same priority according to the emergency degree.
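The multipath allocation loop above can be sketched as follows. Blocks are taken in descending comprehensive priority (ties broken by emergency degree) and placed on the first link with spare capacity, switching to the next link when none is available; the `free_bw`/`size` bookkeeping is an assumption for illustration:

```python
def assign_blocks(blocks, links):
    """Allocate data blocks to links by comprehensive priority, falling
    through to the next link when the current one lacks available bandwidth."""
    ordered = sorted(blocks, key=lambda b: (-b["priority"], -b["urgency"]))
    placement = {link["name"]: [] for link in links}
    for block in ordered:
        for link in links:
            if link["free_bw"] >= block["size"]:
                placement[link["name"]].append(block["id"])
                link["free_bw"] -= block["size"]
                break  # allocated; otherwise try the next link
    return placement
```

Because `ordered` is already sorted by priority then urgency, the per-link lists come out in the transmission order the last step describes.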
As a preferred scheme of the present application, the link allocation of the scheduling data adopts ERDQN congestion control algorithm to evaluate the bandwidth and round trip delay of the network link, and monitors the bearing capacity of the network link in real time, including:
taking the current link network state and network action acquired by the Net network module as feature vectors, and calculating and updating the bandwidth condition of the current link, the size of the residual data block of the data block and the comprehensive priority of the data block when the network link sensing module detects that new data enters a transmitting end buffer zone or a receiving end data buffer zone;
and distributing the data blocks according to the current state of each link, accumulating the comprehensive priority of each link, and calculating the bandwidth and round trip delay of the corresponding virtual network link according to the total priority duty ratio of the data blocks on the links.
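The per-link total-priority share used above as the basis for the bandwidth and round-trip-delay evaluation can be sketched briefly (names are assumptions; the actual evaluation runs through the ERDQN algorithm):

```python
def link_priority_shares(link_block_priorities):
    """Accumulate each link's comprehensive data-block priorities and return
    each link's share of the grand total, the weighting basis for the
    bandwidth/round-trip-delay evaluation."""
    totals = {link: sum(ps) for link, ps in link_block_priorities.items()}
    grand = sum(totals.values()) or 1.0   # guard against an empty network
    return {link: total / grand for link, total in totals.items()}
```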
As a preferred scheme of the application, a Q function of the SAE neural network is adopted to calculate the network-action Q value corresponding to the total priority of the data blocks on a link. The distribution of benefit values under the Q function, combined with the benefit value corresponding to the current network state at the same moment, serves as the basis for evaluating the bandwidth of the network link, and the network delay output by the Q function is taken as the round trip delay.
In a second aspect of the application, a computer apparatus is provided,
comprising the following steps: at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the virtual elastic network data transmission scheduling method of the first aspect.
In a third aspect of the present application, a computer-readable storage medium is provided,
the computer-readable storage medium has stored therein computer-executable instructions which, when executed by a processor, implement the virtual elastic network data transmission scheduling method of the first aspect.
Compared with the prior art, the application has the following beneficial effects:
the application adopts the SAE network neural model to take the current network link state and the network action possibly adopted as the characteristic vector, inputs the characteristic vector into the algorithm to predict the network transmission state at the next moment, carries out repeated iterative training on the network link data through the ERDQN congestion algorithm, finally finds the optimal parameter suitable for the current network transmission, solves the problem of large adjustment amplitude of the transmission rate of the traditional congestion control algorithm, utilizes the current network state to automatically adjust the network transmission parameter, improves the average transmission rate and stability of the network transmission, and reduces the transmission delay in the network time delay.
The multi-path data scheduling module is adopted to effectively balance the relation between the priority of the data blocks and the delivery deadline of the data blocks, ensure that as many data blocks as possible finish concurrent transmission before the delivery deadline, fully utilize link resources and improve the transmission reliability and effectiveness.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It will be apparent to those of ordinary skill in the art that the drawings in the following description are exemplary only and that other implementations can be obtained from the extensions of the drawings provided without inventive effort.
Fig. 1 is a flowchart of a virtual elastic network data transmission scheduling method according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.

As shown in fig. 1, the present application provides a virtual elastic network data transmission scheduling method, which includes the following steps:
continuously acquiring transmission data streams in a network link, classifying the transmission data streams according to data types, calculating corresponding data scheduling emergency degrees through sending and receiving time nodes of each section of the transmission data streams, and establishing a concurrence node set based on the transmission data streams according to the data scheduling emergency degrees;
in this embodiment, the data stream in the queue of the sender is disassembled into three types of control, audio and video according to the type, different types of streams are defined as logical sub-streams, a plurality of different data blocks are arranged on the sub-streams, the data blocks on the different sub-streams are scheduled, and network communication transmission is completed in combination with the real-time link state.
In this embodiment, the data to be transmitted is divided into the concurrent node sets according to the data types, so that the data to be transmitted can be effectively classified, the actual connection condition of the network path is confirmed according to the data transmission deadline, the instantaneous switching is dynamically completed, and the distribution of the data blocks is completed through the packet forwarding sub-module.
Distributing the data flow in the concurrent node set to different links by taking the data block as granularity, adopting a network link sensing module to take the network state of the current transmitting end and the network states of different network links as feature vectors, calculating the network quality of the network links at the next moment, and taking the network quality as a network state profit value at the next moment;
in this embodiment, the Net network module is used to calculate the priority of the current data block, so that the data blocks with different priorities can complete transmission as far as possible before the delivery time, and the method can adapt to the dynamic change of the path state.
Sorting the data blocks on the same link according to the network state benefit value at the next moment, calculating the priority of the current data block through a Net network module, storing the network state benefit value at the next moment and the corresponding priority of the data block into an array, establishing an experience pool by adopting a cyclic iteration mode, and scheduling data in real time through a data scheduling module; and evaluating the bandwidth and round trip delay of the network link by adopting an ERDQN congestion control algorithm, monitoring the bearing capacity of the network link in real time, starting the data scheduling module in real time according to the bearing capacity, and executing transmission link allocation one by one according to the data block priority and the network quality.
In this embodiment, the bearing capacity of the link is obtained by Euclidean distance calculation and the result is transmitted to the data scheduling module for monitoring; when an abnormal or interrupted link state is detected, processing and response are completed in time.
Training the transmission data stream by adopting an SAE neural network, obtaining data streams with different data types, calculating the emergency degree of the data streams to be sent at the current moment, and establishing a concurrent node set, wherein the method comprises the following steps:
inputting the transmission data stream as basic data into an SAE neural network for training, establishing a memory bank in the training process, and storing the transmission data stream sending and receiving time nodes, network states, actions and the next time state at the current moment;
dividing time slots for the current time of the transmission data stream, setting a plurality of transmission nodes in each time slot, acquiring most urgent transmission data through the SAE neural network mapping relation by adopting a sectional concurrence mode, and taking the node where the most urgent transmission data is located as the most urgent node;
dynamically acquiring all the most urgent nodes in different time slots, determining the most urgent node number according to the concurrence quantity of the transmission data streams, and constructing an urgent node matrix;
comparing the emergency degrees of other data in the network link at the same moment, adding corresponding nodes into a concurrent node set if the node with the largest emergency degree is in the emergency node matrix, and deleting the node in the emergency node matrix;
if the node where the data with the maximum emergency degree is located is not in the emergency node matrix, the data corresponding to the node is not considered in the occupied time.
In this embodiment, the SAE neural network assigns a different weight to each reinforcement-learning memory, and the samples for each learning step are selected according to the priority of the training memory. This reduces the convergence time of the reinforcement learning network while making full use of the network bandwidth, adjusting the network sending rate appropriately, and improving the stability and average transmission rate of network transmission.
The network link sensing module is used for sensing the current network state, and obtaining the network state profit value at the next moment comprises the following steps:
acquiring state information of an available link by calling a run function according to the emergency degree of the current moment of the data flow in the concurrent node set, wherein the state information comprises a detection period, the number of links and the state of the links;
calculating the average time delay and the average bandwidth of the data flow for completing transmission in the whole network link, defining the states of the data blocks in the buffer area of the sending end and the buffer area of the receiving end, and completing the network state monitoring of the network link at fixed time by calling a schedule function;
and generating a feature vector from the monitored parameters of the network state in real time synchronously, dynamically adapting to network link change, and calculating a network state profit value at the next moment by adopting a Net network module through the feature vector.
In this embodiment, the run function is invoked to disassemble the data generated in the sender's total buffer according to the three types of streams including control, audio and video, and the state information of the available links is sequentially obtained according to the deadline of each data stream.
In this embodiment, a schedule function is called to allocate data blocks to corresponding network paths, and a data block manager on the paths allocates bandwidth reasonably according to a fairness principle.
Acquiring the priority of the current data block according to the network state benefit value at the next moment, including:
dividing the Net network module into a MainNet network structure and a TargetNet network structure, utilizing the MainNet network structure to update the data flow in real time, simulating the network state in real time according to the current network state and combining training data of the SAE neural network, and calculating a benefit value according to different network actions;
storing network states of the data streams at different moments by using the TargetNet network structure, updating network state data once in one detection period by taking the detection period of the corresponding link of the data stream as an operation interval, and calculating corresponding network benefit values;
calculating errors of network gain values under different network structures, determining an optimal network link of corresponding data according to the errors, acquiring most urgent data in the same time slot, and taking the most urgent data as an execution action of a current time slot;
determining a plurality of data which can simultaneously execute transmission tasks in the current time slot based on the execution action of the most urgent data and the network state of the optimal network link, and adding the data into a waiting queue;
and selecting, from the remaining data according to the emergency degree of the data, data which conflicts with none of the data transmissions in the waiting queue, repeating the selection until the amount of mutually non-conflicting transmission data in the waiting queue is maximum, and sequencing the waiting queue according to time to obtain the priority of the data block.
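The waiting-queue construction above amounts to a greedy selection, sketched below. The conflict representation (pairs of data ids that cannot transmit together) and the tuple fields are assumptions; the patent does not define how conflicts are detected.

```python
def build_waiting_queue(candidates, conflicts):
    """Greedily build the waiting queue described above.

    candidates: (data_id, emergency_degree, slot_time) tuples (names assumed).
    conflicts:  set of frozenset pairs of data_ids that cannot transmit together.
    Data is taken in decreasing emergency degree, kept only if it conflicts with
    nothing already queued, and the final queue is sequenced by time.
    """
    queue = []
    for data_id, urgency, when in sorted(candidates, key=lambda c: -c[1]):
        if all(frozenset((data_id, queued_id)) not in conflicts
               for queued_id, _, _ in queue):
            queue.append((data_id, urgency, when))
    queue.sort(key=lambda item: item[2])          # sequence the queue by time
    return [data_id for data_id, _, _ in queue]
```

Because the most urgent data is always considered first, a conflicting but less urgent transmission is deferred rather than blocking the current time slot.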
In this embodiment, a mapping relationship between a network state and an action is established through a Net network module, and training is performed through an SAE neural network to obtain a final node scheduling policy.
Storing the network state benefit value of the corresponding network link at the next moment through the data block priority, constructing an experience pool by adopting a Sumtree structure, and continuously updating the network parameters and scheduling data in real time through loop iteration, comprising the following steps:
taking the training data of the SAE neural network as a root node of a Sumtree structure, dividing the training data into a plurality of intervals according to the quantity of the training data, selecting one sample in each interval, and adding the current network state, the transmission action and the network state at the next moment into each sample;
storing the priority and index of the training data in leaf nodes of the Sumtree structure, and searching relevant memory samples in the array according to the index corresponding to the sequence number on the leaf nodes;
when the training samples reach the Sumtree structure capacity, sequentially replacing the data of the leaf nodes from left to right, and updating the data of the parent node after each replacement is completed until all the root node data are updated;
and acquiring the training sample priority of the Sumtree structure in each interval according to the updated node data and network parameters, and scheduling data in real time through a data scheduling module according to the training sample priority.
In this embodiment, a Sumtree structure is adopted to establish a memory bank at the start of the learning and training process. The network state, the action, and the next-slot state after executing the current action over a period of time are stored in the experience pool; each time the neural network is trained, a certain amount of memory data is randomly extracted from the experience pool in batches. When the experience pool is full, new memories overwrite the old ones, which disturbs the order of the original data and further weakens the correlation between samples.
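A minimal Sumtree experience pool matching the description above: left-to-right leaf replacement with parent updates, and a cumulative-priority walk for interval sampling. This is the standard prioritized-replay layout; the capacity is assumed to be a power of two, and the sample contents are placeholders.

```python
class SumTree:
    """Binary sum-tree over a fixed-capacity replay memory (capacity assumed
    a power of two; parents store the sum of their children's priorities)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity - 1)   # internal nodes + leaves
        self.data = [None] * capacity            # leaf i <-> data[i]
        self.write = 0                           # next leaf to (over)write, left to right

    def add(self, priority, sample):
        leaf = self.write + self.capacity - 1
        self.data[self.write] = sample
        self.update(leaf, priority)
        self.write = (self.write + 1) % self.capacity  # overwrite oldest when full

    def update(self, leaf, priority):
        change = priority - self.tree[leaf]
        self.tree[leaf] = priority
        while leaf != 0:                          # propagate the change to the root
            leaf = (leaf - 1) // 2
            self.tree[leaf] += change

    def total(self):
        return self.tree[0]                       # root holds the priority sum

    def get(self, s):
        """Walk down from the root to the leaf covering cumulative priority s."""
        idx = 0
        while idx < self.capacity - 1:            # still at an internal node
            left, right = 2 * idx + 1, 2 * idx + 2
            if s <= self.tree[left]:
                idx = left
            else:
                s -= self.tree[left]
                idx = right
        return self.tree[idx], self.data[idx - self.capacity + 1]
```

To sample a batch of size k, the total priority is divided into k equal intervals and one `get` call is made with a random point inside each interval, matching the interval-based selection described above.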
The data scheduling module schedules data in real time over multiple paths, comprising the following steps:
on the premise of the priority of the current training data, comprehensively updating the scheduling sequence at the granularity of the deadline time point of the data flow corresponding to the priority, and calculating the comprehensive priority of all the current data flows;
allocating the data blocks to corresponding network links according to the comprehensive priority, and switching to the next link to continue the allocation when a link has no available bandwidth or its buffer is congested, until all the data blocks to be transmitted are allocated to links;
and sequencing the data blocks on the same link according to the comprehensive priority, and transmitting data blocks with the same priority in order of their emergency degree.
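The multipath allocation rule above can be sketched as follows. The link-capacity model (a count of blocks each link can still accept, standing in for available bandwidth) and all field names are hypothetical.

```python
def allocate_blocks(blocks, link_capacities):
    """Assign blocks to links by comprehensive priority, as in the steps above.

    blocks: (block_id, comprehensive_priority, emergency_degree) tuples (assumed).
    link_capacities: blocks each link can still accept; a full link falls
    through to the next one, mirroring the 'switch to the next link' rule.
    Returns per-link queues ordered by priority, ties broken by urgency.
    """
    queues = [[] for _ in link_capacities]
    for blk in sorted(blocks, key=lambda b: (-b[1], -b[2])):
        for i, cap in enumerate(link_capacities):
            if len(queues[i]) < cap:               # link still has room
                queues[i].append(blk)
                break                              # otherwise try the next link
    for q in queues:
        q.sort(key=lambda b: (-b[1], -b[2]))       # same priority -> urgency order
    return [[b[0] for b in q] for q in queues]
```

In this sketch a block that fits on no link is simply dropped; a real scheduler would requeue it for the next detection period.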
Estimating the bandwidth and round trip delay of the network link by adopting an ERDQN congestion control algorithm for the link allocation of the scheduling data, and monitoring the bearing capacity of the network link in real time, comprising the following steps:
taking the current link network state and network action acquired by the Net network module as feature vectors, and calculating and updating the bandwidth condition of the current link, the remaining size of the data block and the comprehensive priority of the data block when the network link sensing module detects that new data enters the transmitting-end buffer zone or the receiving-end data buffer zone;
and distributing the data blocks according to the current state of each link, accumulating the comprehensive priority on each link, and calculating the bandwidth and round trip delay of the corresponding virtual network link according to each data block's share of the total priority on the link.
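The priority-share calculation can be illustrated with a small helper. The exact formula is an assumption; the text only states that a virtual link's bandwidth follows each block's share of the link's accumulated comprehensive priority.

```python
def apportion_bandwidth(block_priorities, link_bandwidth_mbps):
    """Split a virtual link's bandwidth in proportion to each block's share
    of the link's total comprehensive priority (formula assumed)."""
    total = sum(block_priorities)
    return [link_bandwidth_mbps * p / total for p in block_priorities]
```

A block carrying three quarters of the accumulated priority would thus be granted three quarters of the estimated link bandwidth.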
In this embodiment, the ERDQN congestion control algorithm is adopted to ensure the uniformity of the feature vectors, improve the quality of the training samples collected by the Net network module, shorten the time for model training to converge, and improve the accuracy of model prediction.
Calculating a network action Q value corresponding to the total priority of the data blocks on the link by adopting the Q function of the SAE neural network, taking the distribution of benefit values under the Q function, together with the benefit value corresponding to the current network state at the same moment, as the basis for evaluating the bandwidth of the network link, and taking the network delay output by the Q function as the round trip delay.
In this embodiment, the Q function of the SAE neural network is used for learning. A portion of the state and action data is collected first; once the experience pool is full, the SAE network is trained and its parameters are updated step by step during the learning process. When the system transitions to a hidden state, the action recommended by the SAE network's mapping relationship for that state is executed and the Q-function value is updated. This learning process is repeated until the Q-function output reaches the target precision or the expected number of training iterations finishes; finally, data transmission scheduling is driven by the state-action mapping relationship in the trained SAE network model.
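A tabular stand-in for the MainNet/TargetNet update loop described above. The real method uses an SAE neural network with ERDQN specifics not given here; the state and action encodings, transitions, and hyperparameters below are all hypothetical, but the structure — a main table updated every step and a target copy synced periodically to supply the bootstrap value — matches the two-network scheme in the text.

```python
def train_q(transitions, n_states, n_actions, episodes=200,
            alpha=0.1, gamma=0.9, sync_every=20):
    """Tabular sketch of the MainNet/TargetNet Q update.

    transitions: replayed (state, action, reward, next_state) tuples,
    as would be drawn from the Sumtree experience pool.
    """
    main = [[0.0] * n_actions for _ in range(n_states)]    # 'MainNet'
    target = [row[:] for row in main]                      # 'TargetNet'
    for ep in range(episodes):
        for s, a, r, s2 in transitions:
            best_next = max(target[s2])                    # TargetNet gives the target
            main[s][a] += alpha * (r + gamma * best_next - main[s][a])
        if ep % sync_every == 0:                           # periodic sync, as in the
            target = [row[:] for row in main]              # detection-period update
    return main
```

The error term `r + gamma * best_next - main[s][a]` is the gap between the two networks' benefit estimates; in the patent this error also drives the choice of optimal network link and the sample priorities stored back into the Sumtree.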
Second embodiment: a computer device,
comprising: at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the instructions, when executed by the at least one processor, performing the virtual elastic network data transmission scheduling method of the first embodiment.
Third embodiment: a computer-readable storage medium,
the computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the virtual elastic network data transmission scheduling method of the first embodiment.
The application adopts an SAE neural network model that takes the current network link state and the candidate network actions as a feature vector, inputs the feature vector into the algorithm to predict the network transmission state at the next moment, and trains on the network link data repeatedly through the ERDQN congestion algorithm to finally find the optimal parameters for the current network transmission. This solves the problem of large transmission-rate adjustments in traditional congestion control algorithms: the current network state is used to automatically adjust the network transmission parameters, improving the average transmission rate and the stability of network transmission while reducing transmission delay.
The multi-path data scheduling module effectively balances the relationship between data block priority and data block delivery deadline, ensures that as many data blocks as possible complete concurrent transmission before their delivery deadlines, makes full use of link resources, and improves transmission reliability and effectiveness.
The above embodiments are only exemplary embodiments of the present application and are not intended to limit the present application, the scope of which is defined by the claims. Various modifications and equivalent arrangements of this application will occur to those skilled in the art, and are intended to be within the spirit and scope of the application.
Claims (10)
1. The virtual elastic network data transmission scheduling method is characterized by comprising the following steps of:
continuously acquiring transmission data streams in a network link, classifying the transmission data streams according to data types, calculating corresponding data scheduling emergency degrees through sending and receiving time nodes of each section of the transmission data streams, and establishing a concurrent node set based on the transmission data streams according to the data scheduling emergency degrees;
distributing the data flow in the concurrent node set to different links by taking the data block as granularity, adopting a network link sensing module to take the network state of the current transmitting end and the network states of different network links as feature vectors, calculating the network quality of the network links at the next moment, and taking the network quality as the network state benefit value at the next moment;
sorting the data blocks on the same link according to the network state benefit value at the next moment, calculating the priority of the current data block through a Net network module, storing the network state benefit value at the next moment and the corresponding priority of the data block into an array, establishing an experience pool by adopting a cyclic iteration mode, and scheduling data in real time through a data scheduling module;
and evaluating the bandwidth and round trip delay of the network link by adopting an ERDQN congestion control algorithm, monitoring the bearing capacity of the network link in real time, starting the data scheduling module in real time according to the bearing capacity, and executing transmission link allocation one by one according to the data block priority and the network quality.
2. The method for scheduling virtual elastic network data transmissions of claim 1,
training the transmission data stream by adopting an SAE neural network, obtaining data streams with different data types, calculating the emergency degree of the data streams to be sent at the current moment, and establishing a concurrent node set, wherein the method comprises the following steps:
inputting the transmission data stream as basic data into an SAE neural network for training, establishing a memory bank in the training process, and storing the transmission data stream sending and receiving time nodes, network states, actions and the next time state at the current moment;
dividing time slots for the current time of the transmission data stream, setting a plurality of transmission nodes in each time slot, acquiring most urgent transmission data through the SAE neural network mapping relation by adopting a sectional concurrence mode, and taking the node where the most urgent transmission data is located as the most urgent node;
dynamically acquiring all the most urgent nodes in different time slots, determining the most urgent node number according to the concurrence quantity of the transmission data streams, and constructing an urgent node matrix;
comparing the emergency degrees of other data in the network link at the same moment, adding corresponding nodes into a concurrent node set if the node with the largest emergency degree is in the emergency node matrix, and deleting the node in the emergency node matrix;
if the node where the data with the maximum emergency degree is located is not in the emergency node matrix, temporarily not considering the data corresponding to the node.
3. The method for scheduling virtual elastic network data transmissions of claim 2,
the network link sensing module is used for sensing the current network state, and obtaining the network state benefit value at the next moment comprises the following steps:
acquiring state information of an available link by calling a run function according to the emergency degree of the current moment of the data flow in the concurrent node set, wherein the state information comprises a detection period, the number of links and the state of the links;
calculating the average time delay and the average bandwidth of the data flows that complete transmission in the whole network link, defining the states of the data blocks in the buffer area of the sending end and the buffer area of the receiving end, and periodically completing the network state monitoring of the network link by calling a schedule function;
and synchronously generating a feature vector from the monitored parameters of the network state in real time, dynamically adapting to network link changes, and calculating the network state benefit value at the next moment by adopting a Net network module through the feature vector.
4. The method for scheduling virtual elastic network data transmissions of claim 3,
acquiring the priority of the current data block according to the network state benefit value at the next moment, including:
dividing the Net network module into a MainNet network structure and a TargetNet network structure, utilizing the MainNet network structure to update the data flow in real time, simulating the network state in real time according to the current network state and combining training data of the SAE neural network, and calculating a benefit value according to different network actions;
storing network states of the data streams at different moments by using the TargetNet network structure, updating network state data once in one detection period by taking the detection period of the corresponding link of the data stream as an operation interval, and calculating corresponding network benefit values;
calculating errors of network benefit values under the different network structures, determining an optimal network link of the corresponding data according to the errors, acquiring the most urgent data in the same time slot, and taking the most urgent data as the execution action of the current time slot;
determining a plurality of data which can simultaneously execute transmission tasks in the current time slot based on the execution action of the most urgent data and the network state of the optimal network link, and adding the data into a waiting queue;
and selecting, from the remaining data according to the emergency degree of the data, data which conflicts with none of the data transmissions in the waiting queue, repeating the selection until the amount of mutually non-conflicting transmission data in the waiting queue is maximum, and sequencing the waiting queue according to time to obtain the priority of the data block.
5. The method for scheduling virtual elastic network data transmissions of claim 4,
storing the network state benefit value of the corresponding network link at the next moment through the data block priority, constructing an experience pool by adopting a Sumtree structure, and continuously updating the network parameters and scheduling data in real time through loop iteration, wherein the method comprises the following steps of:
taking the training data of the SAE neural network as a root node of a Sumtree structure, dividing the training data into a plurality of intervals according to the quantity of the training data, selecting one sample in each interval, and adding the current network state, the transmission action and the network state at the next moment into each sample;
storing the priority and index of the training data in leaf nodes of the Sumtree structure, and searching relevant memory samples in the array according to the index corresponding to the sequence number on the leaf nodes;
when the training samples reach the Sumtree structure capacity, sequentially replacing the data of the leaf nodes from left to right, and updating the data of the parent node after each replacement is completed until all the root node data are updated;
and acquiring the training sample priority of the Sumtree structure in each interval according to the updated node data and network parameters, and scheduling data in real time through a data scheduling module according to the training sample priority.
6. The method for scheduling virtual elastic network data transmissions of claim 5,
the data scheduling module schedules data in real time over multiple paths, comprising the following steps:
on the premise of the priority of the current training data, comprehensively updating the scheduling sequence at the granularity of the deadline time point of the data flow corresponding to the priority, and calculating the comprehensive priority of all the current data flows;
allocating the data blocks to corresponding network links according to the comprehensive priority, and switching to the next link to continue the allocation when a link has no available bandwidth or its buffer is congested, until all the data blocks to be transmitted are allocated to links;
and sequencing the data blocks on the same link according to the comprehensive priority, and transmitting data blocks with the same priority in order of their emergency degree.
7. The method for scheduling virtual elastic network data transmissions of claim 6,
and estimating the bandwidth and round trip delay of the network link by adopting an ERDQN congestion control algorithm for the link allocation of the scheduling data, and monitoring the bearing capacity of the network link in real time, wherein the method comprises the following steps:
taking the current link network state and network action acquired by the Net network module as feature vectors, and calculating and updating the bandwidth condition of the current link, the remaining size of the data block and the comprehensive priority of the data block when the network link sensing module detects that new data enters a transmitting end buffer zone or a receiving end data buffer zone;
and distributing the data blocks according to the current state of each link, accumulating the comprehensive priority of each link, and calculating the bandwidth and round trip delay of the corresponding virtual network link according to each data block's share of the total priority on the link.
8. The method for scheduling virtual elastic network data transmissions of claim 7,
and calculating a network action Q value corresponding to the total priority of the data blocks on the link by adopting the Q function of the SAE neural network, taking the distribution of benefit values under the Q function, together with the benefit value corresponding to the current network state at the same moment, as the basis for evaluating the bandwidth of the network link, and taking the network delay output by the Q function as the round trip delay.
9. A computer device, characterized in that,
comprising the following steps: at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor, whereby the method of any one of claims 1-8 is performed by the processor.
10. A computer-readable storage medium, characterized in that,
the computer readable storage medium having stored therein computer executable instructions which, when executed by a processor, implement the method of any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311343764.6A CN117082008B (en) | 2023-10-17 | 2023-10-17 | Virtual elastic network data transmission scheduling method, computer device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117082008A (en) | 2023-11-17
CN117082008B true CN117082008B (en) | 2023-12-15 |
Family
ID=88719825
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311343764.6A Active CN117082008B (en) | 2023-10-17 | 2023-10-17 | Virtual elastic network data transmission scheduling method, computer device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117082008B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117528698B (en) * | 2024-01-08 | 2024-03-19 | 南京海汇装备科技有限公司 | High-speed data transmission system and method based on data chain |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108965024A (en) * | 2018-08-01 | 2018-12-07 | 重庆邮电大学 | A kind of virtual network function dispatching method of the 5G network slice based on prediction |
CN115658278A (en) * | 2022-12-07 | 2023-01-31 | 中国电子科技集团公司第三十研究所 | Micro task scheduling machine supporting high concurrency protocol interaction |
CN115686779A (en) * | 2022-10-14 | 2023-02-03 | 兰州交通大学 | Self-adaptive edge computing task scheduling method based on DQN |
CN115767325A (en) * | 2022-10-31 | 2023-03-07 | 苏州大学 | Service function chain profit maximization mapping method and system |
CN116489104A (en) * | 2023-05-10 | 2023-07-25 | 南京理工大学 | Traffic scheduling method and system based on dynamic priority |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10292070B2 (en) * | 2016-02-19 | 2019-05-14 | Hewlett Packard Enterprise Development Lp | Managing network traffic |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110381541B (en) | Smart grid slice distribution method and device based on reinforcement learning | |
CN111246586B (en) | Method and system for distributing smart grid resources based on genetic algorithm | |
US10091675B2 (en) | System and method for estimating an effective bandwidth | |
CN117082008B (en) | Virtual elastic network data transmission scheduling method, computer device and storage medium | |
CN110708259A (en) | Information-agnostic Coflow scheduling system capable of automatically adjusting queue threshold and scheduling method thereof | |
CN109104373B (en) | Method, device and system for processing network congestion | |
CN104796422A (en) | Online customer service staff equilibrium assignment method and online customer service staff equilibrium assignment device | |
CN113692021A (en) | 5G network slice intelligent resource allocation method based on intimacy | |
CN114866494B (en) | Reinforced learning intelligent agent training method, modal bandwidth resource scheduling method and device | |
Li et al. | OPTAS: Decentralized flow monitoring and scheduling for tiny tasks | |
EP4024212A1 (en) | Method for scheduling interference workloads on edge network resources | |
CN111740925B (en) | Deep reinforcement learning-based flow scheduling method | |
CN113132490A (en) | MQTT protocol QoS mechanism selection scheme based on reinforcement learning | |
Villota-Jacome et al. | Admission control for 5G core network slicing based on deep reinforcement learning | |
CN111211988B (en) | Data transmission method and system for distributed machine learning | |
US11616730B1 (en) | System and method for adapting transmission rate computation by a content transmitter | |
CN114760644A (en) | Multilink transmission intelligent message scheduling method based on deep reinforcement learning | |
CN112153702B (en) | Local area network bandwidth resource allocation method, storage device and equipment | |
Bhattacharyya et al. | QFlow: A learning approach to high QoE video streaming at the wireless edge | |
CN113543160A (en) | 5G slice resource allocation method and device, computing equipment and computer storage medium | |
CN116467069A (en) | Spatial flight information system resource scheduling method and system based on PPO algorithm | |
CN107360483B (en) | Controller load balancing algorithm for software defined optical network | |
Bensalem et al. | Towards optimal serverless function scaling in edge computing network | |
CN107743077A (en) | A kind of method and device for assessing information physical emerging system network performance | |
CN115190027A (en) | Natural fault survivability evaluation method based on network digital twin body |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||