CN114666285A - Ethernet transmission queue scheduling method, system, storage medium and computing equipment - Google Patents
Ethernet transmission queue scheduling method, system, storage medium and computing equipment
- Publication number
- CN114666285A (application CN202210192057.0A)
- Authority
- CN
- China
- Prior art keywords: uplink data, queue, empty, transmission priority, sending
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION; H04L47/00—Traffic control in data switching networks; H04L47/50—Queue scheduling; H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/6275—Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
- H04L47/6225—Queue service order; Fixed service order, e.g. Round Robin
- H04L47/6295—Queue scheduling characterised by scheduling criteria using multiple queues, one for each individual QoS, connection, flow or priority
Abstract
The invention discloses an Ethernet transmission queue scheduling method, system, storage medium and computing device. The method stores uplink data of different transmission priorities in different queues. In the non-data-overflow state it polls the queues according to the transmission priority (so that every data type has a sending opportunity in each round) and sends the uplink data according to the service quota and the number of bytes allowed to be sent, which guarantees both fairness among the various data types and the real-time delivery of important data.
Description
Technical Field
The invention relates to a method, a system, a storage medium and a computing device for scheduling an Ethernet transmission queue, belonging to the technical field of power line networking communication and intelligent power utilization.
Background
In the 5G era, communication is no longer limited to people: person-to-thing and thing-to-thing communication is gradually becoming dominant. Against this background of the Internet of Everything, edge sensing devices have multiplied, and the number of data types and the volume of data are growing exponentially. In a complex network topology, even if the bandwidth of every link is increased, transmitting all types of data simultaneously still causes network congestion and data loss. How to reasonably allocate limited network resources to each type of data is therefore an important research question.
The key technology for giving each type of data a reasonable network service level out of limited network resources is Quality of Service (QoS), and queue scheduling is the core technique for realizing QoS. Common queue scheduling algorithms are based on static priority, polling, the Generalized Processor Sharing (GPS) model, delay, and so on, but these methods cannot be applied directly to the NR (New Radio, i.e. 5G radio access network) topology of the existing service model and cannot improve the real-time performance and reliability of NR uplink data transmission.
Disclosure of Invention
The invention provides an Ethernet transmission queue scheduling method, system, storage medium and computing device to solve the problems described in the background art.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
the Ethernet transmission queue scheduling method comprises the following steps:
analyzing the uplink data to obtain a preset transmission priority in the uplink data;
storing the uplink data into a queue corresponding to the cache according to the transmission priority; the buffer memory is preset with a plurality of queues, and different queues respectively store uplink data with different transmission priorities;
if the uplink data sending time is up and the buffer memory is in a data overflow state, sending the uplink data in the non-empty queue according to the transmission priority;
and if the uplink data sending time is reached and the cache is in a non-data overflow state, polling the non-empty queue according to the transmission priority, and sending the uplink data in the non-empty queue according to the service quota distributed to the non-empty queue in each turn and the number of bytes allowed to be sent.
If, while the uplink data in the non-empty queues is being sent according to the transmission priority, uplink data newly stored in the buffer has a higher transmission priority than the uplink data currently being sent, the current transmission is ended and the newly stored uplink data is sent.
If the uplink data sending time is up and the buffer is in a non-data overflow state, performing non-empty queue polling according to the transmission priority, and sending the uplink data in the non-empty queue according to the service quota distributed to the non-empty queue in each round and the number of bytes allowed to be sent, wherein the method comprises the following steps:
if the uplink data sending time is up, the cache is in a non-data overflow state, and the queue corresponding to the highest transmission priority is non-empty, the uplink data in the queue corresponding to the highest transmission priority is sent out first, then the polling of the remaining non-empty queues is carried out according to the transmission priority, and the uplink data in the remaining non-empty queues are sent according to the service quota distributed to the remaining non-empty queues in each turn and the number of bytes allowed to be sent;
and if the uplink data sending time is up, the cache is in a non-data overflow state, and the queue corresponding to the highest transmission priority is empty, performing non-empty queue polling according to the transmission priority, and sending the uplink data in the non-empty queue according to the service quota distributed to the non-empty queue in each round and the number of bytes allowed to be sent.
Performing non-empty queue polling according to the transmission priority, and sending the uplink data in the non-empty queue according to the service quota allocated to the non-empty queue in each round and the number of bytes allowed to be sent, wherein the method comprises the following steps:
1) sending the uplink data in the non-empty queues according to the transmission priority according to the initial service quota distributed by each non-empty queue and the number of bytes allowed to be sent;
2) if the non-empty queue exists, calculating the remaining uplink data in the non-empty queue; if the non-empty queue does not exist, the uplink data transmission is finished;
3) calculating the weight of each non-empty queue in the round according to the remaining uplink data and the initial service quota in the non-empty queue;
4) calculating the service quota of each non-empty queue in the round according to the weight and the initial service quota;
5) and sending the uplink data in the non-empty queues according to the transmission priority according to the service quota distributed to each non-empty queue in the round and the number of bytes allowed to be sent, and turning to 2).
The weight of a non-empty queue in the current round is calculated from the uplink data remaining in the queue after the previous round and the initial service quotas, where Rk is the weight of the non-empty queue corresponding to transmission priority n in the k-th round, N is the number of defined queues (n ≤ N), SATn(k-1) is the uplink data remaining in the non-empty queue corresponding to transmission priority n after the (k-1)-th round, and QDn is the initial service quota of the non-empty queue corresponding to transmission priority n.
The calculation formula of the service quota of a non-empty queue in the current round is:
QDn(k) = QDn(1 + Rk)
where Rk is the weight of the non-empty queue corresponding to transmission priority n in the k-th round, QDn is the initial service quota of the non-empty queue corresponding to transmission priority n, and QDn(k) is the service quota of that non-empty queue in the k-th round.
While the uplink data in the non-empty queues is being sent according to the service quota allocated to each non-empty queue in each round and the number of bytes allowed to be sent, if new uplink data is stored in the queue corresponding to the highest transmission priority, the current transmission is ended and the newly stored uplink data is sent; if uplink data newly stored in the buffer has a higher transmission priority than the uplink data currently being sent, the queue currently being sent still sends uplink data according to its current service quota, while the queues whose transmission priority is lower than that of the currently sending queue are reallocated service quotas.
The Ethernet transmission queue scheduling system comprises:
the analysis module is used for analyzing the uplink data and acquiring the preset transmission priority in the uplink data;
the buffer module is used for storing the uplink data into a queue corresponding to the buffer according to the transmission priority; the buffer memory is preset with a plurality of queues, and different queues respectively store uplink data with different transmission priorities;
the sequence sending module is used for sending the uplink data in the non-empty queue according to the transmission priority if the uplink data sending time is up and the buffer memory is in a data overflow state;
and the polling sending module is used for polling the non-empty queue according to the transmission priority if the uplink data sending time is reached and the cache is in a non-data overflow state, and sending the uplink data in the non-empty queue according to the service quota distributed to the non-empty queue in each turn and the number of bytes allowed to be sent.
The polling transmission module comprises:
the indirect polling sending module is used for sending the uplink data in the queue corresponding to the highest transmission priority, then carrying out polling on the remaining non-empty queues according to the transmission priority, and sending the uplink data in the remaining non-empty queues according to the service quota distributed by the remaining non-empty queues in each round and the number of bytes allowed to be sent if the uplink data sending time is up, the cache is in a non-data overflow state, and the queue corresponding to the highest transmission priority is not empty;
and the direct polling sending module is used for polling the non-empty queue according to the transmission priority if the uplink data sending time is up, the buffer is in a non-data overflow state, and the queue corresponding to the highest transmission priority is empty, and sending the uplink data in the non-empty queue according to the service quota distributed to the non-empty queue in each turn and the number of bytes allowed to be sent.
The indirect polling sending module and the direct polling sending module both comprise data sending modules; the data sending module is used for polling the non-empty queues according to the transmission priority and sending the uplink data in the non-empty queues according to the service quotas distributed to the non-empty queues in each turn and the number of bytes allowed to be sent; the data sending module comprises:
the initial sending module is used for sending the uplink data in the non-empty queues according to the transmission priority according to the initial service quota distributed to each non-empty queue and the number of bytes allowed to be sent;
the judging module is used for calculating the remaining uplink data in the non-empty queue if the non-empty queue exists; if the non-empty queue does not exist, the uplink data transmission is finished;
the weight calculation module is used for calculating the weight of each non-empty queue in the round according to the remaining uplink data and the initial service quota in the non-empty queue;
the service quota calculation module is used for calculating the service quota of each non-empty queue in the round according to the weight and the initial service quota;
and the non-initial sending module is used for sending the uplink data in the non-empty queues according to the transmission priority according to the service quota distributed to each non-empty queue in the round and the byte number allowed to be sent, and then switching to the judging module.
A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform an ethernet transmit queue scheduling method.
A computing device comprising one or more processors, one or more memories, and one or more programs stored in the one or more memories and configured to be executed by the one or more processors, the one or more programs including instructions for performing an ethernet transmit queue scheduling method.
The invention achieves the following beneficial effects: uplink data of different transmission priorities is stored in different queues; in the non-data-overflow state the queues are polled according to the transmission priority (so that every data type has a sending opportunity in each round) and the uplink data is sent according to the service quota and the number of bytes allowed to be sent, which guarantees both fairness among the various data types and the real-time delivery of important data.
Drawings
Fig. 1 is a flowchart of a method for scheduling ethernet transmission queues.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
As shown in fig. 1, the method for scheduling ethernet transmission queues includes the following steps:
step 1, analyzing uplink data to obtain a preset transmission priority in the uplink data;
step 2, storing the uplink data into a queue corresponding to the cache according to the transmission priority; the buffer memory is preset with a plurality of queues, and different queues respectively store uplink data with different transmission priorities;
step 3, if the uplink data sending time is up and the buffer is in a data overflow state, sending the uplink data in the non-empty queue according to the transmission priority; and if the uplink data sending time is reached and the cache is in a non-data overflow state, polling the non-empty queue according to the transmission priority, and sending the uplink data in the non-empty queue according to the service quota distributed to the non-empty queue in each turn and the number of bytes allowed to be sent.
In this method, uplink data of different transmission priorities is stored in different queues. In the non-data-overflow state the queues are polled according to the transmission priority (so that every data type has a sending opportunity in each round) and the uplink data is sent according to the service quota and the number of bytes allowed to be sent, which guarantees both fairness among the various data types and the real-time delivery of important data. In the data-overflow state the transmission mode is adjusted and the uplink data is sent according to the overflow state, which alleviates the network congestion caused by uploading large amounts of data, avoids data loss and improves the reliability of data uploading.
The transmission priority is preset for the data frames of the sub-devices: a priority control layer can be added above the data link layer of the sub-device's transmit protocol stack, a 32-bit tag field is added, and the transmission priority of the data frame is written into the first three bits of the third byte of the tag field.
Eight transmission priority levels are defined, decreasing in priority from n = 0 to n = 7: n = 0 is the highest transmission priority and corresponds to alarm/error-report information, n = 1 corresponds to routine sensor reports, n = 2 corresponds to video and audio data, and the remaining levels are reserved as a redundancy design.
The uplink data sent by a sub-device therefore carries the corresponding transmission priority. When the data reaches the NR network transmit port, the uplink data is parsed to obtain its transmission priority; specifically, the NR network virtual Ethernet node protocol stack adds a data priority parsing layer above the data link layer, reads the first three bits of the third byte of the tag field, and identifies the transmission priority.
Several queues, typically 8, are preset in the buffer, and different queues store uplink data of different transmission priorities; after the transmission priority is identified, the uplink data is stored into the corresponding queue in the buffer. For example, all uplink data with n = 0 is stored in queue Q0, all uplink data with n = 1 in queue Q1, and all uplink data with n = 2 in queue Q2.
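For illustration only (not part of the patent text), the following minimal Python sketch shows how the 3-bit transmission priority could be read from the first three bits of the third byte of an assumed 32-bit tag field and how a frame would then be stored in the matching buffer queue; the exact frame layout, function names and queue structure are assumptions.

```python
from collections import deque

NUM_QUEUES = 8  # one queue per transmission priority n = 0..7 (0 is highest: alarm data)
queues = [deque() for _ in range(NUM_QUEUES)]

def parse_priority(tag_field: bytes) -> int:
    """Read the transmission priority from the first three bits of the
    third byte of the 32-bit tag field (assumed bit layout)."""
    if len(tag_field) != 4:
        raise ValueError("tag field must be 32 bits (4 bytes)")
    return (tag_field[2] >> 5) & 0x07  # top three bits of the third byte

def enqueue_uplink(tag_field: bytes, frame: bytes) -> None:
    """Store the uplink frame in the buffer queue matching its priority."""
    n = parse_priority(tag_field)
    queues[n].append(frame)

# Example: a frame whose third tag byte is 0b001xxxxx carries priority n = 1
enqueue_uplink(bytes([0x00, 0x00, 0b0010_0000, 0x00]), b"sensor report")
```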
When the uplink data sending time arrives and the buffer is in a data overflow state, the data must be uploaded immediately, so a first scheduling algorithm is used in this case: the uplink data in the non-empty queues is sent directly in order of transmission priority, that is, queue Q0 is uploaded first, queue Q1 is uploaded after Q0 has finished, and so on.
New uplink data may be stored during uploading. If uplink data newly stored in the buffer has a higher transmission priority than the uplink data currently being sent, the current transmission is ended and the newly stored uplink data is sent. For example, while queue Q1 is being uploaded, new uplink data with n = 0 arrives, i.e. the originally empty queue Q0 becomes non-empty again; the upload of Q1 is ended immediately, the uplink data in Q0 is uploaded first, and after Q0 has finished the upload of Q1 resumes.
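The first scheduling algorithm can be sketched as follows. This is a minimal illustration, assuming preemption is checked at frame boundaries and that `transmit` is a caller-supplied send routine; it is not the patent's reference implementation.

```python
def send_overflow_state(queues, transmit) -> None:
    """First scheduling algorithm (sketch): in the data-overflow state,
    drain the queues strictly by transmission priority.  Because the scan
    restarts from Q0 after every frame, newly stored higher-priority data
    preempts the queue currently being uploaded."""
    while True:
        for n, q in enumerate(queues):   # n = 0 is the highest priority
            if q:
                transmit(q.popleft())    # send one frame, then rescan from Q0
                break
        else:
            return                       # all queues empty: uploading is finished

# Usage: send_overflow_state(queues, transmit=my_ethernet_send)
```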
When the uplink data sending time arrives and the buffer is in a non-data-overflow state, a second scheduling algorithm is used: the non-empty queues are polled according to the transmission priority, and the uplink data in the non-empty queues is sent according to the service quota allocated to each non-empty queue in each round and the number of bytes allowed to be sent. The specific process can be as follows (a code sketch of this round-based procedure is given after the first-round example below):
11) if the uplink data sending time is up, the cache is in a non-data overflow state, and the queue corresponding to the highest transmission priority is non-empty, the uplink data in the queue corresponding to the highest transmission priority is sent out first, then the polling of the remaining non-empty queues is carried out according to the transmission priority, and the uplink data in the remaining non-empty queues are sent according to the service quota distributed to the remaining non-empty queues in each turn and the number of bytes allowed to be sent;
12) and if the uplink data sending time is up, the cache is in a non-data overflow state, and the queue corresponding to the highest transmission priority is empty, performing non-empty queue polling according to the transmission priority, and sending the uplink data in the non-empty queue according to the service quota distributed to the non-empty queue in each round and the number of bytes allowed to be sent.
In 11) and 12) above, performing non-empty queue polling according to the transmission priority, and sending the uplink data in the non-empty queue according to the service quota allocated to the non-empty queue in each round and the number of bytes allowed to be sent, where a specific process may be:
1) sending the uplink data in the non-empty queues in order of transmission priority (higher transmission priority is served first) according to the initial service quota allocated to each non-empty queue and the number of bytes allowed to be sent;
the network service can be abstracted as a set of system network resources; the service quota can be understood as the amount of network resources allocated to a queue, which is likewise assigned according to transmission priority (queues with higher transmission priority receive more), and the number of bytes allowed to be sent is one element of the service quota set; the data is sent according to the service quota and the number of bytes allowed to be sent;
2) if the non-empty queue exists, calculating the remaining uplink data in the non-empty queue; if the non-empty queue does not exist, the uplink data transmission is finished;
3) calculating the weight of each non-empty queue in the round according to the remaining uplink data and the initial service quota in the non-empty queue;
the calculation formula is as follows:
wherein R iskIs the weight of the non-empty queue corresponding to the k-th transmission priority N, N is the defined number of queues, usually 0 ≦ N ≦ 8, N ≦ N,QD for transmitting the remaining uplink data after the k-1 turn of the non-empty queue corresponding to the priority nnAn initial service quota of a non-empty queue corresponding to the transmission priority n;
4) calculating the service quota of each non-empty queue in the round according to the weight and the initial service quota;
the calculation formula is as follows:
QDn(k)=QDn(1+Rk)
wherein QDn(k) Transmitting the service quota of the non-empty queue corresponding to the priority n for the kth round;
5) and sending the uplink data in the non-empty queues according to the transmission priority according to the service quotas distributed to each non-empty queue in the round and the number of bytes allowed to be sent, and turning to 2).
Initially, if TAB{Qn} = 1 for some n ≠ 0, i.e. there are non-empty queues other than Q0, then DCn = QDn (n ≠ 0); otherwise DCn = 0 and SAT = n (n = 0). All non-empty queues are then polled according to the priority, and data is sent in turn according to the service quota and the number of bytes allowed to be sent; here TAB records the state of all queues, DCn is the quota counter of queue Qn, and SAT indicates the margin (remaining uplink data) of the queue with the current transmission priority after its data has been sent.
In the first round, each queue Qn (n ≠ 0) sends data according to its number of bytes allowed to be sent. After sending, if queue Qn is empty, then DCn = 0; otherwise the remaining uplink data is used to calculate the weight and the service quota of the second round, and uploading continues, with
QDn(2) = QDn(1 + R2)
Similarly, after queue Qn has sent data in the k-th round, the weight of the next round is recalculated from the remaining uplink data, and the service quota obtained by queue Qn in the k-th round is QDn(k) = QDn(1 + Rk).
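The round-based procedure of steps 1)-5) can be sketched in Python as follows. This is an illustrative reconstruction, not the patent's reference implementation: the exact expression for the weight Rk is not reproduced in this text, so the sketch assumes Rk is the queue's remaining bytes after the previous round divided by the sum of the initial service quotas, which is consistent with the quota QDn(k) = QDn(1 + Rk) growing with the backlog. `transmit` stands for the actual Ethernet send routine and frames are plain byte strings.

```python
def send_non_overflow_state(queues, init_quota, transmit) -> None:
    """Second scheduling algorithm (sketch): poll the non-empty queues by
    transmission priority and let each one send up to its per-round service
    quota (in bytes); after every round, recompute the quota from the data
    still waiting in the queue."""
    total_init = sum(init_quota)
    quota = list(init_quota)                     # round-1 quotas = initial quotas
    while any(queues):                           # step 2): stop when no non-empty queue remains
        progressed = False
        for n, q in enumerate(queues):           # steps 1)/5): priority order, n = 0 first
            budget = quota[n]
            while q and len(q[0]) <= budget:     # number of bytes allowed to be sent this round
                frame = q.popleft()
                budget -= len(frame)
                transmit(frame)
                progressed = True
        if not progressed:
            break                                # every waiting frame exceeds its budget; avoid spinning
        for n, q in enumerate(queues):           # steps 3)/4): next-round weight and quota
            remaining = sum(len(f) for f in q)   # uplink data left after this round (SATn)
            weight = remaining / total_init      # ASSUMED form of Rk (not given explicitly here)
            quota[n] = init_quota[n] * (1 + weight)
```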
During uploading, if new data enters the buffer and the transmission priority of the newly stored uplink data is not higher than that of the uplink data currently being sent, the transmission mode is not adjusted and data continues to be sent by polling with the initially allocated service quotas. If the newly stored uplink data in the buffer has a higher transmission priority than the uplink data currently being sent, the queue currently being sent still transmits according to its current service quota, while the queues whose transmission priority is lower than that of the currently sending queue are reallocated service quotas, specifically the minimum service quota. Data with n = 0 is alarm/error-report information and must be uploaded first under any circumstances, so whenever the queue Q0 corresponding to the highest transmission priority stores new uplink data, the current transmission is ended and the newly stored uplink data is sent.
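As a small hedged sketch of this arrival handling in the non-overflow state: the decision logic below assumes a minimum service quota value (`MIN_QUOTA`) and the helper name `on_new_uplink`, neither of which appears in the patent text.

```python
MIN_QUOTA = 64  # assumed minimum service quota, in bytes

def on_new_uplink(n_new: int, n_current: int, quota: list, init_quota: list) -> str:
    """React to uplink data stored while a queue is being sent (non-overflow state):
    priority 0 preempts immediately; an arrival with higher priority than the
    current queue shrinks the quotas of the lower-priority queues to the
    minimum; anything else leaves the polling unchanged."""
    if n_new == 0:
        return "preempt"                     # alarm data: end current transmission, send Q0 first
    if n_new < n_current:                    # smaller n means higher transmission priority
        for n in range(n_current + 1, len(quota)):
            quota[n] = MIN_QUOTA             # reallocate the lower-priority queues
        return "requota"
    return "unchanged"                       # keep polling with the initially allocated quotas
```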
Under normal conditions the method sends data by polling, and every data type has an opportunity to send in each round, which guarantees fairness among the various data types. Network service quotas are allocated according to the priorities and the queues are served in priority order, and different strategies are chosen according to the priority of the data being stored, which guarantees the real-time delivery of important data. The remaining data in each queue is monitored, the network service quota of the next round is adjusted accordingly, and the queue scheduling algorithm is adjusted in time when data overflows, which alleviates the network congestion caused by uploading large amounts of data, avoids data loss and improves the reliability of data uploading.
Based on the same technical scheme, the invention also discloses a software system of the method, and an Ethernet transmission queue scheduling system comprises:
and the analysis module is used for analyzing the uplink data and acquiring the transmission priority preset in the uplink data.
The buffer module is used for storing the uplink data into a queue corresponding to the buffer according to the transmission priority; the buffer is preset with several queues, and different queues store uplink data with different transmission priorities.
And the sequence sending module is used for sending the uplink data in the non-empty queue according to the transmission priority if the uplink data sending time is reached and the buffer is in a data overflow state.
And the polling sending module is used for polling the non-empty queue according to the transmission priority if the uplink data sending time is up and the cache is in a non-data overflow state, and sending the uplink data in the non-empty queue according to the service quota distributed to the non-empty queue in each turn and the number of bytes allowed to be sent.
The polling transmission module comprises:
the indirect polling sending module is used for sending the uplink data in the queue corresponding to the highest transmission priority, then carrying out polling on the remaining non-empty queues according to the transmission priority, and sending the uplink data in the remaining non-empty queues according to the service quota distributed by the remaining non-empty queues in each round and the number of bytes allowed to be sent if the uplink data sending time is up, the cache is in a non-data overflow state, and the queue corresponding to the highest transmission priority is not empty;
and the direct polling sending module is used for polling the non-empty queue according to the transmission priority if the uplink data sending time is up, the buffer is in a non-data overflow state, and the queue corresponding to the highest transmission priority is empty, and sending the uplink data in the non-empty queue according to the service quota distributed to the non-empty queue in each turn and the number of bytes allowed to be sent.
The indirect polling sending module and the direct polling sending module both comprise data sending modules; the data sending module is used for polling the non-empty queues according to the transmission priority and sending the uplink data in the non-empty queues according to the service quotas distributed to the non-empty queues in each turn and the number of bytes allowed to be sent; the data sending module comprises:
the initial sending module is used for sending the uplink data in the non-empty queues according to the transmission priority according to the initial service quota distributed to each non-empty queue and the number of bytes allowed to be sent;
the judging module is used for calculating the remaining uplink data in the non-empty queue if the non-empty queue exists; if the non-empty queue does not exist, the uplink data transmission is finished;
the weight calculation module is used for calculating the weight of each non-empty queue in the round according to the remaining uplink data and the initial service quota in the non-empty queue;
the service quota calculation module is used for calculating the service quota of each non-empty queue in the round according to the weight and the initial service quota;
and the non-initial sending module is used for sending the uplink data in the non-empty queues according to the transmission priority according to the service quotas distributed to each non-empty queue in the round and the number of bytes allowed to be sent, and then transferring the uplink data to the judging module.
Based on the same technical solution, the present invention also discloses a computer-readable storage medium storing one or more programs, the one or more programs including instructions, which when executed by a computing device, cause the computing device to execute an ethernet transmit queue scheduling method.
Based on the same technical solution, the present invention also discloses a computing device, comprising one or more processors, one or more memories, and one or more programs, wherein the one or more programs are stored in the one or more memories and configured to be executed by the one or more processors, and the one or more programs include instructions for executing the ethernet transmit queue scheduling method.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The present invention is not limited to the above embodiments, and any modifications, equivalent replacements, improvements and the like made within the spirit and principles of the present invention are included within the protection scope of the claims of the present invention.
Claims (12)
1. The Ethernet transmission queue scheduling method is characterized by comprising the following steps:
analyzing the uplink data to obtain a preset transmission priority in the uplink data;
storing the uplink data into a queue corresponding to the cache according to the transmission priority; the buffer memory is preset with a plurality of queues, and different queues respectively store uplink data with different transmission priorities;
if the uplink data sending time is up and the buffer memory is in a data overflow state, sending the uplink data in the non-empty queue according to the transmission priority;
and if the uplink data sending time is reached and the cache is in a non-data overflow state, polling the non-empty queue according to the transmission priority, and sending the uplink data in the non-empty queue according to the service quota distributed to the non-empty queue in each turn and the number of bytes allowed to be sent.
2. The Ethernet transmission queue scheduling method of claim 1, wherein in the process of transmitting the uplink data in the non-empty queue according to the transmission priority, if the transmission priority of the uplink data newly stored in the buffer is higher than the transmission priority of the uplink data currently transmitted, the current uplink data transmission is ended, and the newly stored uplink data is transmitted.
3. The ethernet transmit queue scheduling method according to claim 1, wherein if the uplink data transmission time is reached and the buffer is in a non-data overflow state, performing non-empty queue polling according to the transmission priority, and transmitting the uplink data in the non-empty queue according to the service quota allocated to the non-empty queue and the number of bytes allowed to be transmitted in each round, comprises:
if the uplink data sending time is up, the cache is in a non-data overflow state, and the queue corresponding to the highest transmission priority is non-empty, the uplink data in the queue corresponding to the highest transmission priority is sent out firstly, then the remaining non-empty queues are polled according to the transmission priority, and the uplink data in the remaining non-empty queues are sent according to the service quota distributed to the remaining non-empty queues in each round and the number of bytes allowed to be sent;
and if the uplink data sending time is up, the cache is in a non-data overflow state, and the queue corresponding to the highest transmission priority is empty, performing non-empty queue polling according to the transmission priority, and sending the uplink data in the non-empty queue according to the service quota distributed to the non-empty queue in each round and the number of bytes allowed to be sent.
4. The ethernet transmission queue scheduling method according to claim 3, wherein the non-empty queue polling is performed according to the transmission priority, and the uplink data in the non-empty queue is transmitted according to the service quota allocated to the non-empty queue and the number of bytes allowed to be transmitted in each round, the method comprises:
1) sending the uplink data in the non-empty queues according to the transmission priority according to the initial service quota distributed by each non-empty queue and the number of bytes allowed to be sent;
2) if the non-empty queue exists, calculating the remaining uplink data in the non-empty queue; if the non-empty queue does not exist, the uplink data transmission is finished;
3) calculating the weight of each non-empty queue in the round according to the remaining uplink data and the initial service quota in the non-empty queue;
4) calculating the service quota of each non-empty queue in the round according to the weight and the initial service quota;
5) and sending the uplink data in the non-empty queues according to the transmission priority according to the service quota distributed to each non-empty queue in the round and the number of bytes allowed to be sent, and turning to 2).
5. The Ethernet transmission queue scheduling method of claim 4, wherein the weight of a non-empty queue in the current round is calculated from the remaining uplink data and the initial service quotas, where Rk is the weight of the non-empty queue corresponding to transmission priority n in the k-th round, N is the number of defined queues (n ≤ N), SATn(k-1) is the uplink data remaining in the non-empty queue corresponding to transmission priority n after the (k-1)-th round, and QDn is the initial service quota of the non-empty queue corresponding to transmission priority n.
6. The Ethernet transmission queue scheduling method of claim 4, wherein the calculation formula of the service quota of a non-empty queue in the current round is:
QDn(k) = QDn(1 + Rk)
where Rk is the weight of the non-empty queue corresponding to transmission priority n in the k-th round, QDn is the initial service quota of the non-empty queue corresponding to transmission priority n, and QDn(k) is the service quota of the non-empty queue corresponding to transmission priority n in the k-th round.
7. The Ethernet transmission queue scheduling method according to claim 4, wherein, while the uplink data in the non-empty queues is being sent according to the service quota allocated to each non-empty queue in each round and the number of bytes allowed to be sent, if new uplink data is stored in the queue corresponding to the highest transmission priority, the current uplink data transmission is ended and the newly stored uplink data is sent; and if uplink data newly stored in the buffer has a higher transmission priority than the uplink data currently being sent, the queue currently being sent still sends uplink data according to its current service quota, while the queues whose transmission priority is lower than that of the currently sending queue are reallocated service quotas.
8. An ethernet transmit queue scheduling system, comprising:
the analysis module is used for analyzing the uplink data and acquiring the preset transmission priority in the uplink data;
the buffer module is used for storing the uplink data into a queue corresponding to the buffer according to the transmission priority; the buffer memory is preset with a plurality of queues, and different queues respectively store uplink data with different transmission priorities;
the sequence sending module is used for sending the uplink data in the non-empty queue according to the transmission priority if the uplink data sending time is up and the buffer memory is in a data overflow state;
and the polling sending module is used for polling the non-empty queue according to the transmission priority if the uplink data sending time is reached and the cache is in a non-data overflow state, and sending the uplink data in the non-empty queue according to the service quota distributed to the non-empty queue in each turn and the number of bytes allowed to be sent.
9. The ethernet transmit queue scheduling system of claim 8, wherein the polling transmission module comprises:
the indirect polling sending module is used for sending the uplink data in the queue corresponding to the highest transmission priority, then carrying out polling on the remaining non-empty queues according to the transmission priority, and sending the uplink data in the remaining non-empty queues according to the service quota distributed by the remaining non-empty queues in each round and the number of bytes allowed to be sent if the uplink data sending time is up, the cache is in a non-data overflow state, and the queue corresponding to the highest transmission priority is not empty;
and the direct polling sending module is used for polling the non-empty queue according to the transmission priority if the uplink data sending time is up, the buffer is in a non-data overflow state, and the queue corresponding to the highest transmission priority is empty, and sending the uplink data in the non-empty queue according to the service quota distributed to the non-empty queue in each turn and the number of bytes allowed to be sent.
10. The ethernet transmit queue scheduling system of claim 9, wherein the indirect polling transmit module and the direct polling transmit module each comprise a data transmit module; the data sending module is used for polling the non-empty queues according to the transmission priority and sending the uplink data in the non-empty queues according to the service quotas distributed to the non-empty queues in each turn and the number of bytes allowed to be sent; the data sending module comprises:
the initial sending module is used for sending the uplink data in the non-empty queues according to the transmission priority according to the initial service quota distributed to each non-empty queue and the number of bytes allowed to be sent;
the judging module is used for calculating the remaining uplink data in the non-empty queue if the non-empty queue exists; if the non-empty queue does not exist, the uplink data transmission is finished;
the weight calculation module is used for calculating the weight of each non-empty queue in the round according to the remaining uplink data and the initial service quota in the non-empty queue;
the service quota calculation module is used for calculating the service quota of each non-empty queue in the round according to the weight and the initial service quota;
and the non-initial sending module is used for sending the uplink data in the non-empty queues according to the transmission priority according to the service quota distributed to each non-empty queue in the round and the byte number allowed to be sent, and then switching to the judging module.
11. A computer readable storage medium storing one or more programs, characterized in that: the one or more programs include instructions that, when executed by a computing device, cause the computing device to perform any of the methods of claims 1-7.
12. A computing device, comprising:
one or more processors, one or more memories, and one or more programs stored in the one or more memories and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210192057.0A CN114666285B (en) | 2022-02-28 | 2022-02-28 | Method, system, storage medium and computing device for scheduling Ethernet transmission queue |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114666285A true CN114666285A (en) | 2022-06-24 |
CN114666285B CN114666285B (en) | 2023-11-17 |
Family
ID=82028332
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210192057.0A Active CN114666285B (en) | 2022-02-28 | 2022-02-28 | Method, system, storage medium and computing device for scheduling Ethernet transmission queue |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114666285B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1921444A (en) * | 2005-08-24 | 2007-02-28 | 上海原动力通信科技有限公司 | Method for classified package dispatching and resource distributing based on service quality |
CN101478483A (en) * | 2009-01-08 | 2009-07-08 | 中国人民解放军信息工程大学 | Method for implementing packet scheduling in switch equipment and switch equipment |
CN101964758A (en) * | 2010-11-05 | 2011-02-02 | 南京邮电大学 | Differentiated service-based queue scheduling method |
CN106533982A (en) * | 2016-11-14 | 2017-03-22 | 西安电子科技大学 | Dynamic queue scheduling device and method based on bandwidth borrowing |
CN108259383A (en) * | 2016-12-29 | 2018-07-06 | 北京华为数字技术有限公司 | The transmission method and the network equipment of a kind of data |
CN107733689A (en) * | 2017-09-15 | 2018-02-23 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | Dynamic weighting polling dispatching strategy process based on priority |
KR102137651B1 (en) * | 2019-06-10 | 2020-07-24 | 국방과학연구소 | Method and apparatus for service flow-based packet scheduling |
Non-Patent Citations (3)
Title |
---|
伍金富; 周井泉: "Research on queue scheduling algorithms based on differentiated services" (基于区分服务的队列调度算法研究), Computer Technology and Development, no. 01, pages 146-148 *
周鹏; 郝明; 唐政; 胡军锋: "Priority queue scheduling algorithm based on QoS" (基于QoS的优先级队列调度算法), Electronic Science and Technology, no. 05, pages 128-130 *
张华; 谭献海; 赵晋南; 刘力浩: "TCSN scheduling algorithm with dynamically adjusted scheduling quotas" (动态调整调度配额的TCSN调度算法), Application Research of Computers, no. 11, pages 73-76 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117857475A (en) * | 2024-03-08 | 2024-04-09 | 中车南京浦镇车辆有限公司 | Data transmission scheduling method and system for Ethernet train control network |
CN117857475B (en) * | 2024-03-08 | 2024-05-14 | 中车南京浦镇车辆有限公司 | Data transmission scheduling method and system for Ethernet train control network |
Also Published As
Publication number | Publication date |
---|---|
CN114666285B (en) | 2023-11-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |