CN110213178B - Flow management method, integrated chip and device - Google Patents


Info

Publication number
CN110213178B
Authority
CN
China
Prior art keywords
scheduling
queue
data
initiator
traffic
Prior art date
Legal status
Active
Application number
CN201810548610.3A
Other languages
Chinese (zh)
Other versions
CN110213178A (en)
Inventor
李嘉昕
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810548610.3A priority Critical patent/CN110213178B/en
Publication of CN110213178A publication Critical patent/CN110213178A/en
Application granted granted Critical
Publication of CN110213178B publication Critical patent/CN110213178B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/52 Queue scheduling by attributing bandwidth to queues
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/70 Admission control; Resource allocation
    • H04L47/76 Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiment of the invention provides a traffic management method, an integrated chip, and a device. The method includes: receiving data sent by at least two traffic initiators; obtaining service requirement information of the at least two traffic initiators, where the service requirement information is the requirement information of the services corresponding to the data each traffic initiator intends to send; and determining a scheduling policy according to the service requirement information. Because the scheduling policy is determined from the traffic initiators' service requirements rather than fixed in advance, bandwidth can be allocated to the traffic initiators more flexibly, the requirements of more traffic initiators can be met, and scheduling efficiency is improved.

Description

Flow management method, integrated chip and device
Technical Field
Embodiments of the invention relate to the field of electronic technology, and in particular to a traffic management method, an integrated chip, and a device.
Background
Traffic management refers to the techniques that, based on a network's current traffic conditions and a traffic-control policy, identify and classify data flows, apply flow control and optimization, and guarantee key applications.
Take a Field Programmable Gate Array (FPGA) cloud server as an example. An FPGA contains two areas, a static area and a dynamic area, connected by an interface bus: the static area is the fixed hardware part of the FPGA system architecture, and the dynamic area is the part that users can design. After a traffic initiator registers with the dynamic area, it sends its data there, and a scheduling module in the static area then performs traffic management.
Many traffic initiators may register with the dynamic area; for example, several applications may all be registered there. From the scheduling module's perspective each traffic initiator is a user, so the traffic management it performs is typically multi-user traffic management. The current traffic management mode is balanced scheduling, that is, the total bandwidth is divided evenly among the users.
Practice shows that balanced scheduling cannot satisfy the requirements of a growing number of traffic initiators, so multiple users hit bandwidth bottlenecks and scheduling efficiency is low.
Disclosure of Invention
Embodiments of the present invention provide a traffic management method, an integrated chip, and an apparatus that allocate bandwidth to traffic initiators flexibly, so as to meet the requirements of more traffic initiators and thereby improve scheduling efficiency.
In one aspect, an embodiment of the present invention provides a traffic management method, including:
receiving data sent by at least two traffic initiators;
obtaining service requirement information of the at least two traffic initiators, where the service requirement information is the requirement information of the services corresponding to the data each traffic initiator intends to send; and determining a scheduling policy according to the service requirement information;
and performing scheduling on the data sent by the traffic initiator by using the scheduling strategy.
The traffic initiator may be any application or user that needs to transmit traffic. The service requirement information is the requirement information of the service corresponding to the data a traffic initiator intends to send, for example its latency or bandwidth requirements.
In one possible implementation, before performing scheduling on the data sent by the traffic initiator using the scheduling policy, the method includes:
when the data sent by the traffic initiator is being scheduled using an initial scheduling policy, waiting for the current scheduling execution of the initial scheduling policy to finish; the initial scheduling policy is the scheduling policy that was in use before the scheduling policy.
In one possible implementation manner, before waiting for the initial scheduling policy to finish scheduling execution once, the method further includes:
receiving a scheduling switching request message from a traffic initiator, where the message requests that scheduling be executed using the scheduling policy;
or determining that using the scheduling policy instead of the initial scheduling policy would improve scheduling efficiency by more than a threshold.
In a possible implementation manner, the obtaining the service requirement information of the traffic initiator, and determining the scheduling policy according to the service requirement information includes:
acquiring at least one item of bandwidth requirement, time delay requirement and priority information in the service requirement information of the flow initiator; and selecting a scheduling strategy from the scheduling strategies to be selected according to the service demand information.
In one possible implementation, the candidate scheduling policy includes: a bandwidth optimizing mode, a bandwidth sharing mode and a bandwidth adjustable mode;
the bandwidth optimization mode is a scheduling strategy which is scheduled in sequence from high to low according to the service priority;
the bandwidth sharing mode is a scheduling strategy of balanced scheduling;
the bandwidth adjustable mode is a scheduling strategy which is scheduled from high to low according to the weight of a traffic initiator, and after each round of scheduling, the weight value of the unscheduled traffic initiator relative to the scheduled traffic initiator is increased.
In one possible implementation manner, the performing, by using the scheduling policy, scheduling data sent by the traffic initiator includes:
and sending the data sent by the traffic initiator to a task queue of the traffic initiator, distributing the task queue into a task queue group corresponding to the scheduling strategy, and waiting for a scheduling module which uses the scheduling strategy to execute scheduling.
In one possible implementation manner, the performing scheduling on the data sent by the traffic initiator by using the scheduling policy includes:
and storing the data sent by the traffic initiator into a task queue of the traffic initiator, closing other scheduling modules except the scheduling module which uses the scheduling strategy to execute scheduling, and waiting for the scheduling module which uses the scheduling strategy to execute scheduling.
In another aspect, an embodiment of the present invention further provides an integrated chip, including:
the system comprises a scheduling module, a scheduling control module and a receiving module;
the receiving module is used for receiving data sent by a flow initiator;
the scheduling control module is used for acquiring the service demand information of the flow initiator and determining a scheduling strategy according to the service demand information;
and the scheduling module is used for executing scheduling on the data sent by the flow initiator by using the scheduling strategy.
In a possible implementation manner, the scheduling module is further configured to wait for the end of one-time scheduling execution of an initial scheduling policy when data sent by the traffic initiator is scheduled by using the initial scheduling policy before performing scheduling on the data sent by the traffic initiator by using the scheduling policy, and then perform scheduling on the data sent by the traffic initiator by using the scheduling policy; the initial scheduling policy is a scheduling policy used before the scheduling policy.
In a possible implementation manner, the scheduling module is further configured to receive a handover request message from a traffic initiator before the scheduling module waits for the end of one-time scheduling execution of the initial scheduling policy, where the scheduling handover request message requests that scheduling is executed using the scheduling policy;
or, the scheduling module is further configured to determine that, compared with the use of the initial scheduling policy, the amount of improvement in scheduling efficiency exceeds a threshold value when the scheduling policy is used before the end of one-time scheduling execution of the initial scheduling policy is waited.
In a possible implementation manner, the scheduling control module is configured to acquire at least one of a bandwidth requirement, a delay requirement, and priority information in the service requirement information of the traffic initiator; and selecting a scheduling strategy from the scheduling strategies to be selected according to the service demand information.
In one possible implementation manner, the candidate scheduling policy includes: a bandwidth optimizing mode, a bandwidth sharing mode and a bandwidth adjustable mode;
the bandwidth optimization mode is a scheduling strategy which is scheduled in sequence from high to low according to the service priority;
the bandwidth sharing mode is a scheduling strategy of balanced scheduling;
the bandwidth adjustable mode is a scheduling strategy which is scheduled from high to low according to the weight of a traffic initiator, and after each round of scheduling, the weight value of the unscheduled traffic initiator relative to the scheduled traffic initiator is increased.
In a possible implementation manner, the scheduling module is configured to send data sent by the traffic initiator to a task queue of the traffic initiator, allocate the task queue to the task queue group corresponding to the scheduling policy, and wait for the scheduling module that performs scheduling using the scheduling policy to perform scheduling.
In a possible implementation manner, the scheduling module is configured to store data sent by the traffic initiator into a task queue of the traffic initiator, close other scheduling modules except the scheduling module that performs scheduling using the scheduling policy, and wait for the scheduling module that performs scheduling using the scheduling policy to perform scheduling.
In a third aspect, an embodiment of the present invention provides a traffic management apparatus, including an input device, an output device, a memory, and a processor, where the memory stores program instructions adapted to be loaded by the processor to execute any one of the traffic management methods provided in the embodiments of the present invention; the input device is used to receive data sent by a traffic initiator.
the embodiment of the present invention further provides a storage medium, where a plurality of program instructions are stored in the storage medium, and the program instructions are adapted to be loaded by a processor and execute any one of the traffic management methods provided in the embodiment of the present invention.
The embodiment of the present invention further provides a computer program product, where the computer program product includes a plurality of program instructions, and the program instructions are adapted to be loaded by a processor and execute any one of the traffic management methods provided by the embodiment of the present invention.
According to the embodiment of the invention, the scheduling strategy is determined through the service requirement information of the flow initiator instead of using a fixed scheduling strategy, so that the bandwidth can be more flexibly allocated to the flow initiator, the requirements of more flow initiators are met, and the scheduling efficiency is further improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present invention, the drawings required to be used in the embodiments or the background art of the present invention will be described below.
FIG. 1 is a schematic structural diagram of an application system according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an application system according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a method according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a user queue according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an application system according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an application system according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an application system according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating scheduling results of scheduling policies according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating scheduling results of scheduling policies according to an embodiment of the present invention;
FIG. 10 is a schematic flow chart of a method according to an embodiment of the present invention;
FIG. 11 is a schematic flow chart of a method according to an embodiment of the present invention;
FIG. 12 is a diagram illustrating an integrated chip structure according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a traffic management device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Fig. 1 is a schematic structural diagram of an application system according to an embodiment of the present invention. The static area in fig. 1 may be a hardware part, a virtual hardware part, or a functional module implemented in software; it is called the static area because it is relatively stable compared with the dynamic area. The dynamic area in fig. 1 is typically either a software implementation area or a customized hardware implementation area; it is called the dynamic area because it changes relatively often as users come and go.
The dynamic area and the static area can be independent respectively, and a bus interface is used for transmitting data and instructions. It is understood that if the dynamic area and the static area are implemented as functional modules of software, a functional interface may be used for data and instruction transfer.
Fig. 1 illustrates that N users register in the dynamic area, and then the N users can all serve as traffic initiators to put forward a data transmission requirement, that is, data is transmitted to the dynamic area to wait for scheduling of the function module related to scheduling management.
Based on the structure shown in fig. 1, the static area may be further subdivided in the embodiment of the present invention, as shown in fig. 2. Compared with fig. 1, the static area now contains a scheduling management function module and several scheduling modules, where different scheduling modules correspond to different scheduling policies; fig. 2 shows M scheduling policies. Note that if a scheduling module's policy is variable, a single scheduling module may suffice, and its functions can be implemented in software; if each scheduling module's policy is fixed, the scheduling modules are generally implemented in hardware. The scheduling management function module mainly controls the working state of the scheduling modules, and each scheduling module executes scheduling according to its own scheduling policy. The scheduling performed by a scheduling module is the process of providing traffic service to users using a scheduling policy.
Based on the system structure shown in fig. 2, an embodiment of the present invention provides a traffic management method, as shown in fig. 3, including:
301: receiving data sent by at least two traffic initiators;
the traffic initiator may be any application or user that has a traffic transmission need. Corresponding to fig. 2, the dynamic area receives data sent from the user after receiving the user registration; the data sent by the user can be stored in the queue corresponding to the user. Fig. 4 is a schematic diagram of a user data queue, and fig. 4 illustrates queues of users 1 to 3, for a total of 3 users; each grid represents a scheduling unit, which may be generally a data packet or a collection of data packets that need to be scheduled to perform a certain function, and the specific form of the embodiment of the present invention is not limited uniquely. The 4 queues shown in fig. 4 each include 8 cells, that is, a maximum of 8 scheduling units can be stored; wherein if there is a fill, then: and if not, indicating that the corresponding scheduling unit has data to be scheduled.
Before the flow initiator sends data, the flow initiator registers; for example, the traffic initiator registers in the dynamic zone shown in fig. 2, and the traffic initiator registers in the dynamic zone in order to let the traffic manager know that a certain traffic initiator exists; the specific registration mode may refer to registration processes such as user login or identity authentication, and the specific registration process is not limited in the embodiments of the present invention.
302: obtaining service demand information of at least two traffic initiators, wherein the service demand information refers to the demand information of services corresponding to data to be sent by the traffic initiators; determining a scheduling strategy according to the service demand information;
based on the foregoing description, since data comes from at least two traffic initiators, different traffic initiators may have different service requirements, and may also have the same service requirement, which is not limited uniquely by the embodiment of the present invention.
The service requirement information refers to requirement information of a service corresponding to data to be sent by a traffic initiator, for example: time delay, bandwidth, etc. Since the service requirement information comes from at least two traffic initiators, there may be two or more service requirement information, and therefore, the process of determining the scheduling policy may determine the scheduling policy by using the information meeting the service requirement maximally as a principle, or by using the information meeting the service requirement with higher priority preferentially as a principle. The specific scheduling policy determination process is not limited in this embodiment.
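A minimal sketch of such a policy determination step is shown below. The selection rules, thresholds, and dictionary keys (`priority`, `bandwidth`, `latency`) are assumptions made for illustration; the patent does not specify the exact decision logic:

```python
def choose_policy(requirements):
    """Pick a candidate scheduling policy from per-initiator service
    requirement dicts. Each dict may carry 'priority', 'bandwidth',
    and 'latency' entries; the rules below are one plausible reading
    of 'maximally meet the requirements', not the patent's own logic."""
    priorities = [r["priority"] for r in requirements if "priority" in r]
    bandwidths = [r["bandwidth"] for r in requirements if "bandwidth" in r]

    if priorities and len(set(priorities)) > 1:
        # Distinct priorities: serve high-priority traffic first.
        return "bandwidth_optimized"    # fixed-priority scheduling
    if bandwidths and max(bandwidths) > 2 * min(bandwidths):
        # Clearly unequal bandwidth needs: weight users accordingly.
        return "bandwidth_adjustable"   # weighted scheduling
    # Requirements are roughly uniform: share the link evenly.
    return "bandwidth_shared"           # balanced (round-robin) scheduling
```

The returned label would then select one of the candidate scheduling modules described below.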
303: and executing scheduling on the data sent by the traffic initiator by using the scheduling strategy.
The process of performing scheduling after the scheduling policy is determined is a process of providing traffic service to users using the scheduling policy, and several possible examples will be provided in the following embodiments.
According to the embodiment of the invention, the scheduling strategy is determined through the service requirement information of the flow initiator instead of using a fixed scheduling strategy, so that the bandwidth can be more flexibly allocated to the flow initiator, the requirements of more flow initiators are met, and the utilization rate of the system bandwidth is further improved.
In a possible implementation manner, this embodiment further provides an implementation manner of scheduling policy switching, which is specifically as follows: before the scheduling policy is used to schedule the data sent by the traffic initiator, the method includes:
when the data sent by the flow initiator is scheduled by using an initial scheduling strategy, waiting for the completion of one-time scheduling execution of the initial scheduling strategy; the initial scheduling policy is a scheduling policy used before the scheduling policy.
In this embodiment, the scheduling policy schedules user-requested data during its execution. More concretely, if the scheduling policy schedules in units of data packets, then one scheduling execution ends when bandwidth has been allocated to a packet, or when that packet has been sent out. That is, the end of one scheduling execution refers to the end of scheduling for a single packet, not necessarily the end of scheduling for all packets of a given traffic initiator.
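The deferred, packet-boundary switch described above can be sketched as follows. The `scheduler` object and its attribute names (`packet_in_flight`, `pending_policy`) are hypothetical scaffolding, not names from the patent:

```python
def switch_policy(scheduler, new_policy):
    """Zero-loss policy switch: let the current policy finish the packet
    it is scheduling, then swap. Attribute names are illustrative."""
    if scheduler.packet_in_flight:
        # Defer: the swap is applied when the current packet completes.
        scheduler.pending_policy = new_policy
    else:
        scheduler.policy = new_policy

def on_packet_done(scheduler):
    """Called when one scheduling execution (one packet) ends; applies
    any deferred switch before the next packet is scheduled."""
    scheduler.packet_in_flight = False
    if scheduler.pending_policy is not None:
        scheduler.policy = scheduler.pending_policy
        scheduler.pending_policy = None
```

Because the swap only happens between packets, no packet is dropped mid-transfer and scheduling is never interrupted.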
In this embodiment, a switching implementation manner of the scheduling policy is provided, and control implementation in the switching process is provided, so that smooth operation of scheduling can be ensured in the switching process, switching is timely, and a chaotic condition is avoided.
In a possible implementation manner, this embodiment further provides two implementation manners of the handover scheduling policy, which are specifically as follows: before the waiting for the end of the primary scheduling execution of the initial scheduling policy, the method further includes:
receiving a switching request message from a traffic initiator, wherein the scheduling switching request message requests to execute scheduling by using the scheduling strategy;
or, it is determined that the scheduling efficiency improvement amount exceeds a threshold value when the scheduling policy is used compared with the initial scheduling policy.
In the above two implementation manners of the switching scheduling policy, the former can adapt to the requirement of the traffic initiator by the request of the traffic initiator, and the latter realizes intelligent control without the request of the traffic initiator.
In one possible implementation, several possible cases of the service requirement information are also provided, specifically as follows: the obtaining the service demand information of the traffic initiator and determining the scheduling policy according to the service demand information includes:
acquiring at least one item of bandwidth requirement, delay requirement and priority information in the service requirement information of the traffic initiator; and selecting a scheduling strategy from the scheduling strategies to be selected according to the service requirement information.
It should be noted that the service requirement information may contain requirements beyond the three above; for example, it may also include a Quality of Service (QoS) requirement, and more specifically a packet loss rate requirement, among others. The examples above should therefore not be read as the only limitations on the embodiments of the invention.
The candidate scheduling policies correspond to the M scheduling policies in fig. 2 and should include all scheduling policies the traffic scheduling device can support.
In a possible implementation manner, the embodiment further provides contents of a scheduling policy to be selected based on the three service requirement information, which are specifically as follows: the candidate scheduling policy includes: a bandwidth optimizing mode, a bandwidth sharing mode and a bandwidth adjustable mode;
the bandwidth optimization mode is a scheduling strategy which is scheduled in sequence from high to low according to the service priority;
the bandwidth sharing mode is a scheduling strategy of balanced scheduling;
the bandwidth adjustable mode is a scheduling strategy which is scheduled from high to low according to the weight of a traffic initiator, and after each round of scheduling, the weight value of the unscheduled traffic initiator relative to the scheduled traffic initiator is increased.
An illustration of a specific scheduling procedure of the above three scheduling policies will be provided in the subsequent embodiments. In this embodiment, based on the foregoing scheduling policies provided by the three service requirement information, as the specific content of the service requirement information may also be other, the scheduling policies may also be correspondingly expanded, and the above three scheduling policies should not be construed as the only limitations to the embodiments of the present invention.
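As an illustration of the bandwidth-adjustable mode, the sketch below performs one weighted scheduling round. The data shapes and the aging increment (+1) are assumptions: the patent only states that the relative weight of unscheduled initiators increases after each round.

```python
def weighted_round(queues, weights, budget):
    """One round of the bandwidth-adjustable mode: serve initiators in
    descending weight order until 'budget' scheduling units have been
    sent, then age the weights of initiators that were not served so
    they cannot starve. 'queues' maps user -> list of units; 'weights'
    maps user -> weight."""
    served = set()
    order = sorted(queues, key=lambda u: weights[u], reverse=True)
    for user in order:
        if budget == 0:
            break
        if queues[user]:
            queues[user].pop(0)       # schedule one unit for this user
            served.add(user)
            budget -= 1
    for user in queues:
        if user not in served and queues[user]:
            weights[user] += 1        # unscheduled: raise relative weight
    return served
```

Over successive rounds, a low-weight initiator's weight climbs until it wins a scheduling opportunity, which is how this mode avoids starving light users while still favoring heavy ones.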
The embodiment also provides two optional implementation modes for managing the data sent by the traffic initiator in the dynamic area and correspondingly adjusting the working mode of the organization scheduling module. The method comprises the following specific steps:
the scheduling of the data sent by the traffic initiator using the scheduling policy includes:
and sending the data sent by the traffic initiator to a task queue of the traffic initiator, distributing the task queue into a task queue group corresponding to the scheduling policy, and waiting for a scheduling module which uses the scheduling policy to execute scheduling to execute the scheduling.
As shown in fig. 5, each scheduling module has its own fixed scheduling policy, and each scheduling module and its policy correspond to one queue group, so this embodiment can divide the users' task queues into the corresponding queue groups; the different scheduling modules can then perform scheduling in parallel. Fig. 5 contains two queue groups: users 1 to 3 are in the queue group corresponding to scheduling policy 1, and users 4 to 6 are in the queue group corresponding to scheduling policy 2.
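The grouping step can be sketched as a simple partition; `policy_of` is a hypothetical mapping from an initiator to its chosen policy, standing in for the decision made by the scheduling management module:

```python
def group_queues(initiators, policy_of):
    """Partition task queues into per-policy queue groups (fig. 5).
    Each group is then drained by the scheduling module that owns the
    matching policy, and the modules can run in parallel."""
    groups = {}
    for user in initiators:
        groups.setdefault(policy_of(user), []).append(user)
    return groups
```

For the fig. 5 example, users 1 to 3 would map to scheduling policy 1 and users 4 to 6 to scheduling policy 2, yielding two independent groups.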
The performing of the scheduling on the data sent by the traffic initiator by using the scheduling policy includes:
and storing the data sent by the traffic initiator into a task queue of the traffic initiator, closing other scheduling modules except the scheduling module for executing scheduling by using the scheduling strategy, and waiting for the scheduling module for executing scheduling by using the scheduling strategy to execute scheduling.
As shown in fig. 6, the scheduling module has its own fixed scheduling policy, the scheduling module and the scheduling policy correspond to the whole task queue, only one scheduling module is in a working state at a time, and the scheduling management function module controls whether the scheduling module is in a working state. In fig. 6, only the scheduling module of scheduling policy 1 is shown to be in a working state, and the scheduling modules corresponding to other scheduling policies do not work.
In the above two implementations, the former can process scheduling in parallel, and the latter has no competition among scheduling modules and thus it is easier to control the allocation of the overall bandwidth.
The following embodiments take an FPGA containing a static area and a dynamic area as an example, with three application scenarios: the bandwidth-optimized mode, the bandwidth-shared mode, and the bandwidth-adjustable mode. In contrast to dividing the bandwidth evenly, i.e. multi-user balanced scheduling, the embodiment of the invention can select one of three scheduling modes according to the needs of the specific application: the bandwidth-optimized mode, which schedules with fixed priorities; the bandwidth-shared mode, which is multi-user balanced scheduling; and the bandwidth-adjustable mode, which is multi-user weighted scheduling. Upper-layer application software can configure one of these scheduling modes according to actual requirements, implementing the function of the scheduling control module; this is highly flexible and can serve many different service types.
In addition, user requirements may change while the FPGA is running, so the scheduling mode can be switched dynamically, i.e., from the current scheduling mode to another scheduling mode. The switch requires no interruption of the scheduling process and loses no data packets; it is highly stable, achieving a zero-loss switch of the scheduling mode.
Based on the foregoing description, as shown in fig. 7, the static area in this embodiment includes four parts: besides the scheduling management module, one module for each of the three scheduling modes, namely a module for the bandwidth-optimized mode (fixed-priority scheduling), a module for the bandwidth-shared mode (multi-user balanced scheduling), and a module for the bandwidth-adjustable mode (multi-user weighted scheduling). In addition to determining which scheduling mode is currently used, the scheduling management module controls the switching between scheduling modes. The current service need not be interrupted during a mode switch, and there is no additional time overhead.
The above three scheduling modes are introduced as follows:
First, the multi-user balanced scheduling module is applied in the bandwidth-shared mode, i.e., when the bandwidth required by each user is essentially the same.
Fig. 8 illustrates the bandwidth-shared mode; when the bandwidths required by the users are essentially the same, multi-user balanced bandwidth scheduling is used, specifically as follows:
in the bandwidth-shared mode, the scheduler polls the user queues in turn, querying whether each has data waiting to be sent; if the queue currently queried is empty, the scheduler jumps to the next user queue; if every user queue has data waiting, the scheduling order matches the queue order. If some user queues are empty, the other user queues obtain more bandwidth; in the extreme case, a single queue may use the full link bandwidth when all other queues are empty. In the bandwidth-shared mode, starvation of user queues is avoided.
In fig. 8, 3 user queues are illustrated. The result after scheduling indicates the transmission order, with the left side being the start of transmission, which is sent first.
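For illustration only (this sketch is not part of the patent; the function and variable names are invented), the polling described above can be written as:

```python
def balanced_schedule(queues):
    """Bandwidth-shared mode: poll the user queues round-robin.

    `queues` is a list of per-user packet lists. An empty queue is
    skipped, so the remaining queues obtain its share of bandwidth;
    if only one queue holds data, it may use the full link bandwidth.
    """
    order = []
    while any(queues):
        for q in queues:      # query each user queue in turn
            if q:             # empty queue: jump to the next one
                order.append(q.pop(0))
    return order
```

With three queues holding, say, 2, 1 and 3 packets, each visit drains one packet per non-empty queue, so a queue that empties early simply drops out of later rounds and its bandwidth share is taken by the others.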
Second, the fixed-priority scheduling module is applied in the bandwidth-optimized mode, i.e., when a certain user has the highest priority for bandwidth usage and enjoys high-quality service with low latency and high priority.
The fixed priority scheduling, i.e. the bandwidth-optimized mode, is as follows:
In the bandwidth-optimized mode, the scheduler schedules according to a priority policy preset for the users: it schedules the higher-priority user queues first and schedules a lower-priority queue only after the higher-priority user queues have no data left to send.
Fixed-priority scheduling is particularly suitable when one user's application requires high bandwidth and low latency while the other users' traffic is insensitive to bandwidth and latency. The user with the highest priority then obtains the best quality of service.
As shown in fig. 9, there are three user queues. Assume user queue 1 has the highest priority, with 4 packets; user queue 2 has the next-highest priority, with 3 packets; and user queue 3 has the lowest priority, with 4 packets. The scheduler's scheduling order according to the above algorithm is then:
the data packets of user queue 1 -> the data packets of user queue 2 -> the data packets of user queue 3. The result after scheduling indicates the transmission order, with the left side being the start of transmission, which is sent first.
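A minimal sketch of this strictly prioritized draining (illustrative only, not from the patent; it ignores packets arriving during scheduling):

```python
def priority_schedule(queues_by_priority):
    """Bandwidth-optimized mode: fixed-priority scheduling.

    `queues_by_priority` is ordered from highest to lowest priority.
    A lower-priority queue is served only after every higher-priority
    queue has no data left to send.
    """
    order = []
    for q in queues_by_priority:
        while q:                      # drain this priority level first
            order.append(q.pop(0))
    return order
```

With the fig. 9 example (4, 3 and 4 packets in descending priority), the output is all of queue 1's packets, then queue 2's, then queue 3's.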
Third, the multi-user weighted scheduling module is applied in the bandwidth-adjustable mode, i.e., when the users' bandwidth requirements differ. It applies when some user applications need high bandwidth while others are insensitive to bandwidth.
Multi-user weighted scheduling, i.e., the bandwidth-adjustable mode, is particularly suitable for scenarios in which the users' bandwidth requirements differ, and it is the most flexible scheduling manner. As shown in fig. 10, it proceeds as follows:
1001: generating an initial weight value of each user queue according to the bandwidth required by the service type applied by each user;
1002: the software application obtains the weight value of each user queue through a hardware interface and selects the user queue with the largest weight value for scheduling; if more than one user queue shares the largest weight value, those queues are scheduled in queue order;
1003: adjust the weight values, specifically: reduce the weight value of the selected user queue, i.e., the queue scheduled in 1002; the weight values of the unscheduled user queues may also be increased;
1004: judge whether all data to be sent in all user queues has been sent, or whether a switch to another scheduling mode has occurred; if neither, return to 1002.
In 1003 above, the weight values of the user queues may specifically be adjusted as follows: subtract the total weight from the selected queue's weight to obtain its intermediate weight (the other queues' current weights serve directly as their intermediate weights), and then add each queue's initial weight to its intermediate weight to obtain its weight for the next round of scheduling. After 1003 has been executed repeatedly and all data to be sent has been sent, the weights all become 0. Suppose there are currently three users: user 0, user 1, and user 2, with initial weight values of 1, 5, and 1, respectively. The weight values and intermediate weight values computed in each round of scheduling are shown in table 1 below.
Table 1: example of Multi-user weighted scheduling
(Table 1 is reproduced as an image in the original publication; it lists the weight value and intermediate weight value of each user queue in every scheduling round for initial weights 1, 5 and 1.)
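The update rule of 1003 closely resembles smooth weighted round-robin. The following sketch (illustrative only, not part of the patent; step 1001 is stood in for by the given `init_weights`) combines steps 1001 to 1004:

```python
def weighted_schedule(queues, init_weights):
    """Bandwidth-adjustable mode: multi-user weighted scheduling.

    `init_weights` stands in for step 1001 (weights derived from each
    user's required bandwidth). Each round (1002) picks the non-empty
    queue with the largest weight, ties broken by queue order; the
    adjustment (1003) subtracts the total weight from the selected
    queue's weight, takes the other queues' current weights as their
    intermediate weights, then adds each queue's initial weight; the
    weight of a fully drained queue is set to 0.
    """
    total = sum(init_weights)
    current = list(init_weights)
    order = []
    while any(queues):
        candidates = [i for i, q in enumerate(queues) if q]
        # largest weight wins; on a tie, the smaller queue index wins
        sel = max(candidates, key=lambda i: (current[i], -i))
        order.append(queues[sel].pop(0))
        current[sel] -= total          # intermediate weight of the selected queue
        for i, q in enumerate(queues):
            current[i] = current[i] + init_weights[i] if q else 0
    return order
```

With initial weights 1, 5 and 1 as in the example above, user 1's packets dominate the transmission order while users 0 and 2 each get interleaved slots, matching the 5:1:1 bandwidth split, and once every queue is drained all weights are back at 0.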
The functional module for scheduling management in the embodiment of the present invention may switch among the 3 scheduling modes, as shown in table 2.
Table 2: mode switching list
Pre-switch scheduling mode | Post-switch scheduling mode
Fixed priority scheduling | Multi-user weighted scheduling
Fixed priority scheduling | Multi-user balanced scheduling
Multi-user weighted scheduling | Fixed priority scheduling
Multi-user weighted scheduling | Multi-user balanced scheduling
Multi-user balanced scheduling | Fixed priority scheduling
Multi-user balanced scheduling | Multi-user weighted scheduling
Based on the foregoing, the switching may be global, that is: only one scheduling module runs at a time, and the scheduling management function module activates the module corresponding to the new scheduling mode while stopping the others. Alternatively, switching may be performed at the granularity of user queues, i.e., a user queue is moved to the task queue group corresponding to the new scheduling mode.
The scheduling management function module mainly ensures that, when a service switch occurs, the current scheduling mode is switched to the new scheduling mode stably and seamlessly. As shown in fig. 11, the specific process includes:
1101: in any of the three scheduling modes, detect whether there is a request to switch the scheduling policy;
1102: if there is no request to switch the scheduling policy, continue executing 1101; if there is such a request, proceed to 1103;
1103: detect whether the current round of scheduling has finished, i.e., query in turn whether each user queue has completed the current scheduling; if so, proceed to 1104; otherwise, wait for each user queue to finish the current round and then proceed to 1104;
1104: after confirming that every user queue has completed the current round, execute scheduling with the scheduling policy specified by the switch request.
In 1104, a hard handover may be adopted: the scheduling module corresponding to the policy specified by the switch request is started and the other scheduling modules are closed, thereby realizing the policy switch. The scheduling module in the working state then schedules the data of each user queue in the dynamic area.
In fig. 11, only the multi-user weighted scheduling branch is labeled; the other scheduling modes are not described in detail here.
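The deferred, round-aligned switch of steps 1101 to 1104 can be sketched as follows (illustrative names only; the patent realizes this in scheduling-management logic, not necessarily in software):

```python
import threading

class ScheduleManager:
    """Defer a scheduling-mode switch until the current round ends
    (steps 1101-1104), so no round is interrupted and no packet lost."""

    def __init__(self, mode):
        self.mode = mode          # scheduling mode currently in effect
        self.pending = None       # requested mode, not yet applied
        self.lock = threading.Lock()

    def request_switch(self, new_mode):
        # 1101/1102: a switch request is only recorded here ...
        with self.lock:
            self.pending = new_mode

    def end_of_round(self):
        # 1103/1104: ... and applied only once every user queue has
        # completed the current round of scheduling (a hard handover
        # would start the new module and close the others here).
        with self.lock:
            if self.pending is not None:
                self.mode, self.pending = self.pending, None
            return self.mode
```

A request made mid-round leaves the current mode untouched; only the next call to `end_of_round` applies it, which is what makes the switch zero-loss.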
An embodiment of the present invention further provides an integrated chip, as shown in fig. 12, including:
a scheduling module 1201, a scheduling control module 1202, and a receiving module 1203;
the receiving module 1203 is configured to receive data sent by at least two traffic initiators;
the scheduling control module 1202 is configured to obtain service requirement information of at least two traffic initiators, where the service requirement information refers to requirement information of a service corresponding to data to be sent by a traffic initiator; determining a scheduling strategy according to the service demand information;
the scheduling module 1201 is configured to perform scheduling on the data sent by the traffic initiator using the scheduling policy.
In the integrated chip shown in fig. 12, the number of modules shown for the receiving module 1203 and the scheduling module 1201 should not be construed as limiting the embodiment of the present invention.
In this embodiment, the scheduling control module 1202 implements the scheduling management function; refer to the foregoing description of scheduling management, which is not repeated here. The receiving module 1203 may be located in the dynamic area and may have a storage function to temporarily store the data sent by the traffic initiators. The scheduling module 1201 is the module that executes scheduling. It may correspond to one particular scheduling policy and be started when the scheduling control module 1202 selects that policy, or it may support multiple scheduling policies and execute whichever policy the scheduling control module 1202 has currently determined; in either case, the scheduling module 1201 may be a hardware module of the FPGA or a functional module existing in software form.
In a possible implementation manner, the scheduling module 1201 is further configured to, before performing scheduling on the data sent by the traffic initiator using the scheduling policy, wait for the current round of scheduling execution under the initial scheduling policy to finish when that data is being scheduled using the initial scheduling policy, and only then perform scheduling on the data using the scheduling policy; the initial scheduling policy is the scheduling policy used before the scheduling policy.
In a possible implementation manner, the scheduling module 1201 is further configured to receive a handover request message from a traffic initiator before waiting for the round of scheduling execution under the initial scheduling policy to finish, where the handover request message requests that scheduling be executed using the scheduling policy;
or, the scheduling module 1201 is further configured to determine, before waiting for the round of scheduling execution under the initial scheduling policy to finish, that using the scheduling policy improves scheduling efficiency by more than a threshold value compared with using the initial scheduling policy.
In a possible implementation manner, the scheduling control module 1202 is configured to obtain at least one of a bandwidth requirement, a delay requirement, and priority information in the service requirement information of the traffic initiator; and selecting a scheduling strategy from the scheduling strategies to be selected according to the service requirement information.
In a possible implementation manner, the candidate scheduling policy includes: a bandwidth optimizing mode, a bandwidth sharing mode and a bandwidth adjustable mode;
the bandwidth optimization mode is a scheduling strategy which is scheduled in sequence from high to low according to the service priority;
the bandwidth sharing mode is a scheduling strategy of balanced scheduling;
the bandwidth adjustable mode is a scheduling strategy which is scheduled from high to low according to the weight of a traffic initiator, and after each scheduling, the weight value of the unscheduled traffic initiator relative to the scheduled traffic initiator is increased.
In a possible implementation manner, the scheduling module 1201 is configured to store the data sent by the traffic initiator in the traffic initiator's task queue, allocate that task queue to the task queue group corresponding to the scheduling policy, and wait for the scheduling module 1201 that performs scheduling with the scheduling policy to execute scheduling.
In a possible implementation manner, the scheduling module 1201 is configured to store data sent by the traffic initiator into a task queue of the traffic initiator, close other scheduling modules 1201 except the scheduling module 1201 that performs scheduling using the scheduling policy, and wait for the scheduling module 1201 that performs scheduling using the scheduling policy to perform scheduling.
Referring to fig. 13, fig. 13 is a schematic block diagram of a traffic management device according to another embodiment of the present application. The traffic management device shown in fig. 13 may include: one or more processors 1301, one or more input devices 1302, one or more output devices 1303, and a memory 1304, connected by a bus 1305. The memory 1304 stores a computer program comprising program instructions, and the processor 1301 executes the program instructions stored in the memory 1304; in particular, the processor 1301 calls the program instructions to execute the scheduling control method of the foregoing method embodiments. The input device 1302 may receive data from a traffic initiator and may further receive requests from a traffic initiator. The output device 1303 may send the traffic initiator's data after the processor 1301 has executed the scheduling control. It is understood that if the data sent by the traffic initiator only needs to be processed inside the traffic management device and requires no feedback, the output device 1303 need not send a processing result or forward the data.
The Processor 1301 may be a Central Processing Unit (CPU), or other general-purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Input device 1302 may include a radio frequency antenna, interface hardware, an inter-program interface, etc., and output device 1303 may include a radio frequency antenna, interface hardware, an inter-program interface, etc. The input device 1302 may implement functionality relating to receiving data in method embodiments of the invention.
The memory 1304, which may include both read-only memory and random-access memory, provides instructions and data to the processor 1301. A portion of memory 1304 may also include non-volatile random access memory. For example, the memory 1304 may also buffer data sent by traffic initiators.
In a specific implementation, the processor 1301, the input device 1302, and the output device 1303 described in this embodiment of the present application may execute the scheduling control functions described in any method provided in the embodiments of the present application, which are not described in detail here again.
The embodiment of the present invention further provides a storage medium, where a plurality of program instructions are stored in the storage medium, and the program instructions are suitable for being loaded by a processor and executing any one of the traffic management methods provided in the embodiments of the present invention.
Embodiments of the present invention further provide a computer program product, where the computer program product includes multiple program instructions, and the program instructions are suitable for being loaded by a processor and executing any one of the traffic management methods provided in the embodiments of the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is a possible division of logical functions, and an actual implementation may have another division, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment. In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions described above in accordance with the embodiments of the invention may be generated, in whole or in part, when the computer program instructions described above are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on or transmitted over a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) means. The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that includes one or more of the available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Versatile Disk (DVD)), a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium includes: various media that can store program codes, such as a read-only memory (ROM) or a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (12)

1. An integrated chip, comprising:
the system comprises a scheduling module, a scheduling control module and a receiving module;
the receiving module is used for receiving data sent by at least two traffic initiators; the data sent by the at least two traffic initiators are respectively stored in the data queues corresponding to the traffic initiators;
the scheduling control module is used for obtaining service requirement information of the at least two traffic initiators, wherein the service requirement information refers to the requirement information of the service corresponding to the data to be sent by each traffic initiator; determining a scheduling strategy according to the service requirement information of each traffic initiator, and allocating the data queue corresponding to each traffic initiator to the queue group of the corresponding scheduling strategy according to the determined scheduling strategy, wherein different scheduling modules correspond to different queue groups;
the scheduling module is configured to perform scheduling on the data sent by the traffic initiators by using the scheduling policy;
when the scheduling module executes scheduling, the scheduling module is specifically used for generating an initial weight value of the data queue corresponding to each traffic initiator in a queue group according to the bandwidth required by the service type applied by each traffic initiator in the queue group, obtaining the weight value of the data queue corresponding to each traffic initiator in the queue group, and selecting the data queue with the largest weight value in the queue group for scheduling; if more than one data queue in the queue group shares the largest weight value, scheduling those data queues in queue order; after one round of scheduling ends, adjusting the weight value of each data queue in the queue group, and repeatedly scheduling the data queue with the largest weight value in the queue group according to the adjusted weight values until all data to be sent in all the data queues in the queue group have been sent or a switch to another scheduling mode occurs;
when the weight values of the data queues are adjusted, the scheduling module is specifically configured to subtract the total weight from the weight value of the selected queue in the queue group to obtain the intermediate weight value of the selected queue, directly take the current weight values of the queues in the queue group other than the selected queue as the intermediate weight values of the corresponding data queues, and then add the initial weight value of each data queue in the queue group to its intermediate weight value to obtain the adjusted weight value, wherein the weight value of a data queue whose data has all been sent is set to 0, and the selected queue refers to the data queue in the queue group from which data was most recently scheduled and sent.
2. The integrated chip of claim 1,
the scheduling module is further configured to wait for the end of one-time scheduling execution of an initial scheduling policy when data sent by the traffic initiator is scheduled by using the initial scheduling policy before the scheduling policy is used to perform scheduling on the data sent by the traffic initiator, and then perform scheduling on the data sent by the traffic initiator by using the scheduling policy; the initial scheduling policy is a scheduling policy used before the scheduling policy.
3. The integrated chip of claim 2,
the scheduling module is further configured to receive a handover request message from the traffic initiator before the scheduling module waits for the end of one-time scheduling execution of the initial scheduling policy, where the handover request message requests that scheduling be executed using the scheduling policy;
or, the scheduling module is further configured to determine that, compared with the use of the initial scheduling policy, the amount of improvement in scheduling efficiency exceeds a threshold value when the scheduling policy is used before the end of one-time scheduling execution of the initial scheduling policy is waited.
4. The integrated chip of claim 1,
the scheduling control module is used for acquiring at least one of bandwidth requirement, delay requirement and priority information in the service requirement information of the traffic initiator; and selecting a scheduling strategy from the candidate scheduling strategies according to the service requirement information.
5. The integrated chip of claim 4,
the candidate scheduling policy comprises: a bandwidth optimizing mode, a bandwidth sharing mode and a bandwidth adjustable mode;
the bandwidth optimization mode is a scheduling strategy which is scheduled in sequence from high to low according to the service priority;
the bandwidth sharing mode is a scheduling strategy of balanced scheduling;
the bandwidth adjustable mode is a scheduling strategy which is scheduled from high to low according to the weight of a traffic initiator, and after each round of scheduling, the weight value of the unscheduled traffic initiator relative to the scheduled traffic initiator is increased.
6. A method of traffic management, comprising:
receiving data sent by at least two traffic initiators; the data sent by the at least two traffic initiators are respectively stored in the data queues corresponding to the traffic initiators;
acquiring service requirement information of the at least two traffic initiators, wherein the service requirement information refers to the requirement information of the service corresponding to the data to be sent by each traffic initiator; determining a scheduling strategy according to the service requirement information of each traffic initiator, and allocating the data queue corresponding to each traffic initiator to the queue group of the corresponding scheduling strategy according to the determined scheduling strategy;
performing scheduling on the data sent by the traffic initiators by using the scheduling strategy;
when scheduling is executed, the method specifically includes: generating an initial weight value of the data queue corresponding to each traffic initiator in a queue group according to the bandwidth required by the service type applied by each traffic initiator in the queue group, obtaining the weight value of the data queue corresponding to each traffic initiator in the queue group, and selecting the data queue with the largest weight value in the queue group for scheduling; if more than one data queue in the queue group shares the largest weight value, scheduling those data queues in queue order; after one round of scheduling ends, adjusting the weight value of each data queue in the queue group, and repeatedly scheduling the data queue with the largest weight value in the queue group according to the adjusted weight values until all data to be sent in all the data queues in the queue group have been sent or a switch to another scheduling mode occurs;
when the weight values of the data queues are adjusted, subtracting the total weight from the weight value of the selected queue in the queue group to obtain the intermediate weight value of the selected queue, directly taking the current weight values of the queues in the queue group other than the selected queue as the intermediate weight values of the corresponding data queues, and then adding the initial weight value of each data queue in the queue group to its intermediate weight value to obtain the adjusted weight value, wherein the weight value of a data queue whose data has all been sent is set to 0, and the selected queue refers to the data queue in the queue group from which data was most recently scheduled and sent.
7. The method of claim 6, wherein before the scheduling of the data transmitted by the traffic initiator using the scheduling policy, the method comprises:
when the data sent by the flow initiator is scheduled by using an initial scheduling strategy, waiting for the completion of one-time scheduling execution of the initial scheduling strategy; the initial scheduling policy is a scheduling policy used before the scheduling policy.
8. The method of claim 7, wherein before the waiting for the initial scheduling policy to schedule execution once, the method further comprises:
receiving a handover request message from the traffic initiator, the handover request message requesting to perform scheduling using the scheduling policy;
or determining that the scheduling efficiency improvement amount exceeds a threshold value by using the scheduling strategy compared with the initial scheduling strategy.
9. The method of claim 6, wherein the obtaining the service requirement information of the traffic initiator and determining the scheduling policy according to the service requirement information comprises:
acquiring at least one item of bandwidth requirement, time delay requirement and priority information in the service requirement information of the traffic initiator; and selecting a scheduling strategy from the candidate scheduling strategies according to the service requirement information.
10. The method of claim 9, wherein the candidate scheduling policy comprises: a bandwidth optimizing mode, a bandwidth sharing mode and a bandwidth adjustable mode;
the bandwidth optimization mode is a scheduling strategy which is scheduled in sequence from high to low according to the service priority;
the bandwidth sharing mode is a scheduling strategy of balanced scheduling;
the bandwidth adjustable mode is a scheduling strategy which is scheduled from high to low according to the weight of a traffic initiator, and after each round of scheduling, the weight value of the unscheduled traffic initiator relative to the scheduled traffic initiator is increased.
11. A traffic management apparatus comprising an input device, an output device, a memory for storing program instructions, and a processor, the program instructions being adapted to be loaded by the processor;
the input device is configured to receive data sent by a traffic initiator;
the processor is configured to load the program instructions and execute the traffic management method according to any one of claims 6 to 10.
12. A storage medium storing a plurality of program instructions, the program instructions being adapted to be loaded by a processor to perform the traffic management method according to any one of claims 6 to 10.
CN201810548610.3A 2018-05-31 2018-05-31 Flow management method, integrated chip and device Active CN110213178B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810548610.3A CN110213178B (en) 2018-05-31 2018-05-31 Flow management method, integrated chip and device

Publications (2)

Publication Number Publication Date
CN110213178A CN110213178A (en) 2019-09-06
CN110213178B true CN110213178B (en) 2022-08-12

Family

ID=67778866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810548610.3A Active CN110213178B (en) 2018-05-31 2018-05-31 Flow management method, integrated chip and device

Country Status (1)

Country Link
CN (1) CN110213178B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112583739A (en) * 2019-09-30 2021-03-30 华为技术有限公司 Scheduling method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101459699A (en) * 2008-12-25 2009-06-17 华为技术有限公司 Method and apparatus for network address conversion
CN101938403A (en) * 2009-06-30 2011-01-05 中国电信股份有限公司 Assurance method of multi-user and multi-service quality of service and service access control point
CN102300326A (en) * 2010-06-28 2011-12-28 中兴通讯股份有限公司 Scheduling method of multi-user multi-input multi-output (MIMO) communication system and base station

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020073129A1 (en) * 2000-12-04 2002-06-13 Yu-Chung Wang Integrated multi-component scheduler for operating systems
CN103476062B (en) * 2012-06-06 2015-05-27 华为技术有限公司 Data flow scheduling method, equipment and system
CN104717158B (en) * 2015-03-02 2019-03-05 中国联合网络通信集团有限公司 A kind of method and device adjusting bandwidth scheduling strategy
CN107196877B (en) * 2016-03-14 2021-07-20 华为技术有限公司 Method for controlling network flow and network equipment thereof
CN107483363B (en) * 2017-08-15 2020-04-14 无锡职业技术学院 Layered weighted polling scheduling device and method

Also Published As

Publication number Publication date
CN110213178A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
US8996756B2 (en) Using process location to bind IO resources on NUMA architectures
US8539498B2 (en) Interprocess resource-based dynamic scheduling system and method
JP2940450B2 (en) Job scheduling method and apparatus for cluster type computer
EP1256039B1 (en) Workload management in a computing environment
CN109564528B (en) System and method for computing resource allocation in distributed computing
JP2001142854A (en) Processing of channel subsystem holding input/output work queue based on priority
Wang et al. Application-aware offloading policy using SMDP in vehicular fog computing systems
KR20200017589A (en) Cloud server for offloading task of mobile node and therefor method in wireless communication system
US20090228895A1 (en) Method and system for polling network controllers
Kovacevic et al. Cloud and edge computation offloading for latency limited services
US8831026B2 (en) Method and apparatus for dynamically scheduling requests
US11201824B2 (en) Method, electronic device and computer program product of load balancing for resource usage management
US11838389B2 (en) Service deployment method and scheduling apparatus
CN110213178B (en) Flow management method, integrated chip and device
US8171484B2 (en) Resource management apparatus and radio network controller
Li et al. A network-aware scheduler in data-parallel clusters for high performance
CN105027656B (en) Method for scheduling user's set in a communication network is executed by communication network node
CN107634978B (en) Resource scheduling method and device
CN110515564B (en) Method and device for determining input/output (I/O) path
CN114640630B (en) Flow control method, device, equipment and readable storage medium
US10853138B2 (en) Scheduling resource usage
KR20150012071A (en) Apparatus and method for allocating computing resource to multiple users
Kumar et al. Two-Level Priority Task Scheduling Algorithm for Real-Time IoT Based Storage Condition Assessment System
Lu et al. An adaptive algorithm for resolving processor thrashing in load distribution
WO2022102087A1 (en) Computer system and flow control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant