CN117768375A - Data scheduling method, device, equipment and computer readable storage medium - Google Patents

Publication number
CN117768375A
Authority
CN
China
Prior art keywords
data
forwarding
node
target
priority
Legal status
Pending (the legal status is an assumption and is not a legal conclusion)
Application number
CN202311765963.6A
Other languages
Chinese (zh)
Inventor
柴双林
鄢贵海
卢文岩
原德鹏
孙云刚
Current Assignee
Yusur Technology Co ltd
Original Assignee
Yusur Technology Co ltd
Application filed by Yusur Technology Co ltd
Priority application: CN202311765963.6A

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure relates to a data scheduling method, apparatus, device, and computer-readable storage medium. The method comprises: in response to a data forwarding request, selecting a target forwarding outlet for the data according to scheduling node parameters and a preset rule; when forwarding preemption occurs for the data at a node, determining a forwarding path for the data according to the priority of the data and the target forwarding outlet; and scheduling and forwarding the data according to the forwarding path. By planning nodes and paths in this way, the method improves network bandwidth utilization, improves chip resource utilization, and reduces delay.

Description

Data scheduling method, device, equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data scheduling method, apparatus, device, and computer readable storage medium.
Background
Among interconnection network topologies for multiprocessor systems, a direct network has better symmetry than a butterfly network, which makes timing closure of the node-to-node interconnect easier to achieve. Mesh networks and Torus networks are common direct network topologies; compared with a Mesh network, a Torus network is fully symmetrical and scales better.
In the prior art, forwarding paths are generally derived from a Torus network routing algorithm. Such implementations evaluate the consumption of hardware logic and wiring resources insufficiently, leading to low utilization of on-chip Torus network forwarding bandwidth, low chip resource utilization, and high delay.
Disclosure of Invention
To solve, or at least partially solve, the above technical problems, the present disclosure provides a data scheduling method, apparatus, device, and computer readable storage medium that improve network bandwidth utilization.
In a first aspect, an embodiment of the present disclosure provides a data scheduling method, including:
in response to a data forwarding request, selecting a target forwarding outlet for the data according to scheduling node parameters and a preset rule;
when forwarding preemption occurs for the data at a node, determining a forwarding path for the data according to the priority of the data and the target forwarding outlet; and
scheduling and forwarding the data according to the forwarding path.
In a second aspect, an embodiment of the present disclosure provides a data scheduling apparatus, including:
a selection module, configured to select, in response to a data forwarding request, a target forwarding outlet for the data according to scheduling node parameters and a preset rule;
a determining module, configured to determine, when forwarding preemption occurs for the data at a node, a forwarding path for the data according to the priority of the data and the target forwarding outlet; and
a scheduling module, configured to schedule and forward the data according to the forwarding path.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method according to the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium having stored thereon a computer program for execution by a processor to implement the method of the first aspect.
In a fifth aspect, embodiments of the present disclosure also provide a computer program product comprising a computer program or instructions which, when executed by a processor, implement the method of the first aspect.
With the data scheduling method, apparatus, device, and computer readable storage medium provided by the embodiments of the present disclosure, a target forwarding outlet for the data is selected, in response to a data forwarding request, according to scheduling node parameters and a preset rule; a forwarding path for the data is determined according to the priority of the data and the target forwarding outlet; and the data is scheduled and forwarded along that path. Nodes and paths are thus planned, which improves network bandwidth utilization, improves chip resource utilization, and reduces delay.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart of a data scheduling method provided in an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an application scenario provided in an embodiment of the present disclosure;
fig. 4 is a schematic diagram of an application scenario provided in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of node input/output data according to an embodiment of the disclosure;
fig. 6 is a schematic structural diagram of a data scheduling apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
Among interconnection network topologies for multiprocessor systems, a direct network has better symmetry than a butterfly network, which makes timing closure of the node-to-node interconnect easier to achieve. Mesh networks and Torus networks are common direct network topologies; compared with a Mesh network, a Torus network is fully symmetrical and scales better.
In the prior art, forwarding paths are generally derived from a Torus network routing algorithm. Such implementations evaluate the consumption of hardware logic and wiring resources insufficiently, leading to low utilization of on-chip Torus network forwarding bandwidth, low chip resource utilization, and high delay. In view of this problem, the embodiments of the present disclosure provide a data scheduling method, described below with reference to specific embodiments.
Fig. 1 is a flowchart of a data scheduling method according to an embodiment of the present disclosure. The method may be executed by a data scheduling apparatus, which may be implemented in software and/or hardware and configured in an electronic device such as a server or a terminal, where the terminal may specifically be a mobile phone, a computer, or a tablet computer. The method can be applied to the application scenario shown in fig. 2; it can be understood that the data scheduling method provided by the embodiments of the present disclosure can also be applied to other scenarios.
The data scheduling method shown in fig. 1 is described below in conjunction with the application scenario shown in fig. 2, where the method includes the following specific steps:
s101, responding to a data forwarding request, and selecting a target forwarding outlet of the data according to the dispatching node parameters and a preset rule.
When the Torus node processes a data forwarding request, the data can be service flow data, and a target forwarding outlet of the data is selected according to node parameters and preset rules of the service flow data.
Optionally, the preset rule includes: and the target forwarding outlet comprises a target node and a target channel according to the shortest path principle.
And selecting a target forwarding outlet of the service flow data according to the node parameters of the service flow data and the path shortest principle. The target forwarding outlet includes a target node and a target channel, the target channel including Out0 or Out1. The shortest path principle is that the traffic flow data flows through the nodes least when being sent to the target node, which is beneficial to reducing the path delay, reducing the occupied routing bandwidth and improving the utilization rate of the routing bandwidth. For example, if the target node is 12, the target channel is Out0, and the data is a traffic flow, then the node 11 forwards the traffic flow to the target node 12 and goes to the target channel Out0; the destination node is 21, the destination channel is Out1, the data is the traffic flow, the node 11 forwards the traffic flow going to the destination node 21, and goes to the destination channel Out1.
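To make the shortest-path principle concrete, the following is a minimal sketch of next-hop selection on a small unidirectional Torus. It assumes dimension-ordered routing (column index first, then row index) and (row, col) node ids; the function names and routing order are illustrative assumptions, not the patent's actual implementation.

```python
def torus_next_hop(src, dst, n=4):
    """Pick the next hop toward dst on an n x n unidirectional Torus.

    Links run only in the +column / +row direction and wrap at the
    edge, matching the one-way two-in / one-way two-out node design
    described later in the text. Hypothetical sketch only.
    """
    if src[1] != dst[1]:                      # first move along the column index
        return (src[0], (src[1] + 1) % n)
    if src[0] != dst[0]:                      # then move along the row index
        return ((src[0] + 1) % n, src[1])
    return src                                # already at the destination

def torus_path(src, dst, n=4):
    """Full path from src to dst under the rule above."""
    path = [src]
    while path[-1] != dst:
        path.append(torus_next_hop(path[-1], dst, n))
    return path
```

Under these assumptions, `torus_path((0, 0), (3, 3))` reproduces the 00-01-02-03-13-23-33 example path given in the text.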
S102: when forwarding preemption occurs for the data at a node, determine a forwarding path for the data according to the priority of the data and the target forwarding outlet.
When forwarding preemption occurs for the data at a node, the forwarding path of the data is determined according to the priority of the data and the target forwarding outlet. For example, when the target node is 33, the target channel is Out0 or Out1, and the data is a traffic flow, the forwarding path from node 00 to target node 33 may be 00-01-02-03-13-23-33, and the target channels along it may be Out0-Out1-Out0/Out1.
S103: schedule and forward the data according to the forwarding path.
The data is scheduled and forwarded along the forwarding path described above (i.e., path 00-01-02-03-13-23-33, channels Out0-Out1-Out0/Out1).
According to the embodiments of the present disclosure, a target forwarding outlet for the data is selected, in response to a data forwarding request, according to scheduling node parameters and a preset rule; a forwarding path for the data is determined according to the priority of the data and the target forwarding outlet; and the data is scheduled and forwarded along that path. Nodes and paths are thus planned, which improves network bandwidth utilization, improves chip resource utilization, and reduces delay.
In some embodiments, before the forwarding path of the data is determined according to the priority of the data and the target forwarding outlet, the method further comprises: when forwarding preemption occurs for the data at the scheduling node, determining the priority of the data according to the data's time to live.
That is, when forwarding preemption occurs for the data at the scheduling node, the priority of the data is determined by how long the data has lived in the network.
Optionally, determining the priority of the data according to its time to live includes: acquiring the time to live of the data; sorting the data according to the time to live; and determining the priority of the data according to the sorted order.
Specifically, the time to live of the data is acquired, the data is sorted by time to live, and the priority is determined from the resulting order. That is, data with a longer time to live is processed first; it will be appreciated that the TTL counter is incremented at each node the data is forwarded through.
According to the embodiments of the present disclosure, when forwarding preemption occurs for data at the scheduling node, the time to live of the data is acquired, the data is sorted by time to live, and the priority of the data is determined from the sorted order. This defines the rule by which the scheduling node schedules and forwards preempting data, and makes the logic of the data scheduling method more complete.
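The TTL-based ordering described above can be sketched in a few lines; the packet representation and field names are assumptions for illustration only.

```python
def order_by_priority(packets):
    """Sort contending packets so that the one that has lived longest
    in the network (largest TTL counter, incremented at every hop)
    is forwarded first. Python's sort is stable, so packets with
    equal TTL keep their arrival order."""
    return sorted(packets, key=lambda p: p["ttl"], reverse=True)
```

A packet that has already crossed many nodes thus preempts younger traffic at a contended outlet, which matches the rule that transit data outranks freshly injected data.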
In some embodiments, the method further comprises: judging whether forwarding preemption occurs for the data at the node; and, when no forwarding preemption occurs, determining the forwarding route node according to the transit buffer watermark of the next node.
That is, whether forwarding preemption occurs for the data at the node is judged. When no forwarding preemption occurs, the forwarding route node is determined according to the transit buffer watermark of the next node, provided the chosen target forwarding outlet does not violate the shortest-path principle, so that busy routes are avoided.
Optionally, when forwarding preemption does occur for the data at the node, data whose priority is lower than a threshold is determined to be waiting data, and the waiting data is stored in the transit buffer.
When forwarding preemption occurs for the data at the node, data whose priority is lower than the threshold is determined to be waiting data. The priority of the data is determined by the length of its Time To Live (TTL): the longer the TTL, the higher the priority, and the TTL is incremented at each node the data passes. The waiting data is stored in the transit buffer, which shares storage resources with the landing buffer, reducing resource occupation. It can be understood that transit data has a higher priority than original (locally injected) data.
According to the embodiments of the present disclosure, whether forwarding preemption occurs for the data at the node is judged, and the handling of the data both with and without forwarding preemption is specified. The scheduler comprehensively judges egress congestion and busy nodes according to the node's transit buffer watermark and landing buffer watermark, so that each node is scheduled reasonably and network bandwidth utilization is improved.
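A minimal sketch of the preemption decision described above, assuming a numeric priority per packet and a scalar threshold (both illustrative): packets at or above the threshold win the outlet, and the rest are parked as waiting data in the transit buffer.

```python
def resolve_preemption(contenders, threshold):
    """Split packets contending for one forwarding outlet into those
    forwarded now and those stored as waiting data in the transit
    buffer (priority below the threshold), per the text above."""
    forward = [p for p in contenders if p["priority"] >= threshold]
    waiting = [p for p in contenders if p["priority"] < threshold]
    return forward, waiting
```

In a real node the waiting list would be written into the shared transit/landing buffer rather than returned; the split shown here is only the scheduling decision itself.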
In some embodiments, the data includes a plurality of messages, and forwarding orders and forwarding paths of the plurality of messages are consistent.
The data includes a plurality of messages, for example 3 to 8 of them or any other number; this embodiment is not limited in that respect. The forwarding order and forwarding path of the messages are kept consistent: messages of the same order-preserving queue follow the same path and use the same forwarding outlet, thereby meeting the order-preserving requirement for data messages.
According to the embodiments of the present disclosure, the data comprises a plurality of messages whose forwarding order and forwarding path are consistent. This satisfies the order-preserving requirement for data messages, maximizes Torus network bandwidth in scenarios where high-bandwidth data must be kept in order, and improves the overall performance of the intelligent network card.
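One way to realize the order-preserving behaviour described above is to pin every message of a flow to the route chosen for the flow's first message; the class below is an illustrative sketch, and the flow-key scheme and route-picker callable are assumptions.

```python
class OrderPreservingRouter:
    """Pin all messages of a flow to one (path, outlet) pair so their
    forwarding order and forwarding path stay consistent."""

    def __init__(self, pick_route):
        self.pick_route = pick_route  # callable: flow_id -> (path, outlet)
        self.pinned = {}              # flow_id -> route chosen for its first message

    def route(self, flow_id):
        if flow_id not in self.pinned:
            self.pinned[flow_id] = self.pick_route(flow_id)
        return self.pinned[flow_id]
```

Because later messages reuse the cached route instead of re-running outlet selection, they cannot overtake earlier messages on a different path.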
In some embodiments, as shown in fig. 2, there are 4x4 network nodes (the network can be expanded to an n x n scale; the implementation principle is similar), and 3 traffic flows are input: Ingress0, Ingress1, and Ingress2, each at 100 Mpps. Each node in the 4x4 network has two transit input flows, two transit output flows, one drop (landing) traffic flow, and one transit traffic data flow. Nodes 00 and 11 are selected as the sink points of the original traffic flows, i.e., the 3 original traffic flows enter at nodes 00 and 11. For the traffic flow entering through traffic channel 0 (Ingress0), priority scheduling is set so that nodes 00, 01, 02, 03, 11, 12, 13, 21, 22, 23, 31, 32, and 33 provide processing compute power, while nodes 10, 20, and 30 are avoided so as not to occupy the transit channel bandwidth of traffic flow 1 (Ingress1) as far as possible. As shown in fig. 3, for the traffic flow entering through traffic channel 1 (Ingress1), priority scheduling is set so that nodes 00, 10, 20, 30, 11, 12, 13, 21, 22, 23, 31, 32, and 33 provide processing compute power, while nodes 01, 02, and 03 are avoided so as not to occupy the transit channel bandwidth of traffic flow 0 (Ingress0) as far as possible. As shown in fig. 4, for the traffic flow entering through traffic channel 2 (Ingress2), priority scheduling is set so that nodes 11, 12, 13, 21, 22, 23, 31, 32, and 33 provide processing compute power, while nodes 00, 01, 02, 03, 10, 20, and 30 are avoided so as not to occupy the transit channel bandwidth of traffic flows 0 and 1 as far as possible. In practical applications, the priority scheduling nodes for each traffic flow can be set flexibly according to actual conditions.
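The per-flow node planning in this 4x4 example can be written down as small node sets. The avoid-lists mirror the node lists above, while the helper name and dictionary layout are illustrative assumptions.

```python
# Nodes each ingress flow must avoid so it does not occupy another
# flow's transit-channel bandwidth (4x4 example from the text).
AVOID = {
    "Ingress0": {"10", "20", "30"},
    "Ingress1": {"01", "02", "03"},
    "Ingress2": {"00", "01", "02", "03", "10", "20", "30"},
}
ALL_NODES = {f"{row}{col}" for row in range(4) for col in range(4)}

def preferred_nodes(flow):
    """Compute nodes a flow may be priority-scheduled onto:
    every node except the ones it must avoid."""
    return ALL_NODES - AVOID[flow]
```

For example, `preferred_nodes("Ingress2")` yields the nine-node block 11 through 33, matching the compute nodes listed for traffic channel 2 above.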
According to the embodiments of the present disclosure, each node's transit traffic-flow path adopts one-way two-in and one-way two-out, which avoids the doubled consumption of logic and wiring resources that a bidirectional path would incur and facilitates doubled deployment of computing resources. In addition, the data flow direction of each node is designed to run from northwest to southeast, and the traffic injection points are placed at edge nodes. This reduces data processing delay in general service scenarios, maximizes transmission bandwidth when the computing nodes run fully loaded in parallel, and avoids path blocking caused by data flowing backward or against the flow direction. Data produced after computation at each Torus node is preferentially scheduled to the next node along the data flow direction, improving Torus network bandwidth.
In some embodiments, as shown in fig. 5, two unidirectional transit traffic flows enter each node and two transit traffic flows leave it. Node 00 additionally carries two original service data flows; in a general service scenario, these flows occupy transit-flow bandwidth when entering the Torus node, and they access the Torus network fully loaded. Node 00 reserves a transit input channel to implement fully connected Torus network data forwarding. When the traffic-flow processing node scheduled is the local node, the service data is dropped directly at the local node without occupying egress bandwidth. The landing bandwidth equals the sum of the two transit input bandwidths, which avoids landing preemption occupying transit bandwidth; the service data lands directly without further transit. The switching controller can thus schedule the original traffic flow, the transit flow, the drop data flow, and the forward data flow.
Fig. 6 is a schematic structural diagram of a data scheduling apparatus according to an embodiment of the present disclosure. The data scheduling apparatus may be the terminal described in the above embodiments, or a part or component of that terminal. The data scheduling apparatus provided in the embodiments of the present disclosure may execute the processing flow provided in the embodiments of the data scheduling method. As shown in fig. 6, the data scheduling apparatus 60 includes: a selection module 61, a first determining module 62, and a scheduling module 63. The selection module 61 is configured to select, in response to a data forwarding request, a target forwarding outlet for the data according to scheduling node parameters and a preset rule; the first determining module 62 is configured to determine, when forwarding preemption occurs for the data at a node, a forwarding path for the data according to the priority of the data and the target forwarding outlet; and the scheduling module 63 is configured to schedule and forward the data according to the forwarding path.
Optionally, the preset rule includes the shortest-path principle, and the target forwarding outlet comprises a target node and a target channel.
Optionally, the data scheduling device 60 includes: a second determining module 64, configured to determine, when forwarding preemption occurs for the data at the scheduling node, the priority of the data according to the time to live of the data.
Optionally, the second determining module 64 is further configured to acquire the time to live of the data, sort the data according to the time to live, and determine the priority of the data according to the sorted order.
Optionally, the data scheduling device 60 includes: a judging module, configured to judge whether forwarding preemption occurs for the data at the node and, when no forwarding preemption occurs, determine the forwarding route node according to the transit buffer watermark of the next node.
Optionally, the data includes a plurality of messages, and forwarding orders and forwarding paths of the plurality of messages are consistent.
Optionally, the data scheduling device 60 includes: a storage module, configured to determine, when forwarding preemption occurs for the data at the node, that data whose priority is lower than a threshold is waiting data, and to store the waiting data in the transit buffer.
The data scheduling device of the embodiment shown in fig. 6 may be used to implement the technical solution of the embodiment of the data scheduling method, and its implementation principle and technical effects are similar, and are not repeated here.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. The electronic device may be a terminal as described in the above embodiments. The electronic device provided in the embodiment of the present disclosure may execute the processing flow provided in the embodiment of the data scheduling method, as shown in fig. 7, where the electronic device 70 includes: memory 71, processor 72, computer programs and communication interface 73; wherein the computer program is stored in the memory 71 and configured to be executed by the processor 72 for performing the data scheduling method as described above.
In addition, the embodiment of the present disclosure also provides a computer-readable storage medium having stored thereon a computer program that is executed by a processor to implement the data scheduling method described in the above embodiment.
Furthermore, the disclosed embodiments also provide a computer program product comprising a computer program or instructions which, when executed by a processor, implements a data scheduling method as described above.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
in response to a data forwarding request, select a target forwarding outlet for the data according to scheduling node parameters and a preset rule;
when forwarding preemption occurs for the data at a node, determine a forwarding path for the data according to the priority of the data and the target forwarding outlet; and
schedule and forward the data according to the forwarding path.
In addition, the electronic device may also perform other steps in the data scheduling method as described above.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of data scheduling, the method comprising:
in response to a data forwarding request, selecting a target forwarding outlet of the data according to scheduling node parameters and a preset rule;
based on occurrence of forwarding preemption of the data at a node, determining a forwarding path of the data according to a priority of the data and the target forwarding outlet; and
scheduling and forwarding the data according to the forwarding path.
2. The method of claim 1, wherein the preset rule comprises a shortest-path principle, and the target forwarding outlet comprises a target node and a target channel.
3. The method of claim 2, wherein prior to determining a forwarding path for the data based on the priority of the data and the target forwarding outlet, the method further comprises:
determining the priority of the data according to a time-to-live of the data.
4. The method according to claim 3, wherein determining the priority of the data according to the time-to-live of the data comprises:
acquiring the time-to-live of the data; and
determining the priority of the data according to the time-to-live of the data.
5. The method according to claim 1, wherein the method further comprises:
determining whether forwarding preemption of the data occurs at the node; and
based on no forwarding preemption of the data occurring at the node, determining a forwarding route node according to a transit buffer watermark of a next node.
6. The method of claim 1, wherein the data comprises a plurality of messages, and the forwarding order and the forwarding path of the plurality of messages are consistent.
7. The method according to claim 1, wherein the method further comprises:
based on forwarding preemption of the data occurring at the node, determining data with a priority lower than a threshold as waiting data, and storing the waiting data into a transit buffer.
8. A data scheduling apparatus, the apparatus comprising:
a selection module, configured to select, in response to a data forwarding request, a target forwarding outlet of the data according to scheduling node parameters and a preset rule;
a first determining module, configured to determine, based on occurrence of forwarding preemption of the data at a node, a forwarding path of the data according to a priority of the data and the target forwarding outlet;
and the scheduling module is used for scheduling and forwarding the data according to the forwarding path.
9. An electronic device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any of claims 1-4.
10. A computer readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the method according to any of claims 1-4.
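The claimed flow (select a shortest-path target forwarding outlet, derive a priority from the data's time-to-live, and, under forwarding preemption at a node, demote low-priority data to a transit buffer as waiting data) can be sketched as follows. This is an illustrative reading of the claims only, not the patented implementation: the function names, the TTL-to-priority mapping, the first-hop contention test, and the preemption threshold are all hypothetical choices.

```python
import heapq

def ttl_priority(ttl):
    """Claims 3-4: map time-to-live to a priority.
    Hypothetical mapping: less remaining TTL -> more urgent -> higher priority."""
    return 1.0 / (1 + ttl)

def shortest_path(graph, src, dst):
    """Claim 2: select the target forwarding outlet by the shortest-path
    principle (plain Dijkstra over per-link costs)."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:                      # reconstruct src -> dst path
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return list(reversed(path))
        if d > dist[u]:                   # stale heap entry
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return None                           # dst unreachable

def schedule(graph, requests, preempt_threshold=0.2):
    """Claims 1 and 7: route each request highest-priority first; when two
    flows contend for the same first hop (a stand-in for 'forwarding
    preemption at the node'), data below the threshold becomes waiting
    data stored in a transit buffer."""
    transit_buffer, forwarded = [], []
    for req in sorted(requests, key=lambda r: -ttl_priority(r["ttl"])):
        path = shortest_path(graph, req["src"], req["dst"])
        if path is None:
            continue
        prio = ttl_priority(req["ttl"])
        contended = any(
            len(f["path"]) > 1 and len(path) > 1 and f["path"][1] == path[1]
            for f in forwarded
        )
        if contended and prio < preempt_threshold:
            transit_buffer.append(req)    # claim 7: waiting data
        else:
            forwarded.append({"path": path, "priority": prio})
    return forwarded, transit_buffer
```

With two flows from A to C over the same first hop, the nearly expired flow (small TTL, high priority) is forwarded while the low-priority flow is parked in the transit buffer, mirroring claims 1 and 7.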
Application CN202311765963.6A (priority and filing date 2023-12-20) — Data scheduling method, device, equipment and computer readable storage medium — status: Pending — published as CN117768375A

Priority Applications (1)

Application Number: CN202311765963.6A · Priority Date: 2023-12-20 · Filing Date: 2023-12-20 · Title: Data scheduling method, device, equipment and computer readable storage medium


Publications (1)

Publication Number: CN117768375A · Publication Date: 2024-03-26

Family

ID=90319401

Family Applications (1)

Application Number: CN202311765963.6A · Status: Pending · Publication: CN117768375A

Country Status (1)

Country: CN · Publication: CN117768375A

Similar Documents

Publication Publication Date Title
US7809006B2 (en) Routing with virtual channels
US8223650B2 (en) Express virtual channels in a packet switched on-chip interconnection network
CN112003787B (en) Routing path determining method, device, control equipment and storage medium
JP4995101B2 (en) Method and system for controlling access to shared resources
US20130148506A1 (en) Bufferless nonblocking networks on chip
CN110891093A (en) Method and system for selecting edge computing node in delay sensitive network
Aujla et al. An ensembled scheme for QoS-aware traffic flow management in software defined networks
Shi et al. Real-time communication analysis with a priority share policy in on-chip networks
CN112468412A (en) Method for generating schedules for mixed-critical computer networks
WO2022213817A1 (en) Routing method and routing apparatus
Kentis et al. Effects of port congestion in the gate control list scheduling of time sensitive networks
CN113543210B (en) 5G-TSN cross-domain QoS and resource mapping method, equipment and computer readable storage medium
CN115277429B (en) Power communication service resource allocation method and device based on flexible Ethernet
US20230029812A1 (en) Method to configure real-time communications in a network with time-triggered and rate-constrained traffic
EP3063969B1 (en) System and method for traffic engineering using link buffer status
Kurbanov et al. Deadlock-free routing in spacewire onboard network
González-Ortega et al. LOBS-H: an enhanced OBS with wavelength sharable home circuits
JP2001197110A (en) Traffic control method
US10764191B2 (en) Device and method for managing end-to-end connections
CN117768375A (en) Data scheduling method, device, equipment and computer readable storage medium
Liu et al. A dependency-graph based priority assignment algorithm for real-time traffic over NoCs with shared virtual-channels
Finzi et al. Breaking vs. solving: Analysis and routing of real-time networks with cyclic dependencies using network calculus
Bouillard et al. Worst-case analysis of tandem queueing systems using network calculus
JP2002359634A (en) Method and device for designing communication path and program
Cobb et al. A theory of multi‐channel schedulers for quality of service

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination