CN117938750B - Method, device, equipment, storage medium and product for processing scheduling route information - Google Patents

Method, device, equipment, storage medium and product for processing scheduling route information

Info

Publication number
CN117938750B
CN117938750B
Authority
CN
China
Prior art keywords
scheduled
data
data flow
link
network device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410335042.4A
Other languages
Chinese (zh)
Other versions
CN117938750A (en)
Inventor
姬雪枫
单国志
王金柱
陈诏和
陈捷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202410335042.4A priority Critical patent/CN117938750B/en
Publication of CN117938750A publication Critical patent/CN117938750A/en
Application granted granted Critical
Publication of CN117938750B publication Critical patent/CN117938750B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

An embodiment of the present application provides a method, an apparatus, a device, a storage medium, and a product for processing scheduling routing information, including the following steps: selecting, according to the traffic volume of a data stream to be scheduled and its target network address, the first-hop network device after the data stream is sent out from the source network device; sequentially determining, based on the target network address and the selected first-hop network device, the other hop network devices of the data stream in the transmission process, where the (i+1)-th-hop network device of the data stream is determined according to the queried routing information of the i-th-hop network device; determining a scheduling link according to each hop network device of the data stream in the transmission process; and, if the traffic volume matches the available capacity of the scheduling link, sending scheduling routing information corresponding to the data stream to the source network device according to the scheduling link. The technical solution of the embodiment can determine the specific path of a data stream, enabling accurate bandwidth assessment and helping save network bandwidth resources.

Description

Method, device, equipment, storage medium and product for processing scheduling route information
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, a storage medium, and a product for processing scheduling routing information.
Background
With the gradual development of computer technology, High-Performance Computing (HPC) networks have become important infrastructure in fields such as industry and scientific research owing to their high bandwidth and low latency. An HPC network typically adopts a fat-tree architecture, a networking mode comprising multiple levels of switching nodes, which provides a fully connected network so that the computing nodes accessing the network can exchange data with one another.
Currently, a control device that manages the switching nodes may respond to congestion alarms in the network; specifically, it may adjust a data flow to be scheduled onto the link with the lowest load to clear the alarm by injecting a 32-bit Border Gateway Protocol (BGP) route into the source switching node, i.e., by designating the next hop. Although this approach can reduce congestion to some extent, in order to prevent route jitter from making a next hop unreachable, multiple equivalent next hops are usually specified when the route is issued. Because the control device cannot determine which of the multiple equivalent next hops each scheduled data stream actually takes, it must ensure that every idle link can accommodate all the scheduled data streams, which leads to waste of network bandwidth resources.
Disclosure of Invention
The embodiment of the application provides a processing method, a device, equipment, a storage medium and a product for scheduling routing information, which can determine a specific path of a data stream to be scheduled, further can carry out accurate bandwidth assessment based on the specific path, and is beneficial to saving network bandwidth resources.
In a first aspect, an embodiment of the present application provides a method for processing scheduling routing information, including:
selecting, according to the traffic volume of a data stream to be scheduled and a target network address, a first-hop network device after the data stream to be scheduled is sent out from a source network device;
sequentially determining, based on the target network address and the selected first-hop network device, the other hop network devices of the data stream to be scheduled in the transmission process; wherein the (i+1)-th-hop network device of the data stream to be scheduled is determined according to the queried routing information of the i-th-hop network device, and i is a positive integer;
determining a scheduling link of the data stream to be scheduled according to each hop network device of the data stream to be scheduled in the transmission process;
and if the traffic volume matches the available capacity of the scheduling link, sending scheduling routing information corresponding to the data stream to be scheduled to the source network device according to the scheduling link.
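The hop-by-hop determination in the steps above can be sketched in Python. The routing-table format, device names, and the choice of the first candidate next hop are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch of hop-by-hop path resolution: the (i+1)-th hop is
# looked up in the i-th hop's routing information until the target is reached.
def resolve_scheduling_link(first_hop, target_ip, route_tables, max_hops=8):
    """Follow routing tables hop by hop from the chosen first-hop device."""
    path = [first_hop]
    current = first_hop
    for _ in range(max_hops):
        table = route_tables.get(current, {})
        next_hops = table.get(target_ip)   # queried routing info of hop i
        if not next_hops:                  # destination reached or no route
            break
        current = next_hops[0]             # (i+1)-th-hop network device
        path.append(current)
    return path                            # the scheduling link

route_tables = {
    "LA1": {"10.0.0.2": ["LC1"]},
    "LC1": {"10.0.0.2": ["LA2"]},
    "LA2": {},                             # destination access device
}
print(resolve_scheduling_link("LA1", "10.0.0.2", route_tables))
# → ['LA1', 'LC1', 'LA2']
```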
In a second aspect, an embodiment of the present application provides a processing apparatus for scheduling routing information, including:
a selecting unit, configured to select, according to the traffic volume of a data stream to be scheduled and a target network address, a first-hop network device after the data stream to be scheduled is sent out from a source network device;
a determining unit, configured to sequentially determine, based on the target network address and the selected first-hop network device, the other hop network devices of the data stream to be scheduled in the transmission process, where the (i+1)-th-hop network device of the data stream to be scheduled is determined according to the queried routing information of the i-th-hop network device, and i is a positive integer; and to determine a scheduling link of the data stream to be scheduled according to each hop network device of the data stream to be scheduled in the transmission process;
and a sending unit, configured to send, if the traffic volume matches the available capacity of the scheduling link, scheduling routing information corresponding to the data stream to be scheduled to the source network device according to the scheduling link.
In a third aspect, an embodiment of the present application provides a processing apparatus for scheduling routing information, the apparatus including one or more processors and a memory for storing one or more computer programs which, when executed by the one or more processors, cause the apparatus to implement the method for processing scheduling routing information of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium having instructions stored therein, which when executed on a computer, cause the computer to perform the method for processing scheduling routing information of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, where the computer program product includes a computer program or computer instructions, where the computer program or computer instructions, when executed by a processor, implement a method for processing scheduling routing information as in the first aspect.
In the technical solutions provided in some embodiments of the present application, after selecting, according to the traffic volume of a data stream to be scheduled and a target network address, the first-hop network device after the data stream is sent out from the source network device, the control device may sequentially determine the other hop network devices of the data stream in the transmission process based on the target network address and the selected first-hop network device, where the (i+1)-th-hop network device is determined according to the queried routing information of the i-th-hop network device. The control device may then determine the scheduling link of the data stream according to each hop network device in the transmission process. Because the control device determines each next-hop network device from the routing information of the previous hop, it avoids a strong dependence on topology, can be applied to network architectures with different topologies, and improves the adaptability and flexibility of scheduling. Moreover, when the data stream to be scheduled matches the available capacity of the scheduling link, the corresponding scheduling routing information is sent to the source network device according to the scheduling link; with a specific scheduling link determined, accurate capacity assessment can be performed on that link, which avoids waste of network bandwidth resources and improves scheduling accuracy.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a data exchange system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an architecture of a processing system for scheduling routing information according to an embodiment of the present application;
fig. 3 is a flow chart of a method for processing scheduling routing information according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a method for processing scheduling routing information according to an embodiment of the present application;
FIG. 5 is a diagram of a user interface for pushing an alarm event according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a user interface of the details of a jumped schedule provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a user interface for an overview of scheduling situations of clusters in a certain area according to an embodiment of the present application;
fig. 8 is a schematic diagram of a scheduling effect AllReduce according to an embodiment of the present application;
Fig. 9 is a schematic diagram of a scheduling effect AlltoAll according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a processing apparatus for scheduling routing information according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a processing device for scheduling routing information according to an embodiment of the present application.
Detailed Description
It should be noted in advance that, in order to enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, the embodiments of the present application will be clearly and completely described in connection with one or more drawings. Moreover, the drawings shown in the embodiments of the present application are only exemplary, and for example, the execution sequence of each step in the drawings may be adaptively adjusted according to the actual application scenario. Furthermore, in the embodiments of the present application, the block diagrams shown in the drawings are merely functional entities, and do not necessarily correspond to physically independent entities. That is, the functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
In the present embodiment, the term "module" or "unit" refers to a computer program or a part of a computer program having a predetermined function and working together with other relevant parts to achieve a predetermined object, and may be implemented in whole or in part by using software, hardware (such as a processing circuit or a memory), or a combination thereof. Also, a processor (or multiple processors or memories) may be used to implement one or more modules or units. Furthermore, each module or unit may be part of an overall module or unit that incorporates the functionality of the module or unit.
It should be noted that: references herein to "a plurality" means two or more. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., a and/or B may represent: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship.
The technical solution provided by the present application relates to a data exchange system, which may include a data exchange network and access devices (such as computing devices) that access the data exchange network; the data exchange network includes a plurality of network devices, which realize communication between different access devices by transmitting different data streams. The data exchange network may be a Data Center Network (DCN) or a campus network, among others. In some implementations, the data exchange network may also be referred to as a computing network, such as an HPC network. The data exchange network may adopt a CLOS network architecture, specifically a fat-tree architecture: the CLOS architecture is a non-blocking multi-stage switching structure used to reduce the number of ports required in an interconnection structure, and the fat-tree architecture is an application form of it. According to the network topology, the data exchange network may be classified as a three-level fat-tree network (3-stage CLOS network) or a two-level fat-tree network (2-stage CLOS network).
Referring to Fig. 1, Fig. 1 is a schematic diagram of a data exchange system according to an embodiment of the application. As shown in Fig. 1, the data exchange system includes a data exchange network and computing devices connected to it, where the data exchange network adopts a multi-level switching-node networking mode (fat-tree architecture), such as two or three levels, and the switching nodes are the network devices in the data exchange network. Fig. 1 takes a three-level switching-node networking mode as an example, i.e., the data exchange network is a three-level fat-tree network. The data exchange network provides a fully connected network for the multiple computing devices accessing it so that they can exchange data with one another; in Fig. 1 the computing devices may be Graphics Processing Units (GPUs).
Specifically, the three-level fat-tree network includes an access layer (Access Layer), a convergence layer (Aggregation Layer), and a core layer (Core Layer). Wherein:
The access layer is the bottom layer of the fat-tree architecture and connects to computing devices or user devices; it contains the largest number of network devices, which may be switches. An access-layer switch (commonly referred to as an LA) is typically located at the top of the rack and is therefore also called a Top-of-Rack (ToR) switch.
The convergence layer is located between the core layer and the access layer, and is used for converging and forwarding the data streams of the access layers to the core layer, and the network equipment included in the convergence layer can also be a switch, and is connected with the switch of the access layer and is responsible for executing network policies such as access control, traffic shaping and the like. In fat tree architectures, the number of network devices at the convergence layer is typically greater than the core layer and less than the access layer. The convergence layer may include leaf switches in a fat tree architecture for connecting and managing a large number of network devices, such as network devices (switches) of the access layer, among others. Alternatively, the convergence layer may also include a Line Card (LC), which is a hardware component in the network device (e.g., switch, router) for handling Line interfaces and data transmission.
The core layer is the top layer of the fat-tree architecture and provides high-speed, non-blocking connections to support data transmission throughout the network. Its network devices may be high-performance switches or routers with the highest bandwidth and the lowest latency. The core layer has the fewest switches, but they have the highest port density and connectivity capability. The core layer may consist of the trunk nodes in the fat-tree architecture, also referred to as spine nodes, which connect the leaf nodes, ensure that data flows are forwarded between leaf nodes at high speed, and provide high-performance data transmission and forwarding capability. Alternatively, the core layer may include a Switched Port Analyzer (SPAN) or a super LC.
A data flow may be transferred between a GPU server and the access layer as shown at ① in Fig. 1; likewise, a data flow may be transferred between the access layer and the convergence layer as shown at ②, and between the convergence layer and the core layer as shown at ③. The inter-layer bandwidth in the data exchange network is non-convergent: the network bandwidth from the leaf nodes to the trunk nodes does not gradually decrease but is maintained or may even increase, which makes the fat-tree architecture resemble a real tree whose root and trunk are thicker, as represented in Fig. 1 by the lines of ③ being thicker than those of ②.
It can be appreciated that the number of layers of the fat-tree architecture differs with the number of computing devices, such as the GPU servers shown in Fig. 1. A cluster with fewer computing devices does not need to introduce core-layer switches, i.e., its whole network architecture comprises only two layers and is a two-level fat-tree network, whereas a cluster with more computing devices needs to introduce core-layer switches. That is, the fat-tree architectures of the data exchange networks of different clusters differ.
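As a numerical illustration of how the layer sizes relate, the standard k-ary fat-tree construction (a textbook formula, not something specified in this application) gives the following switch and host counts:

```python
# Standard k-ary fat-tree sizing: k pods, each with k/2 access (edge) and
# k/2 aggregation switches, (k/2)^2 core switches, and k^3/4 hosts in total.
# This is the classic textbook construction, used here only for illustration.
def fat_tree_sizes(k):
    assert k % 2 == 0, "k must be even"
    edge = aggregation = k * (k // 2)   # per-layer switch count over k pods
    core = (k // 2) ** 2
    hosts = k ** 3 // 4
    return {"edge": edge, "aggregation": aggregation, "core": core, "hosts": hosts}

print(fat_tree_sizes(4))
# → {'edge': 8, 'aggregation': 8, 'core': 4, 'hosts': 16}
```

Note how the core layer is always the smallest and the access layer the largest, matching the layer proportions described above.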
The technical solution provided by the embodiment of the present application also relates to technologies such as cloud storage, wherein:
Cloud storage (cloud storage) is a new concept that extends and develops in the concept of cloud computing, and a distributed cloud storage system (hereinafter referred to as a storage system for short) refers to a storage system that integrates a large number of storage devices (storage devices are also referred to as storage nodes) of various types in a network to work cooperatively through application software or application interfaces through functions such as cluster application, grid technology, and a distributed storage file system, so as to provide data storage and service access functions for the outside.
At present, the storage method of the storage system is as follows: when logical volumes are created, each logical volume is allocated physical storage space, which may be composed of the disks of one or several storage devices. A client stores data on a certain logical volume, that is, the data is stored on a file system. The file system divides the data into multiple parts, each of which is an object; an object contains not only the data but also additional information such as a data identifier. The file system writes each object into the physical storage space of the logical volume and records the storage location information of each object, so that when the client requests access to the data, the file system can let the client access the data according to the storage location information of each object.
The process by which the storage system allocates physical storage space for a logical volume is specifically as follows: the physical storage space is divided into stripes in advance according to a capacity estimate for the objects to be stored on the logical volume (the estimate often has a large margin relative to the capacity of the objects actually stored) and a Redundant Array of Independent Disks (RAID) scheme; a logical volume can be understood as a stripe, whereby physical storage space is allocated to the logical volume.
Based on the above description, please refer to Fig. 2, which is a schematic diagram of the architecture of a processing system for scheduling routing information according to an embodiment of the present application. As shown in Fig. 2, the system includes a data processing device 101, a control device 102, a data exchange network, and a computing device 105 accessing the data exchange network; Fig. 2 takes a two-level fat-tree network as an example, in which the data exchange network includes a convergence-layer device 103 and an access-layer device 104. It should be noted that the number and form of the devices shown in Fig. 2 are examples and do not limit the embodiments of the present application; in practical applications, the data processing device 101 and the control device 102 may be the same electronic device, and the control device 102 may also be a plurality of electronic devices, the number of which is not limited by the present application.
Specifically, the data processing device 101 is the electronic device that obtains alarm events. Since alarm events may arrive in several different formats, the data processing device 101 may process them so that every processed alarm event has the same format, namely the format the control device 102 can respond to. After processing an alarm event, the data processing device 101 may, as a message publisher (Producer), send it into a message queue, from which the control device 102, as a message subscriber (Consumer), may read the alarm event and respond. An alarm event includes a congested link, and the control device may schedule a data flow in the congested link to alleviate the congestion. A data stream is sent from the computing device 105 to the access-layer device 104 it is attached to; the access-layer device then forwards the data stream to the corresponding convergence-layer device 103 based on its destination IP address, the convergence-layer device forwards it to the corresponding access-layer device 104 based on the destination IP address, and that access-layer device delivers it to the corresponding computing device 105.
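The publisher/consumer flow between the data processing device and the control device can be sketched with an in-process queue standing in for a real message queue; the event field names and types below are illustrative assumptions, not from the patent:

```python
# Illustrative sketch: the data processing device normalizes alarm events of
# different formats into one common format and publishes them; the control
# device consumes them from the queue. queue.Queue stands in for a real
# message-queue service.
import json
import queue

alarm_queue = queue.Queue()

def normalize_and_publish(raw_event):
    """Data processing device (Producer): normalize and enqueue an alarm."""
    event = {
        "link": raw_event.get("link") or raw_event.get("congested_link"),
        "type": raw_event.get("type", "link_traffic_overrun"),
    }
    alarm_queue.put(json.dumps(event))

def consume_alarm():
    """Control device (Consumer): read the next alarm event and respond."""
    return json.loads(alarm_queue.get())

normalize_and_publish({"congested_link": "LA1->LC1"})
print(consume_alarm())
# → {'link': 'LA1->LC1', 'type': 'link_traffic_overrun'}
```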
The data processing device 101, the control device 102, and the computing device 105 may be, but are not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a smart voice interaction device, a smart home appliance, a vehicle-mounted terminal, and the like. The data processing device 101, the control device 102, and the computing device 105 may be servers, for example, independent physical servers, a server cluster or a distributed system formed by a plurality of physical servers, and cloud servers that provide cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms. Each of the above-described convergence layer device 103 and access layer device 104 may be a device for forwarding data packets in a data exchange network, such as a switch, router, virtual switch, or virtual router.
In the process of responding to link congestion alarms of the data exchange network, the following was found: in a data exchange network, each network device has a certain buffer capacity that can be used to absorb bursty data streams. When the traffic to be scheduled (the data flow to be scheduled) at a network device exceeds the device's switching capability and buffering capability, packet loss and similar phenomena may occur. For example, when multiple source network devices send data to the same destination network device at the same time, the buffer capacity of the destination device is limited; once the traffic of the sent data streams exceeds that buffer capacity, congestion, packet loss, and similar phenomena occur, reducing communication efficiency.
Currently, link-traffic-overrun and Explicit Congestion Notification (ECN) count-anomaly alarms are commonly employed to control traffic in a data exchange network. Link traffic overrun means that the traffic of the data stream to be scheduled exceeds a preset traffic threshold, so congestion may be about to occur or has already occurred. ECN means that when a network device detects that congestion is occurring or about to occur, it sets specific bits in the packet header to signal congestion; the data-flow receiver then recognizes the ECN mark and notifies the data-flow sender, i.e., the source network device, by modifying the acknowledgement (Acknowledge Character, ACK) packet. Upon receiving an ECN-marked ACK, the source network device may reduce the rate at which it sends the data stream (the packet rate) to reduce congestion.
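The ECN feedback loop described above can be sketched as follows; the queue-depth threshold and the back-off factor are illustrative values, not details from the patent or any specific device:

```python
# Sketch of the ECN loop: a congested device marks the packet, the receiver
# echoes the mark in its ACK, and the sender reduces its sending rate.
def forward(packet, queue_depth, ecn_threshold=80):
    """Network device: mark the packet when its queue is getting congested."""
    if queue_depth > ecn_threshold:
        packet["ecn_ce"] = True        # set the congestion-experienced bit
    return packet

def receiver_ack(packet):
    """Data-flow receiver: echo the ECN mark back in the ACK."""
    return {"ack": True, "ecn_echo": packet.get("ecn_ce", False)}

def sender_adjust(rate, ack, backoff=0.5):
    """Source network device: reduce the packet rate on an ECN-marked ACK."""
    return rate * backoff if ack["ecn_echo"] else rate

pkt = forward({"seq": 1}, queue_depth=100)
ack = receiver_ack(pkt)
print(sender_adjust(1000.0, ack))
# → 500.0 (the sender halves its rate)
```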
The control device 102 may respond to alarms raised when link traffic is overrun or ECN counts are abnormal. After an alarm event occurs, the control device 102 may analyze the traffic data of the congested link in the alarm event by means of Sampled Flow (sFlow), a network traffic monitoring technology based on packet sampling that can be used for statistical analysis of network traffic, in particular interface-based traffic analysis, and for monitoring traffic conditions in real time and locating the sources of abnormal and attack traffic; the control device thereby determines the data streams that need to be scheduled. The control device 102 may then select a data flow to be scheduled and inject a 32-bit Border Gateway Protocol (BGP) route into the source network device corresponding to that data flow.
It should be noted that BGP is a foundation of Internet communication. An Autonomous System (AS) is a unit that autonomously decides which routing protocols to adopt within itself, and BGP is an exterior gateway protocol for exchanging routing information between autonomous systems; it can be used to select optimal routes so as to achieve routing reachability between ASes. In the embodiment of the present application, BGP is used to dynamically control the path of a data flow. Route injection refers to adding specific routing information to the network in order to affect the routing of data flows; the mask length of the injected route entry is 32 bits, and a 32-bit mask indicates that the route entry is very specific, pointing to only a single Internet Protocol (IP) address. In this way, the data flow to be scheduled is adjusted onto the link with the lowest load, i.e., the most idle link, to eliminate the alarm.
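The effect of injecting a 32-bit route can be illustrated with a longest-prefix-match lookup, which is why every flow to the matching IP address is redirected while other destinations are untouched; the table contents and next-hop names below are hypothetical:

```python
# Sketch of longest-prefix-match forwarding: the most specific matching
# entry wins, so an injected /32 route overrides the aggregate route for
# exactly one destination IP address.
import ipaddress

routing_table = {
    ipaddress.ip_network("10.0.0.0/24"): "LC1",   # learned aggregate route
    ipaddress.ip_network("10.0.0.7/32"): "LC2",   # injected 32-bit BGP route
}

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(next_hop("10.0.0.7"))   # → LC2: all flows to 10.0.0.7 are redirected
print(next_hop("10.0.0.8"))   # → LC1: other destinations keep the old path
```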
This approach may suffer from redundant bandwidth occupation, randomness of path selection, insensitivity to topology events, and strong topology dependence. Specifically, the control device 102 injects the 32-bit BGP route directly into the source network device (the data-flow sender), i.e., a route entry pointing to a specific target IP address is manually configured or dynamically learned in the source network device. When the source network device transmits a data stream, it first obtains the destination IP address of the data packet and uses the routing table to decide how to forward the stream. If a 32-bit BGP route for a specific destination IP address is injected, all data streams whose destination IP addresses match that route entry are affected by it; that is, all data streams passing through the source network device with the same destination IP address are affected by the injected route, because the source network device forwards data streams to the next-hop address based on the routing table.
To avoid next hops becoming unreachable due to route jitter, multiple equivalent next hops, i.e., an Equal-Cost Multi-Path (ECMP) group, are usually specified when a route is issued. ECMP is a network routing strategy that allows data flows to be distributed among multiple paths of equal cost, for example paths reaching the same destination IP address, so as to spread network traffic over those paths for load balancing and network redundancy. Since the control device 102 can only learn which data streams are affected and cannot determine the specific path each data stream to be scheduled takes among the multiple equivalent next hops, it must, when performing bandwidth capacity assessment, ensure that each idle link has enough bandwidth to accommodate all the traffic that may flow through it, so as to cope with the worst case; this leads to waste of network bandwidth resources and occupation of redundant bandwidth. In addition, if many data flows are affected, routing them together may also cause scheduling failure, because the remaining allocable bandwidth of the idle links is insufficient to meet the capacity requirements of the data flows.
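Why the controller cannot predict which member of the group a flow takes can be seen from a typical ECMP selection scheme, in which the device hashes flow fields to pick one next hop; the hash function and field choice below are illustrative, since real devices use vendor-specific hashes unknown to the controller:

```python
# Sketch of ECMP next-hop selection: the device hashes the flow's 4-tuple
# and picks one member of the equal-cost group. A controller that only sees
# the group cannot tell which member an individual flow will use.
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, next_hops):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return next_hops[digest % len(next_hops)]   # deterministic per flow

group = ["LC1", "LC2", "LC3", "LC4"]
print(ecmp_next_hop("10.0.0.1", "10.0.0.7", 4242, 179, group))
```

The selection is deterministic for a given flow but opaque from outside, which is exactly why worst-case capacity must be reserved on every member link.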
Second, the routing procedure mainly selects paths based on a preset number of next hops, for example the top N most idle links. Since the controller injects routing information only into the source network device, the control device 102 can only influence the path from the source network device (the source LA device, e.g., a network device in the access layer device 104) to some intermediate network device (an LC device, e.g., a network device in the convergence layer device 103). That is, the control device 102 can only intervene in the link selection of the first hop, whereas the transmission link of a data stream involves multiple hops. For example, in a two-level fat-tree structure (two-level topology) comprising only an access layer and a convergence layer, the transmission link involves two hops: source LA device -> LC device -> destination LA device. In a three-level fat-tree structure (three-level topology) comprising an access layer, a convergence layer and a core layer, the transmission link involves four hops: source LA device -> source LC device -> core layer LC device -> destination LC device -> destination LA device. If only the N first-hop links with the largest remaining bandwidth are considered, the subsequent path is not necessarily optimal, i.e., the candidate path does not necessarily meet the capacity requirement of the data stream to be scheduled; the risk of post-scheduling network congestion increases, and the problem of path selection randomness arises.
Furthermore, the execution of the path selection and scheduling algorithm by the control device 102 depends on pre-stored topology information of the HPC network. This topology information is updated infrequently, with a long update interval, for example once every hour. If a topology-changing event such as a link disconnection occurs within the update interval (for example, within that hour), the control device 102 may schedule using outdated topology information. For example, it may schedule a data flow onto a failed link, causing a "traffic black hole", that is, the problem of insensitivity to topology events.
Also, as HPC networks evolve, the topology information of the HPC networks of different campuses differs, and the control device 102 needs to perform path computation based on the actual network topology. When the control device 102 accesses a new HPC network, an algorithm evaluation must first be performed against that networking topology before the control device 102 can perform congestion processing on the new network. This is the problem of strong topology dependence, and it reduces the adaptability and flexibility of the control device 102.
Therefore, the control device 102 may select the first-hop network device after the data stream to be scheduled is sent out from the source network device, according to the data traffic and the target network address of the data stream to be scheduled, and then sequentially determine the other hop network devices of the data stream to be scheduled in the transmission process based on the target network address and the selected first-hop network device; the (i+1)-th hop network device of the data flow to be scheduled is determined according to the queried routing information of the i-th hop network device. A scheduling link is then determined from each hop network device of the data stream to be scheduled in the transmission process, and if the data traffic matches the available capacity of the scheduling link, the control device 102 may send scheduling route information corresponding to the data flow to be scheduled to the source network device according to the scheduling link. Since the specific path is determined based on routing information, strong dependence on topology is avoided, the adaptability and flexibility of scheduling are improved, and accurate bandwidth evaluation can be performed, which saves network bandwidth resources and improves scheduling accuracy.
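The hop-by-hop resolution described above can be sketched roughly as follows (a minimal illustrative sketch; the callbacks `pick_first_hop`, `query_routes` and `link_capacity` are assumptions standing in for the first-hop selection, per-device routing-table query and available-capacity lookup, not the actual interfaces of control device 102):

```python
def resolve_scheduling_link(source_device, dst_addr, flow_traffic,
                            pick_first_hop, query_routes, link_capacity):
    """Resolve a scheduling link hop by hop and check its capacity."""
    # Select the first hop after the source device from traffic + address.
    path = [source_device,
            pick_first_hop(source_device, dst_addr, flow_traffic)]
    # Determine hop i+1 from the routing information of hop i until the
    # queried device has no further next hop toward the destination.
    while True:
        next_hop = query_routes(path[-1], dst_addr)
        if next_hop is None:  # reached the device serving dst_addr
            break
        path.append(next_hop)
    # Issue scheduling route information only if every link on the
    # resolved path can absorb the flow's traffic.
    ok = all(link_capacity(a, b) >= flow_traffic
             for a, b in zip(path, path[1:]))
    return path, ok
```

The callbacks make the sketch testable against a toy routing table without modelling a real BGP query service.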
In one implementation manner, the data traffic of the data flow to be scheduled, the target network address, the other hop network devices of the data flow to be scheduled in the transmission process, the scheduling link of the data flow to be scheduled and the scheduling route information can be stored in a blockchain, so that this data can be prevented from being tampered with. A blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. It is essentially a decentralized database: a series of data blocks generated in association with each other using cryptographic methods, where each data block contains the information of a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block.
It may be understood that, the processing system for scheduling routing information described in the embodiment of the present application is for more clearly describing the technical solution of the embodiment of the present application, and does not constitute a limitation on the technical solution provided in the embodiment of the present application, and those skilled in the art can know that, with the evolution of the system architecture and the appearance of a new service scenario, the technical solution provided in the embodiment of the present application is equally applicable to similar technical problems.
Based on the above-mentioned processing system for scheduling routing information, the embodiment of the present application provides a processing method for scheduling routing information. The processing method described in the embodiment of the present application may be executed by an electronic device, which may be the control device 102 in the processing system for scheduling routing information shown in fig. 2. Referring to fig. 3, fig. 3 is a flowchart of a method for processing scheduling routing information according to an embodiment of the present application; the method includes the following steps S301 to S304:
S301, selecting a first-hop network device after the data flow to be scheduled is sent out from the source network device, according to the data traffic of the data flow to be scheduled and a target network address.
In the embodiment of the application, the data flow to be scheduled is the data flow to be scheduled by the control device, and the data traffic of the data flow to be scheduled is the traffic size of that data flow. In a computer network, a five-tuple is used to uniquely identify a particular data stream; the five-tuple comprises five attributes of the data stream: the source IP address, destination IP address, source port number, destination port number, and transport layer protocol. For example, a five-tuple with source IP address 192.168.1.1, destination IP address 121.14.88.76, source port number 10000, destination port number 80, and transport layer protocol Transmission Control Protocol (TCP) means that an electronic device (e.g., a computing device or network device) with IP address 192.168.1.1 communicates through port 10000, using TCP, with another electronic device whose IP address is 121.14.88.76 and whose port number is 80. The target network address is the destination IP address attribute in the five-tuple. The source network device is the network device that sends out the data stream in the switched data network, such as the source LA device in fig. 1 or fig. 2, and the first-hop network device refers to the next-hop network device, such as the source LC device, to which the source network device forwards the data stream.
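Using the example values above, a five-tuple and the target network address derived from it could be represented as follows (a minimal sketch; the type name `FiveTuple` is illustrative, not part of the embodiment):

```python
from collections import namedtuple

# A five-tuple uniquely identifies a particular data stream.
FiveTuple = namedtuple(
    "FiveTuple", ["src_ip", "dst_ip", "src_port", "dst_port", "protocol"])

flow = FiveTuple("192.168.1.1", "121.14.88.76", 10000, 80, "TCP")

# The target network address used for scheduling is the destination
# IP address attribute of the five-tuple.
target_network_address = flow.dst_ip
```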
In one possible implementation manner, when congestion occurs on a link in the HPC network, the network device may raise an alarm for link traffic overrun or an ECN count anomaly, and the control device may then respond to the alarm event. The alarm event may include details of the congested link. Taking the data exchange network shown in fig. 1 as an example, four types of congested link may appear in alarm events, each reported as network device A -> network device B: the uplink of LA device -> LC device, the downlink of LC device -> LA device, the uplink of LC device -> core layer LC device, and the downlink of core layer LC device -> LC device. It can be understood that although the alarm types differ for the different types of congested link, the control device processes these four different link failures in the same way.
The alarm event received by the control device may be obtained from a message queue. Specifically, the network device may send the alarm event to a data processing platform, which processes it into a uniform format for the control device. The data processing platform can send the processed alarm event to the message queue, and the control device can read the alarm event from the message queue and parse the alarm content to obtain the congested link. After receiving the alarm event, the control device may verify it and, if the verification passes, respond to the alarm event to relieve congestion. This verification can be understood as a pre-check of the alarm event: on the one hand it checks whether the input is abnormal, and on the other hand it checks whether the event would be processed repeatedly.
Specifically, after parsing the alarm content in the alarm event, the control device may obtain the fields it contains and check whether any field is empty; if so, the control device determines that the input is abnormal. The control device may also normalize the congested link into the format of the transmission (outgoing) direction of the data stream, expressing it as the matching segment of the full transmission link source LA device -> source LC device -> core layer LC device -> destination LA device. The congested link is only a part of the transmission link of the entire data stream; for example, the uplink of LC device -> core layer LC device corresponds to the source LC device -> core layer LC device segment. This normalization is needed because a congested link may trigger multiple (e.g., two) alarm events, one for each direction of the link between network device A and network device B; converting the congested link into the outgoing-direction format makes it possible to verify whether an alarm event is being processed repeatedly. If the control device determines that the processing is not repeated, it may respond to the alarm event.
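The two pre-checks, empty-field validation and direction-normalized deduplication, could be sketched as follows (the alarm field names `device_a`, `device_b` and `direction` are assumptions for illustration, not the actual alarm schema):

```python
def precheck_alarm(alarm, seen_links):
    """Pre-check an alarm event before the control device responds."""
    # 1) Input check: any empty field means the input is abnormal.
    if any(v in (None, "") for v in alarm.values()):
        return "abnormal_input"
    # 2) Deduplication: normalize the congested link into the outgoing
    # transmission direction, so the two per-direction alarms raised for
    # the same physical link map to the same key.
    a, b = alarm["device_a"], alarm["device_b"]
    link_key = (a, b) if alarm["direction"] == "out" else (b, a)
    if link_key in seen_links:
        return "duplicate"
    seen_links.add(link_key)
    return "accepted"
```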
Further, the control device may obtain the data streams transmitted on the congested link and select some of them as the data flows to be scheduled, so as to alleviate the congestion. By parsing the alarm content, the control device can distinguish different alarm types, such as a traffic overrun alarm and an ECN count anomaly alarm. The control device may select the data flows to be scheduled in two ways: one is to select the data flows with the largest traffic based on each flow's data traffic, and the other is to select data flows from the congested link according to a traffic proportion.
In one implementation manner, the control device may obtain the data traffic of each data flow that is congested during transmission, where these data flows are the data flows on the congested link and the data traffic of each data flow refers to its traffic size. The control device may then select at least one data flow in descending order of data traffic as the data flows to be scheduled; that is, the control device selects the one or more data flows with the largest data traffic. This can be understood as the control device scheduling the top N data flows ranked by traffic from large to small, where N is the number of data flows; N may be preset or dynamically configured by a developer, and N is a positive integer.
In another implementation manner, after obtaining the data traffic of each congested data flow, the control device may select, as the data flows to be scheduled, data flows matching a preset traffic selection proportion, according to the sum of the data traffic of all flows and that proportion. The preset traffic selection proportion may be a percentage of the total data traffic, for example 30%; the control device determines the amount of traffic to be scheduled from the traffic sum and the preset proportion, and then selects data flows whose combined traffic meets that amount according to the data traffic of each flow.
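The two selection strategies, top-N by traffic and proportion-based greedy selection, could be sketched as follows (an illustrative sketch; function names and the `{flow_id: traffic}` representation are assumptions):

```python
def select_topn(flows, n):
    """Pick the N data flows with the largest data traffic."""
    ranked = sorted(flows.items(), key=lambda kv: kv[1], reverse=True)
    return [fid for fid, _ in ranked[:n]]

def select_by_ratio(flows, ratio):
    """Greedily pick the largest flows until their combined traffic
    reaches the preset selection proportion of the link's total traffic."""
    target = sum(flows.values()) * ratio
    picked, acc = [], 0.0
    for fid, traffic in sorted(flows.items(),
                               key=lambda kv: kv[1], reverse=True):
        if acc >= target:
            break
        picked.append(fid)
        acc += traffic
    return picked
```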
In the case that the alarm type is a traffic overrun alarm, if the traffic threshold (such as the design bandwidth) of the congested link is 100 megabits per second (Mb/s) and the current link traffic is 200 Mb/s, the excess traffic is 100 Mb/s. The control device schedules data flows so that the traffic water level of the congested link returns below the normal water level, that is, below the 100 Mb/s traffic threshold, so at least 100 Mb/s of data flows, for example data flows with a total traffic of 120 Mb/s, need to be scheduled away. The control device can select data flows one by one in descending order of data traffic until the sum of the selected flows' traffic exceeds the amount that needs to be scheduled, obtaining the data flows to be scheduled. Alternatively, the control device may determine the ratio of the traffic to be scheduled to the traffic sum, take it as the preset traffic selection proportion, and select data flows accounting for that percentage of the traffic, obtaining the data flows to be scheduled.
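For the traffic overrun case, computing the excess and greedily picking flows until they cover it could look like this (a sketch under the same 200 Mb/s vs. 100 Mb/s example; the function name is illustrative):

```python
def flows_to_offload(flows, link_traffic, traffic_threshold):
    """Pick flows whose combined traffic exceeds the overrun, so the
    congested link drops back below its traffic threshold."""
    excess = link_traffic - traffic_threshold  # e.g. 200 - 100 = 100 Mb/s
    picked, acc = [], 0.0
    for fid, traffic in sorted(flows.items(),
                               key=lambda kv: kv[1], reverse=True):
        picked.append(fid)
        acc += traffic
        if acc > excess:  # selected flows now cover the excess
            break
    return picked, acc
```

With flows of 70, 50 and 40 Mb/s this selects the two largest flows, totalling 120 Mb/s, matching the example in the text.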
In the case that the alarm type is an ECN count anomaly alarm, if the ECN value of the congested link is 655, exceeding the ECN alarm threshold of 500, the control device cannot know how much data traffic needs to be scheduled away. It can therefore use a preset number of data flows, such as 10, and select that many data flows in descending order of data traffic. The control device may also use a preset traffic selection proportion, for example 20%, and select data flows whose combined data traffic is greater than or equal to that proportion of the total, obtaining the data flows to be scheduled.
In one possible implementation manner, after determining the data flows to be scheduled, the control device may determine the network device to which scheduling route information is issued, that is, the source LA device corresponding to each data flow to be scheduled. Specifically, the control device may acquire the source network address of a data stream to be scheduled and determine the data stream sender corresponding to that source network address. The control device may obtain the five-tuple of the data stream to be scheduled; the source network address is the source IP address in the five-tuple, i.e., the IP address of the data stream sender, for example a GPU server in fig. 1, or one of the computing devices 105 in fig. 2. The control device then queries the network device database for the network device accessed by the data flow sender and takes it as the source network device, i.e., the source LA device. Optionally, the control device may query for the network device that is accessed by the data flow sender and whose path to the target network address of the data flow to be scheduled passes through the congested link, obtaining the source network device as the query result. The source network device is the device to which the control device subsequently issues the scheduling route information.
It should be noted that if the congested link included in the alarm event contains the source network device, no query is needed; for example, if the congested link is source LA device -> source LC device, the control device already knows the source network device. If the congested link included in the alarm event does not contain the source network device, the control device needs to query for it based on the source network address of the data flow to be scheduled.
There may be multiple data flows to be scheduled, and a source network device can be queried for each of them. To facilitate the subsequent issuance of scheduling route information, the control device may aggregate the identification information of the source network devices with the five-tuple information of the data flows to be scheduled, obtaining and storing aggregate information from which the target network addresses involved in the scheduling route information to be issued to each source network device can later be obtained. The aggregate information may be obtained by combining each source network device with the target network addresses of its data streams to be scheduled, for example { source network device: target network addresses }. It will be appreciated that the aggregate information includes an identification of a source network device and the target network addresses of one or more data flows.
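The { source network device: target network addresses } aggregation could be sketched as follows (an illustrative sketch; the per-flow dictionary keys are assumptions):

```python
from collections import defaultdict

def aggregate_by_source(flows):
    """Aggregate flows into {source network device: target addresses}
    so that route issuance can later be batched per source device."""
    agg = defaultdict(set)
    for f in flows:
        agg[f["source_device"]].add(f["dst_ip"])
    return dict(agg)
```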
Further, after determining the source network device from which the data flow to be scheduled is sent, the control device may select the first-hop network device after the source network device according to the data traffic and the target network address of the data flow to be scheduled. Specifically, the control device may obtain a plurality of candidate network device sets connected to the source network device according to the target network address of the data flow to be scheduled and the routing information of the source network device. The routing information of the source network device consists of the addresses of its BGP neighbors, which the control device can query in real time; the control device thereby queries each next-hop network device corresponding to the target network address, obtains each next-hop network device connected to the source network device, and refers to each of them as a candidate network device. For example, each next-hop network device connected to the source network device (source LA device) may be a source LC device. The control device may then determine a plurality of candidate network device sets from the candidate network devices.
In this way, the route query service, based on the scheduling destination network segment (the destination IP address), queries all real-time next-hop information for the target network address from the source LA device. This is equivalent to converting the whole network topology into a directed acyclic graph (a directed graph with no path from any node back to itself) and using that graph as the basis of the "routing topology". Performing route queries in real time dynamically senses network changes and avoids the situation where, when a link disconnection or other abnormal event occurs, the corresponding BGP route is withdrawn with the link and the injected BGP route becomes invalid. The influence of topology changes is thus effectively shielded, the limitation of the HPC network topology structure is removed, there is no strong dependence on the topology and its change events, and the problem of insensitivity to topology events is avoided. This also helps solve the problem of scheduling's strong dependence on topology. Moreover, through route queries the control device can construct the scheduling topology in real time in different HPC network architectures and adapt to network architectures with different topologies, improving the adaptability and flexibility of scheduling.
In one possible implementation manner, the control device may acquire the target data streams transmitted by the source network device, where a target data stream is a data stream having the same target network address as the data stream to be scheduled. A target data stream can be understood as a data stream sent by the source network device whose transmission link is affected when the data stream to be scheduled is scheduled, because its destination IP address is the same as that of the data stream to be scheduled. The control device may determine the number of target data flows, and from it determine the number of candidate network device sets and the number of elements contained in each candidate network device set. The number of candidate network device sets refers to how many such sets there are, and the number of elements contained in a candidate network device set refers to how many candidate network devices it contains.
Specifically, the control device may determine the number of candidate network device sets according to a correspondence with the number of target data flows, the target data flows being the data flows that may be affected by the current scheduling: the more target data flows there are, the larger the preset number of candidate network device sets; conversely, the fewer target data flows there are, the smaller that number. For example, when the number of target data flows is smaller than a preset number threshold, the number of candidate network device sets is determined to be a first number; when it is greater than or equal to the threshold, the number is determined to be a second number. The preset number threshold may for example be 10, and the first and second numbers may be determined based on a preset value range, for example the range N-2N, where the value of N may be preset or dynamically configured by a developer.
Illustratively, when the number of target data flows is 5, which is less than the preset number threshold 10, the value of N may be set to a smaller value, e.g., 4, and the number of candidate network device sets is determined based on N-2N (i.e., 4-8): the candidate set link counts are the integers from 4 to 8, namely 4, 5, 6, 7 and 8, giving 5 candidate network device sets. Likewise, when the number of target data flows is 20, greater than the preset number threshold 10, the value of N may be set to a larger value, for example 8, and the number of candidate network device sets is determined based on N-2N (i.e., 8-16): the candidate set link counts are the integers from 8 to 16, namely 8, 9, 10, 11, 12, 13, 14, 15 and 16, giving 9 candidate network device sets.
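Deriving the candidate-set link counts from the number of affected flows could be sketched as follows (the threshold and the two N values are the example figures from the text, passed in as parameters since they may be configured differently):

```python
def candidate_set_sizes(num_target_flows, threshold=10,
                        n_small=4, n_large=8):
    """Return the per-set link counts K over the range N..2N, with a
    small N for few affected flows and a larger N for many."""
    n = n_small if num_target_flows < threshold else n_large
    return list(range(n, 2 * n + 1))
```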
Each candidate network device set includes a plurality of network devices, that is, some or all of the queried candidate network devices. Between the source network device and each candidate network device there is a communication link, which may also be called a next-hop path of the source network device. The number of communication links K takes one of the values in N-2N; as described above, when K is any one of 4, 5, 6, 7 and 8, each value of K corresponds to one candidate network device set. For example, when K is 4, there are 4 next-hop paths (communication links) between the source network device and the network devices in that candidate network device set, and the number of elements in the set is the number of candidate network devices reached by those K communication links. Thus, based on the target network address of the data flow to be scheduled and the routing information of the source network device, the control device may obtain a plurality of candidate network device sets according to the number of candidate network device sets and the number of elements contained in each set.
In one possible implementation, the control device may determine the number of communication links between the source network device and each queried candidate network device; for example, there may be only one communication link between a source network device (source LA device) and a candidate network device (source LC device), or there may be multiple. The control device may add all candidate network devices to each candidate network device set, then determine the remaining available bandwidth of the outgoing port of each communication link between the source network device and each candidate network device, where each outgoing port corresponds to a source LA device -> source LC device communication link, and select K communication links in descending order of remaining available bandwidth, that is, by default select the most idle next-hop links. The candidate network devices reached by the K selected communication links are retained in each candidate network device set and the remaining candidate network devices are removed, where the K value corresponding to each candidate network device set is different.
The remaining available bandwidth of the outgoing port of each communication link between the source network device and each candidate network device may be determined by the control device by querying the current traffic of the outgoing port of each communication link between the source LA device and each candidate network device (source LC device). The control device may query the current traffic of the outgoing ports via telemetry, a technology for remotely acquiring data or information from a device or system. In the embodiment of the present application, telemetry may be used to acquire information from a network device (switch): the network device periodically and actively reports data such as port traffic, central processing unit (CPU) usage or memory data to the acquisition device (acquisition module) in Push Mode. Compared with the one-to-one Pull Mode interaction of the Simple Network Management Protocol (SNMP), telemetry provides more real-time and higher-speed data acquisition. The control device can thus acquire the current traffic of each communication link's outgoing port through telemetry and determine each link's remaining available bandwidth from the preset total port capacity (a fixed value determined when the HPC network is constructed) and the current traffic, so that the K communication links can be selected and the plurality of candidate network device sets determined.
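Computing remaining available bandwidth from telemetry readings and picking the K most idle links could be sketched as follows (an illustrative sketch; the dictionaries stand in for the telemetry query and the preset port capacities):

```python
def pick_k_links(links, port_capacity, current_traffic, k):
    """Rank links by remaining available bandwidth, i.e. preset port
    capacity minus telemetry-reported traffic, and keep the top K."""
    remaining = {l: port_capacity[l] - current_traffic[l] for l in links}
    return sorted(links, key=lambda l: remaining[l], reverse=True)[:k]
```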
Further, the control device may determine the target communication links between the source network device and each candidate network device set. The control device may obtain the remaining available bandwidth of each communication link between the source network device and each candidate network device set, and then select a set number of communication links as target communication links in descending order of remaining available bandwidth. Although K communication links were determined between the source network device and a candidate network device set when the sets were built, there may be more than one communication link between the source network device and a single candidate network device; retaining such a device brings all of its links along, while removing a device could leave fewer than K links in the set. Consequently, the number of communication links between the source network device and a candidate network device set may be greater than K.
The control device may then select K communication links as target communication links in descending order of remaining available bandwidth, where a target communication link is a communication link from the source network device (source LA device) to a next-hop network device (source LC device). It will be appreciated that K ranges over N-2N; with K illustrated as 4, 5, 6, 7 or 8, each value corresponds to a candidate network device set, and the number of target communication links between the source network device and the respective set is 4, 5, 6, 7 or 8 accordingly.
Further, the control device may calculate, according to the data traffic of the data stream to be scheduled and the number of target communication links between the source network device and each candidate network device set, the traffic scheduling distribution when the data stream to be scheduled is transmitted through each candidate network device set. Specifically, the control device may calculate the link average allocated traffic corresponding to each candidate network device set from the data traffic of the data flow to be scheduled and the number of target communication links of that set. The number of target communication links of each set is its K value, an integer in the N-2N range, and the link average allocated traffic of a set can be understood as the average data traffic received by each target communication link when the scheduling mode corresponding to that set is adopted. The link average allocated traffic is calculated as shown in equation 1:
F_avg = Flow_Sum / K        (Equation 1)

where F_avg is the link average allocation traffic, Flow_Sum is the data traffic of the data flow to be scheduled (in the case where the data flow to be scheduled comprises a plurality of data flows, Flow_Sum is the sum of the data traffic of the plurality of data flows), and K is the number of target communication links. It can be understood that for different values of K, a plurality of link average allocation traffics F_avg can be obtained, one corresponding to each candidate network device set.
Further, the control device may simulate the process of transmitting the data stream to be scheduled to each candidate network device set, so as to obtain the link simulated traffic corresponding to each candidate network device set; that is, the control device may simulate the process by which the source network device transmits the data stream to be scheduled to the next-hop network device (source LC device). Specifically, when a network device transmits a data stream, it queries the next-hop network devices based on the attribute information of the data stream, namely the target network address (destination IP address) in the five-tuple, treats the communication links to the queried next-hop network devices as an ECMP group, and then applies a preset hash algorithm to the attribute information of the data stream to obtain the hash value of the data stream. The communication link used to transmit the data stream is then determined according to the correspondence between hash values and the communication links in the ECMP group, together with the hash value of the data stream.
In the embodiment of the application, the control device can call a network device hash simulation service to simulate the routing process of the network device. For the target communication links corresponding to each candidate network device set, the control device may calculate the attribute information of the data stream to be scheduled with a preset hash algorithm to obtain the hash value of the data stream to be scheduled, where the attribute information may be the five-tuple of the data stream to be scheduled and includes its target network address (destination IP address). The control device may then determine the communication link selected by the data flow to be scheduled among the target communication links corresponding to each candidate network device set, according to the correspondence between hash values and the target communication links of that set and the hash value of the data flow to be scheduled. For example, if the number of target communication links is 5, their identifiers are 0, 1, 2, 3, and 4, and the hash value of the data stream to be scheduled is 3, the control device selects the target communication link identified by 3 to transmit the data stream to be scheduled.
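A minimal sketch of this hash-based link selection follows. The MD5 stand-in and the five-tuple values are assumptions for illustration only; real switches use vendor-specific hash functions, which is why the embodiment relies on a dedicated hash simulation service.

```python
import hashlib

def pick_target_link(five_tuple, num_links):
    """Map a flow's five-tuple to a link identifier in 0..num_links-1."""
    key = "|".join(str(field) for field in five_tuple).encode()
    # Stand-in hash; a production simulator must reproduce the device's algorithm.
    return int(hashlib.md5(key).hexdigest(), 16) % num_links

# Hypothetical five-tuple: (src IP, src port, dst IP, dst port, protocol).
flow = ("10.0.0.1", 12345, "10.0.1.9", 4791, "UDP")
link_id = pick_target_link(flow, 5)
print(link_id)  # stable for the same five-tuple, always one of 0..4
```

The essential property mirrored here is determinism: the same five-tuple always maps to the same target communication link, so the simulated choice matches what the device will later do.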
Further, the control device may determine the to-be-scheduled data traffic allocated to each target communication link corresponding to each candidate network device set, that is, the data traffic of the data flows to be scheduled allocated to the target communication links that transmit them. It may be understood that when the data stream to be scheduled includes a plurality of data streams, the fields in the five-tuples of the data streams differ, the calculated hash values differ, and the selected target communication links therefore differ as well, so the to-be-scheduled data traffic allocated to each target communication link is the sum of the data traffic of the data streams it transmits. Furthermore, the control device may determine, according to the to-be-scheduled data traffic allocated to each target communication link and the occupied traffic of that link, the link simulated traffic corresponding to each candidate network device set.
The occupied traffic of a target communication link may be the traffic, queried through telemetry, on the outgoing port from the source LA device to the source LC device in that link. The link simulated traffic corresponding to each candidate network device set may include the link simulated traffic of each target communication link in that set, and may be determined by the control device as the sum of the occupied traffic and the data traffic of the transmitted data streams. It can be understood that the control device runs one simulation for each candidate network device set to obtain the link simulated traffic corresponding to that set, that is, the actual distribution of the data stream to be scheduled.
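Under these definitions, the link simulated traffic per target link can be sketched as the telemetry-observed occupied traffic plus the traffic of the flows hashed onto that link (all numbers below are illustrative):

```python
def simulate_link_traffic(occupied, assignments):
    """occupied: {link_id: telemetry-observed occupied traffic};
    assignments: list of (link_id, flow_traffic) pairs produced by the
    hash simulation of the flows to be scheduled."""
    simulated = dict(occupied)
    for link_id, traffic in assignments:
        simulated[link_id] += traffic
    return simulated

occupied = {0: 10, 1: 25, 2: 5}          # hypothetical per-link telemetry values
assignments = [(0, 8), (2, 8), (2, 4)]   # three to-be-scheduled flows
print(simulate_link_traffic(occupied, assignments))  # {0: 18, 1: 25, 2: 17}
```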
Further, after the control device determines the link simulated traffic corresponding to each candidate network device set, the traffic scheduling distribution corresponding to each candidate network device set may be calculated according to the link average allocation traffic, the link simulated traffic, and the number of target communication links corresponding to that set. Each traffic scheduling distribution can be represented by a variance, since variance measures the degree of dispersion of a set of data, reflecting how far the data deviate from their average. Specifically, the control device may calculate the link traffic variance corresponding to each candidate network device set according to the link average allocation traffic, the link simulated traffic, and the number of target communication links corresponding to that set, that is, determine one link traffic variance for each candidate network device set, and take the link traffic variance corresponding to each candidate network device set as its traffic scheduling distribution. The calculation formula of the link traffic variance can be shown in Equation 2:
S^2 = (1/K) * Σ_{j=1}^{K} (F_j - F_avg)^2        (Equation 2)

where S^2 is the link traffic variance, F_j is the link simulated traffic of the j-th target communication link, F_avg is the link average allocation traffic, which can be calculated with Equation 1 above, and K is the number of target communication links corresponding to the candidate network device set. It should be noted that each candidate network device set corresponds to one number of target communication links, that is, to one K value, so one link traffic variance is obtained for each set. Since variance measures the stability of a set of data, the larger the variance, the greater the volatility, and conversely the smaller the variance, the smaller the volatility. In the embodiment of the present application, the volatility indicates whether the data traffic allocated to each link is balanced when scheduling with the number K of target communication links (the candidate network device set), that is, whether the traffic distribution after scheduling is uniform (load balancing). Thus, the link traffic variance S^2 may also be used as a selection weight for evaluating the number K of target communication links (the candidate network device set).
In one possible implementation, Flow_Sum in Equation 1 above may instead represent the number of data streams included in the data stream to be scheduled, in which case F_avg in Equation 1 represents the average number of data streams per link. In Equation 2, F_j then represents the actual number of data streams on a link, and S^2 represents the variance of the number of data streams per link. It will be appreciated that the variance can thus also be calculated in the count dimension, to measure whether the number of scheduled data streams is distributed uniformly (load balancing).
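Equations 1 and 2 can be sketched together as follows (the traffic values are illustrative, and the same function works in the count dimension by passing flow counts instead of traffic volumes):

```python
def link_traffic_variance(flow_sum, simulated_traffic):
    """flow_sum: total traffic of the flows to be scheduled (Flow_Sum);
    simulated_traffic: link simulated traffic F_j of each of the K target links."""
    k = len(simulated_traffic)
    f_avg = flow_sum / k                                          # Equation 1
    return sum((f - f_avg) ** 2 for f in simulated_traffic) / k   # Equation 2

# Total scheduled traffic 12 over K = 4 links, so F_avg = 3.
print(link_traffic_variance(12, [3, 3, 3, 3]))  # 0.0 -- perfectly balanced
print(link_traffic_variance(12, [6, 2, 2, 2]))  # 3.0 -- more volatile
```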
In one possible implementation, the traffic scheduling distribution may include the variance of the link simulated traffic corresponding to each candidate network device set, that is, the link traffic variance corresponding to each set. When selecting the target candidate network device set from the plurality of candidate network device sets as the first-hop network devices according to the traffic scheduling distribution corresponding to each set, the control device may select, according to the variance of the link simulated traffic corresponding to each set, the candidate network device set with the smallest variance. The set with the smallest variance represents the scheduling scheme that yields the most balanced traffic distribution after scheduling. The selected candidate network device set is the target candidate network device set, and the candidate network devices it includes are the first-hop network devices.
Thus, the optimal number of target communication links can be determined from the different candidate numbers by the above variance method, and with it the optimal first-hop network devices, where the first-hop network devices are the candidate network devices in the target candidate network device set, i.e. the ECMP group, so as to implement an optimal evaluation of the first hop.
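Choosing the candidate network device set (K value) with the smallest link traffic variance might then be sketched as follows (the variance values are hypothetical):

```python
def choose_candidate_set(variance_by_k):
    """variance_by_k: {K: link_traffic_variance}; the smallest variance wins,
    i.e. the candidate set whose simulated traffic distribution is most balanced."""
    return min(variance_by_k, key=variance_by_k.get)

# Hypothetical variances for candidate sets with K = 4..8 target links.
variances = {4: 2.5, 5: 0.8, 6: 1.6, 7: 1.1, 8: 2.0}
print(choose_candidate_set(variances))  # 5
```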
In one possible implementation, after the control device determines the first-hop network device selected after the data stream to be scheduled is sent from the source network device, the identification information of the first-hop network device corresponding to the five-tuple of the data stream to be scheduled and the target communication link may be aggregated according to the transmission of the data stream to be scheduled (each data stream among the plurality of data streams) through the target candidate network device set, so as to obtain and store aggregation information for subsequent processing. For example, the aggregation information may be { first hop network device: five-tuple }. The aggregation information may include a first-hop network device identifier and the five-tuples of the one or more corresponding data flows, yielding the set of data flows (five-tuples) corresponding to each first-hop network device (source LC device). Further, the control device may sequentially determine the other-hop network devices of the data stream to be scheduled in the transmission process based on the target network address and the selected first-hop network device.
S302, based on the target network address and the selected first-hop network equipment, other-hop network equipment of the data stream to be scheduled in the transmission process is sequentially determined.
In the embodiment of the application, the other-hop network devices are the network devices other than the first-hop network device among the hops in the transmission process of the data stream. For example, consider the three-layer fat-tree architecture shown in fig. 1, where the transmission link involves four hops, e.g., source LA device -> source LC device -> core layer LC device -> destination LC device -> destination LA device. The first-hop network device is the source LC device, and the other-hop network devices are the core layer LC device, the destination LC device, and the destination LA device. For another example, consider the two-layer fat-tree architecture shown in fig. 2, where the transmission link involves three hops, e.g., source LA device -> LC device -> destination LA device. The first-hop network device is still the source LC device; since only one convergence-layer network device is involved in the two-layer fat-tree architecture, the source LC device is also the destination LC device, and the other-hop network device is the destination LA device.
In one possible implementation, the control device may simulate a process of selecting a next hop network device by each hop network device, so as to determine other hop network devices in the transmission process of the data stream to be scheduled. The i+1th hop network device of the data flow to be scheduled is determined according to the queried routing information of the i hop network device, wherein i is a positive integer. Specifically, the control device may obtain routing information of the ith hop network device of the data stream to be scheduled, and query, from the routing information of the ith hop network device of the data stream to be scheduled, a next hop device associated with a destination network address of the data stream to be scheduled, to obtain a plurality of next hop network devices of the ith hop network device of the data stream to be scheduled. Furthermore, the control device may select the i+1th hop network device of the data stream to be scheduled from a plurality of next hop network devices of the i hop network device of the data stream to be scheduled, so as to obtain other hop network devices of the data stream to be scheduled in the transmission process.
For the i-th hop, the control device may treat the plurality of next-hop network devices of the i-th-hop network device of the data stream to be scheduled as an ECMP group, and calculate the attribute information of the data stream with a preset hash algorithm to obtain the hash value of the data stream to be scheduled. The (i+1)-th-hop network device of the data stream to be scheduled is then determined according to the correspondence between hash values and the communication links to the next-hop network devices in the ECMP group, together with the hash value of the data stream. The first-hop network device is the source LC device determined above, and the routing information of the first-hop network device is the routing information of the source LC device. It should be noted that the data stream to be scheduled includes a plurality of data streams; the other-hop network devices of each data stream may be determined based on the above aggregation information { first hop network device: five-tuple }, and the control device can determine the other-hop network devices corresponding to each data stream in parallel.
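The hop-by-hop simulation described above can be sketched as follows. The routing table, device names, and MD5 hash stand-in are all hypothetical; a real implementation queries the route query service and reproduces the device's own hash algorithm.

```python
import hashlib

def hash_index(five_tuple, n):
    key = "|".join(str(field) for field in five_tuple).encode()
    return int(hashlib.md5(key).hexdigest(), 16) % n  # stand-in hash

def simulate_path(first_hop, dest_ip, five_tuple, routing_table):
    """routing_table maps (device, dest_ip) -> list of next-hop devices,
    i.e. the ECMP group found in that device's routing information."""
    path, device = [first_hop], first_hop
    while (device, dest_ip) in routing_table:
        ecmp_group = routing_table[(device, dest_ip)]
        device = ecmp_group[hash_index(five_tuple, len(ecmp_group))]
        path.append(device)
    return path  # ends when no further next hop exists (the destination LA device)

routing = {
    ("src-LC", "10.0.1.9"): ["core-LC-1", "core-LC-2"],
    ("core-LC-1", "10.0.1.9"): ["dst-LC"],
    ("core-LC-2", "10.0.1.9"): ["dst-LC"],
    ("dst-LC", "10.0.1.9"): ["dst-LA"],
}
flow = ("10.0.0.1", 12345, "10.0.1.9", 4791, "UDP")
print(simulate_path("src-LC", "10.0.1.9", flow, routing))
```

Whichever core-layer device the hash picks, the sketch yields a deterministic four-device path from the source LC device to the destination LA device for a given five-tuple.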
It can be understood that, since the control device only sends the scheduling route information to the source network device (source LA device), only the target communication links corresponding to the first-hop network devices are optimally evaluated; when determining the other-hop network devices, the control device only performs hash simulation over the ECMP group obtained by querying the routing information of the i-th-hop network device with the target network address of the data flow to be scheduled. The i-th-hop device may in turn be the source LC device, a core layer LC device, the destination LC device, and the destination LA device; when querying the routing information of the destination LA device, if a port of the LA device whose destination port corresponds to the target network address is found, the simulation ends. The destination port corresponding to the target network address is the destination port in the five-tuple of the data stream to be scheduled. In this way, the other-hop network devices of the data stream to be scheduled in the transmission process are obtained.
S303, determining a scheduling link of the data stream to be scheduled according to each hop of network equipment of the data stream to be scheduled in the transmission process.
In the embodiment of the application, the hop network devices of the data stream to be scheduled in the transmission process comprise the first-hop network device after the data stream is sent from the source network device and the other-hop network devices in the transmission process; combining these hop network devices yields the scheduling link of the data stream to be scheduled. For example, the scheduling link of the data stream to be scheduled includes: source LA device, source LC device, core layer LC device, destination LC device, destination LA device. For another example, the scheduling link of the data stream to be scheduled includes: source LA device, LC device, destination LA device.
And S304, if the data flow is matched with the available capacity of the scheduling link, sending scheduling route information corresponding to the data flow to be scheduled to the source network equipment according to the scheduling link.
In the embodiment of the application, the data traffic is the data traffic of the data stream to be scheduled, and the available capacity of the scheduling link can be understood as the remaining available bandwidth of the scheduling link. The control device may acquire the available capacity of the scheduling link as follows: it queries through telemetry the traffic on the outgoing ports from each i-th-hop network device to the (i+1)-th-hop network device on the scheduling link, determines the available capacity of each outgoing port according to a preset total port capacity and the traffic of the outgoing ports between hops, and takes the minimum of these available capacities as the available capacity of the scheduling link. Furthermore, the control device may determine that the data traffic matches the available capacity of the scheduling link in the case where the data traffic of the data stream to be scheduled is less than or equal to the available capacity, i.e. the available capacity is sufficient to transmit the data stream to be scheduled. The control device then determines that the capacity evaluation was successful and that the scheduling link is valid.
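This capacity check can be sketched as taking the minimum headroom over the per-hop outgoing ports and comparing it with the flow's traffic (capacity and traffic figures are illustrative):

```python
def link_available_capacity(port_total, occupied_per_hop):
    """Available capacity of the scheduling link = minimum headroom over the
    outgoing port of every hop; occupied_per_hop holds telemetry traffic values."""
    return min(port_total - used for used in occupied_per_hop)

def capacity_matches(flow_traffic, port_total, occupied_per_hop):
    return flow_traffic <= link_available_capacity(port_total, occupied_per_hop)

occupied = [60, 85, 40]   # per-hop outgoing-port traffic, total capacity 100
print(link_available_capacity(100, occupied))  # 15 -- the tightest hop
print(capacity_matches(12, 100, occupied))     # True  -- evaluation succeeds
print(capacity_matches(20, 100, occupied))     # False -- re-run path computation
```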
In one possible implementation, after the control device obtains the traffic of the outgoing ports between the hops, the data traffic of the data stream to be scheduled may be added in turn to the traffic of each outgoing port, and the control device determines whether each resulting sum is less than or equal to the preset total port capacity. If every sum is less than or equal to the preset total port capacity, the control device determines that the data traffic matches the available capacity of the scheduling link, i.e. the available capacity is sufficient to transmit the data stream to be scheduled. The control device then determines that the capacity evaluation was successful and that the scheduling link is valid.
Therefore, by querying routing information hop by hop combined with switch hash simulation, the control device can reduce its dependence on topology. A specific scheduling path is determined, which provides deterministic path planning for route scheduling, avoids redundant bandwidth occupation and randomness in path selection, and realizes deterministic scheduling. Moreover, the control device can accurately evaluate the bandwidth of each scheduling path without requiring that every idle link be able to accommodate all scheduled data streams, which helps save network bandwidth resources; it also ensures that the capacity of each hop beyond the first is sufficient to transmit the data streams, so the risk of congestion after scheduling is not increased.
In one possible implementation, in the case where the control device determines that the data traffic does not match the available capacity of the scheduling link, the control device needs to acquire the plurality of candidate network device sets connected to the source network device again: on the basis of the previous acquisition, it deletes (or blacklists) the communication link between the source network device and the first-hop network device in the scheduling link, and adds the (K+1)-th communication link in descending order of remaining available bandwidth as a candidate, so as to obtain a new plurality of candidate network device sets and the target communication links corresponding to each set. The control device may then recalculate, according to the data traffic of the data stream to be scheduled and the number of target communication links corresponding to each new candidate network device set, the traffic scheduling distribution when the data stream to be scheduled is transmitted through each candidate network device set, and re-select the target candidate network device set from the plurality of candidate network device sets as the first-hop network devices according to the traffic scheduling distribution corresponding to each set.
After the new first-hop network devices are obtained, the other-hop network devices of the data stream to be scheduled in the transmission process are sequentially determined based on the target network address and the first-hop network devices, and an updated scheduling link of the data stream to be scheduled is determined. The specific implementation of determining the updated scheduling link is the same as that of determining the original scheduling link and is not repeated here. The control device further determines that the capacity evaluation was successful in the case where the data traffic matches the available capacity of the updated scheduling link, and takes the updated scheduling link as a valid updated scheduling link.
Furthermore, the control device may send the scheduling route information corresponding to the data stream to be scheduled to the source network device according to the scheduling link. In one implementation, the control device may send BGP routing information to the source LA device, which may specifically include the network addresses of the first-hop network devices in the scheduling link, that is, the IP addresses of the candidate network devices in the target candidate network device set, and designate the scheduled destination IP address as the target network address of the data flow to be scheduled, so as to implement scheduling by issuing a BGP route.
In another implementation, the control device schedules by Policy-Based Routing (PBR). PBR is a routing mechanism based on specific policies that selects routes according to user-defined or preset policies: when the network device forwards a data stream (data packet), it first filters the data stream (by its five-tuple) according to the configured rules, and if the match succeeds, forwards the data stream according to the corresponding forwarding policy. Such rules may be determined based on information such as the source address, destination address, and protocol type of the data stream to be scheduled. Because PBR configures policy rules for route scheduling, it offers high flexibility and load-balancing capability, allows the distribution of traffic to be controlled more finely, and optimizes network performance. Based on the configured policy rules, PBR can also spread traffic over multiple paths, achieving load balancing and improving the utilization of network resources. PBR can further implement fault detection and automatic switching, which helps improve the reliability of the network.
It should be noted that, compared with the conventional routing policy, the PBR needs to configure policy rules for the data flow to be scheduled, which is complex in configuration and requires more management and maintenance costs. Moreover, since the PBR needs to perform policy matching and processing on each data flow, the performance overhead of the network device is likely to be increased in a high-traffic scenario. Furthermore, since the PBR needs to configure a large number of policy rules, the scalability of the PBR is low, and it is difficult to cope with rapidly changing network demands to some extent, especially in a large network environment. Thus, the control device can select a specific adopted scheduling mode based on the service scene.
Further, after the control device sends the scheduling route information to the source network device (taking BGP route scheduling as an example), upon receiving a data stream the source network device may obtain the target network address of the data stream; if the obtained target network address is the same as the target network address of the data stream to be scheduled, a preset hash algorithm may be used to select, from the candidate network devices in the target candidate network device set, the first-hop network device after the data stream is sent from the source network device, and the data stream is forwarded to that first-hop network device. The first-hop network device further determines the next-hop network device and forwards the data stream, and so on until it reaches the last-hop device, namely the destination LA device. The transmission path of the data flow is the same as the scheduling path determined by the control device, and because the scheduling link is an idle link, the method of the embodiment of the application can improve scheduling accuracy, optimize the communication efficiency of the HPC network, and complete the response to link-traffic-overrun and ECN-count-abnormality alarms, thereby further improving the stability and efficiency of the network.
Referring to fig. 4 together, fig. 4 is a schematic diagram of a method for processing scheduling routing information according to an embodiment of the present application. As shown in fig. 4, after receiving an alarm event, the data processing platform performs unified format processing on the alarm event and sends the processed alarm event to a Kafka message queue. The alarm events may be the alarm events of each cluster, for example cluster 1 (e.g., the M23 cluster), cluster 2 (e.g., the M24 cluster), and cluster 3 (e.g., the M31 cluster); each cluster may correspond to a message queue, and the control device may respond to the alarm events of the clusters in parallel. After the control device acquires the alarm events, alarm pre-analysis, scheduling traffic selection, scheduling device selection, hop-by-hop flow path computation, and scheduling execution may be performed for each alarm event.
The alarm pre-analysis may include the process of verifying the alarm event, the scheduling traffic selection may include the process of determining the data flow to be scheduled, and the scheduling device selection may include the process of determining the source network device (source LA device). The hop-by-hop flow path computation may include the process of determining each hop network device of the data flow to be scheduled in the transmission process, which the control device may implement by invoking the routing query service and the switch hash simulation service, as well as the process of determining the scheduling link of the data flow to be scheduled. The capacity evaluation includes the process of determining whether the data traffic of the data flow to be scheduled matches the available capacity of the scheduling link; in this process it may be determined whether the capacity of the outgoing port of the link formed between every two adjacent network devices in the scheduling link matches, and when the available capacity of the outgoing port of the penultimate-hop network device linked to the last-hop network device matches the data traffic of the data flow to be scheduled, the control device may send the scheduling route information to the source network device. In the event that the data traffic does not match the available capacity of the scheduling link (at any hop), the control device may re-perform the hop-by-hop path computation until the data traffic matches the available capacity of the scheduling link.
In one possible implementation, when sending the scheduling route information to the source network device, the control device may also push the alarm event and the scheduling details to the terminal device of the developer (route calculation pushing). Referring to fig. 5, fig. 5 is a schematic diagram of a user interface for pushing an alarm event according to an embodiment of the present application. As shown in FIG. 5, the alarm event is illustrated with the production environment-Shanghai-Songjiang-M17 cluster and an alarm type of remote direct memory access (Remote Direct Memory Access, RDMA) ECN count abnormality (overrun) on an HPC network device, where M17 is the cluster identifier. The user interface includes link event details and scheduling details; the link event details include an alarm ID, the congestion device management IP, the name of the congested device, the name of the congested port, the alarm event, and the alarm reason.
The congestion device management IP is the IP of a network device in the congested link, corresponding to that device's name among the congested device names. The processing result indicates that the control device has identified and processed the alarm event, with the modules, devices, and ports involved explicitly pointed out. ISDISPATCH true indicates that the alarm event has been dispatched for processing, and HANDLETYPE Scene_ECN indicates that the processing type is a scene with ECN count abnormality. The processing result also includes the alarm ID and the alarm timestamp. Thus, based on the link event details, the developer can learn that the ECN value detected on port "Eth200GE52" of device "SH-SJ-070201-G06-TCS94R-GPULC-031" exceeds a threshold, that there may be a network congestion problem, and that the control device has responded to the alarm event.
In the scheduling details, the scheduling operation device may be understood as the source network device, i.e. the source LA device. The task ID carries a jump link that can jump to the scheduling details. The scheduling network segment is the information of the issued network segment. The scheduling details can also include the traffic size on the alarm link, namely the data traffic on the congested link, and the aggregate traffic in the egress direction of the scheduling device, where the scheduling device is the source network device, i.e. the source LA device. The affected five-tuples correspond to the five-tuples of the target data flows in the embodiment of the application, i.e. the flow information.
Specifically, the target data flow may be scheduled to 6 links, and 2 target data flows are respectively scheduled to (2) th link and (3) th link, which may be referred to as flow determination paths. The 6 links correspond to target communication links corresponding to the target candidate network device set in the embodiment of the present application, that is, the control device issues a scheduling link indicated by the scheduling route information in the source LA device, so as to plan a deterministic flow path. The 6 links may be referred to as selecting next hop information, and include links of source LA device (port) - > source LC device (port), and utilization of the links, that is, path calculation results. It can be understood that the control device can construct a real-time scheduling topology according to the routing information corresponding to the scheduling network segment, determine a scheduling link for a specific data stream based on a simulated network device hash algorithm, and further perform deterministic path evaluation, so as to obtain an optimal path for scheduling, and can be suitable for 4 different congestion links, such as an uplink of LA device- > LC device, and the like.
Referring to fig. 6 together, fig. 6 is a schematic diagram of a user interface for scheduling details after a jump according to an embodiment of the present application. As shown in fig. 6, the user interface of the scheduling details includes congestion link information, a scheduling result, and scheduling details, and may be reached via the jump link in fig. 5. The congestion link information includes the alarm IP, alarm link, current link traffic, link bandwidth utilization, alarm ID, alarm event, scheduling time, and pre-scheduled expiration time. The alarm device is the congestion device, and the alarm link corresponds to the congested link in the embodiment of the present application; it includes a link of the form network device A (port A) -> network device B (port B), where the bandwidth utilization of the link reflects the traffic occupying the link. The pre-scheduled expiration time is the time at which the scheduling may expire. The scheduling result includes the operating device (the source network device) and the injected scheduling route information, i.e., the 6 specified next hops, which are the IPs of the first-hop network devices (source LC devices).
The scheduling details shown in fig. 6 include the data traffic size on the congested link, the aggregate data traffic size at the egress ports of the network devices in the congested link, the links formed by the source network device and the 6 first-hop network devices, the IP addresses of the first-hop network devices, the utilization of each link, and the five-tuple details of the data flows, with a jump link that opens a detailed display of the five-tuples. The scheduling details further include the details of the 2 target data flows affected by scheduling, namely partial information of the target data flows' five-tuples, such as the source IP address, source port, destination port and destination IP address. The details of the 2 target data flows also include the size of each target data flow and the path details determined after scheduling; specifically, a jump can be performed via the jump link "click to display path". The control device persists this record and updates the scheduling details.
Referring to fig. 7 together, fig. 7 is a schematic diagram of a user interface giving an overview of the scheduling situation of clusters in a certain area. As shown in fig. 7, the user interface may include congestion scheduling records of HPC network links, and specifically may include controls operable by a developer, such as controls for refreshing, creating, manually issuing, checking tasks, filtering, and selecting areas. A specific scheduling record may include an alarm ID, alarm link, alarm details, the cluster it belongs to, the operating device, the operating network segment (next hop), the scheduling status, the scheduling time, and the revocation time. The alarm link corresponds to a congested link in the embodiment of the present application; the alarm details may be an alarm type, such as the ECN count anomaly alarm shown in fig. 7; the operating device is the source network device; the operating network segment (next hop) is the target communication link corresponding to the target candidate network device set in the embodiment of the present application; and the scheduling status indicates whether the scheduling route information was issued successfully. The 4th scheduling record in fig. 7, i.e., the scheduling record with alarm ID "170193240901900884", corresponds to the alarm event in fig. 5 and to the scheduling details in fig. 6.
In the technical solutions provided in some embodiments of the present application, after selecting, according to the data traffic and target network address of a data flow to be scheduled, the first-hop network device the flow reaches after leaving the source network device, the control device may sequentially determine the other hop network devices of the flow in the transmission process based on the target network address and the selected first-hop network device, where the (i+1)-th hop network device of the flow is determined according to the queried routing information of the i-th hop network device. The control device may then determine a scheduling link from each hop network device of the flow in the transmission process. Because each next-hop network device is determined from the routing information of the previous-hop network device, strong dependence on topology is avoided, so the method can be applied to network architectures with different topologies, improving the adaptability and flexibility of scheduling. If the data traffic matches the available capacity of the scheduling link, the scheduling route information corresponding to the flow is sent to the source network device according to the scheduling link; with a specific scheduling link determined, accurate capacity assessment can be performed on that link, avoiding waste of network bandwidth resources and improving scheduling accuracy.
The foregoing describes a specific execution procedure of the method for processing scheduling route information and a processing system adapted to implement it. With this method, the present application can effectively respond to traffic-overrun alarm events and ECN count anomaly alarm events in an HPC network. Over 20000 alarm events were processed within 3 months, and scheduling route information was issued to nearly 500 source network devices; by issuing scheduling route information, the control device can resolve traffic congestion and unbalanced distribution in the network, providing a reliable guarantee for network stability and the steady operation of services. The scheduling effect of the method in applicable service scenarios is described below:
(1) AllReduce scene
An AllReduce scenario is one in which the number of transmitted data flows is small but the data traffic of each flow is large, i.e., "few flows, large single flows". Referring to fig. 8 together, fig. 8 is a schematic diagram of a scheduling effect of AllReduce according to an embodiment of the present application. As shown in fig. 8, the abscissa of each graph is time, the ordinate of the upper graph is data traffic size, and the ordinate of the lower graph is ECN count value, taking as an example the traffic and ECN count value of a certain egress port in a congested link from the 25th to the 28th in 2023. In this scenario, excessive port traffic results in an excessively high ECN count, as shown in fig. 8, at which point the traffic may be at a peak. The control device may issue the scheduling route information of the data flow to be scheduled to the source network device according to the scheduling path corresponding to that flow, e.g., issue it at 14:51, so that after the source network device receives it, part of the data flows on the originally congested link can be scheduled to other idle links in real time. With this scheduling, the traffic on the congested link and the ECN count value both drop significantly, as shown in fig. 8, thereby completing the response to the alarm event; this also corresponds to the case where the alarm event is of the ECN count anomaly type.
Therefore, by determining the scheduling path of the data flow to be scheduled, the control device can schedule part of the data flows (traffic) to other optimal idle links, thereby relieving congestion, reducing the load on the links, reducing the occurrence of congestion, and improving the quality and efficiency of overall communication.
(2) AlltoAll scene
An AlltoAll scenario is one in which the number of transmitted data flows is large but the data traffic of each flow is small, i.e., "many flows, small single flows". Referring to fig. 9 together, fig. 9 is a schematic diagram of a scheduling effect of AlltoAll according to an embodiment of the present application. As shown in fig. 9, the abscissa of each graph is time and the ordinate is link utilization (%); the four graphs respectively show the link utilization of four links at different moments. It can be seen that, before scheduling, the ECN count of the first link is too high while the link utilization of the other three links (target link-1, target link-2 and target link-3) is low, e.g., almost 0, an extremely unbalanced traffic distribution. After the control device receives the alarm event of ECN count anomaly, the congestion problem can be relieved through three schedulings, achieving load balance of the traffic.
Specifically, in the event of a traffic burst, the initial ECN value of the first link exceeds ten thousand, e.g., 11961.50, indicating that the network is extremely congested, which may severely impact communication quality. After the 1st scheduling, the control device may schedule a portion of the traffic (data flows) in the congested link to target link-1, thereby reducing the link utilization of the congested link (e.g., the first link in fig. 9) and relieving the high ECN count, bringing it to about two thousand. Because the congestion is too severe, the 1st scheduling alone cannot fully relieve it or achieve load balance across the whole network. Therefore, the control device may perform the 2nd scheduling, moving the next-largest traffic in the congested link (the largest traffic having been scheduled to target link-1 in the 1st scheduling) to target link-2, further reducing the ECN value to about 1000.
The control device may periodically check whether the alarm event has recovered, i.e., periodically perform an alarm event recovery check. If, after the check, the ECN value is determined to still be high (e.g., above the preset ECN threshold of 500), the control device may automatically perform the first complementary scheduling, i.e., the 2nd scheduling. After these two schedulings, if the ECN value of the congested link is still above the preset ECN threshold (e.g., 500), the control device can schedule again, i.e., perform the 3rd scheduling, finally reducing the ECN value to below 500 and resolving the congestion on the congested link. In addition, in each scheduling, the control device may dynamically select the first-hop network device according to the traffic distribution in the network (i.e., the traffic scheduling distribution when the data flow to be scheduled is transmitted through the candidate network device sets), that is, select the target communication link and then determine the scheduling link, so as to achieve an optimal distribution of traffic after scheduling. In the AlltoAll scenario shown in fig. 9, the control device selects the optimal first-hop network device, i.e., the optimal scheduling link, in each scheduling, thereby achieving load balance of global traffic.
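The periodic recovery check with complementary scheduling described above can be sketched as a simple control loop. This is a minimal illustration under assumed interfaces: `read_ecn` and `schedule_next_flow` are hypothetical callbacks standing in for the control device's monitoring and scheduling actions; the threshold 500 and the three rounds mirror the example in the text.

```python
ECN_THRESHOLD = 500   # preset ECN threshold mentioned in the text
MAX_ROUNDS = 3        # the fig. 9 example resolves congestion in 3 schedulings

def complementary_schedule(read_ecn, schedule_next_flow, max_rounds=MAX_ROUNDS):
    """Repeat scheduling until the ECN count of the congested link recovers.

    read_ecn           -- returns the current ECN count of the congested link
    schedule_next_flow -- moves the next-largest flow to an idle target link
    """
    rounds = 0
    while rounds < max_rounds and read_ecn() > ECN_THRESHOLD:
        schedule_next_flow()
        rounds += 1
    return rounds
```

With ECN counts evolving as in fig. 9 (roughly 11961 -> 2000 -> 1000 -> below 500), the loop performs three schedulings before the count recovers.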
The foregoing details the methods of the embodiments of the present application. To better implement the above aspects, the apparatus of the embodiments of the present application is correspondingly provided below.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a routing information processing apparatus according to an embodiment of the present application, and the routing information processing apparatus 100 may be used to execute corresponding steps in the routing information processing method shown in fig. 3. The processing apparatus 100 of the routing information includes the following units:
A selecting unit 1001, configured to select a first hop network device after the data flow to be scheduled is sent from a source network device according to a data flow of the data flow to be scheduled and a target network address;
a determining unit 1002, configured to sequentially determine the other hop network devices of the data stream to be scheduled in the transmission process based on the target network address and the selected first hop network device, where the (i+1)-th hop network device of the data flow to be scheduled is determined according to the queried routing information of the i-th hop network device, and i is a positive integer; and determine a scheduling link of the data stream to be scheduled according to each hop network device of the data stream to be scheduled in the transmission process;
And a sending unit 1003, configured to send, if the data traffic matches with the available capacity of the scheduling link, scheduling route information corresponding to the data flow to be scheduled to the source network device according to the scheduling link.
In a possible implementation manner, the selecting unit 1001 is configured to select, according to a data traffic of a data flow to be scheduled and a target network address, a first-hop network device after the data flow to be scheduled is sent out from a source network device, specifically configured to:
Acquiring a plurality of candidate network equipment sets connected with the source network equipment according to a target network address of a data stream to be scheduled and the routing information of the source network equipment;
Determining a target communication link between the source network device and each set of candidate network devices;
Calculating the flow scheduling distribution condition of the data flow to be scheduled when the data flow to be scheduled is transmitted through each candidate network equipment set according to the data flow of the data flow to be scheduled and the number of target communication links between the source network equipment and each candidate network equipment set;
and selecting a target candidate network equipment set from the plurality of candidate network equipment sets as the first-hop network equipment according to the traffic scheduling distribution situation corresponding to each candidate network equipment set.
In a possible implementation manner, the selecting unit 1001 is configured to calculate, according to the data traffic of the data flow to be scheduled and the number of target communication links between the source network device and each candidate network device set, a traffic scheduling distribution situation when the data flow to be scheduled is transmitted through each candidate network device set, and specifically is configured to:
Calculating the link average allocation flow corresponding to each candidate network equipment set according to the data flow of the data flow to be scheduled and the number of the target communication links corresponding to each candidate network equipment set;
Simulating the transmission process of the data stream to be scheduled through each candidate network equipment set so as to acquire link simulation flow corresponding to each candidate network equipment set;
And calculating the flow scheduling distribution situation corresponding to each candidate network equipment set according to the link average distribution flow, the link simulation flow and the number of target communication links corresponding to each candidate network equipment set.
In a possible implementation manner, the selecting unit 1001 is configured to calculate the traffic scheduling distribution situation corresponding to the each candidate network device set, and specifically configured to:
Calculating link flow variance corresponding to each candidate network device set according to the link average allocation flow, the link analog flow and the number of target communication links corresponding to each candidate network device set;
and taking the link flow variance corresponding to each candidate network equipment set as the flow scheduling distribution condition corresponding to each candidate network equipment set.
In a possible implementation manner, the traffic scheduling distribution condition includes variances of link simulation traffic corresponding to the candidate network device sets; the selecting unit 1001 is configured to select, according to a traffic scheduling distribution situation corresponding to each candidate network device set, a target candidate network device set from the plurality of candidate network device sets as the first hop network device, where the first hop network device is specifically configured to:
And selecting a candidate network device set with the smallest variance from the plurality of candidate network device sets as the first-hop network device according to the variances of the link simulation traffic corresponding to each candidate network device set.
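The variance computation and minimum-variance selection in the implementations above can be sketched as follows. This is a minimal sketch: all names are hypothetical, and the simulated per-link traffic of each candidate set is taken as a given input (its computation via the hash simulation is described separately).

```python
def link_traffic_variance(flow_traffic, simulated):
    """Variance of the simulated per-link traffic around the link average
    allocated traffic (the flow's traffic split evenly over the target links)."""
    avg = flow_traffic / len(simulated)   # link average allocated traffic
    return sum((t - avg) ** 2 for t in simulated) / len(simulated)

def select_first_hop_set(candidate_sets, flow_traffic):
    """Pick the candidate network device set whose simulated link traffic
    is most evenly distributed, i.e. has the smallest variance."""
    return min(
        candidate_sets,
        key=lambda name: link_traffic_variance(flow_traffic, candidate_sets[name]),
    )
```

For example, with candidate sets `{"set-A": [30, 30, 30], "set-B": [80, 5, 5]}` and a 90-unit flow, set-A has variance 0 and is selected as the first-hop set.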
In a possible implementation manner, the selecting unit 1001 is configured to obtain, according to a destination network address of a data flow to be scheduled and routing information of the source network device, a plurality of candidate network device sets connected to the source network device, and specifically is configured to:
acquiring a target data stream transmitted by the source network device, wherein the target data stream is a data stream with the same target network address as the data stream to be scheduled;
determining the number of the candidate network equipment sets and the number of elements contained in each candidate network equipment set according to the number of the stream numbers of the target data streams;
And acquiring the plurality of candidate network equipment sets according to the number of the candidate network equipment sets and the number of elements contained in each candidate network equipment set based on the target network address of the data flow to be scheduled and the routing information of the source network equipment.
In a possible implementation manner, the selecting unit 1001 is configured to determine a target communication link between the source network device and each candidate network device set, specifically configured to:
Obtaining the residual available bandwidth of each communication link between the source network device and each candidate network device set;
And selecting a set number of communication links as the target communication links in the order of the remaining available bandwidth from high to low.
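The target-link selection described above (ordering by remaining available bandwidth and keeping a set number) can be sketched in a few lines; the link names and the dictionary representation are illustrative assumptions.

```python
def select_target_links(remaining_bw, k):
    """Sort communication links by remaining available bandwidth, highest
    first, and keep the set number k as the target communication links."""
    ordered = sorted(remaining_bw.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _bw in ordered[:k]]
```

For example, with remaining bandwidths `{"lc-1": 40, "lc-2": 90, "lc-3": 70}` and k=2, the links lc-2 and lc-3 are chosen.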
In a possible implementation manner, the selecting unit 1001 is configured to simulate a process of transmitting the data flow to be scheduled through the candidate network device sets, so as to obtain link analog traffic corresponding to the candidate network device sets, and specifically is configured to:
Calculating attribute information of the data stream to be scheduled based on a preset hash algorithm to obtain a hash value of the data stream to be scheduled; the attribute information of the data flow to be scheduled comprises a target network address of the data flow to be scheduled;
Determining the data flow to be scheduled allocated to the target communication links corresponding to the candidate network equipment sets according to the corresponding relation between the hash values and the target communication links corresponding to the candidate network equipment sets and the hash values of the data flow to be scheduled;
And determining the link simulation flow corresponding to each candidate network equipment set according to the data flow to be scheduled allocated to the target communication link and the occupied flow of the target communication link.
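The hash-based simulation above can be sketched as follows. CRC32 stands in for the network device's real (vendor-specific) hash algorithm, and the sorted-order hash-to-link mapping is an assumption; the point is only the structure: hash the flow's attribute tuple, map the hash value to a target link, and add the flow's traffic to that link's occupied traffic to obtain the simulated traffic.

```python
import zlib

def simulate_link_traffic(flows, occupied):
    """flows:    {five_tuple: traffic} for the data flows to be scheduled
       occupied: {link_name: already-occupied traffic} for the target links
       returns the simulated per-link traffic after hash-based assignment."""
    simulated = dict(occupied)
    links = sorted(simulated)                 # fixed hash-value -> link mapping
    for attrs, traffic in flows.items():
        h = zlib.crc32("|".join(map(str, attrs)).encode())
        simulated[links[h % len(links)]] += traffic
    return simulated
```

Total traffic is conserved: the simulated traffic sums to the occupied traffic plus the assigned flow traffic, and no link's simulated traffic falls below its occupied traffic.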
In a possible implementation manner, the determining unit 1002 is configured to sequentially determine, based on the target network address and the selected first hop network device, other hop network devices of the data stream to be scheduled in a transmission process, and specifically is configured to:
Acquiring the routing information of the ith hop network equipment of the data flow to be scheduled;
Inquiring next hop equipment associated with a destination network address of the data flow to be scheduled from the routing information of the ith hop network equipment of the data flow to be scheduled, and obtaining a plurality of next hop network equipment of the ith hop network equipment of the data flow to be scheduled;
and selecting the (i+1) th hop network equipment of the data stream to be scheduled from a plurality of next hop network equipment of the (i) th hop network equipment of the data stream to be scheduled, and obtaining other hop network equipment of the data stream to be scheduled in the transmission process.
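The hop-by-hop determination above can be sketched as a loop over queried routing tables. The table layout and the tie-breaking choice are illustrative assumptions; in the embodiment the (i+1)-th hop would be chosen from the next hops by the simulated hash rather than an arbitrary deterministic pick.

```python
def trace_hops(first_hop, dest, routing_tables, pick=min):
    """Starting from the selected first-hop device, repeatedly query the
    current device's routing information for the next hops associated with
    `dest`, pick one, and continue until the destination is reached."""
    path, device = [first_hop], first_hop
    while device != dest:
        next_hops = routing_tables[device][dest]   # queried routing information
        device = pick(next_hops)                   # e.g. a hash/ECMP choice
        path.append(device)
    return path
```

With routing tables `{"lc-1": {"dst": ["sa-1", "sa-2"]}, "sa-1": {"dst": ["dst"]}}`, tracing from lc-1 yields the path lc-1 -> sa-1 -> dst, i.e., each hop of the data flow in the transmission process.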
In one possible implementation manner, the processing apparatus 100 for routing information further includes:
an obtaining unit 1004, configured to obtain data traffic of each data flow with congestion occurring in a transmission process;
The selecting unit 1001 is further configured to select at least one data flow as the data flow to be scheduled, in descending order of the data traffic of the respective data flows.
In a possible implementation manner, the obtaining unit 1004 is further configured to obtain a data traffic of each data flow that is congested in a transmission process;
The selecting unit 1001 is further configured to select, according to a sum of data flows of the respective data flows and a preset flow selection proportion, a data flow matching the preset flow selection proportion from the data flows with congestion, as the data flow to be scheduled.
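The two flow-selection variants above (top-N largest flows, and flows matching a preset proportion of the total congested traffic) can be sketched together. This is an illustrative reading: "matching the preset flow selection proportion" is interpreted here as picking the largest flows until the selected traffic reaches that proportion of the sum.

```python
def select_flows(congested_flows, proportion=None, top_n=None):
    """congested_flows: {flow_id: data traffic} on the congested link.

    top_n      -- variant (a): the N flows with the largest traffic;
    proportion -- variant (b): largest flows until the selected traffic
                  reaches `proportion` of the total congested traffic."""
    ordered = sorted(congested_flows.items(), key=lambda kv: -kv[1])
    if top_n is not None:
        return [flow for flow, _t in ordered[:top_n]]
    target = proportion * sum(congested_flows.values())
    picked, acc = [], 0.0
    for flow, traffic in ordered:
        if acc >= target:
            break
        picked.append(flow)
        acc += traffic
    return picked
```

For flows `{"f1": 50, "f2": 30, "f3": 20}`, a proportion of 0.5 selects only f1 (50 of 100 units), while 0.8 selects f1 and f2.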
In a possible implementation manner, the obtaining unit 1004 is further configured to obtain a source network address of the data flow to be scheduled, and determine a data flow sender corresponding to the source network address;
A querying unit 1005, configured to query, in a network device database, a network device accessed by the data flow sender as the source network device.
In a possible implementation manner, the obtaining unit 1004 is further configured to obtain an available capacity of the scheduling link;
The determining unit 1002 is further configured to determine that, when it is determined that the data traffic of the data flow to be scheduled is less than or equal to the available capacity, the data traffic matches the available capacity of the scheduling link.
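The capacity-matching check above reduces to a simple comparison; the helper names are illustrative, with the available capacity taken as the link bandwidth minus the traffic already occupying the scheduling link.

```python
def available_capacity(link_bandwidth, occupied_traffic):
    """Remaining capacity of the scheduling link."""
    return link_bandwidth - occupied_traffic

def traffic_matches(flow_traffic, capacity):
    """The data traffic matches the scheduling link iff it is less than
    or equal to the link's available capacity."""
    return flow_traffic <= capacity
```

For a 100-unit link carrying 80 units, a 15-unit flow matches the scheduling link while a 30-unit flow does not.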
According to an embodiment of the application, the steps involved in the method shown in fig. 3 may be performed by the respective units in the processing device of routing information shown in fig. 10. For example, step S301 shown in fig. 3 is performed by the selection unit 1001 shown in fig. 10, step S302 is performed by the determination unit 1002 shown in fig. 10, and step S303 is performed by the transmission unit 1003 shown in fig. 10.
According to an embodiment of the present application, the units in the routing information processing apparatus 100 shown in fig. 10 may be separately or jointly combined into one or several other units, or some unit(s) thereof may be further split into multiple units with smaller functions, which can achieve the same operation without affecting the technical effects of the embodiment of the present application. The above units are divided based on logical functions; in practical applications, the function of one unit may be implemented by multiple units, or the functions of multiple units may be implemented by one unit. In other embodiments of the present application, the routing information processing apparatus 100 may also include other units; in practical applications, these functions may also be implemented with the assistance of other units or by the cooperation of multiple units. According to another embodiment of the present application, the routing information processing apparatus 100 shown in fig. 10 may be constructed by running a computer program (including program code) capable of executing the steps of the methods shown in fig. 3 on a general-purpose computing device, such as a computer, that includes processing elements such as a central processing unit (CPU) and storage elements such as a random access memory (RAM) and a read-only memory (ROM), thereby implementing the processing method of routing information of the embodiment of the present application. The computer program may be recorded on, for example, a computer-readable storage medium, and loaded into and run on the processing device of the routing information processing system shown in fig. 2 through the computer-readable storage medium.
Based on the above description of the embodiment of the processing method of the routing information, the embodiment of the present application also discloses a processing device of the routing information, referring to fig. 11, where the processing device 110 of the routing information may at least include a processor 1101, an input device 1102, an output device 1103 and a memory 1104. Wherein the processor 1101, input device 1102, output device 1103 and memory 1104 within the processing device 110 of the routing information can be connected by bus or other means.
The memory 1104 is a storage device in the processing device 110 of routing information, and is used for storing programs and data. It will be appreciated that the memory 1104 here may include both the built-in storage medium of the processing device 110 of routing information and the extended storage medium it supports. The memory 1104 provides storage space that stores the operating system of the processing device 110 of routing information, and a computer program (including program code) is stored in this storage space. Note that the computer storage medium here may be a high-speed RAM memory, or at least one computer storage medium remote from the aforementioned processor. The processor 1101 may be a central processing unit (Central Processing Unit, CPU), which is the computing core and control center of the processing device of routing information and runs the computer program stored in the aforementioned memory 1104.
In one embodiment, the computer program stored in the memory 1104 may be loaded and executed by the processor 1101 to implement the respective steps of the method in the processing method embodiment described above with respect to the routing information; specifically, the processor 1101 loads and executes a computer program stored in the memory 1104, for:
selecting first-hop network equipment after the data flow to be scheduled is sent out from source network equipment according to the data flow of the data flow to be scheduled and a target network address;
sequentially determining the other hop network devices of the data stream to be scheduled in the transmission process based on the target network address and the selected first hop network device; the (i+1)-th hop network device of the data flow to be scheduled is determined according to the queried routing information of the i-th hop network device, where i is a positive integer;
Determining a scheduling link of the data stream to be scheduled according to each hop of network equipment of the data stream to be scheduled in the transmission process;
and if the data flow is matched with the available capacity of the scheduling link, sending scheduling route information corresponding to the data flow to be scheduled to the source network equipment according to the scheduling link.
In a possible implementation manner, the processor 1101 loads and executes a computer program stored in the memory 1104 to select a first hop network device after the data flow to be scheduled is sent from the source network device according to the data traffic of the data flow to be scheduled and the target network address, specifically for:
Acquiring a plurality of candidate network equipment sets connected with the source network equipment according to a target network address of a data stream to be scheduled and the routing information of the source network equipment;
Determining a target communication link between the source network device and each set of candidate network devices;
Calculating the flow scheduling distribution condition of the data flow to be scheduled when the data flow to be scheduled is transmitted through each candidate network equipment set according to the data flow of the data flow to be scheduled and the number of target communication links between the source network equipment and each candidate network equipment set;
and selecting a target candidate network equipment set from the plurality of candidate network equipment sets as the first-hop network equipment according to the traffic scheduling distribution situation corresponding to each candidate network equipment set.
In a possible implementation manner, the processor 1101 loads and executes a computer program stored in the memory 1104 to calculate, according to the data traffic of the data flow to be scheduled and the number of target communication links between the source network device and each candidate network device set, a traffic scheduling distribution situation when the data flow to be scheduled is transmitted through each candidate network device set, where the computer program is specifically configured to:
Calculating the link average allocation flow corresponding to each candidate network equipment set according to the data flow of the data flow to be scheduled and the number of the target communication links corresponding to each candidate network equipment set;
Simulating the transmission process of the data stream to be scheduled through each candidate network equipment set so as to acquire link simulation flow corresponding to each candidate network equipment set;
And calculating the flow scheduling distribution situation corresponding to each candidate network equipment set according to the link average distribution flow, the link simulation flow and the number of target communication links corresponding to each candidate network equipment set.
In a possible implementation manner, the processor 1101 loads and executes a computer program stored in the memory 1104 to calculate a traffic scheduling distribution situation corresponding to each candidate network device set, where the traffic scheduling distribution situation is specifically used to:
Calculating link flow variance corresponding to each candidate network device set according to the link average allocation flow, the link analog flow and the number of target communication links corresponding to each candidate network device set;
and taking the link flow variance corresponding to each candidate network equipment set as the flow scheduling distribution condition corresponding to each candidate network equipment set.
In one possible implementation manner, the traffic scheduling distribution condition includes variances of link simulation traffic corresponding to the candidate network device sets; the processor 1101 loads and executes a computer program stored in the memory 1104, where the computer program is configured to select a target candidate network device set from the plurality of candidate network device sets as the first hop network device according to a traffic scheduling distribution situation corresponding to each candidate network device set, and is specifically configured to:
And selecting a candidate network device set with the smallest variance from the plurality of candidate network device sets as the first-hop network device according to the variances of the link simulation traffic corresponding to each candidate network device set.
In a possible implementation manner, the processor 1101 loads and executes a computer program stored in the memory 1104 to obtain a plurality of candidate network device sets connected to the source network device according to a destination network address of a data stream to be scheduled and routing information of the source network device, specifically for:
acquiring a target data stream transmitted by the source network device, wherein the target data stream is a data stream with the same target network address as the data stream to be scheduled;
determining the number of the candidate network device sets and the number of elements contained in each candidate network device set according to the number of flows of the target data streams;
And acquiring the plurality of candidate network equipment sets according to the number of the candidate network equipment sets and the number of elements contained in each candidate network equipment set based on the target network address of the data flow to be scheduled and the routing information of the source network equipment.
In one possible implementation, the processor 1101 loads and executes a computer program stored in the memory 1104 for determining a target communication link between the source network device and each set of candidate network devices, in particular for:
Obtaining the remaining available bandwidth of each communication link between the source network device and each candidate network device set;
And selecting a set number of communication links as the target communication links in the order of the remaining available bandwidth from high to low.
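The link-selection step above is a straightforward top-k pick by remaining headroom; a minimal sketch, with illustrative names not taken from the patent:

```python
def select_target_links(remaining_bandwidth, k):
    """Pick the k communication links with the most remaining
    available bandwidth, highest first.

    remaining_bandwidth: mapping of link id -> remaining
    available bandwidth on that link.
    """
    # Rank link ids by remaining bandwidth, from high to low.
    ranked = sorted(remaining_bandwidth,
                    key=remaining_bandwidth.get, reverse=True)
    return ranked[:k]
```

The set number k is a configuration parameter; the patent does not specify how it is chosen.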
In a possible implementation manner, the processor 1101 loads and executes a computer program stored in the memory 1104 to simulate the process of transmitting the data stream to be scheduled through each candidate network device set, so as to obtain the link simulation traffic corresponding to each candidate network device set, specifically for:
Calculating attribute information of the data stream to be scheduled based on a preset hash algorithm to obtain a hash value of the data stream to be scheduled; the attribute information of the data flow to be scheduled comprises a target network address of the data flow to be scheduled;
Determining the data flow to be scheduled allocated to the target communication links corresponding to the candidate network equipment sets according to the corresponding relation between the hash values and the target communication links corresponding to the candidate network equipment sets and the hash values of the data flow to be scheduled;
And determining the link simulation flow corresponding to each candidate network equipment set according to the data flow to be scheduled allocated to the target communication link and the occupied flow of the target communication link.
In a possible implementation manner, the processor 1101 loads and executes a computer program stored in the memory 1104 to sequentially determine the other hop network devices of the data stream to be scheduled in the transmission process based on the target network address and the selected first-hop network device, specifically for:
Acquiring the routing information of the i-th hop network device of the data flow to be scheduled;
Querying, from the routing information of the i-th hop network device, the next-hop devices associated with the target network address of the data flow to be scheduled, to obtain a plurality of next-hop network devices of the i-th hop network device;
and selecting the (i+1)-th hop network device of the data flow to be scheduled from the plurality of next-hop network devices of the i-th hop network device, thereby obtaining the other hop network devices of the data flow to be scheduled in the transmission process.
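The hop-by-hop walk above can be sketched as follows. How the (i+1)-th hop is picked from the candidate next hops is not fixed by this paragraph (an earlier step uses hashing and load balance), so picking the smallest id here is purely illustrative, as are all names:

```python
def resolve_path(first_hop, routing_tables, dst, max_hops=32):
    """Walk the route hop by hop: query the current device's
    routing table for next-hop candidates toward dst, pick one,
    and stop when a device has no entry for dst (i.e. the path
    is complete) or the hop limit is reached.

    routing_tables: mapping device -> {dst: [next-hop devices]}.
    """
    path = [first_hop]
    for _ in range(max_hops):
        candidates = routing_tables.get(path[-1], {}).get(dst, [])
        if not candidates:
            break
        path.append(min(candidates))  # illustrative (i+1)-th hop choice
    return path
```

The resulting device sequence is exactly what the next step turns into the scheduling link.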
In one possible implementation, the processor 1101 loads and executes a computer program stored in the memory 1104, which is further configured to:
Acquiring the data traffic of each data flow in which congestion occurs during transmission;
and selecting at least one data flow as the data flow to be scheduled in descending order of the data traffic of the data flows.
In one possible implementation, the processor 1101 loads and executes a computer program stored in the memory 1104, which is further configured to:
Acquiring the data traffic of each data flow in which congestion occurs during transmission;
And selecting, from the congested data flows, data flows whose combined traffic matches a preset traffic selection proportion of the sum of the data traffic of the congested data flows, as the data flows to be scheduled.
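One plausible reading of the proportion-based selection above, combined with the descending-traffic order of the preceding paragraph, is a greedy largest-first pick up to a traffic budget. The greedy order is an assumption, not stated by the patent, and the names are illustrative:

```python
def pick_flows_by_ratio(congested, ratio):
    """Select congested flows, largest first, until their combined
    traffic reaches `ratio` of the total congested traffic.

    congested: mapping of flow id -> data traffic of that flow.
    ratio: preset traffic selection proportion in [0, 1].
    """
    budget = ratio * sum(congested.values())
    selected, taken = [], 0.0
    for fid in sorted(congested, key=congested.get, reverse=True):
        if taken >= budget:
            break
        selected.append(fid)
        taken += congested[fid]
    return selected
```

With a ratio of 0.5 and flows of 50, 30, and 20 units, only the 50-unit flow is selected, since it alone meets the budget.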
In one possible implementation, the processor 1101 loads and executes a computer program stored in the memory 1104, which is further configured to:
Acquiring a source network address of the data stream to be scheduled, and determining a data stream sender corresponding to the source network address;
And querying the network equipment accessed by the data flow sender in a network equipment database to serve as the source network equipment.
In one possible implementation, the processor 1101 loads and executes a computer program stored in the memory 1104, which is further configured to:
Acquiring the available capacity of the scheduling link;
and determining that the data traffic matches the available capacity of the scheduling link when the data traffic of the data flow to be scheduled is less than or equal to the available capacity.
It should be appreciated that in embodiments of the present application, the processor 1101 may be a central processing unit (CPU), and the processor 1101 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In an embodiment of the present application, a computer readable storage medium is provided, where the computer readable storage medium stores a computer program, where the computer program includes program instructions that, when executed by a processor, perform the steps performed in all the embodiments described above.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium, which when executed by a processor of a computer device, perform the method of all embodiments described above.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program, which may be stored on a computer-readable storage medium and which, when executed, may perform the steps of the method embodiments described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The above disclosure describes only preferred embodiments of the present invention, which of course cannot be taken to limit the scope of the invention; those skilled in the art will appreciate that equivalent changes made within the scope of the claims still fall within the scope of the present invention.
It should be further noted that, when the above embodiments of the present application are applied to specific products or technologies, if user data needs to be obtained, permission or consent of the user needs to be obtained, and the collection, use and processing of relevant data needs to comply with relevant laws and regulations and standards of relevant countries and regions.

Claims (17)

1. A method for processing scheduling routing information, comprising:
selecting first-hop network equipment after the data flow to be scheduled is sent out from source network equipment according to the data flow of the data flow to be scheduled and a target network address;
sequentially determining other hop network devices of the data flow to be scheduled in the transmission process based on the target network address and the selected first-hop network device; the (i+1)-th hop network device of the data flow to be scheduled is determined according to the queried routing information of the i-th hop network device, wherein i is a positive integer;
Determining a scheduling link of the data stream to be scheduled according to each hop of network equipment of the data stream to be scheduled in the transmission process;
and if the data flow is matched with the available capacity of the scheduling link, sending scheduling route information corresponding to the data flow to be scheduled to the source network equipment according to the scheduling link.
2. The method of claim 1, wherein the selecting the first hop network device after the data flow to be scheduled is sent out from the source network device according to the data traffic of the data flow to be scheduled and the destination network address comprises:
Acquiring a plurality of candidate network equipment sets connected with the source network equipment according to a target network address of a data stream to be scheduled and the routing information of the source network equipment;
Determining a target communication link between the source network device and each set of candidate network devices;
Calculating the traffic scheduling distribution of the data flow to be scheduled when the data flow to be scheduled is transmitted through each candidate network device set, according to the data traffic of the data flow to be scheduled and the number of target communication links between the source network device and each candidate network device set;
and selecting a target candidate network equipment set from the plurality of candidate network equipment sets as the first-hop network equipment according to the traffic scheduling distribution situation corresponding to each candidate network equipment set.
3. The method according to claim 2, wherein calculating the traffic scheduling distribution of the data flow to be scheduled when the data flow to be scheduled is transmitted through each candidate network device set according to the data traffic of the data flow to be scheduled and the number of target communication links between the source network device and each candidate network device set comprises:
Calculating the link average allocation traffic corresponding to each candidate network device set according to the data traffic of the data flow to be scheduled and the number of target communication links corresponding to each candidate network device set;
Simulating the process of transmitting the data flow to be scheduled through each candidate network device set, so as to obtain the link simulation traffic corresponding to each candidate network device set;
And calculating the traffic scheduling distribution situation corresponding to each candidate network device set according to the link average allocation traffic, the link simulation traffic and the number of target communication links corresponding to each candidate network device set.
4. The method of claim 3, wherein calculating the traffic scheduling distribution corresponding to each candidate network device set according to the link average allocation traffic, the link analog traffic, and the number of target communication links corresponding to each candidate network device set comprises:
Calculating the link traffic variance corresponding to each candidate network device set according to the link average allocation traffic, the link simulation traffic and the number of target communication links corresponding to each candidate network device set;
and taking the link traffic variance corresponding to each candidate network device set as the traffic scheduling distribution situation corresponding to each candidate network device set.
5. The method of claim 2, wherein the traffic scheduling profile comprises variances of link analog traffic corresponding to the respective candidate network device sets; the selecting a target candidate network device set from the plurality of candidate network device sets as the first hop network device according to the traffic scheduling distribution situation corresponding to each candidate network device set includes:
And selecting a candidate network device set with the smallest variance from the plurality of candidate network device sets as the first-hop network device according to the variances of the link simulation traffic corresponding to each candidate network device set.
6. The method according to claim 2, wherein the obtaining a plurality of candidate network device sets connected to the source network device according to the destination network address of the data flow to be scheduled and the routing information of the source network device includes:
acquiring a target data stream transmitted by the source network device, wherein the target data stream is a data stream with the same target network address as the data stream to be scheduled;
determining the number of the candidate network device sets and the number of elements contained in each candidate network device set according to the number of flows of the target data streams;
And acquiring the plurality of candidate network equipment sets according to the number of the candidate network equipment sets and the number of elements contained in each candidate network equipment set based on the target network address of the data flow to be scheduled and the routing information of the source network equipment.
7. The method of claim 2, wherein the determining a target communication link between the source network device and each set of candidate network devices comprises:
Obtaining the remaining available bandwidth of each communication link between the source network device and each candidate network device set;
And selecting a set number of communication links as the target communication links in the order of the remaining available bandwidth from high to low.
8. The method of claim 3, wherein simulating the transmission of the data stream to be scheduled through the respective candidate network device sets to obtain the link simulation traffic corresponding to the respective candidate network device sets comprises:
Calculating attribute information of the data stream to be scheduled based on a preset hash algorithm to obtain a hash value of the data stream to be scheduled; the attribute information of the data flow to be scheduled comprises a target network address of the data flow to be scheduled;
Determining the data flow to be scheduled allocated to the target communication links corresponding to the candidate network equipment sets according to the corresponding relation between the hash values and the target communication links corresponding to the candidate network equipment sets and the hash values of the data flow to be scheduled;
And determining the link simulation flow corresponding to each candidate network equipment set according to the data flow to be scheduled allocated to the target communication link and the occupied flow of the target communication link.
9. The method of claim 1, wherein the sequentially determining other hop network devices of the data stream to be scheduled during transmission based on the target network address and the selected first hop network device comprises:
Acquiring the routing information of the i-th hop network device of the data flow to be scheduled;
Querying, from the routing information of the i-th hop network device, the next-hop devices associated with the target network address of the data flow to be scheduled, to obtain a plurality of next-hop network devices of the i-th hop network device;
and selecting the (i+1)-th hop network device of the data flow to be scheduled from the plurality of next-hop network devices of the i-th hop network device, thereby obtaining the other hop network devices of the data flow to be scheduled in the transmission process.
10. The method according to claim 1, wherein the method further comprises:
Acquiring the data traffic of each data flow in which congestion occurs during transmission;
and selecting at least one data flow as the data flow to be scheduled in descending order of the data traffic of the data flows.
11. The method according to claim 1, wherein the method further comprises:
Acquiring the data traffic of each data flow in which congestion occurs during transmission;
And selecting, from the congested data flows, data flows whose combined traffic matches a preset traffic selection proportion of the sum of the data traffic of the congested data flows, as the data flows to be scheduled.
12. The method according to any one of claims 1 to 11, wherein before selecting the first hop network device after the data flow to be scheduled is sent out from the source network device, according to the data traffic of the data flow to be scheduled and the destination network address, the method further comprises:
Acquiring a source network address of the data stream to be scheduled, and determining a data stream sender corresponding to the source network address;
And querying the network equipment accessed by the data flow sender in a network equipment database to serve as the source network equipment.
13. The method according to any one of claims 1 to 11, wherein before the sending, according to the scheduling link, scheduling route information corresponding to the data flow to be scheduled to the source network device, the method further comprises:
Acquiring the available capacity of the scheduling link;
and determining that the data traffic matches the available capacity of the scheduling link when the data traffic of the data flow to be scheduled is less than or equal to the available capacity.
14. A processing apparatus for routing information, comprising:
a selecting unit, configured to select a first hop network device after the data flow to be scheduled is sent from a source network device according to the data flow of the data flow to be scheduled and a target network address;
A determining unit, configured to sequentially determine other hop network devices of the data flow to be scheduled in the transmission process based on the target network address and the selected first-hop network device, wherein the (i+1)-th hop network device of the data flow to be scheduled is determined according to the queried routing information of the i-th hop network device, and i is a positive integer; and determine a scheduling link of the data flow to be scheduled according to each hop network device of the data flow to be scheduled in the transmission process;
and the sending unit is used for sending the scheduling route information corresponding to the data flow to be scheduled to the source network equipment according to the scheduling link if the data flow is matched with the available capacity of the scheduling link.
15. An electronic device, comprising:
One or more processors;
a memory for storing one or more computer programs that, when executed by the one or more processors, cause the electronic device to implement the method of processing routing information of any of claims 1-13.
16. A computer readable medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of processing routing information according to any one of claims 1 to 13.
17. A computer program product, characterized in that the computer program product comprises a computer program stored in a computer-readable storage medium, from which computer-readable storage medium a processor of an electronic device reads and executes the computer program, causing the electronic device to execute the method of processing routing information according to any one of claims 1 to 13.
CN202410335042.4A 2024-03-22 2024-03-22 Method, device, equipment, storage medium and product for processing scheduling route information Active CN117938750B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410335042.4A CN117938750B (en) 2024-03-22 2024-03-22 Method, device, equipment, storage medium and product for processing scheduling route information


Publications (2)

Publication Number Publication Date
CN117938750A CN117938750A (en) 2024-04-26
CN117938750B true CN117938750B (en) 2024-06-07

Family

ID=90752465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410335042.4A Active CN117938750B (en) 2024-03-22 2024-03-22 Method, device, equipment, storage medium and product for processing scheduling route information

Country Status (1)

Country Link
CN (1) CN117938750B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019020032A1 (en) * 2017-07-25 2019-01-31 新华三技术有限公司 Data stream transmission
CN109873762A (en) * 2017-12-05 2019-06-11 中国电信股份有限公司 Path dispatching method, device and computer readable storage medium
CN116939035A (en) * 2022-03-29 2023-10-24 腾讯科技(深圳)有限公司 Data processing method, device, electronic equipment and storage medium
CN117201365A (en) * 2023-09-05 2023-12-08 杭州阿里巴巴飞天信息技术有限公司 Flow determination method, device, electronic equipment and storage medium



Similar Documents

Publication Publication Date Title
US11863458B1 (en) Reflected packets
US11601359B2 (en) Resilient network communication using selective multipath packet flow spraying
US10218642B2 (en) Switch arbitration based on distinct-flow counts
CN107634912B (en) Load balancing method, device and equipment
US11888744B2 (en) Spin-leaf network congestion control method, node, system, and storage medium
WO2021244247A1 (en) Data message forwarding method, network node, system, and storage medium
CN111181873B (en) Data transmission method, data transmission device, storage medium and electronic equipment
CN116389365B (en) Switch data processing method and system
CN112565102B (en) Load balancing method, device, equipment and medium
US11863322B2 (en) Communication method and apparatus
CN113328953B (en) Method, device and storage medium for network congestion adjustment
Cheng et al. An in-switch rule caching and replacement algorithm in software defined networks
CN112087382B (en) Service routing method and device
CN112910778A (en) Network security routing method and system
CN117938750B (en) Method, device, equipment, storage medium and product for processing scheduling route information
CN116723154A (en) Route distribution method and system based on load balancing
AlShammari et al. BL‐Hybrid: A graph‐theoretic approach to improving software‐defined networking‐based data center network performance
Lin et al. Proactive multipath routing with a predictive mechanism in software‐defined networks
CN112822107A (en) Wide area network optimization method based on artificial intelligence
CN111585894A (en) Network routing method and device based on weight calculation
Leite et al. Planning of adhoc and iot networks under emergency mode of operation
Tavasoli et al. An SDN-based algorithm for caching, routing, and load balancing in ICN
CN111817906B (en) Data processing method, device, network equipment and storage medium
CN107113244B (en) Data forwarding method, device and system
Duan et al. GLIB: A Global and Local Integrated Load Balancing Scheme for Datacenter Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant