CN114237846A - Global flow scheduling system, method and storage medium based on virtual multilink technology - Google Patents
- Publication number
- CN114237846A (application number CN202111512848.9A)
- Authority
- CN
- China
- Prior art keywords
- edge node
- pops
- traffic
- flow
- scheduling instruction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F9/4843 — Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/505 — Allocation of resources (e.g. of the CPU) to service a request, the resource being a machine (CPUs, servers, terminals), considering the load
- G06F9/5072 — Partitioning or combining of resources; grid computing
Abstract
The embodiments of the present application disclose a global traffic scheduling system, method, and storage medium based on a virtual multilink technology, which are used to achieve global traffic load balancing for virtual multilink transmission; the entire scheduling process is fast and accurate, and operating costs can be reduced. The system of the embodiments of the present application comprises: a plurality of traffic collectors, a server, a first edge node, and a second edge node; the plurality of traffic collectors are configured to collect traffic bandwidth information of a plurality of corresponding points of presence (POPs) and send the traffic bandwidth information of the plurality of POPs to the server; the server is configured to receive the traffic bandwidth information of the plurality of POPs sent by the plurality of traffic collectors, perform analysis and decision-making according to the traffic bandwidth information of the plurality of POPs, and issue a traffic scheduling instruction to the first edge node; and the first edge node is configured to send data packets to the second edge node according to the traffic scheduling instruction.
Description
Technical Field
The present application relates to the field of computers, and in particular to a global traffic scheduling system, method, and storage medium based on a virtual multilink technology.
Background
In existing global traffic scheduling techniques, when the traffic bandwidth of a point-of-presence (POP) node reaches an upper threshold (runs high), subsequent request links are scheduled to other redundant POP nodes while the connections of ongoing data transmissions are maintained; each connection is released only after its data transmission completes, so as to ensure service continuity. This approach can save traffic costs to a large extent, but it carries the risk of exceeding the threshold. Moreover, global traffic scheduling is not sufficiently accurate, and the POP node whose traffic runs high incurs extra traffic bandwidth cost.
Disclosure of Invention
The embodiments of the present application provide a global traffic scheduling system, method, and storage medium based on a virtual multilink technology, which are used to achieve global traffic load balancing for virtual multilink transmission; the entire scheduling process is fast and accurate, and operating costs can be reduced.
A first aspect of an embodiment of the present application provides a global traffic scheduling system based on a virtual multilink technology, which may include:
a plurality of traffic collectors, a server, a first edge node, and a second edge node;
the plurality of traffic collectors are configured to collect traffic bandwidth information of a plurality of corresponding points of presence (POPs) and send the traffic bandwidth information of the plurality of POPs to the server, the plurality of traffic collectors corresponding one-to-one to the plurality of POPs;
the server is configured to receive the traffic bandwidth information of the plurality of POPs sent by the plurality of traffic collectors, perform analysis and decision-making according to the traffic bandwidth information of the plurality of POPs, and issue a traffic scheduling instruction to the first edge node;
and the first edge node is configured to receive the traffic scheduling instruction sent by the server and send data packets to the second edge node according to the traffic scheduling instruction.
Optionally, the server is specifically configured to receive the traffic bandwidth information of the plurality of POPs sent by the plurality of traffic collectors, perform analysis and decision-making according to the traffic bandwidth information of the plurality of POPs and round-trip time (RTT) data between the respective POPs, and issue a traffic scheduling instruction to the first edge node.
Optionally, the server is specifically configured to receive the traffic bandwidth information of the plurality of POPs sent by the plurality of traffic collectors, perform analysis and decision-making according to the traffic bandwidth information of the plurality of POPs, the RTT data between the respective POPs, and the geographic locations of the plurality of POPs, and issue a traffic scheduling instruction to the first edge node.
Optionally, the first edge node is specifically configured to receive the traffic scheduling instruction sent by the server, split data packets into sub-packets through a first virtual link manager, mark each sub-packet with a label index, and send the sub-packets to the second edge node according to the traffic scheduling instruction.
Optionally, the second edge node is specifically configured to receive the sub-packets sent by the first edge node and splice them into complete data packets according to the label index through a second virtual link manager.
Optionally, the server is specifically configured to perform analysis and decision-making according to the traffic bandwidth information of the plurality of POPs and, if it determines that the traffic bandwidth value of a first POP is greater than or equal to a first threshold, issue a first target traffic scheduling instruction to the first edge node;
the first edge node is specifically configured to receive the first target traffic scheduling instruction sent by the server, divert a first data packet to other POPs according to the instruction, and send the first data packet to the second edge node through the other POPs, so that the traffic bandwidth value of the first POP falls below the first threshold.
Optionally, the server is specifically configured to perform analysis and decision-making according to the traffic bandwidth information of the plurality of POPs and, if it determines that the traffic bandwidth value of the first POP is less than or equal to a second threshold, issue a second target traffic scheduling instruction to the first edge node;
the first edge node is specifically configured to receive the second target traffic scheduling instruction sent by the server, divert part of the data packets bound for other POPs, as second data packets, to the first POP according to the instruction, and send the second data packets to the second edge node through the first POP, so that the traffic bandwidth value of the first POP rises above the second threshold.
Optionally, the server is configured to receive the traffic bandwidth information of the plurality of POPs sent by the plurality of traffic collectors, perform analysis and decision-making according to the traffic bandwidth information, determine that the first POP has failed, and issue a third target traffic scheduling instruction to the first edge node;
and the first edge node is configured to receive the third target traffic scheduling instruction sent by the server and send data packets to the second edge node through other POPs according to the instruction.
A second aspect of the present application provides a global traffic scheduling method based on a virtual multilink technology, which may include:
receiving traffic bandwidth information of a plurality of points of presence (POPs) sent by a plurality of traffic collectors, where the plurality of traffic collectors correspond one-to-one to the plurality of POPs;
and performing analysis and decision-making according to the traffic bandwidth information of the plurality of POPs and issuing a traffic scheduling instruction to a first edge node, where the traffic scheduling instruction instructs the first edge node to send data packets to a second edge node.
A third aspect of the application provides a computer-readable storage medium comprising instructions which, when executed on a processor, cause the processor to perform the method according to the second aspect of the application.
A further aspect of the present application discloses a computer program product which, when run on a computer, causes the computer to perform the method according to the second aspect of the present application.
A further aspect of the present application discloses an application publishing platform for publishing a computer program product, where the computer program product, when run on a computer, causes the computer to perform the method according to the second aspect of the present application.
As can be seen from the above technical solutions, the embodiments of the present application have the following advantages:
the system of the embodiment of the application comprises: the system comprises a plurality of flow collectors, a server, a first edge node and a second edge node; the flow collectors are used for collecting flow bandwidth information of a plurality of corresponding point-of-presence POPs and sending the flow bandwidth information of the plurality of point-of-presence POPs to the server, and the flow collectors are in one-to-one correspondence with the plurality of point-of-presence POPs; the server is used for receiving the flow bandwidth information of the plurality of point-of-presence POPs sent by the plurality of flow collectors, performing analysis decision according to the flow bandwidth information of the plurality of point-of-presence POPs, and issuing a flow scheduling instruction to the first edge node; the first edge node is configured to receive a traffic scheduling instruction sent by the server, and send a data packet to the second edge node according to the traffic scheduling instruction; and the second edge node is used for receiving the data packet sent by the first edge node. The server analyzes and decides according to the flow bandwidth information of the POPs, issues a flow scheduling instruction, and sends a data packet according to the flow scheduling instruction, so that the method can be used for realizing the global flow load balance of virtual multilink transmission, the whole scheduling process is rapid and accurate, and the operation cost can be saved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be derived from these drawings.
Fig. 1 is a schematic diagram of a POP node bandwidth curve in an embodiment of the present application;
fig. 2 is a schematic diagram of a global traffic scheduling system based on a virtual multilink technology in an embodiment of the present application;
fig. 3 is another schematic diagram of a global traffic scheduling system based on virtual multilink technology in an embodiment of the present application;
fig. 4 is a schematic diagram of an embodiment of a global traffic scheduling method based on a virtual multilink technology in an embodiment of the present application;
fig. 5A is another schematic diagram of a global traffic scheduling method based on a virtual multilink technology in an embodiment of the present application;
fig. 5B is a schematic diagram of the traffic of a certain POP node in an embodiment of the present application;
FIG. 6 is a schematic diagram of an embodiment of a server in an embodiment of the present application;
FIG. 7 is a schematic diagram of an embodiment of an edge node in an embodiment of the present application;
FIG. 8 is a schematic diagram of another embodiment of a server in the embodiment of the present application;
fig. 9 is a schematic diagram of another embodiment of an edge node in the embodiment of the present application.
Detailed Description
The embodiments of the present application provide a global traffic scheduling system, method, and storage medium based on a virtual multilink technology, which are used to achieve global traffic load balancing for virtual multilink transmission; the entire scheduling process is fast and accurate, and operating costs can be reduced.
To enable a person skilled in the art to better understand the present application, the technical solutions in the embodiments of the present application are described below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application; all other embodiments obtained based on the embodiments of the present application shall fall within the protection scope of the present application.
In distributed network services, the computing, network, and storage resources at different nodes differ, and the device state and network state of each node may be normal or faulty. During service operation, resource usage must be planned in advance for the different nodes; such planning can comprehensively consider factors such as the geographic location of users and the resource conditions of the nodes. In addition, the devices and network state in the distributed network must be monitored in real time, as must the resource usage of each node; when resources are insufficient or a fault occurs, other nodes must be scheduled in or substituted in time. Node resource usage generally refers to node bandwidth usage. These point-of-presence (POP) node resources usually carry an upper limit on bandwidth usage; if the bandwidth resources of a POP node can be fully utilized, i.e., the traffic bandwidth of the POP node is kept as close to the upper limit as possible, the procurement costs of an enterprise can be greatly reduced. It is therefore important for an enterprise to establish an accurate, automated global traffic scheduling system.
In existing global traffic scheduling techniques, when the traffic bandwidth of a point-of-presence (POP) node reaches an upper threshold (runs high), subsequent request links are scheduled to other redundant POP nodes while the connections of ongoing data transmissions are maintained; each connection is released only after its data transmission completes, so as to ensure service continuity. This approach can save traffic costs to a large extent, but it carries the risk of exceeding the threshold.
Fig. 1 is a schematic diagram of a POP node bandwidth curve in an embodiment of the present application. Referring to fig. 1, assume that the upper (run-high) bandwidth threshold of a node is Top_A and the lower (run-low) bandwidth threshold is Top_B. During operation, suppose that at time T1 the monitoring module finds that the traffic of the POP node has run high and exceeded the threshold Top_A; the scheduling center then issues a scheduling instruction to schedule new user requests to other POP nodes. During the period from T1 to T2, because service continuity must be maintained, connections can only be released gradually as their data transmissions complete, which creates a certain run-high risk; the shaded portion in fig. 1, for example, represents the cost overrun.
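To make the shaded cost overrun of fig. 1 concrete, a rough sketch (not part of the patent) can integrate the sampled bandwidth curve above the run-high threshold Top_A between T1 and T2; the sample values and the one-second sampling interval are hypothetical.

```python
# Sketch: estimating the "shaded" overrun of fig. 1, i.e. the area of the
# bandwidth curve above the run-high threshold Top_A, from discrete samples.

def overrun_area(samples, top_a, dt=1.0):
    """samples: bandwidth readings taken every dt seconds; returns overrun (Mbps*s)."""
    return sum((bw - top_a) * dt for bw in samples if bw > top_a)

# Hypothetical 1-second samples around a run-high event between T1 and T2.
samples = [90, 95, 100, 104, 103, 101, 98]  # Mbps
TOP_A = 100  # Mbps run-high threshold
print(overrun_area(samples, TOP_A))  # → 8.0
```

The overrun area is proportional to the extra bandwidth cost incurred while connections drain, which is what the scheme of the present application aims to minimize.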
The main problems of the conventional traffic scheduling scheme are as follows: global traffic scheduling is not sufficiently accurate, and the POP node whose traffic runs high incurs extra traffic bandwidth cost; when a node fails, switching is not efficient enough, which is detrimental to stable and smooth service.
To address these problems, the present application provides a global traffic scheduling system based on an Overlay virtual multilink technology. The general idea of this virtualization approach is to carry application traffic over the network without large-scale modification of the underlying network, and to keep it separable from other network services. An Overlay network is a virtual network built on top of an existing network, whose logical nodes and logical links constitute the Overlay network.
As shown in fig. 2, a schematic diagram of a global traffic scheduling system based on virtual multi-link technology in the embodiment of the present application may include:
a plurality of traffic collectors 201, a server 202, a first edge node 203, and a second edge node 204;
a plurality of traffic collectors 201, configured to collect traffic bandwidth information of a plurality of corresponding point-of-presence POPs, and send the traffic bandwidth information of the plurality of point-of-presence POPs to the server 202, where the plurality of traffic collectors 201 correspond to the plurality of point-of-presence POPs one to one;
the server 202 is configured to receive traffic bandwidth information of the multiple point-of-presence POPs sent by the multiple traffic collectors 201, perform analysis and decision according to the traffic bandwidth information of the multiple point-of-presence POPs, and issue a traffic scheduling instruction to the first edge node 203;
the first edge node 203 is configured to receive a traffic scheduling instruction sent by the server 202, and send a data packet to the second edge node 204 according to the traffic scheduling instruction.
Optionally, the second edge node is configured to receive a data packet sent by the first edge node.
In the embodiments of the present application, the server performs analysis and decision-making according to the traffic bandwidth information of the plurality of POPs and issues a traffic scheduling instruction, and data packets are sent according to that instruction; this achieves global traffic load balancing for virtual multilink transmission, the entire scheduling process is fast and accurate, and operating costs can be reduced.
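As a concrete illustration (not from the patent) of what a traffic collector might compute, the sketch below derives a bandwidth figure from interface byte counters and packages it into a report for the server; the `/sys` counter path, the `eth0` interface name, and the JSON report format are all assumptions.

```python
import json
import time

def read_tx_bytes(path="/sys/class/net/eth0/statistics/tx_bytes"):
    # On Linux, per-interface byte counters are exposed under /sys.
    # The interface name (eth0) is an assumption for illustration.
    with open(path) as f:
        return int(f.read())

def bandwidth_mbps(prev_bytes, cur_bytes, interval_s):
    # Convert the byte-counter delta over one sampling interval to Mbit/s.
    return (cur_bytes - prev_bytes) * 8 / interval_s / 1e6

def make_report(pop_id, mbps):
    # Hypothetical JSON report sent to the server's global traffic monitoring module.
    return json.dumps({"pop": pop_id, "mbps": round(mbps, 3), "ts": int(time.time())})

# Example: 1,250,000 bytes sent in one second is 10 Mbit/s.
print(bandwidth_mbps(0, 1_250_000, 1.0))  # → 10.0
```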
Optionally, the server 202 is specifically configured to receive the traffic bandwidth information of the plurality of POPs sent by the plurality of traffic collectors 201, perform analysis and decision-making according to the traffic bandwidth information of the plurality of POPs and round-trip time (RTT) data between the respective POPs, and issue a traffic scheduling instruction to the first edge node 203.
Optionally, the server 202 is specifically configured to receive the traffic bandwidth information of the plurality of POPs sent by the plurality of traffic collectors 201, perform analysis and decision-making according to the traffic bandwidth information of the plurality of POPs, the RTT data between the respective POPs, and the geographic locations of the plurality of POPs, and issue a traffic scheduling instruction to the first edge node 203.
Optionally, the first edge node 203 is specifically configured to receive the traffic scheduling instruction sent by the server 202, split data packets into sub-packets through the first virtual link manager 2031, mark each sub-packet with a label index, and send the sub-packets to the second edge node 204 according to the traffic scheduling instruction.
Optionally, the second edge node 204 is specifically configured to receive the sub-packets sent by the first edge node 203 and splice them into complete data packets according to the label index through the second virtual link manager 2032.
Fig. 3 is another schematic diagram of a global traffic scheduling system based on virtual multilink technology in the embodiment of the present application.
Optionally, the server 202 is specifically configured to perform analysis and decision-making according to the traffic bandwidth information of the plurality of POPs and, if it determines that the traffic bandwidth value of a first POP is greater than or equal to a first threshold, issue a first target traffic scheduling instruction to the first edge node 203;
the first edge node 203 is specifically configured to receive the first target traffic scheduling instruction sent by the server 202, divert a first data packet to other POPs according to the instruction, and send the first data packet to the second edge node 204 through the other POPs, so that the traffic bandwidth value of the first POP falls below the first threshold.
Optionally, the server 202 is specifically configured to perform analysis and decision-making according to the traffic bandwidth information of the plurality of POPs and, if it determines that the traffic bandwidth value of the first POP is less than or equal to a second threshold, issue a second target traffic scheduling instruction to the first edge node 203;
the first edge node 203 is specifically configured to receive the second target traffic scheduling instruction sent by the server 202, divert part of the data packets bound for other POPs, as second data packets, to the first POP according to the instruction, and send the second data packets to the second edge node 204 through the first POP, so that the traffic bandwidth value of the first POP rises above the second threshold.
Optionally, the server 202 is configured to receive the traffic bandwidth information of the plurality of POPs sent by the plurality of traffic collectors 201, perform analysis and decision-making according to the traffic bandwidth information, determine that the first POP has failed, and issue a third target traffic scheduling instruction to the first edge node 203;
the first edge node 203 is configured to receive the third target traffic scheduling instruction sent by the server 202 and send data packets to the second edge node 204 through other POPs according to the instruction.
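The run-high, run-low, and failure branches described in the embodiments above can be condensed into one server-side decision function, sketched below; the instruction names (`offload`, `backfill`, `failover`, `keep`) are illustrative assumptions, since the patent does not define an instruction format.

```python
def decide(pop_id, bw, top_a, top_b, failed):
    """Return a hypothetical scheduling instruction for one POP's bandwidth sample.

    top_a: run-high threshold; top_b: run-low threshold; failed: health flag.
    """
    if failed:
        # Third case: POP failure -> move all of its traffic to other POPs.
        return ("failover", pop_id)
    if bw >= top_a:
        # First case: run-high -> divert part of this POP's packets to other POPs.
        return ("offload", pop_id)
    if bw <= top_b:
        # Second case: run-low -> divert packets from other POPs into this POP.
        return ("backfill", pop_id)
    return ("keep", pop_id)

print(decide("pop1", 105, top_a=100, top_b=20, failed=False))  # → ('offload', 'pop1')
```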
As shown in fig. 4, a schematic diagram of an embodiment of a global traffic scheduling method based on a virtual multi-link technology in the embodiment of the present application may include:
401. The plurality of traffic collectors collect traffic bandwidth information of a plurality of points of presence (POPs).
The plurality of traffic collectors correspond one-to-one to the plurality of POPs.
402. The plurality of traffic collectors send the traffic bandwidth information of the plurality of POPs to the server.
The traffic collectors collect the traffic bandwidth information of the POPs and send it to the server, and the server receives the traffic bandwidth information of the plurality of POPs sent by the plurality of traffic collectors.
403. The server performs analysis and decision-making according to the traffic bandwidth information of the plurality of POPs and issues a traffic scheduling instruction to the first edge node, where the traffic scheduling instruction instructs the first edge node to send data packets to the second edge node.
Optionally, the server performing analysis and decision-making according to the traffic bandwidth information of the plurality of POPs and issuing a traffic scheduling instruction to the first edge node may include: the server performs analysis and decision-making according to the traffic bandwidth information of the plurality of POPs and round-trip time (RTT) data between the respective POPs, and issues a traffic scheduling instruction to the first edge node.
Optionally, the server performing analysis and decision-making according to the traffic bandwidth information of the plurality of POPs and the RTT data between the respective POPs, and issuing a traffic scheduling instruction to the first edge node, may include: the server performs analysis and decision-making according to the traffic bandwidth information of the plurality of POPs, the RTT data between the respective POPs, and the geographic locations of the plurality of POPs, and issues a traffic scheduling instruction to the first edge node.
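One way to combine the bandwidth, RTT, and geographic-location signals described above is a weighted score per candidate POP; the patent does not specify the decision algorithm, so the score form, weights, and normalization scales below are illustrative assumptions.

```python
def score_pop(headroom_ratio, rtt_ms, distance_km,
              w_headroom=0.6, w_rtt=0.3, w_dist=0.1):
    """Higher is better. headroom_ratio: free share of the POP's bandwidth cap (0..1)."""
    # Normalize RTT and distance into 0..1 penalties against rough reference scales.
    rtt_penalty = min(rtt_ms / 200.0, 1.0)        # assume ~200 ms as worst acceptable RTT
    dist_penalty = min(distance_km / 5000.0, 1.0)
    return w_headroom * headroom_ratio - w_rtt * rtt_penalty - w_dist * dist_penalty

def pick_pop(candidates):
    """candidates: {pop_id: (headroom_ratio, rtt_ms, distance_km)}; pick best score."""
    return max(candidates, key=lambda p: score_pop(*candidates[p]))

pops = {"pop1": (0.10, 20, 100), "pop2": (0.60, 50, 800), "pop3": (0.40, 150, 3000)}
print(pick_pop(pops))  # → pop2: most headroom at moderate RTT and distance
```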
It should be understood that the global traffic system is deployed as follows: a traffic collector is installed in every POP transit node and is responsible for collecting the traffic data of that POP node in real time. The server is provided with a scheduling decision module and a global traffic monitoring module; the global traffic monitoring module receives the data from the traffic collector in each POP node and provides it to the scheduling decision module for comprehensive analysis and decision-making (comprehensive in that the scheduling decision module also takes into account data such as the RTT between POP nodes). After reaching a decision, the scheduling decision module issues a traffic scheduling instruction to the first virtual link manager of the first edge node.
Virtual multilink data transmission proceeds as follows: the edge nodes at the data egresses of the sending end (e.g., a personal computer (PC), the PC-send end) and the receiving end (the PC-rec end) are each provided with a virtual link manager responsible for managing the data-transmission virtual links. A data packet is split by the first virtual link manager of the first edge node and each sub-packet is marked with a label index; the sub-packets are transmitted over the multiple constructed virtual links and arrive at the second virtual link manager of the second edge node at the PC-rec end, which splices them into a complete data packet according to the label index and delivers it to the PC-rec end.
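A minimal sketch (not from the patent) of the split, label, and splice flow described above; the chunk size, round-robin assignment, and in-memory lists standing in for the virtual links are illustrative assumptions.

```python
def split_with_labels(data: bytes, nlinks: int, chunk: int = 4):
    """Sender side: split a packet into labeled sub-packets, round-robin across links."""
    pieces = [(i, data[off:off + chunk])            # (label index, payload)
              for i, off in enumerate(range(0, len(data), chunk))]
    links = [[] for _ in range(nlinks)]
    for i, piece in enumerate(pieces):
        links[i % nlinks].append(piece)             # distribute over virtual links
    return links

def splice(links):
    """Receiver side: merge sub-packets from all links, reorder by label index."""
    pieces = [p for link in links for p in link]
    return b"".join(payload for _, payload in sorted(pieces))

links = split_with_labels(b"global traffic scheduling", nlinks=3)
print(splice(links) == b"global traffic scheduling")  # → True
```

Because each sub-packet carries its label index, the receiver can reassemble the original packet regardless of which link delivered which piece or in what order.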
Fig. 5A is another schematic diagram of a global traffic scheduling method based on a virtual multilink technology in an embodiment of the present application. The first edge node is exemplified by edge node A, and the second edge node by edge node B; the first virtual link manager is exemplified by virtual link manager A, and the second virtual link manager by virtual link manager B.
404. The first edge node sends a data packet to the second edge node according to the traffic scheduling instruction.
Optionally, the second edge node receives the data packet sent by the first edge node.
Optionally, the server performs an analysis decision according to the traffic bandwidth information of the multiple point-of-presence POPs and, if it determines that the traffic bandwidth value of the first POP is greater than or equal to a first threshold, issues a first target traffic scheduling instruction to the first edge node; the first edge node receives the first target traffic scheduling instruction sent by the server, diverts a first data packet to other POPs according to the instruction, and sends the first data packet to the second edge node through those other POPs, so that the traffic bandwidth value of the first POP falls below the first threshold.
Optionally, the server performs an analysis decision according to the traffic bandwidth information of the multiple point-of-presence POPs and, if it determines that the traffic bandwidth value of the first POP is smaller than or equal to a second threshold, issues a second target traffic scheduling instruction to the first edge node; the first edge node receives the second target traffic scheduling instruction sent by the server, diverts a second data packet from the other POPs back to the first POP according to the instruction, and sends the second data packet to the second edge node through the first POP, so that the traffic bandwidth value of the first POP rises above the second threshold.
Optionally, the server receives the traffic bandwidth information of the multiple point-of-presence POPs sent by the multiple traffic collectors, performs an analysis decision according to that information, determines that the first POP has failed, and issues a third target traffic scheduling instruction to the first edge node; the first edge node receives the third target traffic scheduling instruction sent by the server and, according to it, sends data packets to the second edge node through other POPs.
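The three optional branches above (overload, underload, failure) amount to one rule applied to each POP's bandwidth report. The sketch below illustrates that rule only; the threshold values, the instruction names, and the use of a missing report to signal failure are assumptions for the example, not details from the patent.

```python
TOP_A = 80.0  # illustrative high-point threshold
TOP_B = 30.0  # illustrative low-point threshold

def decide(pop_id, bandwidth):
    """Map one POP's bandwidth report to a target scheduling instruction.

    bandwidth is None when the collector stops reporting, taken here to mean the POP failed.
    """
    if bandwidth is None:
        return ("third_instruction_divert_all", pop_id)       # failure: route around the POP
    if bandwidth >= TOP_A:
        return ("first_instruction_shed_load", pop_id)        # overload: divert packets away
    if bandwidth <= TOP_B:
        return ("second_instruction_schedule_back", pop_id)   # underload: pull traffic back
    return ("no_op", pop_id)

assert decide("POP1", 90.0)[0] == "first_instruction_shed_load"
assert decide("POP1", 10.0)[0] == "second_instruction_schedule_back"
assert decide("POP1", None)[0] == "third_instruction_divert_all"
```

Between the two thresholds the scheduler deliberately does nothing, which is what prevents the system from oscillating on every small bandwidth change.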
Optionally, the first edge node receives the traffic scheduling instruction sent by the server, splits the data packet through the first virtual link manager and marks each fragment with a label index, and sends the split data packets to the second edge node according to the traffic scheduling instruction.
Optionally, the second edge node receives the split data packets sent by the first edge node and, through the second virtual link manager, splices them into a complete data packet according to the label index.
Exemplarily, fig. 5B is a schematic diagram of the traffic of one POP node in this embodiment. In fig. 5B, T1, T3 and T5 are time points at which the traffic of the POP node runs up; T2 and T4 are time points at which it runs down; Top_A is the high-point threshold of the POP node; Top_B is the low-point threshold of the POP node.
Time T1: when the global traffic scheduling system finds that the traffic bandwidth value of a certain POP node (say, node POP1) reaches Top_A, it issues a scheduling instruction to the virtual link manager of edge node A.
Period T1 to T2: after receiving the scheduling instruction, the virtual link manager of edge node A immediately switches the data packets in transit away from POP1 and transmits them through another POP, POP2 or POP3. Subsequent user request packets are likewise transmitted through POP2 or POP3.
Time T2: when it finds that the bandwidth of the POP1 node has fallen below the threshold Top_B, the global traffic scheduling system issues a scheduling instruction to the virtual link manager of edge node A and schedules the request traffic back from the other POP nodes until the peak is approached again.
Time T3: same strategy as at T1 — when the global traffic scheduling system finds that the traffic bandwidth value of POP1 reaches Top_A, it issues a scheduling instruction to the virtual link manager of edge node A.
Time T4: same strategy as at T2 — when it finds that the bandwidth of the POP1 node has fallen below the threshold Top_B, the global traffic scheduling system issues a scheduling instruction to the virtual link manager of edge node A and schedules the request traffic back from the other POP nodes until the peak is approached again.
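The T1..T5 timeline describes a hysteresis controller: divert when the bandwidth reaches Top_A, schedule back when it drops to Top_B, and do nothing in between. A minimal simulation of that behavior follows; the threshold values and the bandwidth trace are invented for illustration.

```python
TOP_A, TOP_B = 80, 30  # illustrative high/low thresholds

def schedule_events(bandwidth_trace):
    """Return (time, action) pairs following the Top_A/Top_B hysteresis policy of fig. 5B."""
    diverting = False
    events = []
    for t, bw in enumerate(bandwidth_trace):
        if not diverting and bw >= TOP_A:
            events.append((t, "divert_to_other_pops"))  # T1, T3: high point reached
            diverting = True
        elif diverting and bw <= TOP_B:
            events.append((t, "schedule_back"))          # T2, T4: low point reached
            diverting = False
    return events

trace = [50, 85, 60, 25, 70, 90, 40, 20]
assert schedule_events(trace) == [
    (1, "divert_to_other_pops"), (3, "schedule_back"),
    (5, "divert_to_other_pops"), (7, "schedule_back"),
]
```

Note that the reading of 60 at t=2 triggers nothing: once diverting, only crossing Top_B ends the episode, which matches the T1-to-T2 period in the figure.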
In the embodiment of the application, the server receives the traffic bandwidth information of multiple point-of-presence POPs sent by multiple traffic collectors; the server performs an analysis decision according to that information and issues a traffic scheduling instruction to the first edge node, where the traffic scheduling instruction is used by the first edge node to send a data packet to the second edge node; and the first edge node sends the data packet accordingly. Global traffic scheduling based on virtual multilink can thus be performed. Global traffic load balancing is achieved and operation cost is saved. Moreover, the instantaneous scheduling capability is strong: when a node fails, the system can switch quickly, providing customers with better service guarantees and better service maintenance capability.
A global traffic scheduling system and method are established on the basis of virtualized multilink technology. Multiple traffic collectors are installed on the POP nodes and report the traffic bandwidth information of the POP nodes to the server in real time. After receiving the traffic bandwidth information, the server performs a comprehensive analysis decision, combining the Round Trip Time (RTT) data among the POP nodes and information such as their geographical positions, to obtain the target schedule, and issues traffic scheduling instructions to the edge nodes. Because the transmission link is a multilink built on virtualization, the edge node can, upon receiving a traffic scheduling instruction, directly send the request data packets to other redundant POP nodes; the whole traffic scheduling process is precisely controlled and no extra cost from traffic running above the peak is incurred. When a POP node fails, its data packet transmission can be disconnected instantly and transmission can continue immediately through the POP nodes on the other virtual links, providing customers with better service guarantees and better service maintenance capability.
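The comprehensive decision combining bandwidth, inter-POP RTT, and geographic position could be sketched as a weighted cost function over candidate POPs. The weights, units, and field names below are assumptions made for illustration; the patent does not specify how these factors are combined.

```python
from dataclasses import dataclass

@dataclass
class PopStatus:
    name: str
    bandwidth_util: float  # fraction of capacity in use, 0..1
    rtt_ms: float          # measured round-trip time between POP nodes
    distance_km: float     # geographic distance toward the destination

def pick_pop(candidates, w_util=1.0, w_rtt=0.01, w_dist=0.0005):
    """Choose the candidate POP with the lowest weighted cost (lower is better)."""
    def cost(p):
        return w_util * p.bandwidth_util + w_rtt * p.rtt_ms + w_dist * p.distance_km
    return min(candidates, key=cost)

pops = [
    PopStatus("POP1", 0.95, 20, 300),    # nearly saturated
    PopStatus("POP2", 0.40, 35, 800),
    PopStatus("POP3", 0.45, 120, 5000),  # far away, high RTT
]
assert pick_pop(pops).name == "POP2"
```

With these weights, POP1 loses despite its low RTT because its utilization dominates the cost, which is the trade-off the comprehensive analysis decision is meant to capture.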
As shown in fig. 6, a schematic diagram of an embodiment of a server in the embodiment of the present application may include:
a global traffic monitoring module 601, configured to receive traffic bandwidth information of multiple point-of-presence POPs sent by multiple traffic collectors;
and the scheduling decision module 602 is configured to perform an analysis decision according to the traffic bandwidth information of the multiple point of presence POPs, and issue a traffic scheduling instruction to the first edge node, where the traffic scheduling instruction is used for the first edge node to send a data packet to the second edge node according to the traffic scheduling instruction.
Optionally, the scheduling decision module 602 is specifically configured to perform an analysis decision according to the traffic bandwidth information of the multiple point-of-presence POPs and round trip time RTT data between the point-of-presence POPs, and issue a traffic scheduling instruction to the first edge node.
Optionally, the scheduling decision module 602 is specifically configured to perform an analysis decision according to the traffic bandwidth information of the multiple point-of-presence POPs, round trip time RTT data between the multiple point-of-presence POPs, and geographic locations of the multiple point-of-presence POPs, and issue a traffic scheduling instruction to the first edge node.
Optionally, the scheduling decision module 602 is specifically configured to perform an analysis decision according to the traffic bandwidth information of the multiple point-of-presence POPs and, if it determines that the traffic bandwidth value of the first POP is greater than or equal to a first threshold, issue a first target traffic scheduling instruction to the first edge node; the first target traffic scheduling instruction is used by the first edge node to divert a first data packet to other POPs and send the first data packet to the second edge node through those other POPs, so that the traffic bandwidth value of the first POP falls below the first threshold.
Optionally, the scheduling decision module 602 is specifically configured to perform an analysis decision according to the traffic bandwidth information of the multiple point-of-presence POPs and, if it determines that the traffic bandwidth value of the first POP is smaller than or equal to a second threshold, issue a second target traffic scheduling instruction to the first edge node; the second target traffic scheduling instruction is used by the first edge node to divert a second data packet from the other POPs back to the first POP and send the second data packet to the second edge node through the first POP, so that the traffic bandwidth value of the first POP rises above the second threshold.
Optionally, the scheduling decision module 602 is specifically configured to perform an analysis decision according to the traffic bandwidth information of the multiple point-of-presence POPs, determine that the first POP has failed, and issue a third target traffic scheduling instruction to the first edge node; the third target traffic scheduling instruction is used by the first edge node to send data packets to the second edge node through other POPs.
As shown in fig. 7, a schematic diagram of an embodiment of an edge node in the embodiment of the present application may include:
a receiving module 701, configured to receive a traffic scheduling instruction sent by a server;
a processing module 702, configured to send a data packet to the second edge node according to the traffic scheduling instruction.
Optionally, the processing module 702 is specifically configured to split the data packet and mark each fragment with a label index through the first virtual link manager, and to send the split data packets to the second edge node according to the traffic scheduling instruction, where the split data packets are used by the second edge node to splice them into a complete data packet according to the label index through the second virtual link manager.
Optionally, the receiving module 701 is specifically configured to receive a first target traffic scheduling instruction sent by the server;
the processing module 702 is specifically configured to divert a first data packet to other POPs according to the first target traffic scheduling instruction and send the first data packet to the second edge node through those other POPs, so that the traffic bandwidth value of the first POP falls below the first threshold.
Optionally, the receiving module 701 is specifically configured to receive a second target traffic scheduling instruction sent by the server;
the processing module 702 is specifically configured to divert a second data packet from the other POPs back to the first POP according to the second target traffic scheduling instruction and send the second data packet to the second edge node through the first POP, so that the traffic bandwidth value of the first POP rises above the second threshold.
Optionally, the receiving module 701 is specifically configured to receive a third target traffic scheduling instruction sent by the server;
the processing module 702 is specifically configured to send a data packet to the second edge node through another POP according to the third target traffic scheduling instruction.
As shown in fig. 8, a schematic diagram of another embodiment of the server in the embodiment of the present application may include:
a memory 801 in which executable program code is stored;
and a processor 802 and a transceiver 803 coupled to the memory 801;
a transceiver 803, configured to receive traffic bandwidth information of multiple point-of-presence POPs sent by multiple traffic collectors;
the processor 802 is configured to perform an analysis decision according to the traffic bandwidth information of the multiple point of presence POPs, and issue a traffic scheduling instruction to the first edge node, where the traffic scheduling instruction is used for the first edge node to send a data packet to the second edge node according to the traffic scheduling instruction.
Optionally, the processor 802 is specifically configured to perform analysis and decision according to the traffic bandwidth information of the multiple point-of-presence POPs and round trip time RTT data between the point-of-presence POPs, and issue a traffic scheduling instruction to the first edge node.
Optionally, the processor 802 is specifically configured to perform an analysis decision according to the traffic bandwidth information of the multiple point-of-presence POPs, round trip time RTT data between the multiple point-of-presence POPs, and geographic locations of the multiple point-of-presence POPs, and issue a traffic scheduling instruction to the first edge node.
Optionally, the processor 802 is specifically configured to perform an analysis decision according to the traffic bandwidth information of the multiple point-of-presence POPs and, if it determines that the traffic bandwidth value of the first POP is greater than or equal to a first threshold, issue a first target traffic scheduling instruction to the first edge node; the first target traffic scheduling instruction is used by the first edge node to divert a first data packet to other POPs and send the first data packet to the second edge node through those other POPs, so that the traffic bandwidth value of the first POP falls below the first threshold.
Optionally, the processor 802 is specifically configured to perform an analysis decision according to the traffic bandwidth information of the multiple point-of-presence POPs and, if it determines that the traffic bandwidth value of the first POP is smaller than or equal to a second threshold, issue a second target traffic scheduling instruction to the first edge node; the second target traffic scheduling instruction is used by the first edge node to divert a second data packet from the other POPs back to the first POP and send the second data packet to the second edge node through the first POP, so that the traffic bandwidth value of the first POP rises above the second threshold.
Optionally, the processor 802 is specifically configured to perform an analysis decision according to the traffic bandwidth information of the multiple point-of-presence POPs, determine that the first POP has failed, and issue a third target traffic scheduling instruction to the first edge node; the third target traffic scheduling instruction is used by the first edge node to send data packets to the second edge node through other POPs.
As shown in fig. 9, a schematic diagram of another embodiment of an edge node in the embodiment of the present application may include:
a memory 901 in which executable program code is stored;
and a processor 902 and a transceiver 903 coupled to the memory 901;
a transceiver 903, configured to receive a traffic scheduling instruction sent by a server;
and a processor 902, configured to send a data packet to the second edge node according to the traffic scheduling instruction.
Optionally, the processor 902 is specifically configured to split the data packet and mark each fragment with a label index through the first virtual link manager, and to send the split data packets to the second edge node according to the traffic scheduling instruction, where the split data packets are used by the second edge node to splice them into a complete data packet according to the label index through the second virtual link manager.
Optionally, the transceiver 903 is specifically configured to receive a first target traffic scheduling instruction sent by the server;
the processor 902 is specifically configured to divert a first data packet to other POPs according to the first target traffic scheduling instruction and send the first data packet to the second edge node through those other POPs, so that the traffic bandwidth value of the first POP falls below the first threshold.
Optionally, the transceiver 903 is specifically configured to receive a second target traffic scheduling instruction sent by the server;
the processor 902 is specifically configured to divert a second data packet from the other POPs back to the first POP according to the second target traffic scheduling instruction and send the second data packet to the second edge node through the first POP, so that the traffic bandwidth value of the first POP rises above the second threshold.
Optionally, the transceiver 903 is specifically configured to receive a third target traffic scheduling instruction sent by the server;
the processor 902 is specifically configured to send a data packet to the second edge node through the other POP according to the third target traffic scheduling instruction.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center integrated with one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (10)
1. A global traffic scheduling system based on virtual multilink technology, comprising:
the system comprises a plurality of flow collectors, a server, a first edge node and a second edge node;
the flow collectors are used for collecting flow bandwidth information of a plurality of corresponding point-of-presence POPs and sending the flow bandwidth information of the plurality of point-of-presence POPs to the server, and the flow collectors are in one-to-one correspondence with the plurality of point-of-presence POPs;
the server is used for receiving the flow bandwidth information of the plurality of point-of-presence POPs sent by the plurality of flow collectors, performing analysis decision according to the flow bandwidth information of the plurality of point-of-presence POPs, and issuing a flow scheduling instruction to the first edge node;
and the first edge node is used for receiving the traffic scheduling instruction sent by the server and sending a data packet to the second edge node according to the traffic scheduling instruction.
2. The global traffic scheduling system according to claim 1,
the server is specifically configured to receive traffic bandwidth information of the multiple point-of-presence POPs sent by the multiple traffic collectors, perform analysis and decision according to the traffic bandwidth information of the multiple point-of-presence POPs and round trip time RTT data between the point-of-presence POPs, and issue a traffic scheduling instruction to the first edge node.
3. The global traffic scheduling system according to claim 2,
the server is specifically configured to receive traffic bandwidth information of the multiple point-of-presence POPs sent by the multiple traffic collectors, perform analysis and decision according to the traffic bandwidth information of the multiple point-of-presence POPs, round trip time RTT data between the multiple point-of-presence POPs, and geographic locations of the multiple point-of-presence POPs, and issue a traffic scheduling instruction to the first edge node.
4. The global traffic scheduling system according to any one of claims 1-3,
the first edge node is specifically configured to receive the traffic scheduling instruction sent by the server, unpack a data packet and mark a label index through a first virtual link manager, and send the unpacked data packet to the second edge node according to the traffic scheduling instruction.
5. The global traffic scheduling system according to claim 4,
the second edge node is specifically configured to receive the unpacked data packet sent by the first edge node, and splice the unpacked data packet into a complete data packet according to the label index through a second virtual link manager.
6. The global traffic scheduling system according to any one of claims 1-3,
the server is specifically configured to perform analysis decision according to traffic bandwidth information of the multiple point-of-presence POPs, and issue a first target traffic scheduling instruction to the first edge node if it is determined that a traffic bandwidth value of the first POP is greater than or equal to a first threshold;
the first edge node is specifically configured to receive the first target traffic scheduling instruction sent by the server, divert a first data packet to other POPs according to the first target traffic scheduling instruction, and send the first data packet to the second edge node through the other POPs, so that a traffic bandwidth value of the first POP is smaller than the first threshold.
7. The global traffic scheduling system according to any one of claims 1-3,
the server is specifically configured to perform analysis decision according to the traffic bandwidth information of the multiple point-of-presence POPs, and issue a second target traffic scheduling instruction to the first edge node if it is determined that the traffic bandwidth value of the first POP is smaller than or equal to a second threshold;
the first edge node is specifically configured to receive the second target traffic scheduling instruction sent by the server, divert a second data packet from the other POPs back to the first POP according to the second target traffic scheduling instruction, and send the second data packet to the second edge node through the first POP, so that a traffic bandwidth value of the first POP is greater than the second threshold.
8. The global traffic scheduling system according to any one of claims 1-3,
the server is used for receiving the flow bandwidth information of the plurality of point-of-presence POPs sent by the plurality of flow collectors, carrying out analysis decision according to the flow bandwidth information of the plurality of point-of-presence POPs, determining that a first POP fails, and issuing a third target flow scheduling instruction to the first edge node;
and the first edge node is used for receiving the third target traffic scheduling instruction sent by the server and sending a data packet to the second edge node through other POPs according to the third target traffic scheduling instruction.
9. A global flow scheduling method based on virtual multilink technology is characterized by comprising the following steps:
receiving flow bandwidth information of a plurality of point-of-presence POPs sent by a plurality of flow collectors, wherein the plurality of flow collectors are in one-to-one correspondence with the plurality of point-of-presence POPs;
and analyzing and deciding according to the flow bandwidth information of the POPs, and issuing a flow scheduling instruction to the first edge node, wherein the flow scheduling instruction is used for the first edge node to send a data packet to the second edge node according to the flow scheduling instruction.
10. A computer-readable storage medium comprising instructions that, when executed on a processor, cause the processor to perform the method of claim 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111512848.9A CN114237846A (en) | 2021-12-11 | 2021-12-11 | Global flow scheduling system, method and storage medium based on virtual multilink technology |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111512848.9A CN114237846A (en) | 2021-12-11 | 2021-12-11 | Global flow scheduling system, method and storage medium based on virtual multilink technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114237846A true CN114237846A (en) | 2022-03-25 |
Family
ID=80754899
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111512848.9A Pending CN114237846A (en) | 2021-12-11 | 2021-12-11 | Global flow scheduling system, method and storage medium based on virtual multilink technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114237846A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116069766A (en) * | 2023-03-14 | 2023-05-05 | 天云融创数据科技(北京)有限公司 | Data scheduling optimization method and system based on big data |
CN116360301A (en) * | 2022-12-02 | 2023-06-30 | 国家工业信息安全发展研究中心 | Industrial control network flow acquisition and analysis system and method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108306827A (en) * | 2017-01-12 | 2018-07-20 | 华为技术有限公司 | The method and server of transmission data |
CN113259391A (en) * | 2021-06-25 | 2021-08-13 | 北京华云安信息技术有限公司 | Data transmission method and device applied to multi-level node network |
CN113438155A (en) * | 2021-06-25 | 2021-09-24 | 北京网聚云联科技有限公司 | Intelligent and reliable UDP (user Datagram protocol) transmission method, device and equipment for virtual multilink |
- 2021-12-11: CN application CN202111512848.9A filed; published as CN114237846A (en); status: active Pending
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116360301A (en) * | 2022-12-02 | 2023-06-30 | 国家工业信息安全发展研究中心 | Industrial control network flow acquisition and analysis system and method |
CN116360301B (en) * | 2022-12-02 | 2023-12-12 | 国家工业信息安全发展研究中心 | Industrial control network flow acquisition and analysis system and method |
CN116069766A (en) * | 2023-03-14 | 2023-05-05 | 天云融创数据科技(北京)有限公司 | Data scheduling optimization method and system based on big data |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108696428B (en) | Tunnel technology-based route detection method, route node and central server | |
US9760429B2 (en) | Fractional reserve high availability using cloud command interception | |
CN114237846A (en) | Global flow scheduling system, method and storage medium based on virtual multilink technology | |
US8463737B2 (en) | Realtime unification management information data conversion and monitoring apparatus and method for thereof | |
US11636016B2 (en) | Cloud simulation and validation system | |
CN103731295A (en) | Method and system for operating virtual consolidated appliance | |
US10764165B1 (en) | Event-driven framework for filtering and processing network flows | |
CN103685368A (en) | Method and system for migrating data | |
US20150169339A1 (en) | Determining Horizontal Scaling Pattern for a Workload | |
US20130219021A1 (en) | Predictive caching for telecommunication towers using propagation of identification of items of high demand data at a geographic level | |
CN110535919B (en) | Network access method and device of concentrator and power peak regulation system | |
Dorsch et al. | Enabling hard service guarantees in Software-Defined Smart Grid infrastructures | |
US9391916B2 (en) | Resource management system, resource management method, and computer product | |
US20110310757A1 (en) | Method of selecting a destination node, node and recording medium | |
CN110888734A (en) | Fog computing resource processing method and device, electronic equipment and storage medium | |
CN113676351A (en) | Session processing method and device, electronic equipment and storage medium | |
US11977450B2 (en) | Backup system, method therefor, and program | |
US11979306B2 (en) | Network system, information acquisition device, information acquisition method, and program | |
CN105681311B (en) | A kind of rocket ground network heterogeneous system based on cloud computing technology | |
CN112688984A (en) | Method, device and medium for issuing and executing instruction to network node | |
US9094321B2 (en) | Energy management for communication network elements | |
US11949557B2 (en) | Device, method, and program for ICT resource management using service management information | |
CN115774580A (en) | Cloud-side data transmission control system, method and storage medium | |
CN112653626A (en) | High-delay link determining method, route publishing method and device | |
CN104811317A (en) | Online charging method for always-online IP connection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||