CN108667727B - Network link fault processing method and device and controller - Google Patents


Info

Publication number
CN108667727B
CN108667727B (application CN201810396284.9A)
Authority
CN
China
Prior art keywords
path
link
node
current
moving
Prior art date
Legal status
Active
Application number
CN201810396284.9A
Other languages
Chinese (zh)
Other versions
CN108667727A (en)
Inventor
孙庆恭
Current Assignee
Guangdong Power Grid Co Ltd
Shanwei Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Guangdong Power Grid Co Ltd
Shanwei Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Power Grid Co Ltd and Shanwei Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority to CN201810396284.9A
Publication of CN108667727A
Application granted
Publication of CN108667727B
Legal status: Active
Anticipated expiration

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 — Routing or path finding of packets in data switching networks
    • H04L45/28 — Routing or path finding of packets in data switching networks using route fault recovery
    • H04L45/22 — Alternate routing

Abstract

The invention provides a network link fault processing method, apparatus, and controller, applied to a software-defined network (SDN) and relating to the field of communications technology. The method includes: when a current link of the SDN fails, obtaining pre-stored standby links corresponding to the current link; selecting a target link from the standby links; and switching the current link to the target link. Because the network link fault processing method, apparatus, and controller require no link binding, configuration, management, and operation-and-maintenance costs are low, and flexibility and scalability are good. Because the network does not need to reconverge to compute a new route when a link fails, a large amount of fault-recovery computation time is saved, the processing speed for network link faults is improved, and the impact on upper-layer virtual networks is reduced.

Description

Network link fault processing method and device and controller
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method, an apparatus, and a controller for processing a network link failure.
Background
Network virtualization technology supports multiple virtual networks on one physical network, allowing different users to share, or hold exclusive slices of, network resources; this improves the utilization of network resources and realizes an elastic network. A Software Defined Network (SDN) is a novel network architecture and one implementation of network virtualization. Its core technology, OpenFlow, separates the control plane and the data plane of network devices, achieving flexible control of network traffic and making the network, as a pipeline, more intelligent. SDN technology makes network virtualization easy to realize and offers better flexibility and scalability.
Since multiple upper-layer virtual networks share one lower-layer physical network, a link failure in the physical network can affect the upper-layer virtual networks on a large scale, which makes fast switching and repair of physical-network link failures all the more important.
In the prior art, the following two methods are generally adopted to handle network link failures: first, binding two or more links into one logical link through link binding, and switching directly to another available link when one link fails; second, after a link failure occurs, reconverging the network to compute a new route, obtaining a new link, and then switching to it. However, link binding has high configuration, management, and operation-and-maintenance costs, poor flexibility and scalability, and often requires network equipment from the same manufacturer; reconverging the network to compute a new route consumes a large amount of fault-recovery computation time, resulting in a large impact on the upper-layer virtual networks.
Disclosure of Invention
In view of this, an object of the present invention is to provide a method, an apparatus, and a controller for processing a network link failure, so as to reduce configuration, management, operation and maintenance costs, improve flexibility and extensibility, save computation time for failure recovery, and improve processing speed of a network link failure, thereby reducing influence on an upper layer virtual network.
In a first aspect, an embodiment of the present invention provides a network link failure processing method, which is applied to a software defined network SDN, and the method includes:
when a current link of the SDN fails, acquiring a pre-stored standby link corresponding to the current link;
selecting a target link from the standby links;
and switching the current link to the target link.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the obtaining a prestored backup link corresponding to the current link includes:
acquiring a source node and a destination node of the current link;
querying the paths of the SDN in a preset path record table according to the source node and the destination node;
and using the found paths other than the current link as the standby links.
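As a minimal illustration of the lookup above — assuming the path record table is stored as a mapping from a (source node, destination node) pair to its list of precomputed paths (an assumed data layout, not the patent's concrete one) — the standby links can be obtained as:

```python
def get_standby_links(path_record_table, current_link):
    """Return the pre-stored paths sharing the failed link's endpoints,
    excluding the current (failed) path itself."""
    source, destination = current_link[0], current_link[-1]
    candidates = path_record_table.get((source, destination), [])
    return [path for path in candidates if path != current_link]
```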
With reference to the first possible implementation manner of the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the path record table is established by the following method:
acquiring a network topology structure, a source node, a destination node and a preset path number of the SDN; wherein the network topology comprises connection paths between nodes of the SDN and weights of the connection paths;
according to each connecting path and the weight value thereof, obtaining a plurality of redundant paths which have the minimum moving cost and no loop between the source node and the destination node, and establishing a path record table according to each redundant path;
the number of the redundant paths is equal to the preset number of paths, the connection paths included in each redundant path are not overlapped, and the movement cost of the redundant path is the sum of the weights of the connection paths included in the redundant path.
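The moving-cost definition above can be sketched in one line of Python; representing each connection path as an unordered node pair is an assumption for illustration:

```python
def moving_cost(path, weight):
    """Sum the weights of the connection paths (edges) a redundant path
    traverses; `weight` maps an unordered node pair to its weight."""
    return sum(weight[frozenset(edge)] for edge in zip(path, path[1:]))
```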
With reference to the second possible implementation manner of the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the obtaining, according to each of the connection paths and the weight thereof, multiple redundant paths where a moving cost between the source node and the destination node is minimum and a loop does not exist, and establishing a path record table according to each of the redundant paths includes:
adding the source node, the moving path corresponding to the source node and the moving cost thereof as a path unit into a priority queue to establish the priority queue and a blank path record table; the priority queue comprises the added path units, and the path units comprise nodes, moving paths corresponding to the nodes and moving costs of the moving paths; the moving path corresponding to the ith node is a path moving from the source node to the ith node, and i is a positive integer;
obtaining a redundant path: taking out and deleting the path unit with the minimum moving cost from the priority queue as a current path unit, and judging whether a current node in the current path unit is the destination node or not;
if the current node is the destination node, judging whether a loop exists in a current moving path in the current path unit and whether an overlapped connecting path exists between the current moving path and a path in the path record table;
if the current moving path does not have a loop and an overlapped connecting path does not exist, adding the current moving path as a redundant path into the path record table;
and repeating the step of obtaining the redundant paths until the number of the redundant paths in the path record table reaches the preset path number, so as to obtain the path record table.
With reference to the third possible implementation manner of the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the method further includes:
and if the current node is not the destination node, acquiring each next node connected with the current node, and updating the priority queue according to each next node.
With reference to the fourth possible implementation manner of the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the updating the priority queue according to each of the next nodes includes:
acquiring a next moving path corresponding to the next node, and judging whether the next moving path has a loop or not and whether the next moving path and a path in the path record table have an overlapped connecting path or not;
if the next moving path has no loop and no overlapped connecting path, adding the next node, the next moving path and the moving cost thereof as a new path unit to the priority queue;
and sequencing the path units according to the size relationship of the movement cost to obtain an updated priority queue.
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where the selecting a target link from the standby links includes:
screening out normal links which are not affected by the fault from the standby links;
and selecting the normal link with the minimum moving cost as the target link.
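A sketch of this selection step, assuming failed connection paths are reported as a set of unordered node pairs (the names and data layout are illustrative):

```python
def select_target_link(standby_links, failed_edges, moving_cost):
    """Screen out standby links that traverse a failed connection path,
    then pick the normal link with the minimum moving cost."""
    normal = [path for path in standby_links
              if not any(frozenset(edge) in failed_edges
                         for edge in zip(path, path[1:]))]
    return min(normal, key=moving_cost, default=None)
```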
With reference to the first aspect, an embodiment of the present invention provides a seventh possible implementation manner of the first aspect, where the switching the current link to the target link includes:
and issuing a flow table corresponding to the target link to switch to the target link.
In a second aspect, an embodiment of the present invention further provides a network link failure processing apparatus, which is applied to a software defined network SDN, and the apparatus includes:
the standby link acquisition module is used for acquiring a prestored standby link corresponding to the current link when the current link of the SDN fails;
the target link selection module is used for selecting a target link from the standby links;
and the link switching module is used for switching the current link to the target link.
In a third aspect, an embodiment of the present invention further provides a controller, which includes a memory and a processor, where the memory stores a computer program that is executable on the processor, and the processor executes the computer program to implement the method according to the first aspect or any possible implementation manner thereof.
The embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, when a current link of an SDN fails, pre-stored standby links corresponding to the current link are obtained; a target link is selected from the standby links; and the current link is switched to the target link. Because the network link fault processing method, apparatus, and controller require no link binding, configuration, management, and operation-and-maintenance costs are low, and flexibility and scalability are good; because the network does not need to reconverge to compute a new route when a link fails, a large amount of fault-recovery computation time is saved, the processing speed for network link faults is improved, and the impact on upper-layer virtual networks is reduced.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic flowchart of a network link failure processing method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a process of establishing a path record table according to an embodiment of the present invention;
fig. 3 is a network topology diagram according to an embodiment of the present invention;
FIG. 4 is a first state diagram of a set up path record table based on the network topology of FIG. 3;
FIG. 5 is a second state diagram for building a path record table based on the network topology diagram of FIG. 3;
FIG. 6 is a third state diagram for creating a path record table based on the network topology of FIG. 3;
FIG. 7 is a fourth state diagram for creating a path record table based on the network topology of FIG. 3;
FIG. 8 is a fifth state diagram for creating a path record table based on the network topology of FIG. 3;
FIG. 9 is a sixth state diagram for creating a path record table based on the network topology of FIG. 3;
FIG. 10 is a seventh state diagram for creating a path record table based on the network topology of FIG. 3;
FIG. 11 is an eighth state diagram for creating a path record table based on the network topology of FIG. 3;
FIG. 12 is a ninth state diagram for creating a path record table based on the network topology of FIG. 3;
FIG. 13 is a tenth state diagram for creating a path record table based on the network topology of FIG. 3;
FIG. 14 is an eleventh state diagram for creating a path record table based on the network topology of FIG. 3;
FIG. 15 is a twelfth state diagram for creating a path record table based on the network topology of FIG. 3;
fig. 16 is a schematic structural diagram of a network link failure processing apparatus according to an embodiment of the present invention;
fig. 17 is a schematic structural diagram of a controller according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The configuration, management, and operation-and-maintenance costs of link binding are high, and its flexibility and scalability are poor; reconverging the network to compute a new route consumes a large amount of fault-recovery computation time, resulting in a large impact on the upper-layer virtual networks. Because SDN technology separates the control plane from the forwarding plane and centralizes control of the whole network in a controller, the controller holds the topology and link information of the entire network and can easily compute multiple non-overlapping links to serve as standby links. Based on this, and exploiting these characteristics of SDN, the network link fault processing method, apparatus, and controller provided in the embodiments of the present invention prepare multiple standby links in advance; once a link failure occurs, a flow table is immediately issued to switch to an available standby link. This achieves lower configuration, management, and operation-and-maintenance costs and better flexibility and scalability, removes the need to reconverge the network to compute a new route, saves a large amount of fault-recovery computation time, and leaves the upper-layer virtual networks unaffected or minimally affected.
For the convenience of understanding the present embodiment, a detailed description will be first given of a network link failure processing method disclosed in the present embodiment.
The first embodiment is as follows:
according to the network link fault processing method provided by the embodiment of the invention, route calculation requiring a large amount of calculation time is prepared before a fault occurs, and a plurality of non-overlapped links are obtained through calculation and are used as standby links. The method may be performed by a controller.
Fig. 1 is a schematic flowchart of a network link failure processing method according to an embodiment of the present invention, where the method is applied to an SDN, as shown in fig. 1, the method includes the following steps:
step S102, when the current link of the SDN fails, a pre-stored standby link corresponding to the current link is obtained.
When a current link of the SDN fails, the standby links may be acquired as follows: acquire the source node and destination node of the current link; query the paths of the SDN in a preset path record table according to the source node and destination node; and use the found paths other than the current link as the standby links. The path record table stores multiple redundant paths corresponding to the source node and destination node; each redundant path has a moving cost (the larger the moving cost, the higher the overhead), and the redundant paths do not overlap (that is, they share no connection path, where a connection path refers to the connecting edge between two nodes). A node is a specific physical device, such as a switch or a router.
And step S104, selecting a target link from the standby links.
Specifically, the normal links not affected by the failure are first screened out from the standby links, and the target link is then selected from these normal links. The target link may be selected from the normal links at random; to reduce overhead, the normal link with the minimum moving cost may instead be selected as the target link.
And step S106, switching the current link to the target link.
After the target link is selected, the controller can switch the failed current link to the target link by issuing the flow table corresponding to the target link; that is, the controller issues a flow table according to the selected path (the target link) and switches the path quickly. For the specific switching process, reference may be made to the prior art, which is not described here.
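Putting steps S102 to S106 together, the overall fault-handling flow can be sketched as follows. This is a hedged reconstruction: the table layout, cost function, and flow-table callback are illustrative assumptions, not the patent's concrete interfaces.

```python
def handle_link_failure(current_link, path_record_table, failed_edges,
                        path_cost, issue_flow_table):
    src, dst = current_link[0], current_link[-1]
    # Step S102: fetch the pre-stored standby links for the same endpoints.
    standby = [p for p in path_record_table.get((src, dst), [])
               if p != current_link]
    # Step S104: screen out links affected by the failure, keep the cheapest.
    normal = [p for p in standby
              if not any(frozenset(e) in failed_edges
                         for e in zip(p, p[1:]))]
    if not normal:
        return None  # no usable standby link remains
    target = min(normal, key=path_cost)
    # Step S106: switch by issuing the flow table for the target link.
    issue_flow_table(target)
    return target
```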
In the embodiment of the invention, when a current link of an SDN fails, pre-stored standby links corresponding to the current link are obtained; a target link is selected from the standby links; and the current link is switched to the target link. Because the network link fault processing method provided by the embodiment of the invention requires no link binding, configuration, management, and operation-and-maintenance costs are low, and flexibility and scalability are good; because the network does not need to reconverge to compute a new route when a link fails, a large amount of fault-recovery computation time is saved, the processing speed for network link faults is improved, and the impact on upper-layer virtual networks is reduced.
Fig. 2 is a schematic flowchart of a process for establishing a path record table according to an embodiment of the present invention, and as shown in fig. 2, the path record table may be established through the following steps:
step S202, acquiring a network topology structure, a source node, a destination node and a preset path number of the SDN; the network topology structure comprises connection paths among nodes of the SDN and weights of the connection paths.
The preset path number is input and set by a user in advance, and refers to the number of paths needing to be calculated between the source node and the destination node. The preset path number is a positive integer, and K represents the preset path number.
Next, through steps S204 to S214, according to the connection paths and their weights, multiple redundant paths are obtained, where the moving cost between the source node and the destination node is minimum and no loop exists, and a path record table is established according to the redundant paths. The number of the redundant paths is equal to the preset number K of the paths, the connection paths contained in each redundant path are not overlapped, and the movement cost of the redundant path is the sum of the weights of the connection paths contained in the redundant path.
Step S204, adding the source node, the moving path corresponding to the source node and the moving cost thereof as a path unit into the priority queue to establish the priority queue, and establishing a blank path record table.
The priority queue comprises added path units, and the path units comprise nodes, moving paths corresponding to the nodes and moving costs of the moving paths; the moving path corresponding to the ith node is a path moving from the source node to the ith node, and i is a positive integer. Before adding the path unit, the moving cost of the source node is calculated, and obviously, the moving cost of the source node is 0.
The following is the acquisition of redundant paths.
Step S206, the path unit with the minimum moving cost is taken out from the priority queue and deleted as the current path unit, and it is determined whether the current node in the current path unit is the destination node.
If not, executing step S208; if so, step S210 is performed. For example, if the current node is the source node, step S208 is performed.
Step S208, each next node connected with the current node is obtained, and the priority queue is updated according to each next node. And then returns to step S206.
Specifically, in some possible embodiments, a next moving path corresponding to each next node is obtained, the moving cost of each next moving path is calculated, and it is determined whether a loop exists in the next moving path and whether an overlapped connection path exists between the next moving path and a path in the path record table, where the next moving path is obtained by adding the next node to the moving path of the current node. If the next moving path has no loop and no overlapped connecting path, the next node, the next moving path and the moving cost thereof are taken as new path units to be added into the priority queue, otherwise, the next moving path is not added into the priority queue. And then, sequencing the path units according to the magnitude relation of the moving cost, for example, sequencing the path units from small to large according to the moving cost to obtain an updated priority queue.
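The two checks applied when updating the queue — the loop test and the overlap test — can each be sketched in a few lines (the node-list path representation is an assumption for illustration):

```python
def has_loop(path):
    # A moving path has a loop if any node appears in it more than once.
    return len(set(path)) != len(path)

def edges_of(path):
    # Connection paths are undirected, so edges are unordered node pairs.
    return {frozenset(edge) for edge in zip(path, path[1:])}

def overlaps_record(path, path_record_table):
    # True if the path shares any connection path with a recorded path.
    return any(edges_of(path) & edges_of(p) for p in path_record_table)
```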
In step S210, it is determined whether there is an overlapping connection path between the current movement path in the current path unit and the path in the path record table.
If not, executing step S212; if so, return to step S206. For example, if the current path record table is blank, the current movement path does not have an overlapping connection path.
In step S212, it is determined whether a loop exists in the current moving path.
If not, go to step S214; if so, return to step S206.
It should be noted that steps S210 and S212 have no fixed execution order; either may be performed first.
Step S214, adding the current moving path as a redundant path to the path record table, and determining whether the number of redundant paths in the path record table reaches the preset path number K.
If not, returning to the step S206; if so, the process ends.
And repeating the step of obtaining the redundant paths until the number of the redundant paths in the path record table reaches the preset path number to obtain the path record table.
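Steps S204 to S214 can be sketched end to end with Python's `heapq` serving as the priority queue. This is a hedged reconstruction: the adjacency-list layout and names are illustrative, and recomputing the used-edge set on each pop is one of several reasonable ways to realize the overlap check.

```python
import heapq

def build_path_record_table(adj, source, dest, k):
    """Find up to k mutually non-overlapping, loop-free minimum-cost paths
    from source to dest. `adj` maps a node to (neighbor, weight) pairs;
    the returned list of paths is the path record table."""
    def edges_of(path):
        return {frozenset(e) for e in zip(path, path[1:])}

    record = []                        # blank path record table (S204)
    counter = 0                        # tie-breaker so the heap never compares paths
    queue = [(0, counter, [source])]   # path unit: (moving cost, _, moving path)
    while queue and len(record) < k:
        # S206: take out and delete the unit with the minimum moving cost.
        cost, _, path = heapq.heappop(queue)
        node = path[-1]
        used = set().union(*map(edges_of, record))
        if node == dest:
            # S210/S212: accept only loop-free, non-overlapping paths.
            if len(set(path)) == len(path) and not (edges_of(path) & used):
                record.append(path)    # S214
            continue
        # S208: expand each next node connected to the current node.
        for nxt, w in adj[node]:
            if nxt in path:
                continue               # next moving path has a loop: discard
            new_path = path + [nxt]
            if edges_of(new_path) & used:
                continue               # overlaps a recorded path: discard
            counter += 1
            heapq.heappush(queue, (cost + w, counter, new_path))
    return record
```

On a topology shaped like Fig. 3 below (with edge weights assumed for illustration), this yields three non-overlapping paths from V0 to V5 consistent with the walkthrough's result: {V0, V2, V5}, {V0, V1, V5}, and {V0, V3, V4, V5}.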
Fig. 3 is a network topology diagram provided by an embodiment of the present invention, and to facilitate understanding of the above-mentioned establishment of the path record table, based on fig. 3, 3 non-overlapping loop-free paths (i.e., K is 3) from the source node V0 to the destination node V5 are solved, where a moving cost of moving from the source node to the current node is denoted as G, and a path (moving path) corresponding to the node is denoted as path.
(1) As shown in fig. 4, during initialization, the path record table is empty, the area formed by the two horizontal lines is a queue (priority queue) that can be sorted according to the value of G, the node with the smallest G is placed at the head of the queue, the node with the largest G is placed at the tail of the queue, and only the first node is taken out from the queue each time. First, the source node V0 is added to the queue, since there is no movement from the source node V0 to the current node V0, the actual movement cost G is 0, path: { V0 }.
(2) The first node V0 is taken out of the head of the queue in fig. 4 and deleted. It is checked whether the taken-out node's path has reached the destination node and whether it has edge overlap (connection path overlap) with the paths in the path record table; since neither is the case, the nodes connected to node V0 are added to the queue. That is, as shown in fig. 5, V1, V2, and V3 are added to the queue (the newly added nodes in fig. 5 are marked "new"; likewise below). The value of G and the path are calculated at the same time. For example, for V1, the actual moving cost from the source node V0 to the current node V1 is G = 70, and path is the path value of the deleted node V0 with V1 appended, i.e., path: {V0, V1}. Nodes V2 and V3 are calculated in the same way and are not detailed here.
(3) The node V2 is taken out of the queue in FIG. 5, and the nodes V0, V1, V3, and V5 connected thereto are added to the queue in the same procedure as in (2) above, and the result shown in FIG. 6 below is obtained. Since the newly generated path of node V0 has a loop, V0 is discarded.
(4) The V3 node is pulled from the queue of FIG. 6, resulting in the following results shown in FIG. 7. The newly generated node V0 is also dropped because its path has a loop.
(5) The V5 node is taken from the queue of fig. 7. Since the destination node has been reached without its path overlapping the path in the path record table and without a loop, the path { V0, V2, V5} is added to the path record table, as shown in fig. 8. Since the number of paths in the path record table is still less than 3, the solution is continued.
(6) The V1 node is taken out of the queue in fig. 8 and the result shown in fig. 9 is calculated.
(7) The first node V1 is taken out from the head of the queue in fig. 9, because the path of the node has link overlap with the path in the path record table, that is, its path: { V0, V2, V1} overlaps { V0, V2, V5} in the path record table by V0 to V2, so the node is discarded. The calculation results are shown in fig. 10.
(8) The first node V3 is taken out from the head of queue in fig. 10, and since its path has link overlap with the path in the path record table, that is, its path: { V0, V2, V3} overlaps { V0, V2, V5} in the path record table by V0 to V2, so the node is discarded. The calculation results are shown in fig. 11.
(9) Node V4 is taken from the head of the queue in fig. 11 and the result shown in fig. 12 is calculated. The newly generated node V3 is dropped because its path has a loop.
(10) The node V5 is taken from the queue in fig. 12, and since the destination node has been reached without its path overlapping the path in the path record table and without a loop, the path { V0, V1, V5} is added to the path record table, as shown in fig. 13. Since the number of paths in the path record table is still less than 3, the solution is continued.
(11) Node V2 is taken from the head of the queue in fig. 13 and the result shown in fig. 14 is calculated. The newly generated nodes V0 and V3 are dropped because their paths have loops. And the newly generated node V5 is discarded due to the overlap with the path in the path record table.
(12) Node V5 is taken from the head of the queue in FIG. 14. Since the destination node has been reached, its path does not overlap the paths in the path record table, and it has no loop, the path {V0, V3, V4, V5} is added to the path record table, as shown in FIG. 15. The number of paths in the path record table is now no longer less than 3, so the calculation requirement is met and the calculation ends.
Example two:
fig. 16 is a schematic structural diagram of a network link failure processing apparatus according to an embodiment of the present invention, where the apparatus is applied to an SDN, and as shown in fig. 16, the apparatus includes:
a standby link obtaining module 32, configured to, when a current link of the SDN fails, obtain a pre-stored standby link corresponding to the current link;
a target link selection module 34, configured to select a target link from the backup links;
and a link switching module 36, configured to switch the current link to the target link.
The standby link acquisition module 32 is specifically configured to: acquire the source node and destination node of the current link; query the paths of the SDN in a preset path record table according to the source node and the destination node; and use the found paths other than the current link as the standby links.
The target link selection module 34 is specifically configured to: screening out normal links which are not affected by the fault from the standby links; and selecting the normal link with the minimum moving cost as the target link.
The link switching module 36 is specifically configured to: issue a flow table corresponding to the target link so as to switch to the target link.
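As a minimal sketch, the cooperation of the three modules might look as follows in Python. The data shapes here (a path record table keyed by (source, destination) and a `link_is_up` predicate) are assumptions introduced for illustration, not the patented implementation itself.

```python
def select_target_link(current_path, path_record, link_is_up):
    """Pick the cheapest healthy backup for a failed path.

    path_record maps (src, dst) -> list of (moving cost, path); link_is_up
    is a predicate telling whether a connection path is currently usable.
    """
    src, dst = current_path[0], current_path[-1]
    # standby links: recorded paths for the same endpoints, minus the failed one
    backups = [(c, p) for c, p in path_record.get((src, dst), [])
               if p != current_path]
    # normal links: standby links with every hop unaffected by the fault
    healthy = [(c, p) for c, p in backups
               if all(link_is_up(a, b) for a, b in zip(p, p[1:]))]
    if not healthy:
        return None            # no usable backup link
    _, target = min(healthy)   # minimum moving cost wins
    return target              # the controller would then issue its flow table

# Example: the path V0-V2-V5 fails because edge (V2, V5) went down.
record = {('V0', 'V5'): [(2, ['V0', 'V2', 'V5']),
                         (3, ['V0', 'V1', 'V5']),
                         (4, ['V0', 'V3', 'V4', 'V5'])]}
down = {frozenset(('V2', 'V5'))}
up = lambda a, b: frozenset((a, b)) not in down
target = select_target_link(['V0', 'V2', 'V5'], record, up)
```

Because the recorded backups are edge-disjoint from the failed path, a single link failure can never disable all of them at once, which is what makes the minimum-cost selection safe here.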
Further, the apparatus includes an establishing module connected to the standby link acquisition module 32. The establishing module comprises:
an acquisition unit, configured to acquire a network topology, a source node, a destination node, and a preset number of paths of the SDN; the network topology includes the connection paths between the nodes of the SDN and the weight of each connection path;
an establishing unit, connected to the acquisition unit and configured to obtain, according to each connection path and its weight, a plurality of loop-free redundant paths with the minimum moving cost between the source node and the destination node, and to establish a path record table from the redundant paths;
wherein the number of redundant paths equals the preset number of paths, the connection paths contained in different redundant paths do not overlap, and the moving cost of a redundant path is the sum of the weights of the connection paths it contains.
The establishing unit is specifically configured to:
adding the source node, the moving path corresponding to the source node, and its moving cost as a path unit to a priority queue, thereby establishing the priority queue, and establishing a blank path record table; the priority queue contains the added path units, and a path unit comprises a node, the moving path corresponding to the node, and the moving cost of that moving path; the moving path corresponding to the ith node is the path from the source node to the ith node, where i is a positive integer;
obtaining a redundant path: taking out and deleting the path unit with the minimum moving cost from the priority queue as a current path unit, and judging whether a current node in the current path unit is a destination node or not;
if the current node is the destination node, judging whether a loop exists in the current moving path in the current path unit and whether an overlapped connecting path exists between the current moving path and the path in the path record table;
if the current moving path has no loop and no overlapped connecting path, taking the current moving path as a redundant path and adding the redundant path into a path record table;
and repeating the step of obtaining the redundant paths until the number of the redundant paths in the path record table reaches the preset path number to obtain the path record table.
The establishing unit is further configured to: and if the current node is not the destination node, acquiring each next node connected with the current node, and updating the priority queue according to each next node.
Further, the establishing unit is further specifically configured to:
acquiring the next moving path corresponding to each next node, and judging whether the next moving path contains a loop and whether it overlaps any path in the path record table;
if the next moving path contains no loop and no overlapping connection path, adding the next node, the next moving path, and its moving cost as a new path unit to the priority queue;
and sorting the path units by moving cost to obtain an updated priority queue.
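The queue-update step just described can be sketched as follows. The names are illustrative assumptions, and Python's `heapq` stands in for the explicit sorting by moving cost.

```python
import heapq

def update_queue(queue, current_unit, graph, recorded_edges):
    """Expand the current path unit with each admissible next node.

    queue holds (moving cost, moving path) units; heapq keeps them ordered
    by moving cost, playing the role of the sorting step.
    """
    cost, path = current_unit
    node = path[-1]
    for nxt, weight in graph[node].items():
        if nxt in path:
            continue        # next moving path would contain a loop
        if frozenset((node, nxt)) in recorded_edges:
            continue        # next moving path overlaps a recorded path
        heapq.heappush(queue, (cost + weight, path + [nxt]))
    return queue

# Example: expand (0, ['V0']) in a toy topology where edge V0-V2 already
# belongs to a recorded path and must therefore be discarded.
graph = {'V0': {'V1': 1, 'V2': 1}, 'V1': {'V0': 1}, 'V2': {'V0': 1}}
q = update_queue([], (0, ['V0']), graph, {frozenset(('V0', 'V2'))})
```

Filtering at expansion time keeps looping and overlapping path units out of the queue entirely, so the destination-node check only has to re-verify paths that were admissible when pushed.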
In the embodiment of the invention, when a current link of the SDN fails, a pre-stored standby link corresponding to the current link is acquired; a target link is selected from the standby links; and the current link is switched to the target link. The network link fault processing device provided by the embodiment of the invention does not need to bind links, so it achieves lower configuration, management, and operation and maintenance costs and has better flexibility and expansibility. Because the network does not need to reconverge and compute a new route when a link fails, a large amount of fault-repair computation time is saved, the processing speed for network link faults is improved, and the impact on the upper-layer virtual network is reduced.
Example three:
referring to fig. 17, an embodiment of the present invention further provides a controller 100, including: a processor 40, a memory 41, a bus 42 and a communication interface 43, wherein the processor 40, the communication interface 43 and the memory 41 are connected through the bus 42; the processor 40 is arranged to execute executable modules, such as computer programs, stored in the memory 41.
The memory 41 may include a high-speed random access memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between this system's network element and at least one other network element is realized through at least one communication interface 43 (wired or wireless), which may use the internet, a wide area network, a local area network, a metropolitan area network, etc.
The bus 42 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 17, but that does not indicate only one bus or one type of bus.
The memory 41 is configured to store a program, and the processor 40 executes the program after receiving an execution instruction. The method executed by the apparatus defined by the flows disclosed in any of the foregoing embodiments may be applied to, or implemented by, the processor 40.
The processor 40 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits in hardware or by instructions in the form of software in the processor 40. The processor 40 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory 41, and the processor 40 reads the information in the memory 41 and completes the steps of the method in combination with its hardware.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatus and controller may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The network link failure processing device and the controller provided by the embodiment of the invention have the same technical characteristics as the network link failure processing method provided by the embodiment, so the same technical problems can be solved, and the same technical effects are achieved.
In all examples shown and described herein, any particular value should be construed as merely exemplary, and not as a limitation, and thus other examples of example embodiments may have different values.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product for performing the network link failure processing method provided in the embodiment of the present invention includes a computer-readable storage medium storing a nonvolatile program code executable by a processor, where instructions included in the program code may be used to execute the method described in the foregoing method embodiment, and specific implementation may refer to the method embodiment, which is not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that the above embodiments are only specific embodiments of the present invention, used to illustrate its technical solutions and not to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the technical field can still modify the technical solutions described in the foregoing embodiments, or easily conceive of changes, or make equivalent substitutions for some technical features within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present invention and should be construed as included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A network link failure processing method is applied to a Software Defined Network (SDN), and comprises the following steps:
when a current link of the SDN fails, acquiring a pre-stored standby link corresponding to the current link;
selecting a target link from the standby links;
switching the current link to the target link;
the acquiring of the pre-stored standby link corresponding to the current link includes:
acquiring a source node and a destination node of the current link;
querying paths of the SDN in a preset path record table according to the source node and the destination node;
taking each found path other than the current link as a standby link;
the path record table is established by the following method:
acquiring a network topology structure, a source node, a destination node and a preset path number of the SDN; wherein the network topology comprises connection paths between nodes of the SDN and weights of the connection paths;
according to each connecting path and the weight value thereof, obtaining a plurality of redundant paths which have the minimum moving cost and no loop between the source node and the destination node, and establishing a path record table according to each redundant path;
the number of the redundant paths is equal to the preset number of paths, the connection paths included in each redundant path are not overlapped, and the movement cost of the redundant path is the sum of the weights of the connection paths included in the redundant path.
2. The method according to claim 1, wherein the obtaining, according to each of the connection paths and the weight thereof, a plurality of redundant paths having a minimum moving cost and no loop between the source node and the destination node, and establishing a path record table according to each of the redundant paths includes:
adding the source node, the moving path corresponding to the source node and the moving cost thereof as a path unit into a priority queue to establish the priority queue and a blank path record table; the priority queue comprises the added path units, and the path units comprise nodes, moving paths corresponding to the nodes and moving costs of the moving paths; the moving path corresponding to the ith node is a path moving from the source node to the ith node, and i is a positive integer;
obtaining a redundant path: taking out and deleting the path unit with the minimum moving cost from the priority queue as a current path unit, and judging whether a current node in the current path unit is the destination node or not;
if the current node is the destination node, judging whether a loop exists in a current moving path in the current path unit and whether an overlapped connecting path exists between the current moving path and a path in the path record table;
if the current moving path does not have a loop and an overlapped connecting path does not exist, adding the current moving path as a redundant path into the path record table;
and repeating the step of obtaining the redundant paths until the number of the redundant paths in the path record table reaches the preset path number, so as to obtain the path record table.
3. The method of claim 2, further comprising:
and if the current node is not the destination node, acquiring each next node connected with the current node, and updating the priority queue according to each next node.
4. The method of claim 3, wherein said updating the priority queue according to each of the next nodes comprises:
acquiring a next moving path corresponding to the next node, and judging whether the next moving path has a loop or not and whether the next moving path and a path in the path record table have an overlapped connecting path or not;
if the next moving path has no loop and no overlapped connecting path, adding the next node, the next moving path and the moving cost thereof as a new path unit to the priority queue;
and sorting the path units by moving cost to obtain an updated priority queue.
5. The method of claim 1, wherein the selecting the target link from the backup links comprises:
screening out normal links which are not affected by the fault from the standby links;
and selecting the normal link with the minimum moving cost as the target link.
6. The method of claim 1, wherein the switching the current link to the target link comprises:
and issuing a flow table corresponding to the target link to switch to the target link.
7. A network link failure handling apparatus applied to a software defined network SDN, the apparatus comprising:
the standby link acquisition module is used for acquiring a prestored standby link corresponding to the current link when the current link of the SDN fails;
the target link selection module is used for selecting a target link from the standby links;
a link switching module, configured to switch the current link to the target link;
the standby link acquisition module is configured to: acquire a source node and a destination node of the current link; query paths of the SDN in a preset path record table according to the source node and the destination node; and take each found path other than the current link as a standby link;
the device also comprises an establishing module, the establishing module is connected with the standby link obtaining module, and the establishing module comprises:
an acquisition unit, configured to acquire a network topology, a source node, a destination node, and a preset number of paths of the SDN; the network topology includes the connection paths between the nodes of the SDN and the weight of each connection path;
an establishing unit, connected to the acquisition unit and configured to obtain, according to each connection path and its weight, a plurality of loop-free redundant paths with the minimum moving cost between the source node and the destination node, and to establish a path record table from the redundant paths;
wherein the number of redundant paths equals the preset number of paths, the connection paths contained in different redundant paths do not overlap, and the moving cost of a redundant path is the sum of the weights of the connection paths it contains.
8. A controller comprising a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein the processor implements the method of any of claims 1-6 when executing the computer program.
CN201810396284.9A 2018-04-27 2018-04-27 Network link fault processing method and device and controller Active CN108667727B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810396284.9A CN108667727B (en) 2018-04-27 2018-04-27 Network link fault processing method and device and controller

Publications (2)

Publication Number Publication Date
CN108667727A CN108667727A (en) 2018-10-16
CN108667727B true CN108667727B (en) 2021-03-16

Family

ID=63781245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810396284.9A Active CN108667727B (en) 2018-04-27 2018-04-27 Network link fault processing method and device and controller

Country Status (1)

Country Link
CN (1) CN108667727B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109361545A (en) * 2018-11-01 2019-02-19 郑州云海信息技术有限公司 A kind of method and device of software defined network SDN controller control link switching
CN109951331B (en) * 2019-03-15 2021-08-20 北京百度网讯科技有限公司 Method, device and computing cluster for sending information
CN110933697B (en) * 2019-11-19 2022-12-20 Oppo(重庆)智能科技有限公司 Network state detection method and device, storage medium and electronic equipment
CN113099321B (en) * 2019-12-23 2022-09-30 中国电信股份有限公司 Method, device and computer readable storage medium for determining communication path
CN111404734B (en) 2020-03-06 2021-03-19 北京邮电大学 Cross-layer network fault recovery system and method based on configuration migration
CN111782137A (en) * 2020-06-17 2020-10-16 杭州宏杉科技股份有限公司 Path fault processing method and device
CN112491700B (en) * 2020-12-14 2023-05-02 成都颜创启新信息技术有限公司 Network path adjustment method, system, device, electronic equipment and storage medium
CN114244689A (en) * 2021-12-13 2022-03-25 中国电信股份有限公司 SDN network maintenance method and device, electronic equipment and readable medium
CN116760763B (en) * 2023-08-16 2024-01-09 苏州浪潮智能科技有限公司 Link switching method, device, computing system, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104301146A (en) * 2014-10-23 2015-01-21 杭州华三通信技术有限公司 Link switching method and device in software defined network
US9559948B2 (en) * 2012-02-29 2017-01-31 Dell Products, Lp System and method for managing unknown flows in a flow-based switching device
CN106385363A (en) * 2016-09-18 2017-02-08 北京邮电大学 SDN data plane data-flow backup method and device
CN107465611A (en) * 2017-09-05 2017-12-12 北京东土科技股份有限公司 The pretection switch method and device of SDN controllers and Switch control link


Similar Documents

Publication Publication Date Title
CN108667727B (en) Network link fault processing method and device and controller
US9608900B2 (en) Techniques for flooding optimization for link state protocols in a network topology
CN108833271B (en) Power grid wide area control service communication path selection method and server
KR20170017903A (en) Proactive handling of network faults
EP3214800A1 (en) Method and device for implementing capacity planning
US10831630B2 (en) Fault analysis method and apparatus based on data center
CN110971521B (en) Routing path calculation method, system, device and computer readable storage medium
KR102153814B1 (en) Stochastic Routing Algorithm for Load-balancing Interconnection Network System
CN112217727B (en) Multi-metric-dimension routing method and device, computer equipment and storage medium
EP3125478B1 (en) Method, device, and system for determining intermediate routing node
JP2012015837A (en) Path calculation device, data transfer device, path calculation method, data transfer method and program
JP6581546B2 (en) Reliability evaluation method, reliability evaluation apparatus and program
JP5102804B2 (en) Congestion impact evaluation apparatus, link traffic calculation method and program thereof
CN111884932B (en) Link determining method, device, equipment and computer readable storage medium
CN105812160B (en) A kind of seamless redundant network mode adaptive method and device
CN110417576B (en) Deployment method, device, equipment and storage medium of hybrid software custom network
JP6384481B2 (en) Network design support apparatus, network design method and program
CN114172817A (en) Virtual network function deployment method and system for edge computing
CN112637053A (en) Method and device for determining backup forwarding path of route
CN112910781B (en) Network fault switching method, device, system and storage medium
CN104579963A (en) Method and device for optimizing routes of nodes
JP6418167B2 (en) Network control device, network system, network control method, and program
CN111650878B (en) Method for optimizing programmability of flow when multiple controllers in software defined network fail
JP5618268B2 (en) Network design method and program
JP6233059B2 (en) Network reconfiguration system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant