CN114422529B - Data processing method, device and medium - Google Patents

Data processing method, device and medium

Info

Publication number
CN114422529B
Authority
CN
China
Prior art keywords
controller
database
information
switch
controllers
Prior art date
Legal status
Active
Application number
CN202210073403.3A
Other languages
Chinese (zh)
Other versions
CN114422529A (en)
Inventor
张杰明
李伟哲
Current Assignee
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd filed Critical China United Network Communications Group Co Ltd
Priority to CN202210073403.3A
Publication of CN114422529A
Application granted
Publication of CN114422529B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23 Updating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H04L47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks

Abstract

The data processing method, device, and medium provided by this application comprise the following steps: a first controller receives role response information sent by a first switch and updates a locally stored first database according to the role response information. The first controller adds first summary information of the first database, which includes a first lifetime, to a first content summary information list and broadcasts the first summary information to the other controllers. A second controller, being one of the other controllers that receives the first summary information before the first lifetime expires, sends a first database update request to the first controller, and the first controller sends update data of the first database to the second controller so that the second controller updates its locally stored second database. By building summary information that carries a lifetime, the method prevents all controllers from receiving the summary information, distributes the database synchronization pressure across the controllers, and reduces the database synchronization pressure on the first controller.

Description

Data processing method, device and medium
Technical Field
The present disclosure relates to the field of network communications technologies, and in particular, to a data processing method, device, and medium.
Background
In a software defined network (Software Defined Network, SDN for short), the core technology OpenFlow separates data from control. The control layer comprises a logically centralized, programmable controller that can master global network information, making it convenient for users to manage and configure the network and deploy new protocols. The data layer comprises switches that forward data and can quickly process matching packets. The controller issues unified, standard rules to the switches through a standard interface, and the switches execute the corresponding actions according to these rules.
In order to increase the popularity of SDN networks, the data consistency of SDN controllers needs to be ensured. In the prior art, a controller redundancy and load balancing scheme is used: when an OpenFlow controller connects to an OpenFlow switch, controllers are divided into three roles, namely peer controller (EQUAL), master controller (MASTER), and slave controller (SLAVE). The master controller has full operating authority over the OpenFlow switch, slave controllers can only read from the OpenFlow switch, and when the master controller goes down, the remaining slave controllers elect a new master controller. The other controllers then synchronize their data to the new master controller.
That is, in the currently used controller redundancy and load balancing scheme, all controllers synchronize data to the master controller, which can overload the master controller.
Disclosure of Invention
The application provides a data processing method, device, and medium, which are used to solve the problem that the master controller becomes overloaded because all controllers synchronize data to it.
In a first aspect, the present application provides a data processing method, the method being applied to a first controller, the method comprising:
receiving role response information sent by a first switch, and updating a locally stored first database according to the role response information; wherein the role response information is generated when the master controller of the first switch changes;
adding first summary information of the first database to a first content summary information list, and broadcasting the first summary information to the other controllers; wherein the first summary information includes a first lifetime;
receiving a first database update request sent by a second controller, the second controller being one of the other controllers that received the first summary information within the first lifetime;
and sending update data of the first database to the second controller, so that the second controller updates a locally stored second database according to the update data of the first database.
In a specific embodiment, before the first controller receives the role response information sent by the first switch, the method further includes:
receiving a controlled request sent by the first switch, and acquiring the load quantity of each of the other controllers recorded in the first database as well as the load quantity of the first controller;
when the load quantity of the first controller is less than or equal to the load quantity of every other controller, generating a role competition request;
and sending the role competition request to the first switch, so that the first switch generates the role response information according to the time at which it receives the role competition request sent by the first controller and the time at which it receives other competition requests, where the other competition requests are sent by controllers whose load quantity equals that of the first controller.
In a specific embodiment, after the first controller sends the update data of the first database to the second controller, the method further includes:
acquiring neighbor link information of the LLDP protocol in the first database, and calculating a first shortest path for data transmission between a second switch and a third switch;
generating a consensus request according to the first shortest path, and sending the consensus request to the other controllers;
and receiving authentication information sent by the other controllers, and storing the first shortest path in a blockchain when the authentication information meets a consensus condition.
In a specific embodiment, the authentication information meeting the consensus condition specifically includes:
the first shortest path is smaller than all paths in a reference path set, where the paths in the reference path set are the shortest paths for data transmission between the second switch and the third switch generated by the other controllers; or
the first shortest path is equal to the smallest path in the reference path set, and the calculation time of the first controller is smaller than all calculation times in a reference time set, where the calculation times in the reference time set are the calculation times corresponding to the smallest path in the reference path set.
In a second aspect, the present application provides a data processing method, the method being applied to a second controller, the method comprising:
receiving first summary information of a first database broadcast by a first controller; wherein the first summary information is added to a first content summary information list after the first controller updates the locally stored first database according to role response information generated when the master controller of a first switch changes;
generating a first database update request when the time at which the first summary information is received is earlier than a first lifetime;
sending the first database update request to the first controller, so that the first controller, upon receiving the first database update request, sends update data of the first database to the second controller;
and updating a locally stored second database according to the update data of the first database.
In a specific embodiment, after the second controller receives the first summary information of the first database broadcast by the first controller, the method further includes:
receiving a second database update request sent by a third controller, the third controller being one of the other controllers that did not receive the first summary information within the first lifetime but received second summary information within a second lifetime, and that behaves similarly to the second controller;
and sending update data of the second database to the third controller, so that the third controller updates a locally stored third database according to the update data of the second database.
In one embodiment, the method further comprises:
updating the first summary information to obtain the second summary information; wherein the first lifetime of the first summary information is different from the second lifetime of the second summary information;
and adding the second summary information to a second content summary information list, and broadcasting the second summary information.
In a third aspect, the present application provides a data processing apparatus comprising:
the receiving module is used for receiving role response information sent by the first switch;
the processing module is used for updating a locally stored first database according to the role response information; wherein the role response information is generated when the master controller of the first switch changes;
the processing module is further configured to add first summary information of the first database to a first content summary information list;
the sending module is used for broadcasting the first abstract information to other controllers; wherein the first summary information includes a first lifetime;
The receiving module is further configured to receive a first database update request sent by a second controller, where the second controller is one of the other controllers that received the first summary information within the first lifetime;
the sending module is further configured to send update data of the first database to the second controller, so that the second controller updates the locally stored second database according to the update data of the first database.
In a fourth aspect, the present application provides a data processing apparatus comprising:
the receiving module is used for receiving first summary information of a first database broadcast by the first controller; wherein the first summary information is added to a first content summary information list after the first controller updates a locally stored first database according to role response information generated when the master controller of a first switch changes;
the processing module is used for generating a first database update request when the time of receiving the first summary information is earlier than the first lifetime;
the sending module is used for sending the first database update request to the first controller, so that the first controller sends update data of the first database to the second controller when receiving the first database update request;
and the processing module is used for updating the locally stored second database according to the update data of the first database.
In a fifth aspect, the present application provides an electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the data processing method according to any one of the first or second aspects.
In a sixth aspect, the present application provides a computer-readable storage medium having computer-executable instructions stored therein, the computer-executable instructions being used to implement the data processing method of any one of the first or second aspects when executed by a processor.
In a seventh aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the data processing method of any one of the first or second aspects.
The application provides a data processing method, device, and medium. Compared with the prior art, the first switch generates role response information when its master controller changes, and the first controller receives the role response information sent by the first switch and updates the locally stored first database accordingly. The first controller adds first summary information of the first database, which includes a first lifetime, to the first content summary information list and broadcasts the first summary information to the other controllers. A controller that receives the first summary information within the first lifetime, such as the second controller, sends a first database update request to the first controller. The first controller sends the update data of the first database to the second controller, and the second controller updates the locally stored second database according to that update data. By creating summary information, the application distributes the database synchronization pressure across the controllers, reduces the network bandwidth consumed by data synchronization, reduces the database synchronization pressure on the first controller, and avoids the overload that would result from all controllers synchronizing data to the first controller.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present application, and a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic application scenario diagram of a data processing method provided in the present application;
FIG. 2 is an interaction diagram of a first embodiment of a data processing method provided in the present application;
FIG. 3 is a flowchart illustrating a second embodiment of a data processing method according to the present application;
FIG. 4 is a schematic diagram of a data processing method provided in the present application;
FIG. 5 is a schematic diagram of a data processing method provided in the present application;
FIG. 6 is a schematic diagram of a data processing method provided herein;
FIG. 7 is a schematic diagram of a data processing method provided in the present application;
FIG. 8 is a schematic diagram of a data processing method provided herein;
FIG. 9 is a schematic diagram of a data processing method provided herein;
FIG. 10 is a schematic diagram of a first embodiment of a data processing apparatus according to the present application;
FIG. 11 is a schematic diagram of a second embodiment of a data processing apparatus according to the present application;
fig. 12 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this application without inventive effort fall within the scope of protection of this application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The terms referred to in this application are explained first:
software defined network (Software Defined Network, SDN for short): a novel network innovation architecture, which is an implementation mode of network virtualization. The core technology open flow (Openflow) separates data from control by using a layered idea, so that flexible control of network flow is realized, the network becomes more intelligent as a pipeline, and a good platform is provided for innovation of the core network and application. The control plane comprises a logic centralized and programmable controller, can grasp global network information, and is convenient for a user to manage and configure a network and deploy a new protocol. In the data plane, including switches that forward data, matching packets can be processed quickly. The interface in SDN has openness, taking the controller as a logic center, the south interface, namely a control-data-plane interface (CDPI) is responsible for communicating with a data plane, and the north interface (northbound interface, NBI) is responsible for communicating with an application plane, and the east-west interface is responsible for communicating among multiple controllers. The most popular southbound interface CDPI employs the OpenFlow protocol. The OpenFlow is most basically characterized in that forwarding rules are matched based on a Flow (Flow) concept, each switch maintains a Flow Table (Flow Table), forwarding is performed according to the forwarding rules in the Flow Table, and establishment, maintenance and issuing of the Flow Table are completed by a controller. Aiming at the northbound interface, the application program invokes various required network resources through northbound interface programming, so that quick configuration and deployment of the network are realized. The east-west interface enables the controller to have expandability, and provides technical support for load balancing and performance improvement.
OpenFlow: a network communication protocol that belongs to the data link layer and can control the forwarding plane of a network switch or router, thereby changing the network path taken by network data packets. An OpenFlow network is composed of three parts, namely switches (OpenFlow switches), a network virtualization layer (FlowVisor), and a controller (Controller). The switches perform data-plane forwarding; the network virtualization layer virtualizes the network; and the controller centrally controls the network to realize the functions of the control layer.
Controller: responsible for traffic control to keep the network intelligent. The OpenFlow controller decides the transmission path of all packets in the network.
Switch: the core component of the entire OpenFlow network, mainly responsible for data-layer forwarding. An OpenFlow switch is composed of three parts, namely a Flow Table, a Secure Channel, and the OpenFlow Protocol.
Flow Table: consists of a number of flow entries, each of which is a forwarding rule. A data packet entering the switch obtains its forwarding destination port by looking up the flow table. A flow entry consists of a header field, counters, and actions; the header field is a ten-tuple and is the identity of the flow entry; the counters record statistics for the flow entry; and the actions specify the operations that a packet matching the flow entry should perform.
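As a rough illustration of the flow entry structure just described, the following Python sketch models a flow entry with a header field (match), counters, and actions. The class and field names are illustrative assumptions and do not follow the exact OpenFlow message format.

```python
from dataclasses import dataclass, field
from typing import Tuple

# Illustrative sketch of a flow entry: a header-field match, counters,
# and the actions to apply. Field names are assumptions for this sketch.
@dataclass
class FlowEntry:
    match: Tuple                                   # header-field tuple identifying the flow
    packet_count: int = 0                          # counter: packets matched so far
    byte_count: int = 0                            # counter: bytes matched so far
    actions: list = field(default_factory=list)    # e.g. ["output:2"]

    def matches(self, packet_headers: Tuple) -> bool:
        # A real switch also supports wildcards; exact match keeps the sketch simple.
        return self.match == packet_headers

# A data packet entering the switch looks up the flow table for its output port.
flow_table = [FlowEntry(match=("10.0.0.1", "10.0.0.2", 80), actions=["output:2"])]
```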
Secure Channel: the interface that connects the OpenFlow switch to the controller. The controller controls and manages the switch through this interface, receives events from the switch, and sends data packets to the switch. The switch and the controller communicate over the secure channel, and all messages must follow the format specified by the OpenFlow protocol.
Peer-to-Peer (P2P for short): a type of network that allows a group of users to connect to each other and obtain files directly from each other's hard disks. A peer-to-peer application runs on personal computers and shares files among users over the network; files are shared by connecting to personal computers rather than through a central server. P2P is a distributed network in which participants share part of the hardware resources they possess (processing power, storage, network connectivity, printers, and so on). These shared resources provide services and content to the network and can be accessed directly by other peer nodes without going through intermediate entities. The participants in such a network are both providers of resources (services and content), i.e., servers, and acquirers of resources, i.e., clients. Characteristics of P2P: there is no central server; users are interconnected and share files.
Proof-of-Work mechanism (PoW): the size of a node's contribution is demonstrated by the results of its work, and accounting rights and rewards are determined according to that contribution.
Link Layer Discovery Protocol (LLDP): a data link layer protocol. A network device can advertise its own status to other devices by sending Link Layer Discovery Protocol Data Units (LLDPDUs) on the local network. It is a protocol that enables devices in a network to discover each other and advertise and exchange status information.
In order to increase the popularity of SDN networks, the data consistency of SDN controllers needs to be ensured. In the prior art, a controller redundancy and load balancing scheme is used: after a controller establishes a connection with a switch, controllers are divided into three roles, namely peer controller (EQUAL), master controller (MASTER), and slave controller (SLAVE). The master controller has full authority to operate the switch, slave controllers can only read from the switch, and when the master controller goes down, the remaining slave controllers elect a new master controller. The other controllers then need to synchronize data to the new master controller, which can become overloaded.
In view of the problems in the prior art, the inventors found, while researching SDN controller cluster systems, that the fully distributed, single-point-fault-tolerant P2P protocol of a blockchain network can be introduced into an SDN controller cluster system. The SDN controllers synchronize their databases over the blockchain network's P2P protocol and create summary information: the master controller sends summary information carrying a lifetime to the other controllers, and those controllers that receive the summary information within its lifetime actively synchronize the specific database information from the master controller. The summary information is timed from the moment it is sent out and is automatically deleted once the elapsed time reaches the lifetime, so some of the other controllers never receive it. The controllers that did not receive the summary information do not synchronize the specific database information from the master controller; instead, they synchronize their databases from controllers that have already updated theirs. In this way the database synchronization pressure is dispersed across the controllers, the network bandwidth consumed by data synchronization is reduced, and the database synchronization pressure on the master controller is reduced. The data processing scheme in the present application is designed based on this inventive concept.
The data processing scheme of the present application is described in detail below.
Fig. 1 is a schematic diagram of an application scenario of the data processing method provided in the present application. As shown in fig. 1, the application scenario may include: a first controller 101, a second controller 102, a third controller 103, a fourth controller 104, and at least one switch (seven switches are shown in fig. 1, namely a first switch 105, a second switch 106, a seventh switch 107, a fourth switch 108, a fifth switch 109, a sixth switch 110, and a third switch 111). The first controller 101 is the master controller of the second switch 106, the second controller 102 is the master controller of the seventh switch 107 and the fourth switch 108, the third controller 103 is the master controller of the fifth switch 109 and the sixth switch 110, and the fourth controller 104 is the master controller of the third switch 111.
Illustratively, in the application scenario shown in fig. 1, the first switch 105 is in a state without a master controller. When the first switch 105 receives the role competition requests of the first controller 101 and the fourth controller 104, the first switch 105 generates role response information according to the time at which it receives the role competition request sent by the first controller 101 and the time at which it receives the role competition request sent by the fourth controller 104, and the first switch 105 sends the role response information to the first controller 101, which becomes the new master controller of the first switch 105, and to the fourth controller 104, which becomes a slave controller of the first switch 105.
The first controller 101 receives the role response information sent by the first switch 105 and updates the locally stored first database according to the role response information. The first controller 101 adds first summary information of the first database, which includes a first lifetime, to the first content summary information list and broadcasts the first summary information to the other controllers, namely the second controller 102, the third controller 103, and the fourth controller 104. Further, after the first controller 101 receives the first database update request sent by the second controller 102, it sends the update data of the first database to the second controller.
The second controller 102, being one of the other controllers that receives the first summary information within the first lifetime, sends a first database update request to the first controller 101, and after receiving the update data of the first database sent by the first controller 101, the second controller 102 updates the locally stored second database according to the update data of the first database. The second controller 102 adds second summary information of the second database to the second content summary information list and broadcasts the second summary information to the other controllers. Further, after the second controller 102 receives the second database update request sent by the third controller 103, it sends the update data of the second database to the third controller.
The third controller 103 does not receive the first summary information sent by the first controller 101 within the first lifetime but receives the second summary information sent by the second controller 102 within the second lifetime, so it sends a second database update request to the second controller 102; after receiving the update data of the second database sent by the second controller 102, the third controller 103 updates the locally stored third database according to the update data of the second database. The third controller 103 adds third summary information of the third database to the third content summary information list and broadcasts the third summary information to the other controllers. Further, after the third controller 103 receives the third database update request sent by the fourth controller 104, it sends the update data of the third database to the fourth controller.
The fourth controller 104 does not receive the first summary information sent by the first controller 101 within the first lifetime, nor the second summary information sent by the second controller 102 within the second lifetime, but receives the third summary information sent by the third controller 103 within the third lifetime. The fourth controller 104 sends a third database update request to the third controller 103, and after receiving the update data of the third database sent by the third controller 103, the fourth controller 104 updates the locally stored fourth database according to the update data of the third database.
Based on the above process, the first switch generates role response information when its master controller changes, and the first controller receives the role response information sent by the first switch and updates the locally stored first database according to it. The first controller adds first summary information of the first database, which includes a first lifetime, to the first content summary information list and broadcasts the first summary information to the other controllers. A second controller, being one of the other controllers that receives the first summary information within the first lifetime, sends a first database update request to the first controller; the first controller sends the update data of the first database to the second controller, and the second controller updates the locally stored second database according to that update data. By creating summary information, the database synchronization pressure is distributed across the controllers, the network bandwidth consumed by data synchronization is reduced, the database synchronization pressure on the first controller is reduced, and the overload that would result from all controllers synchronizing data to the first controller is avoided.
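As a rough end-to-end illustration of this propagation, the short Python simulation below hard-codes which controller receives which summary information within its lifetime, mirroring the scenario above (the second controller sees the first controller's summary in time, the third only sees the second's, the fourth only sees the third's). The data structures and version strings are purely illustrative assumptions.

```python
# Hypothetical walk-through of the scenario in fig. 1: c1 has just updated its database.
databases = {"c1": "v2", "c2": "v1", "c3": "v1", "c4": "v1"}

# (issuer, receiver) pairs for which the summary information arrived before its lifetime expired.
received_in_time = {("c1", "c2"), ("c2", "c3"), ("c3", "c4")}

def sync_round(issuer: str) -> list:
    """The issuer broadcasts its summary information; every controller that received it
    in time requests the update data and becomes an issuer for the next round."""
    next_issuers = []
    for receiver in databases:
        if receiver != issuer and (issuer, receiver) in received_in_time:
            databases[receiver] = databases[issuer]   # update via the requested update data
            next_issuers.append(receiver)
    return next_issuers

frontier = ["c1"]
while frontier:
    frontier = [nxt for cur in frontier for nxt in sync_round(cur)]

print(databases)   # {'c1': 'v2', 'c2': 'v2', 'c3': 'v2', 'c4': 'v2'}
```

Each controller serves at most one downstream peer in this example, which is how the synchronization pressure ends up dispersed rather than concentrated on the first controller.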
It should be noted that, fig. 1 is only a schematic diagram of an application scenario provided by the embodiment of the present application, the embodiment of the present application does not limit the actual forms of the various devices included in fig. 1, and does not limit the interaction manner between the devices in fig. 1, and in a specific application of the solution, the application may be set according to actual requirements.
The following describes the technical scheme of the present application in detail through specific embodiments. It should be noted that the following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 2 is an interaction schematic diagram of a first embodiment of a data processing method provided in the present application. As shown in fig. 2, the data processing method specifically includes the following steps:
step S201: the first switch transmits character response information to the first controller.
Wherein the character response information is generated when the master controller of the first switch changes.
In the same OpenFlow switch network, the master controller of the first switch is down, so the first switch that it controlled is in a state without a master controller. When a data packet arrives at the first switch, the first switch asks its master controller how to process the packet; because the master controller of the first switch is down, the first switch sends a controlled request to the other controllers in the OpenFlow switch network.
The first controller receives the controlled request sent by the first switch and obtains the load quantity of each of the other controllers recorded in the first database as well as the load quantity of the first controller;
when the load quantity of the first controller is less than or equal to the load quantity of every other controller, the first controller generates a role competition request;
the first controller sends the role competition request to the first switch, so that the first switch generates role response information according to the time at which it receives the role competition request sent by the first controller and the time at which it receives other competition requests, where the other competition requests are sent by controllers whose load quantity equals that of the first controller. In this embodiment the load quantity of the fourth controller is the same as that of the first controller, so the first switch receives the role competition requests sent by the first controller and the fourth controller, generates the role response information according to the time at which it receives the role competition request sent by the first controller and the time at which it receives the role competition request sent by the fourth controller, and sends the role response information to the first controller and the fourth controller.
Specifically, each controller receives the controlled request sent by the first switch; four controllers are taken as an example here. The first controller receives the controlled request sent by the first switch, acquires the load quantity of each of the other controllers recorded in the first database and the load quantity of the first controller, and determines that the load quantity of the first controller is less than or equal to the load quantity of every other controller. The fourth controller receives the controlled request sent by the first switch, acquires the load quantity of each of the other controllers recorded in its fourth database and the load quantity of the fourth controller, and determines that the load quantity of the fourth controller is the same as that of the first controller, i.e., the load quantity of the fourth controller is also less than or equal to the load quantity of every other controller. Illustratively, as shown in fig. 1, the first controller is the master controller of one switch, the second controller is the master controller of two switches, the third controller is the master controller of two switches, and the fourth controller is the master controller of one switch; that is, the load quantities of the first controller and the fourth controller are smaller than those of the second controller and the third controller, so the first controller and the fourth controller generate a role competition request OFPT_ROLE_REQUEST. Fig. 4 is a schematic diagram of the data processing method provided in the present application; as shown in fig. 4, the first controller and the fourth controller send the role competition request OFPT_ROLE_REQUEST to the first switch. The time at which the first switch receives the role competition request sent by the first controller is earlier than the time at which it receives the role competition request sent by the fourth controller, and the first switch generates role response information according to these two times. The first switch returns role response information OFPT_ROLE_REPLY to the first controller and the fourth controller, in which the role field indicates the current role of the controller. Illustratively, the OFPT_ROLE_REPLY returned by the first switch to the first controller indicates that the first controller is the master controller of the first switch, and the OFPT_ROLE_REPLY returned to the fourth controller indicates that the fourth controller is a slave controller of the first switch. Fig. 5 is a schematic diagram of the data processing method provided in the present application; as shown in fig. 5, the first controller 501 is the master controller of the first switch 505, and the second controller 502, the third controller 503, and the fourth controller 504 are slave controllers of the first switch.
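The load-based role election just described can be sketched as follows. This is a simplified Python illustration; the dictionaries standing in for the database's load records and the switch's request arrival times are assumptions made for the sketch, not structures defined by the application, and only the message names OFPT_ROLE_REQUEST/OFPT_ROLE_REPLY come from the description.

```python
def should_contend(my_id: str, loads: dict) -> bool:
    """A controller contends for the master role only if its load quantity is
    less than or equal to every other controller's load quantity."""
    my_load = loads[my_id]
    return all(my_load <= load for cid, load in loads.items() if cid != my_id)

def elect_master(request_arrival_times: dict) -> str:
    """The switch grants the master role to the contender whose
    OFPT_ROLE_REQUEST arrived first; the others become slaves (OFPT_ROLE_REPLY)."""
    return min(request_arrival_times, key=request_arrival_times.get)

# Example matching fig. 1: c1 and c4 each master one switch, c2 and c3 each master two,
# so only c1 and c4 contend; c1's request arrives first and it becomes the master.
loads = {"c1": 1, "c2": 2, "c3": 2, "c4": 1}
contenders = [c for c in loads if should_contend(c, loads)]   # ['c1', 'c4']
master = elect_master({"c1": 0.8, "c4": 1.3})                 # 'c1'
```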
Step S202: the first controller updates a locally stored first database according to the received character response information.
The first controller receives ROLE response information sent by the first switch, wherein the ROLE response information OFPT_ROLE_REPLY indicates that the first controller is a master controller of the first switch. And the first controller updates the data related to the first switch in the first database locally stored by the first controller according to the role response information sent by the first switch.
Specifically, the local database stores: a database of basic information about all switches in the network; the flow table, group table, and meter databases of all switches in the network; the network-wide LLDP switch neighbor link information database; the network-wide switch ECMP database; the network-wide switch port mapping database; and so on.
Step S203: the first controller adds the first summary information of the first database to a first list of content summary information.
After the first controller updates the locally stored first database according to the character response information, the first controller generates first summary information according to the change of the first database, and the first controller adds the first summary information of the first database to the first content summary information list. Wherein the first summary information includes a first lifetime.
Step S204: the first controller sends first summary information of the first database to the second controller.
In the P2P network of a blockchain, any two controllers can directly exchange data and communicate. The first controller broadcasts the first summary information to the other controllers; here, the first controller sends the first summary information to the second controller. The first summary information is a small packet that contains a first lifetime, which is the maximum time the first summary information may exist on the network; beyond this time the first summary information is discarded. The first summary information may also include the time of the first database update.
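The summary information packet and its broadcast can be sketched as below. This is a minimal Python illustration; the SHA-256 content hash, the field names, and the inbox-based transport are assumptions made for the sketch rather than a format specified by the application.

```python
import hashlib
import time
from dataclasses import dataclass

@dataclass
class SummaryInfo:
    content_hash: str   # hash identifying the updated first-database content (assumption)
    created_at: float   # moment the summary information was issued
    lifetime: float     # first lifetime: maximum time the packet may exist on the network
    updated_at: float   # time of the first database update (optional per the description)

def make_summary(update_blob: bytes, lifetime: float) -> SummaryInfo:
    now = time.time()
    return SummaryInfo(hashlib.sha256(update_blob).hexdigest(), now, lifetime, now)

def broadcast(summary: SummaryInfo, peer_inboxes: list) -> None:
    # In the blockchain P2P network any two controllers can exchange data directly;
    # appending to each peer's inbox stands in for the real transport.
    for inbox in peer_inboxes:
        inbox.append(summary)

# The first controller issues a summary with a 5-second lifetime and broadcasts it.
first_summary = make_summary(b"first-database-update", lifetime=5.0)
inboxes = [[], [], []]          # second, third, fourth controllers
broadcast(first_summary, inboxes)
```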
Step S205: the second controller generates a first database update request when the first summary information is received earlier than the first lifetime.
After the first controller broadcasts the first summary information to the other controllers, the second controller receives the first summary information of the first database broadcast by the first controller, and the time at which the second controller receives the first summary information is earlier than the first lifetime of the first summary information. The summary information is timed, and when the elapsed time reaches the lifetime the summary information is automatically deleted; some of the other controllers do not receive the first summary information before it is automatically deleted and cannot receive it afterwards. After receiving the first summary information of the first database sent by the first controller, the second controller determines from the first summary information that the first database of the first controller has changed, and generates a first database update request according to the received first summary information.
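On the receiving side, the lifetime check and update-request generation might look like the sketch below (reusing the SummaryInfo structure from the previous sketch). The expiry rule of measuring the receive time against the creation time plus the lifetime, and the request dictionary, are assumptions consistent with the description.

```python
def summary_expired(summary, received_at: float) -> bool:
    # The summary information is timed from the moment it is sent and is
    # deleted once the elapsed time reaches its lifetime (assumed rule).
    return received_at > summary.created_at + summary.lifetime

def handle_summary(summary, received_at: float, local_content_hash: str):
    """Return a first-database update request if the summary is still alive and
    shows that the sender's database differs from the local one; return None
    for controllers that only see the summary after it has been deleted."""
    if summary_expired(summary, received_at):
        return None
    if summary.content_hash == local_content_hash:
        return None                 # already consistent, nothing to request
    return {"type": "DB_UPDATE_REQUEST", "content_hash": summary.content_hash}
```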
Step S206: the second controller sends a first database update request to the first controller.
After the second controller generates the first database update request, the first database update request is sent to the first controller to update the second database locally stored by the second controller according to the specific information of the first database data update.
Step S207: the first controller sends update data of the first database to the second controller.
After receiving the first database update request sent by the second controller, the first controller sends update data of the first database to the second controller.
Step S208: the second controller updates the locally stored second database according to the update data of the first database.
The second controller receives the update data of the first database sent by the first controller, and updates the locally stored second database according to the update data of the first database.
After the second controller updates the locally stored second database, the second controller updates the first summary information to obtain second summary information; wherein the second lifetime of the second summary information is different from the first lifetime of the first summary information. Optionally, the second summary information may further include the update time of the second database.
The second controller adds the second summary information to the second content summary information list and broadcasts the second summary information.
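A sketch of how the second controller could derive and rebroadcast its second summary information follows; the dictionary fields mirror the summary-information sketch above and are illustrative assumptions, not a format defined by the application.

```python
import hashlib
import time

def make_second_summary(first_summary: dict, second_db_blob: bytes,
                        new_lifetime: float) -> dict:
    """After updating its local (second) database from the first controller's update
    data, the second controller derives second summary information: the same kind of
    packet, but with its own content hash, its own update time, and a second lifetime."""
    # The second lifetime differs from the first lifetime carried in first_summary.
    assert new_lifetime != first_summary["lifetime"]
    now = time.time()
    return {
        "content_hash": hashlib.sha256(second_db_blob).hexdigest(),
        "created_at": now,
        "lifetime": new_lifetime,
        "updated_at": now,          # optional update time of the second database
    }

def rebroadcast(summary: dict, content_summary_list: list, peer_inboxes: list) -> None:
    # Add to the local (second) content summary information list, then broadcast.
    content_summary_list.append(summary)
    for inbox in peer_inboxes:
        inbox.append(summary)
```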
After the second controller broadcasts the second summary information to the other controllers, the third controller receives the second summary information of the second database broadcast by the second controller, and the time at which the third controller receives the second summary information is earlier than the second lifetime of the second summary information. The third controller generates a second database update request according to the received second summary information of the second database broadcast by the second controller, and sends the second database update request to the second controller. The third controller is one of the other controllers that does not receive the first summary information within the first lifetime but receives the second summary information within the second lifetime, and it behaves similarly to the second controller.
The second controller receives a second database update request sent by the third controller, and sends update data of the second database to the third controller so that the third controller updates the third database stored locally according to the update data of the second database.
After the third controller updates the locally stored third database according to the update data of the second database, the third controller updates the second summary information to obtain third summary information; wherein the third lifetime of the third summary information is different from the second lifetime of the second summary information. Optionally, the third summary information may further include the update time of the third database. The third controller adds the third summary information to the third content summary information list and broadcasts the third summary information.
The fourth controller receives the third summary information of the third database broadcast by the third controller, generates a third database update request according to the received third summary information, and sends the third database update request to the third controller. The fourth controller is one of the other controllers that receives neither the first summary information within the first lifetime nor the second summary information within the second lifetime, but receives the third summary information within the third lifetime, and it behaves similarly to the third controller.
In the data processing method provided in this embodiment, the first switch sends the role response information to the first controller. The first controller updates the locally stored first database according to the received role response information, adds the first summary information of the first database to the first content summary information list, and sends the first summary information of the first database to the second controller. The second controller generates a first database update request when the time of receiving the first summary information is earlier than the first lifetime and sends the first database update request to the first controller. The first controller sends the update data of the first database to the second controller, and the second controller updates the locally stored second database according to the update data of the first database. The method creates summary information and sets a lifetime for it; the summary information is automatically deleted once the lifetime is reached, so not all controllers receive it, and the controllers are divided into those that receive the summary information within its lifetime and those that do not. A controller that receives the summary information within its lifetime synchronizes its database from the controller that sent the summary information, while a controller that does not receive the summary information within its lifetime synchronizes its database from other controllers that received the summary information and have already synchronized their databases. In this way the database synchronization pressure is distributed across the controllers, the network bandwidth consumed by data synchronization is reduced, and the pressure on the first controller for synchronizing data is reduced.
Fig. 3 is a flow chart of a second embodiment of the data processing method provided in the present application. As shown in fig. 3, after the first controller sends the update data of the first database to the second controller in step S207, the data processing method further includes the following steps:
step S301: the first controller acquires neighbor link information of an LLDP protocol in a first database, and calculates a first shortest path of data transmission between the second switch and the third switch.
Fig. 6 is a schematic diagram of the data processing method provided in the present application. As shown in fig. 6, when only one controller exists in an OpenFlow network, that controller controls all switches in the network; when a shortest path is to be calculated, the controller calculates the shortest path and coordinates the switches in the network to generate the flow tables for the shortest path, so that data packets are forwarded along the shortest path. When there are multiple load-balanced controllers in the network, each controller acts on its own, as shown in fig. 7. Fig. 7 is a schematic diagram of the data processing method provided in the present application, in which the first controller 701 controls the second switch 705, the second controller 702 controls the seventh switch 706 and the fourth switch 709, the third controller 703 controls the fifth switch 707 and the sixth switch 710, and the fourth controller 704 controls the third switch 708. When the shortest path from the second switch to the third switch needs to be calculated, the first controller, the second controller, the third controller, and the fourth controller need to be coordinated.
The first controller acquires neighbor link information of an LLDP protocol in a first database, and calculates a first shortest path of data transmission between the second switch and the third switch.
Specifically, the first controller receives the adjacent network topology link information collected by the second switch through the LLDP protocol and stores the LLDP neighbor link information in the first database. Through the controller database synchronization process of the first embodiment, the first controller stores the LLDP network link information of the whole network, and it calculates the first shortest path for data transmission between the second switch and the third switch according to the LLDP neighbor link information in the first database.
Fig. 8 is a schematic diagram of the data processing method provided in the present application; as shown in fig. 8, the other controllers likewise calculate the shortest path for data transmission between the second switch and the third switch. Specifically, each controller receives the adjacent network topology link information collected through the LLDP protocol by the switches it controls and stores the LLDP neighbor link information in its local database. Through the controller database synchronization process of the first embodiment, each controller stores the LLDP network link information of the whole network and calculates the shortest path for data transmission between the second switch and the third switch according to the LLDP neighbor link information in its local database.
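The shortest-path computation over the synchronized LLDP neighbor-link information can be sketched with Dijkstra's algorithm. The application does not name a particular algorithm, and the topology and unit link costs below are illustrative assumptions loosely following fig. 7.

```python
import heapq

def shortest_path(links: dict, src: str, dst: str):
    """Dijkstra over LLDP neighbor links: links maps a switch to a
    {neighbor: link_cost} dict built from the synchronized LLDP database."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in links.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    # Rebuild the path from dst back to src.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Illustrative topology loosely following fig. 7 (unit costs are assumptions).
links = {
    "s2": {"s7": 1, "s5": 1},
    "s7": {"s2": 1, "s4": 1},
    "s5": {"s2": 1, "s6": 1},
    "s4": {"s7": 1, "s3": 1},
    "s6": {"s5": 1, "s3": 1},
    "s3": {"s4": 1, "s6": 1},
}
path, cost = shortest_path(links, "s2", "s3")   # (['s2', 's7', 's4', 's3'], 3)
```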
Optionally, the application program issues a shortest path requirement to all controllers in the network through the northbound interface, the shortest path requirement being used to request the controllers to calculate a shortest path from the second switch to the third switch.
Optionally, after the data packet arrives at the second switch, the second switch sends a data packet processing request to the first controller, which is the master controller of the second switch; the first controller determines that the destination address of the data packet is the third switch and sends the shortest path requirement to the other controllers.
Step S302: the first controller generates a consensus request according to the first shortest path and sends the consensus request to other controllers.
After the first controller calculates the first shortest path for data transmission between the second switch and the third switch, it determines the calculation time taken to compute the first shortest path and the cost value of the first shortest path. The first controller generates a consensus request according to the first shortest path and sends the consensus request to the other controllers, where the consensus request includes the calculation time and the cost value of the first shortest path. The second controller likewise calculates a second shortest path for data transmission between the second switch and the third switch and determines the calculation time and cost value of that path. The second controller generates a consensus request according to the second shortest path and sends it to the other controllers; similarly, the third controller and the fourth controller perform the same steps as the second controller to generate consensus requests, which will not be repeated here.
Step S303: the first controller receives authentication information sent by other controllers and stores the first shortest path in the blockchain when the authentication information meets a consensus condition.
The first controller receives the consensus requests sent by the other controllers and, through a proof-of-work mechanism, uses the path cost value and the calculation time as the proof of work: it compares the cost values and calculation times of the paths submitted by each controller, determines authentication information, and sends the authentication information to the other controllers. The first controller receives the authentication information sent by the other controllers, and when the authentication information meets the consensus condition, the first controller obtains the blockchain accounting right and writes the first shortest path into the blockchain. Each controller then obtains the first shortest path by reading the data on the blockchain and coordinates the relevant switches to generate the corresponding flow tables, so that the data packet is forwarded along the shortest path.
The authentication information satisfies the consensus condition in either of the following cases:
the first shortest path is smaller than every path in a reference path set, where the paths in the reference path set are the shortest paths of data transmission between the second switch and the third switch generated by the other controllers; or
the first shortest path is equal to the smallest path in the reference path set, and the calculation time of the first controller is smaller than every calculation time in a reference time set, where the calculation times in the reference time set are the calculation times corresponding to the smallest path in the reference path set.
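The comparison referred to above can be read as the following check, sketched under the assumption that each received consensus request is reduced to a (cost, calculation time) pair; the function name and data shapes are illustrative.

```python
# Hypothetical sketch of the consensus check: the first controller's result wins
# when its cost is strictly smaller than every reference cost, or when it ties
# the smallest reference cost but was computed faster than every controller
# that reported that smallest cost.
def meets_consensus(own_cost, own_time, others):
    """others: list of (cost, calc_time) pairs received from the other controllers."""
    if not others:
        return True
    smallest = min(cost for cost, _ in others)          # smallest path in the reference set
    if own_cost < smallest:
        return True
    if own_cost == smallest:
        # reference time set: calculation times corresponding to the smallest reference path
        fastest_tie = min(t for cost, t in others if cost == smallest)
        return own_time < fastest_tie
    return False

# Example: the first controller's path costs 2 and took 3 ms; peers report
# (2, 5 ms) and (4, 1 ms); the tie on cost is broken by calculation time.
assert meets_consensus(2, 0.003, [(2, 0.005), (4, 0.001)])
```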
Alternatively, the blockchain can be regarded as a state machine driven by optimal-path changes: when a path change update is received in one state, the chain transitions to a new state in chronological order. Starting from the first, genesis block, a new block is generated each time a controller calculates a new shortest path, and the series of blocks forms the blockchain. Fig. 9 is a schematic diagram of the data processing method provided in the present application; as shown in Fig. 9, a block consists of a block header and a block body (a minimal sketch of this block structure follows the field lists below). The block header contains the following information:
hash value of the parent block: used to link to the previous block by indexing the parent block's hash value;
timestamp: the time at which the block was generated, used to order the blocks and to help the controllers select the latest shortest path;
hash value of the shortest path: the hash value of the block body data, used to verify the integrity of the data.
The block body contains the following information:
shortest path: the complete information of the shortest path;
cost value: the cost value of the shortest path;
calculation time: the calculation time of the shortest path.
Alternatively, the controller may write the new shortest path to the blockchain.
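The block layout described above could be sketched as follows; the SHA-256-over-JSON hashing scheme and the dictionary layout are assumptions made for illustration, not the patent's specified encoding.

```python
# Hypothetical sketch of the block layout: a header with the parent hash, a
# timestamp, and the hash of the shortest-path data, plus a body holding the
# path itself, its cost value, and its calculation time.
import hashlib
import json
import time

def _hash(data: dict) -> str:
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

def make_block(parent_hash, path, cost, calc_time):
    body = {"shortest_path": path, "cost": cost, "calc_time": calc_time}
    header = {
        "parent_hash": parent_hash,        # links this block to the previous one
        "timestamp": time.time(),          # orders blocks; newest path wins
        "path_hash": _hash(body),          # integrity check over the block body
    }
    return {"header": header, "body": body}

# A chain grows from the genesis block; each newly agreed path appends a block.
genesis = make_block("0" * 64, [], 0.0, 0.0)
chain = [genesis]
chain.append(make_block(_hash(chain[-1]["header"]), ["s2", "s1", "s3"], 2, 0.004))
latest_path = chain[-1]["body"]["shortest_path"]   # controllers read the newest block
```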
In the data processing method provided in this embodiment, the first controller obtains the LLDP neighbor link information from the first database and calculates the first shortest path for data transmission between the second switch and the third switch; the first controller generates a consensus request from the first shortest path and sends it to the other controllers; the first controller then receives the authentication information sent by the other controllers and stores the first shortest path in the blockchain when the authentication information satisfies the consensus condition. By having the controllers determine the shortest path through a consensus mechanism, the method effectively solves the problems of coordinating and synchronizing path computation across controllers, and by writing the path policy into the blockchain it ensures that the data cannot be tampered with or forged.
Fig. 10 is a schematic diagram of a first embodiment of a data processing apparatus according to the present application; as shown in Fig. 10, the data processing apparatus 10 includes:
a receiving module 11, configured to receive role response information sent by the first switch;
a processing module 12, configured to update a locally stored first database according to the role response information; wherein the role response information is generated when the master controller of the first switch changes;
the processing module 12 is further configured to add the first summary information of the first database to the first content summary information list;
a sending module 13, configured to broadcast the first summary information to other controllers; wherein the first summary information includes a first lifetime;
the receiving module 11 is further configured to receive a first database update request sent by a second controller, where the second controller is, among the other controllers, a controller that receives the first summary information within the first lifetime;
the sending module 13 is further configured to send update data of the first database to the second controller, so that the second controller updates the locally stored second database according to the update data of the first database.
Further, the receiving module 11 is further configured to receive a controlled request sent by the first switch;
Further, the processing module 12 is further configured to obtain the load quantity of each of the other controllers in the first database and the load quantity of the first controller;
and to generate a role competition request when the load quantity of the first controller is smaller than or equal to the load quantity of any of the other controllers.
Further, the sending module 13 is further configured to send the role competition request to the first switch, so that the first switch generates the role response information according to the time at which the role competition request sent by the first controller is received and the time at which other competition requests are received, where the other competition requests are sent by controllers with the same load quantity as the first controller.
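The role-competition flow handled by these modules can be sketched as below; the load table, the message dictionaries, and the send_to_switch helper are illustrative assumptions, not the patent's message format.

```python
# Hypothetical sketch of the role competition: a controller compares its own
# load with the loads recorded for the other controllers in its database and,
# if it is not busier than any of them, sends a role competition request; the
# switch then names as master the controller whose request arrived first.
def maybe_compete(first_controller_id, loads, send_to_switch):
    """loads: mapping controller_id -> number of switches it currently controls."""
    own_load = loads[first_controller_id]
    others = {cid: n for cid, n in loads.items() if cid != first_controller_id}
    if all(own_load <= n for n in others.values()):
        send_to_switch({"type": "role_competition", "controller": first_controller_id})

def pick_master(arrivals):
    """arrivals: list of (arrival_time, controller_id) for received competition requests.

    The switch answers with role response information naming the controller
    whose competition request arrived earliest.
    """
    arrival_time, controller_id = min(arrivals)
    return {"type": "role_response", "master": controller_id, "at": arrival_time}
```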
Further, the processing module 12 is further configured to, after the first controller sends the update data of the first database to the second controller,
acquire neighbor link information of the LLDP protocol in the first database, and calculate a first shortest path of data transmission between the second switch and the third switch;
and generate a consensus request according to the first shortest path.
Further, the sending module 13 is further configured to send the consensus request to the other controllers.
Further, the receiving module 11 is further configured to receive the authentication information sent by the other controllers, and to store the first shortest path in the blockchain when the authentication information satisfies the consensus condition.
Further, the authentication information satisfies the consensus condition in either of the following cases:
the first shortest path is smaller than every path in a reference path set, where the paths in the reference path set are the shortest paths of data transmission between the second switch and the third switch generated by the other controllers; or
the first shortest path is equal to the smallest path in the reference path set, and the calculation time of the first controller is smaller than every calculation time in a reference time set, where the calculation times in the reference time set are the calculation times corresponding to the smallest path in the reference path set.
The data processing device provided in this embodiment is configured to execute the technical scheme of the first controller in any one of the foregoing method embodiments, and its implementation principle and technical effect are similar, and are not described herein again.
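As a rough illustration of how modules 11-13 of this apparatus cooperate during database synchronization, the sketch below shows the first controller updating its database on a role response, broadcasting a lifetime-bounded summary, and serving a peer's update request. The digest format, the five-second lifetime, and the broadcast/send helpers are assumptions for the example, not details given in the patent.

```python
# Hypothetical sketch of the first controller's side of database synchronization.
import hashlib
import json
import time

class FirstControllerSync:
    def __init__(self, broadcast, send, lifetime=5.0):
        self.db = {}                       # locally stored first database
        self.summary_list = []             # first content summary information list
        self.broadcast = broadcast         # callable: summary -> sent to all peers
        self.send = send                   # callable: (peer_id, payload) -> sent to one peer
        self.lifetime = lifetime

    def on_role_response(self, switch_id, master_id):
        """Update the first database when a switch reports a master-controller change."""
        self.db[switch_id] = master_id
        digest = hashlib.sha256(json.dumps(self.db, sort_keys=True).encode()).hexdigest()
        summary = {"digest": digest, "issued_at": time.time(), "lifetime": self.lifetime}
        self.summary_list.append(summary)              # add to the summary information list
        self.broadcast(summary)                        # broadcast to the other controllers

    def on_update_request(self, peer_id):
        """A peer that saw the summary within its lifetime asks for the update data."""
        self.send(peer_id, {"update_data": self.db})
```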
Fig. 11 is a schematic diagram of a second embodiment of a data processing apparatus according to the present application; as shown in Fig. 11, the apparatus 20 includes:
a receiving module 21, configured to receive first summary information of a first database broadcast by a first controller; where the first summary information is added to a first content summary information list after the first controller updates a locally stored first database according to role response information, and the role response information is generated when the master controller of the first switch changes;
a processing module 22, configured to generate a first database update request when the first summary information is received within the first lifetime;
a sending module 23, configured to send the first database update request to the first controller, so that the first controller sends update data of the first database to the second controller when receiving the first database update request;
the processing module 22 is further configured to update the locally stored second database according to the update data of the first database.
Further, the receiving module 21 is further configured to, after the second controller receives the first summary information of the first database broadcast by the first controller,
receive a second database update request sent by a third controller, where the third controller is, among the other controllers, a controller that does not receive the first summary information within the first lifetime but, similarly to the second controller, receives the second summary information within the second lifetime.
Further, the sending module 23 is further configured to send update data of the second database to the third controller, so that the third controller updates the third database stored locally according to the update data of the second database.
Further, the processing module 22 is also configured to,
update the first summary information to obtain the second summary information; wherein the first lifetime of the first summary information is different from the second lifetime of the second summary information;
and add the second summary information to a second content summary information list and broadcast the second summary information.
The data processing device provided in this embodiment is configured to execute the technical scheme of the second controller in any one of the foregoing method embodiments, and its implementation principle and technical effect are similar, and are not described herein again.
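A complementary sketch of the second controller's side follows: it accepts the first summary only while that summary is still within its lifetime, requests and applies the update data, and then rebroadcasts a second summary with a new lifetime from which a third controller can synchronize in the same way. Helper names and the digest computation are illustrative assumptions.

```python
# Hypothetical sketch of the second controller's side of database synchronization.
import time

class SecondControllerSync:
    def __init__(self, request_update, rebroadcast, lifetime=5.0):
        self.db = {}                       # locally stored second database
        self.summary_list = []             # second content summary information list
        self.request_update = request_update   # callable: peer_id -> send update request
        self.rebroadcast = rebroadcast         # callable: summary -> sent to all peers
        self.lifetime = lifetime

    def on_summary(self, sender_id, summary):
        """Handle the first summary information broadcast by the first controller."""
        if time.time() <= summary["issued_at"] + summary["lifetime"]:
            self.request_update(sender_id)             # first database update request

    def on_update_data(self, update_data):
        """Apply the first controller's update, then rebroadcast a fresh summary."""
        self.db.update(update_data)
        second_summary = {
            "digest": hash(frozenset(self.db.items())),   # simplified digest for illustration
            "issued_at": time.time(),
            "lifetime": self.lifetime,                    # a second, new lifetime
        }
        self.summary_list.append(second_summary)
        self.rebroadcast(second_summary)
```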
Fig. 12 is a schematic structural diagram of an electronic device provided in the present application. As shown in Fig. 12, the electronic device 30 includes a memory 31 and a processor 32.
Wherein the memory 31 is for storing computer instructions executable by the processor;
the processor 32, when executing the computer instructions, implements the steps of the method in the above-described embodiments. Reference may be made in particular to the relevant description of the embodiments of the method described above.
Alternatively, the memory 31 may be separate from or integrated with the processor 32. When the memory 31 is provided separately, the electronic device further includes a bus for connecting the memory 31 and the processor 32.
The electronic device is configured to execute the technical scheme in any of the foregoing method embodiments, and its implementation principle and technical effects are similar, and are not described herein again.
The embodiment of the application further provides a computer readable storage medium, in which computer executable instructions are stored, which when executed by a processor are configured to implement the steps of the method in the above embodiment.
Embodiments of the present application also provide a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of the above embodiments.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium. When executed, the program performs the steps of the above method embodiments; and the foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications and replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (9)

1. A method of data processing, the method being applied to a first controller, the method comprising:
receiving role response information sent by a first switch, and updating a locally stored first database according to the role response information; wherein the role response information is generated when a master controller of the first switch changes;
adding first summary information of the first database to a first content summary information list, and broadcasting the first summary information to other controllers; wherein the first summary information includes a first lifetime;
receiving a first database update request sent by a second controller, wherein the second controller is, among the other controllers, a controller that receives the first summary information within the first lifetime;
and sending the update data of the first database to the second controller so that the second controller updates the locally stored second database according to the update data of the first database.
2. The data processing method according to claim 1, wherein before the first controller receives the role response information sent by the first switch, the method further comprises:
receiving a controlled request sent by the first switch, and acquiring the load quantity of each of the other controllers in the first database and the load quantity of the first controller;
generating a role competition request when the load quantity of the first controller is smaller than or equal to the load quantity of any of the other controllers;
and sending the role competition request to the first switch, so that the first switch generates the role response information according to the time of receiving the role competition request sent by the first controller and the time of receiving other competition requests, wherein the other competition requests are sent by controllers with the same load quantity as the first controller.
3. The data processing method according to claim 2, wherein after the first controller transmits the update data of the first database to the second controller, the method further comprises:
acquiring neighbor link information of an LLDP protocol in the first database to calculate a first shortest path of data transmission between a second switch and a third switch;
generating a consensus request according to the first shortest path, and sending the consensus request to other controllers;
and receiving authentication information sent by the other controllers, and storing the first shortest path in a blockchain when the authentication information meets a consensus condition.
4. The data processing method according to claim 3, wherein the authentication information satisfies the consensus condition specifically when:
the first shortest path is smaller than every path in a reference path set, wherein the paths in the reference path set are the shortest paths of data transmission between the second switch and the third switch generated by the other controllers; or
the first shortest path is equal to the smallest path in the reference path set, and the calculation time of the first controller is smaller than every calculation time in a reference time set, wherein the calculation times in the reference time set are the calculation times corresponding to the smallest path in the reference path set.
5. A method of data processing, the method being applied to a second controller, the method comprising:
receiving first summary information of a first database broadcast by a first controller; wherein the first summary information is added to a first content summary information list after the first controller updates a locally stored first database according to role response information, and the role response information is generated when a master controller of a first switch changes;
generating a first database update request when the first summary information is received within a first lifetime;
sending the first database update request to the first controller, so that the first controller sends update data of the first database to the second controller when receiving the first database update request;
and updating the locally stored second database according to the updating data of the first database.
6. The data processing method of claim 5, wherein after the second controller receives the first summary information of the first database broadcast by the first controller, the method further comprises:
receiving a second database update request sent by a third controller, wherein the third controller is, among the other controllers, a controller that does not receive the first summary information within the first lifetime but, similarly to the second controller, receives second summary information within a second lifetime;
and sending update data of the second database to the third controller, so that the third controller updates a locally stored third database according to the update data of the second database.
7. The data processing method according to claim 5 or 6, wherein the method further comprises:
updating the first summary information to obtain the second summary information; wherein the first lifetime of the first summary information is different from the second lifetime of the second summary information;
and adding the second summary information to a second content summary information list, and broadcasting the second summary information.
8. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the data processing method of any one of claims 1 to 4 or any one of claims 5 to 7.
9. A computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, and the computer-executable instructions, when executed by a processor, are used to implement the data processing method according to any one of claims 1 to 4 or any one of claims 5 to 7.
CN202210073403.3A 2022-01-21 2022-01-21 Data processing method, device and medium Active CN114422529B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210073403.3A CN114422529B (en) 2022-01-21 2022-01-21 Data processing method, device and medium


Publications (2)

Publication Number Publication Date
CN114422529A CN114422529A (en) 2022-04-29
CN114422529B true CN114422529B (en) 2023-07-11

Family

ID=81274417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210073403.3A Active CN114422529B (en) 2022-01-21 2022-01-21 Data processing method, device and medium

Country Status (1)

Country Link
CN (1) CN114422529B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017142516A1 (en) * 2016-02-16 2017-08-24 Hewlett Packard Enterprise Development Lp Software defined networking for hybrid networks
CN109728932A (en) * 2017-10-31 2019-05-07 中兴通讯股份有限公司 Setting method, controller, interchanger and the computer readable storage medium of SDN
CN109905251A (en) * 2017-12-07 2019-06-18 北京金山云网络技术有限公司 Network management, device, electronic equipment and storage medium
CN108833293A (en) * 2018-06-20 2018-11-16 北京邮电大学 A kind of data center's jamming control method and device based on software defined network SDN
CN109617820A (en) * 2019-02-15 2019-04-12 中国联合网络通信集团有限公司 A kind of SDN system and route renewing method
CN113472891A (en) * 2021-07-15 2021-10-01 浪潮思科网络科技有限公司 SDN controller cluster data processing method, device and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hash-chain-based path security for software-defined networks; 李兆斌; 刘泽一; 魏占祯; 韩禹; Journal of Computer Applications (Issue 05); full text *
Traffic-feature-based overhead optimization for the OpenFlow southbound interface; 郑鹏; 胡成臣; 李昊; Journal of Computer Research and Development (Issue 02); full text *

Also Published As

Publication number Publication date
CN114422529A (en) 2022-04-29

Similar Documents

Publication Publication Date Title
US9635107B2 (en) System and method for managing data delivery in a peer-to-peer network
US20130322451A1 (en) System and Method for a Context Layer Switch
Feng et al. Postcard: Minimizing costs on inter-datacenter traffic with store-and-forward
KR20060079117A (en) Virtual multicast routing for a cluster having state synchronization
CN111800758B (en) Unmanned aerial vehicle swarm layered consensus method based on block chain
WO2006120946A1 (en) Tree-type network system, node device, broadcast system, broadcast method, etc.
CA2897118A1 (en) System and method for providing p2p based reconfigurable computing and structured data distribution
CN101651708A (en) Topological construction method of P2P streaming media network
CN114422529B (en) Data processing method, device and medium
CN111800516A (en) Internet of things equipment management method and device based on P2P
Kulkarni et al. Badumna: A decentralised network engine for virtual environments
Salta et al. Improving P2P video streaming in wireless mesh networks
CN101605094B (en) Ring model based on point-to-point network and routing algorithm thereof
Kanemitsu et al. KadRTT: Routing with network proximity and uniform ID arrangement in Kademlia
Apolonia et al. SELECT: A distributed publish/subscribe notification system for online social networks
CN101369915B (en) P2P operating network resource management system
CN102104518B (en) Hybrid Pastry network for voice over Internet protocol (VoIP) service
Guo et al. Source selection problem in multi-source multi-destination multicasting
CN102986196A (en) Access to a network of nodes distributed over a communication architecture, using a topology server with multi-criteria selection
CN102035894B (en) Distance-based state synchronization method
CN107800567B (en) Method for establishing P2P streaming media network topology model of mixed mode
JP5673268B2 (en) Communication device and program
Li et al. Controller Cluster-Based Interconnecting for Multi-domain SDN Networks
Chen et al. Rainbow: A locality-aware peer-to-peer overlay multicast system
CN115242646B (en) Block chain-based network slice application method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant