WO2021227947A1 - Network control method and device

Network control method and device

Info

Publication number
WO2021227947A1
WO2021227947A1 (PCT/CN2021/092099)
Authority
WO
WIPO (PCT)
Prior art keywords
network node
network
resource
path
module
Prior art date
Application number
PCT/CN2021/092099
Other languages
English (en)
French (fr)
Inventor
王凤华
徐晖
侯云静
覃晨
Original Assignee
大唐移动通信设备有限公司
Priority date
Filing date
Publication date
Application filed by 大唐移动通信设备有限公司
Priority to EP21803432.0A (published as EP4152703A4)
Priority to US17/998,717 (published as US20230388215A1)
Publication of WO2021227947A1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
        • H04L 43/00 Arrangements for monitoring or testing data switching networks
            • H04L 43/10 Active monitoring, e.g. heartbeat, ping or trace-route
        • H04L 45/00 Routing or path finding of packets in data switching networks
            • H04L 45/02 Topology update or discovery
            • H04L 45/06 Deflection routing, e.g. hot-potato routing
            • H04L 45/12 Shortest path evaluation
            • H04L 45/30 Routing of multiclass traffic
            • H04L 45/302 Route determination based on requested QoS
        • H04L 47/00 Traffic control in data switching networks
            • H04L 47/70 Admission control; Resource allocation
            • H04L 47/72 Admission control; Resource allocation using reservation actions during connection setup
            • H04L 47/722 Admission control; Resource allocation using reservation actions during connection setup at the destination endpoint, e.g. reservation of terminal resources or buffer space
            • H04L 47/83 Admission control; Resource allocation based on usage prediction

Definitions

  • the embodiments of the present disclosure relate to the field of communication technology, and in particular to a network control method and device.
  • the DetNet working group of the Internet Engineering Task Force currently focuses on the overall architecture, data plane specifications, the data flow information model, and the YANG model; however, no new specifications have been proposed for network control, and existing control mechanisms are reused instead.
  • the control plane collects the topology of the network system, and the management plane monitors faults and real-time information of network equipment.
  • the control plane calculates the path according to the information of the network system's topology and management plane, and generates a flow table. In the whole process, resource occupation is not considered, and deterministic performance such as zero packet loss, zero jitter, and low delay cannot be guaranteed.
  • One purpose of the embodiments of the present disclosure is to provide a network control method and device, which solves the problem that deterministic performance such as zero packet loss, zero jitter, and low delay cannot be guaranteed due to the lack of consideration of resource occupation.
  • the embodiment of the present disclosure provides a network control method, which is applied to a network node, and includes:
  • the working status parameters include one or more of the following: network device type; inherent bandwidth; allocatable bandwidth; best-effort bandwidth; allocated bandwidth; remaining allocated bandwidth; inherent buffer; allocatable buffer; best-effort buffer; allocated buffer; remaining allocated buffer.
  • sending the working state parameter of the network node to the control device includes:
  • the working state parameter of the network node is sent to the control device through a periodic heartbeat message.
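  • As an illustration of what such a heartbeat might carry, the sketch below (Python, not part of the disclosure) serializes the working-state parameters listed above into a periodic report; the field names, the JSON encoding, and the 5-second period are assumptions made for illustration only.

```python
import json
import threading
import time
from dataclasses import dataclass, asdict

@dataclass
class WorkingState:
    """Working-state parameters of a network node (field names are illustrative)."""
    device_type: str
    inherent_bandwidth_mbps: float
    allocatable_bandwidth_mbps: float
    best_effort_bandwidth_mbps: float
    allocated_bandwidth_mbps: float
    remaining_bandwidth_mbps: float
    inherent_buffer_kb: int
    allocatable_buffer_kb: int
    best_effort_buffer_kb: int
    allocated_buffer_kb: int
    remaining_buffer_kb: int

def start_heartbeat(node_id: str, state: WorkingState, send_fn, period_s: float = 5.0):
    """Periodically push the node's working state to the control device via send_fn."""
    def loop():
        while True:
            message = {"type": "heartbeat", "node": node_id, "state": asdict(state)}
            send_fn(json.dumps(message))  # the transport channel is abstracted away here
            time.sleep(period_s)
    threading.Thread(target=loop, daemon=True).start()

# Example: print the heartbeat instead of sending it over a real channel.
if __name__ == "__main__":
    state = WorkingState("switch", 10000, 8000, 2000, 3000, 5000, 4096, 3072, 1024, 1024, 2048)
    start_heartbeat("node-1", state, print)
    time.sleep(6)
```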
  • the method further includes:
  • the method further includes:
  • the resource reservation or cancellation is performed according to the flow identifier, and the execution result of the resource reservation is obtained;
  • the method further includes:
  • resource reservation is performed on the network node.
  • before selecting a flow table according to the level of the data flow and performing matching, the method further includes:
  • the method further includes:
  • if the network node is the last hop, analyze whether a packet is a duplicate according to the packet sequence number in the flow identifier, and if it is a duplicate packet, delete the duplicate packet;
  • if the sending timer expires, the data stream is sent to the next hop.
  • embodiments of the present disclosure provide a network control method, which is applied to a control device, and includes:
  • the network topology and resource view are updated according to the working state parameters of the network node.
  • the working status parameters include one or more of the following: network device type; inherent bandwidth; allocatable bandwidth; best-effort bandwidth; allocated bandwidth; remaining allocated bandwidth; inherent buffer; allocatable buffer; best-effort buffer; allocated buffer; remaining allocated buffer.
  • said obtaining the working state parameter of the network node includes:
  • the method further includes:
  • the first message includes one or more of the following: source end information, destination end information, data stream information, service application type, and service application category identifiers.
  • generating a flow table according to the first message includes:
  • the service analysis module identifies the service type applied by the application device according to the first message
  • the service analysis module sends a second message to the path calculation module
  • the path calculation module obtains the network topology and resource view and the reserved resources of the network node from the topology management module according to the second message;
  • the path calculation module performs path calculation according to the network topology and resource view and the reserved resources of the network node, and estimates the end-to-end delay of each path;
  • the path calculation module sends a path set that is less than the maximum delay of the data stream to the resource calculation module;
  • the resource calculation module obtains the network topology and resource view and the reserved resources of the network node from the topology management module, performs resource estimation on the paths in the path set, selects the paths that meet the resource requirements, and sends the path information to the flow table generation module;
  • the flow table generating module generates a flow table according to the path information.
  • it also includes:
  • the path calculation module notifies the service analysis module of the result
  • the service analysis module feeds back the result to the application device.
  • it also includes:
  • the service analysis module receives a third message from the application device, the third message indicates bearer cancellation, and the third message carries a data flow identifier;
  • the service analysis module notifies the topology management module to release the resources related to the data flow identifier, and updates the network topology and resource view;
  • the topology management module notifies the flow table generation module to delete the flow table entry related to the data flow identifier.
  • the path calculation module sending the path set less than the maximum delay of the data stream to the resource calculation module includes:
  • the path calculation module determines a path set that is less than the maximum delay of the data stream
  • the path calculation module determines the difference between the delay of each path in the path set and the maximum delay of the data stream
  • the path calculation module sorts the difference from small to large and sends it to the resource calculation module.
  • the service analysis module sending the second message to the path calculation module includes:
  • the service analysis module maps the service application category identifier to one or more of the peak service packet rate, maximum data packet length, end-to-end delay upper limit, packet loss upper limit, and network bandwidth, and sends them to the path calculation module together with one or more of the source end, the destination end, the data stream identifier, the service application type, and the service application category identifier.
  • embodiments of the present disclosure provide a network node, including:
  • the sending module is configured to send the working state parameter of the network node to the control device, so that the control device can update the network topology and resource view according to the working state parameter of the network node.
  • embodiments of the present disclosure provide a network node, including: a first transceiver and a first processor;
  • the first transceiver sends and receives data under the control of the first processor
  • the first processor reads the program in the memory and executes the following operation: sending the working state parameter of the network node to the control device, so that the control device updates the network topology and resource view according to the working state parameter of the network node.
  • control device including:
  • the obtaining module is used to obtain the working state parameters of the network node
  • the update module is used to update the network topology and resource view according to the working state parameters of the network node.
  • an embodiment of the present disclosure provides a control device, including: a second transceiver and a second processor;
  • the second transceiver sends and receives data under the control of the second processor
  • the second processor reads the program in the memory to perform the following operations: obtain the working state parameter of the network node; and update the network topology and resource view according to the working state parameter of the network node.
  • an embodiment of the present disclosure provides a communication device, including: a processor, a memory, and a program stored on the memory and capable of running on the processor; when the program is executed by the processor, the steps of the network control method described in the first aspect or the second aspect are implemented.
  • embodiments of the present disclosure provide a computer-readable storage medium with a program stored thereon; when the program is executed by a processor, the steps of the network control method described above are implemented.
  • Figure 1 is an SDN architecture diagram
  • FIG. 2 is a schematic diagram of TSN in the IEEE802.1 standard framework
  • FIG. 3 is one of the flowcharts of the network control method according to the embodiment of the disclosure.
  • FIG. 4 is the second flowchart of the network control method according to an embodiment of the disclosure.
  • FIG. 5 is a schematic diagram of the system architecture of an embodiment of the disclosure.
  • FIG. 6 is a schematic diagram of a network management process according to an embodiment of the disclosure.
  • FIG. 7 is a schematic diagram of a network control flow of an embodiment of the disclosure.
  • FIG. 8 is a schematic diagram of a resource reservation process according to an embodiment of the disclosure.
  • FIG. 9 is a schematic diagram of a data processing flow of an embodiment of the disclosure.
  • FIG. 10 is one of schematic diagrams of a network node according to an embodiment of the disclosure.
  • FIG. 11 is a second schematic diagram of a network node according to an embodiment of the disclosure.
  • FIG. 12 is one of the schematic diagrams of the control device of the embodiment of the disclosure.
  • FIG. 13 is the second schematic diagram of the control device of the embodiment of the disclosure.
  • FIG. 14 is a schematic diagram of a communication device according to an embodiment of the disclosure.
  • TSN Time-Sensitive Networking
  • TSN uses standard Ethernet to provide distributed time synchronization and deterministic communication.
  • the essence of standard Ethernet is a non-deterministic network, but determinism is required in the industrial field: a set of data packets must arrive at the destination completely, in real time, and deterministically. Therefore, the new TSN standard maintains time synchronization of all network devices, adopts central control, and performs time-slot planning, reservation, and fault-tolerant protection at the data link layer to achieve determinism.
  • TSN includes three basic components: time synchronization, communication path selection, reservation and fault tolerance, scheduling and traffic shaping.
  • Time synchronization: the time in a TSN network is distributed from a central time source to the Ethernet devices through the network itself, and high-frequency round-trip delay measurements keep the time of the network devices synchronized with the central clock source with high precision, i.e., the IEEE 1588 Precision Time Protocol.
  • TSN calculates paths through the network according to the network topology, provides explicit path control and bandwidth margin for data streams, and provides redundant transmission for data streams according to the network topology.
  • TSN time-aware queues use Time Aware Shaper (TAS) to enable TSN switches to control queued traffic.
  • Ethernet frames are identified and assigned priority-based virtual local area network (VLAN) tags, and each queue is defined in a schedule; these queued packets are then transmitted at the egress within a predetermined time window, while other queues are locked during that window. The effect of non-periodic data on periodic data is thus eliminated, which means the delay of each switch is deterministic and knowable, and the data packet delay in the TSN network is guaranteed.
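  • A minimal sketch of the time-aware-shaper idea described above, under illustrative assumptions: a cyclic gate-control list opens only certain priority queues in each time window, so a frame may be transmitted only while its queue's gate is open. The cycle length, window layout, and queue numbering are invented for the example and are not values taken from the IEEE 802.1 standards.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class GateWindow:
    start_us: int          # offset from the start of the cycle, in microseconds
    length_us: int
    open_queues: Set[int]  # priority queues whose gates are open during this window

class TimeAwareShaper:
    """Toy gate evaluation: which queues may transmit at a given time within the cycle."""

    def __init__(self, cycle_us: int, windows: List[GateWindow]):
        self.cycle_us = cycle_us
        self.windows = windows

    def open_queues_at(self, t_us: int) -> Set[int]:
        phase = t_us % self.cycle_us
        for w in self.windows:
            if w.start_us <= phase < w.start_us + w.length_us:
                return w.open_queues
        return set()

    def may_transmit(self, queue: int, t_us: int) -> bool:
        return queue in self.open_queues_at(t_us)

# A 1 ms cycle: the first 200 us are reserved for time-critical queue 7,
# the remaining 800 us are shared by best-effort queues 0-6 (illustrative layout).
tas = TimeAwareShaper(1000, [GateWindow(0, 200, {7}), GateWindow(200, 800, set(range(7)))])
assert tas.may_transmit(7, 150) and not tas.may_transmit(0, 150)
assert tas.may_transmit(0, 500) and not tas.may_transmit(7, 500)
```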
  • TAS Time Aware Shaper
  • DetNet: the goal of a DetNet network is to determine transmission paths over layer-2 bridged and layer-3 routed segments. These paths can provide worst-case bounds on delay, packet loss, and jitter, and control and reduce end-to-end delay. DetNet extends the technology developed by TSN from the data link layer to the routing layer.
  • the DetNet working group of the Internet Engineering Task Force currently focuses on the overall architecture, data plane specifications, the data flow information model, and the YANG model; however, no new specifications have been proposed for network control, and existing control mechanisms are reused instead.
  • SDN divides the network into different planes according to business functions.
  • the planes from top to bottom are introduced as follows:
  • Application plane: the plane where the applications and services that define network behavior are located.
  • Control plane: decides how one or more network devices forward data packets, and sends these decisions to the network devices in the form of flow tables for execution.
  • the control plane mainly interacts with the forwarding plane, and less attention is paid to the operation plane of the device, unless the control plane wants to know the current state and function of a specific port.
  • Management plane: responsible for monitoring, configuring, and maintaining network equipment, for example making decisions on the status of network equipment.
  • the management plane mainly interacts with the operation plane of the device.
  • the network device is responsible for processing the data packets in the data path according to the instructions received from the control plane.
  • the operations of the forwarding plane include, but are not limited to, forwarding, discarding, and modifying data packets.
  • the Operational Plane is responsible for managing the operating status of the network device where it is located, for example, whether the device is active or inactive, the number of available ports, and the status of each port.
  • the operation plane is responsible for network equipment resources, such as ports, memory, and so on.
  • the original SDN network receives a data packet request that needs to be forwarded from the application plane or the forwarding plane, and the control plane performs routing calculation according to the formed network topology, generates a flow table, and sends it to the forwarding plane of the device.
  • the specific working principle of the forwarding plane is as follows:
  • Flow table matching: the packet header fields are used as match fields, including the ingress port, source media access control (MAC) address, virtual local area network ID (VLAN ID), Internet Protocol (IP) address, and so on. The entries of the locally stored flow table are matched in priority order, and the matching entry with the highest priority is taken as the matching result.
  • the multi-level flow table can reduce the overhead, extract the characteristics of the flow table, and decompose the matching process into several steps to form a pipeline processing form, reducing the number of flow table records.
  • the forwarding rules are organized in different flow tables. The rules in the same flow table are matched according to priority.
  • Instruction execution: the instructions of the matched flow entries form the forwarding action set. It starts as an empty set; each match adds items, and actions keep accumulating until there is no further Goto-Table instruction, at which point the accumulated instruction set is executed as a whole. Instructions include forwarding, discarding, queuing, field modification, and so on. Forwarding can specify ports (physical ports, logical ports, and reserved ports); field modification includes using group tables to process data packets, modifying packet header values, modifying the TTL, and so on. Different processing combinations introduce different delays.
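  • The sketch below illustrates, in simplified form, the matching-and-instruction model just described: entries are matched by priority, actions accumulate across tables until no Goto-Table instruction remains, and the accumulated set is then executed together. The field names and table layout are assumptions for illustration; this is not an OpenFlow implementation.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class FlowEntry:
    priority: int
    match: Dict[str, str]             # e.g. {"in_port": "1", "ip_dst": "10.0.0.2"}
    actions: List[str]                # e.g. ["set_queue:7", "output:3"]
    goto_table: Optional[int] = None  # continue the pipeline if set

def match_entry(table: List[FlowEntry], packet: Dict[str, str]) -> Optional[FlowEntry]:
    """Return the highest-priority entry whose match fields are all satisfied."""
    hits = [e for e in table if all(packet.get(k) == v for k, v in e.match.items())]
    return max(hits, key=lambda e: e.priority, default=None)

def pipeline(tables: Dict[int, List[FlowEntry]], packet: Dict[str, str]) -> List[str]:
    """Walk the multi-level flow tables, accumulating actions until no Goto-Table remains."""
    action_set: List[str] = []
    table_id: Optional[int] = 0
    while table_id is not None:
        entry = match_entry(tables.get(table_id, []), packet)
        if entry is None:             # table miss: drop here (the miss policy is illustrative)
            return ["drop"]
        action_set += entry.actions   # actions accumulate across tables
        table_id = entry.goto_table
    return action_set                 # executed together at the end of the pipeline

tables = {
    0: [FlowEntry(10, {"in_port": "1"}, ["set_queue:7"], goto_table=1),
        FlowEntry(1, {}, [], goto_table=1)],
    1: [FlowEntry(5, {"ip_dst": "10.0.0.2"}, ["output:3"])],
}
print(pipeline(tables, {"in_port": "1", "ip_dst": "10.0.0.2"}))  # ['set_queue:7', 'output:3']
```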
  • the sending end measures each path, periodically measuring the packet loss, delay, and jitter of each path; through accumulation over cycles, an estimation model of end-to-end delay and end-to-end packet loss can be established for each path.
  • the scheduling module estimates according to the delay and packet loss pre-estimation model, and selects one of the paths according to the shortest delay/minimum packet loss/minimum jitter algorithm as the transmission path of the packet.
  • the SDN control device can find the current relatively suitable path for a specific business, and generate a flow table for each related node and send it to the switch.
  • the data flow is processed hop by hop according to the flow tables, which ensures that the end-to-end route of the data flow is determined and tries to ensure that the delay is deterministic.
  • the sender assigns Quality of Service (QoS) levels to each data stream, which is generally divided into 8 levels.
  • QoS Quality of Service
  • the switch checks its level and inserts the packet into the corresponding queue according to the level.
  • the switch preferentially processes high-priority packets; if the priorities are the same, they will be processed in the order of entry.
  • Each packet occupies buffer (BUFFER) resources according to its priority. Because BUFFER resources in the switch are limited, when, for example, a high-priority packet arrives and the BUFFER is already full, the switch selects the lowest-priority packet to discard and allocates the freed BUFFER to the newly arrived high-priority packet. This tries to keep the delay and jitter of high-priority packets low.
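  • A toy model of the buffer behaviour described above, assuming a fixed per-switch packet capacity: each arriving packet claims buffer space, and when the buffer is full a newly arrived higher-priority packet evicts the lowest-priority packet currently held; otherwise the newcomer itself is dropped. The structures and capacity value are illustrative assumptions.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

@dataclass(order=True)
class Buffered:
    priority: int                       # lower number = lower priority, evicted first
    seq: int                            # arrival order breaks ties (FIFO within a priority)
    packet: dict = field(compare=False)

class SwitchBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.heap: list[Buffered] = []  # min-heap: root is the lowest-priority, oldest packet
        self._arrivals = count()

    def enqueue(self, packet: dict, priority: int) -> bool:
        """Admit the packet; if the buffer is full, evict the lowest-priority packet instead."""
        item = Buffered(priority, next(self._arrivals), packet)
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, item)
            return True
        if priority > self.heap[0].priority:    # newcomer outranks the worst buffered packet
            heapq.heapreplace(self.heap, item)  # free that packet's space for the newcomer
            return True
        return False                            # the newcomer itself is dropped

buf = SwitchBuffer(capacity=2)
buf.enqueue({"id": 1}, priority=0)
buf.enqueue({"id": 2}, priority=3)
print(buf.enqueue({"id": 3}, priority=7))  # True: packet 1 (priority 0) is evicted
print(buf.enqueue({"id": 4}, priority=1))  # False: only higher-priority packets remain buffered
```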
  • TSN will provide a universal time-sensitive mechanism for the MAC layer of the Ethernet protocol. While ensuring the time certainty of Ethernet data communication, it also provides the possibility of interoperability between different protocol networks. Referring to Figure 2, TSN does not cover the entire network. TSN is only about the protocol standard of the second layer in the Ethernet communication protocol model, that is, the data link layer (more precisely, the MAC layer). Therefore, TSN only supports bridged networks and does not support data streams that require routers end-to-end.
  • the related technology adopts the priority processing method, which really improves the performance of the high-priority data stream.
  • When a highly time-sensitive data flow is using a link while higher-level background traffic or same-level data flows share the link and switch-node resources, whether a given packet is lost due to congestion depends heavily on the characteristics of the same-level and higher-level flows sharing the switch resources. The queuing delay within the end-to-end delay of a packet in the data flow therefore cannot be determined: it depends heavily on the flow characteristics of the other data streams sharing the switch resources, and the delay jitter of packets in the same flow can be relatively large. If the priority is already the highest, only newly arriving packets can be discarded, which is the main cause of congestion and packet loss. Therefore, the original technology cannot guarantee that the data stream will not suffer congestion and packet loss.
  • the related technology introduces considerable processing delay through packet-loss feedback compensation and redundant coding methods, which highly time-sensitive data flow applications cannot tolerate; moreover, the related technology still cannot guarantee zero packet loss on the link.
  • the related technology adopts a dedicated line method to ensure absolute low latency and near zero packet loss, and cannot achieve dynamic sharing of path resources and switch resources. Therefore, time-sensitive services and non-time-sensitive services cannot coexist.
  • words such as “exemplary” or “for example” are used to denote examples, illustrations, or explanations. Any embodiment or design solution described as “exemplary” or “for example” in the embodiments of the present disclosure should not be construed as more preferable or advantageous than other embodiments or design solutions. Rather, words such as “exemplary” or “for example” are used to present related concepts in a concrete manner.
  • LTE Long Term Evolution
  • LTE-A Long Term Evolution Advanced
  • CDMA Code Division Multiple Access
  • TDMA Time Division Multiple Access
  • FDMA Frequency Division Multiple Access
  • OFDMA Orthogonal Frequency Division Multiple Access
  • SC-FDMA Single-Carrier Frequency-Division Multiple Access
  • the terms “system” and “network” are often used interchangeably.
  • the CDMA system can implement radio technologies such as CDMA2000 and Universal Terrestrial Radio Access (UTRA).
  • UTRA includes Wideband Code Division Multiple Access (WCDMA) and other CDMA variants.
  • the TDMA system can implement radio technologies such as the Global System for Mobile Communication (GSM).
  • an OFDMA system can implement radio technologies such as Ultra Mobile Broadband (UMB), Evolved UTRA (Evolution-UTRA, E-UTRA), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, and so on.
  • UMB Ultra Mobile Broadband
  • Evolution-UTRA Evolved UTRA
  • E-UTRA Evolved UTRA
  • Wi-Fi IEEE 802.11
  • WiMAX IEEE 802.16
  • LTE and LTE-Advanced are new UMTS releases that use E-UTRA.
  • UTRA, E-UTRA, UMTS, LTE, LTE-A, and GSM are described in documents from an organization named "3rd Generation Partnership Project” (3GPP).
  • CDMA2000 and UMB are described in documents from an organization named “3rd Generation Partnership Project 2" (3GPP2).
  • the techniques described in this article can be used for the systems and radio technologies mentioned above, as well as other systems and radio technologies.
  • an embodiment of the present disclosure provides a network control method.
  • the execution body of the method is a network node (or referred to as a forwarding device, a switch, etc.).
  • the steps of the method include: step 301.
  • Step 301 Send the working state parameter of the network node to the control device, so that the control device updates the network topology and resource view according to the working state parameter of the network node.
  • the network node may send the working state parameter of the network node to the control device through a periodic heartbeat message.
  • the working status parameters include one or more of the following: network device type; inherent bandwidth; allocatable bandwidth; best-effort bandwidth; allocated bandwidth; remaining allocated bandwidth; inherent buffer (BUFFER); allocatable Buffer; best-effort buffer; allocated buffer; remaining allocated buffer.
  • the method further includes: after receiving the flow table from the control device, updating the flow table according to the service level of the data flow, inserting or deleting the forwarding path of the data flow in the flow table of the relevant level, Obtain the execution result of the hierarchical flow table; notify the control device of the execution result of the hierarchical flow table.
  • the method further includes: after receiving resource reservation information from the control device, performing resource reservation or cancellation according to the flow identifier to obtain the execution result of the resource reservation; and storing the execution result of the resource reservation Notify the control device.
  • the method further includes: after receiving the data stream from the data source device, selecting a stream table according to the level of the data stream, and matching; according to the stream identifier of the data stream, in the Resource reservation is performed on the network node.
  • the method before selecting a flow table according to the level of the data flow and performing matching, the method further includes: judging whether copying is required according to the flow identifier and/or flow type of the data flow; To copy, each packet of the data stream is copied to form multiple data streams, which are transferred to the flow table for matching; if copying is not required, it is directly transferred to the flow table for matching.
  • the method further includes: judging whether the network node is the last hop; if the network node is the last hop, analyzing whether it is a duplicate packet according to the packet sequence number in the flow identifier, and if it is a duplicate Packet, delete duplicate packets; analyze the arrival time of the data stream according to the stream type, and set the sending timer according to the timestamp; if the sending timer expires, send the data stream to the next hop .
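  • As a sketch of the last-hop behaviour described above: duplicate copies are recognised by the packet sequence number carried in the flow identifier, and a per-packet send timer (here derived from the packet timestamp plus a fixed target latency, an assumption for illustration) releases packets at a constant offset, removing the jitter accumulated along the path. The in-memory structures are also assumptions.

```python
import heapq

class LastHop:
    """Duplicate elimination plus jitter-removing release at the last hop (illustrative)."""

    def __init__(self, target_latency_ms: float):
        self.target_latency_ms = target_latency_ms
        self.seen: dict[str, set[int]] = {}               # flow_id -> accepted sequence numbers
        self.pending: list[tuple[float, int, dict]] = []  # (release_time_ms, seq, packet) heap

    def receive(self, packet: dict) -> None:
        flow_id, seq = packet["flow_id"], packet["seq"]
        if seq in self.seen.setdefault(flow_id, set()):
            return                                        # duplicate from a redundant path: discard
        self.seen[flow_id].add(seq)
        # Hold the packet until timestamp + target latency, so every packet leaves with
        # the same nominal end-to-end delay and the accumulated jitter is removed.
        release_at = packet["timestamp_ms"] + self.target_latency_ms
        heapq.heappush(self.pending, (release_at, seq, packet))

    def on_timer(self, now_ms: float, send_next_hop) -> None:
        while self.pending and self.pending[0][0] <= now_ms:
            _, _, packet = heapq.heappop(self.pending)
            send_next_hop(packet)

hop = LastHop(target_latency_ms=10.0)
hop.receive({"flow_id": "f1", "seq": 1, "timestamp_ms": 0.0, "payload": b"a"})
hop.receive({"flow_id": "f1", "seq": 1, "timestamp_ms": 0.5, "payload": b"a"})  # duplicate, dropped
hop.on_timer(now_ms=10.0, send_next_hop=lambda p: print("sent", p["seq"]))      # sent 1
```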
  • the topology and resource conditions of the entire network can be clearly understood through centralized control, and more reasonable path and resource-reservation decisions can be made. Furthermore, through resource reservation at the network nodes, it can be ensured that the data flow does not suffer congestion or packet loss; through replication and elimination, it is ensured that the data flow is not lost on the link, so that the end-to-end packet loss rate is almost zero; further, through resource reservation and path planning, the worst-case end-to-end delay can be guaranteed to be lower than a predetermined value; further, through packet buffering, end-to-end delay jitter is eliminated. Furthermore, through resource reservation, bandwidth for ordinary services can be reserved, ensuring that high-reliability services can be realized without building a private network.
  • an embodiment of the present disclosure provides a network control method.
  • the execution subject of the method may be a control device, including: step 401 and step 402.
  • Step 401 Obtain working state parameters of a network node
  • receiving a periodic heartbeat message sent by the network node, where the periodic heartbeat message carries the working state parameters of the network node.
  • the working status parameters include one or more of the following: network device type; inherent bandwidth; allocatable bandwidth; best-effort bandwidth; allocated bandwidth; remaining allocated bandwidth; inherent buffer (BUFFER); allocatable Buffer; best-effort buffer; allocated buffer; remaining allocated buffer.
  • Step 402 Update the network topology and resource view according to the working state parameters of the network node.
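  • A minimal sketch of steps 401 and 402 on the control-device side, assuming the heartbeat payload format sketched earlier: the topology management state keeps one working-state record per node and refreshes it on every heartbeat. The storage layout is an assumption made for illustration.

```python
import json
import time

class TopologyAndResourceView:
    """Controller-side view: per-node working-state parameters plus adjacency (illustrative)."""

    def __init__(self):
        self.nodes: dict[str, dict] = {}          # node_id -> latest working-state parameters
        self.last_seen: dict[str, float] = {}     # node_id -> time of the latest heartbeat
        self.links: set[tuple[str, str]] = set()  # filled separately by link discovery

    def on_heartbeat(self, raw: str) -> None:
        msg = json.loads(raw)
        if msg.get("type") != "heartbeat":
            return
        node = msg["node"]
        self.nodes[node] = msg["state"]           # step 402: refresh the resource view
        self.last_seen[node] = time.time()

    def remaining_bandwidth(self, node: str) -> float:
        return self.nodes.get(node, {}).get("remaining_bandwidth_mbps", 0.0)

view = TopologyAndResourceView()
view.on_heartbeat(json.dumps({"type": "heartbeat", "node": "node-1",
                              "state": {"remaining_bandwidth_mbps": 5000}}))
print(view.remaining_bandwidth("node-1"))  # 5000
```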
  • the method further includes: receiving a first message from an application device, the first message requesting service analysis; generating a flow table according to the first message; sending the flow table to the network node.
  • the first message includes one or more of the following: source information, destination information, data stream information, service application type, and service application category identifier.
  • generating a flow table according to the first message includes:
  • the service analysis module identifies the service type applied by the application device according to the first message; if the applied service type is an application resource, the service analysis module sends a second message to the path calculation module;
  • the path calculation module obtains the network topology and resource view and the reserved resources of the network node from the topology management module according to the second message;
  • the path calculation module performs path calculation according to the network topology and resource view and the reserved resources of the network node, and estimates the end-to-end delay of each path;
  • the path calculation module sends the path set less than the maximum delay of the data stream to the resource calculation module;
  • the resource calculation module obtains the network topology and resource view and the reserved resources of the network node from the topology management module, performs resource estimation on the paths in the path set, selects the paths that meet the resource requirements, and sends the path information to the flow table generation module;
  • the flow table generation module generates a flow table according to the path information.
  • the method further includes: if there is no path that meets the resource requirement, the path calculation module notifies the service analysis module of the result; the service analysis module feeds back the result to the application device.
  • the method further includes: the service analysis module receives a third message from the application device, the third message indicates bearer withdrawal, and the third message carries a data flow identifier; the service analysis module notifies The topology management module releases the resources related to the data flow identifier, and updates the network topology and resource view; the topology management module notifies the flow table generating module to delete the flow table entry related to the data flow identifier.
  • the path calculation module sending the path set that is less than the maximum delay of the data stream to the resource calculation module includes: the path calculation module determines the path set that is less than the maximum delay of the data stream; the path calculation module determines the difference between the delay of each path in the path set and the maximum delay of the data stream; the path calculation module sorts the differences from small to large and sends them to the resource calculation module.
  • the service analysis module sending the second message to the path calculation module includes: the service analysis module maps the service application category identifier, according to the established service model library, to one or more of the peak service packet rate, maximum data packet length, end-to-end delay upper limit, packet loss upper limit, and network bandwidth, and sends them to the path calculation module together with one or more of the source end, destination end, data stream identifier, service application type, and service application category identifier.
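  • A sketch of the service-model-library lookup described above: the service application category identifier is mapped to a set of network indicators and forwarded, together with the original request fields, to the path calculation stage. The category names and numeric values in the library are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ServiceModel:
    peak_packet_rate_pps: int
    max_packet_len_bytes: int
    e2e_delay_upper_ms: float
    packet_loss_upper: float
    bandwidth_mbps: float

# Illustrative service model library, keyed by the service application category identifier.
SERVICE_MODEL_LIBRARY = {
    "motion-control": ServiceModel(4000, 256, 2.0, 1e-6, 10.0),
    "video-backhaul": ServiceModel(9000, 1500, 50.0, 1e-4, 100.0),
}

def build_second_message(first_message: dict) -> dict:
    """Map the category identifier to network indicators and merge them with the request fields."""
    model = SERVICE_MODEL_LIBRARY[first_message["service_category"]]
    return {
        "source": first_message["source"],
        "destination": first_message["destination"],
        "flow_id": first_message["flow_id"],
        "service_application_type": first_message["service_application_type"],
        "service_category": first_message["service_category"],
        "indicators": model,
    }

second = build_second_message({
    "source": "eNodeB-1", "destination": "gateway-3", "flow_id": "f1",
    "service_application_type": "open", "service_category": "motion-control",
})
print(second["indicators"].e2e_delay_upper_ms)  # 2.0
```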
  • the topology and resource conditions of the entire network can be clearly understood through centralized control, and more reasonable path and resource-reservation decisions can be made. Furthermore, through resource reservation at the network nodes, it can be ensured that the data flow does not suffer congestion or packet loss; through replication and elimination, it is ensured that the data flow is not lost on the link, so that the end-to-end packet loss rate is almost zero; further, through resource reservation and path planning, the worst-case end-to-end delay can be guaranteed to be lower than a predetermined value; further, through packet buffering, end-to-end delay jitter is eliminated. Furthermore, through resource reservation, bandwidth for ordinary services can be reserved, ensuring that high-reliability services can be realized without building a private network.
  • service applications can be converted into end-to-end requirements on network indicators (bandwidth, delay, jitter, packet loss) within a certain time interval, and the control device performs path selection according to these network indicator requirements.
  • the path whose calculated delay is closest to the delay requirement (i.e., has the smallest difference) is taken as the optimal path, which endogenously reduces network jitter; in the path decision process, the delay and the resources on the path nodes are considered together to ensure both are satisfied simultaneously.
  • the network system is divided into application equipment, control equipment and network nodes.
  • the application equipment contains various application requirements, and puts forward requirements for network control through the northbound interface;
  • the control device mainly constructs the latest network topology and resource view of the network, performs network path planning, control, and resource calculation and reservation according to the requirements of the application, and notifies the application device and the network node layer of the result.
  • the control device includes modules such as link discovery, topology management, service analysis, path calculation, resource management, and flow table generation; the network nodes mainly classify and process data flows according to the control requirements and guarantee the resources, and include modules such as flow identification, hierarchical flow tables, resource reservation, packet replication, packet storage, and packet elimination.
  • the work of this system is mainly divided into four processes, network management process, network control process, resource reservation, and data stream processing.
  • the purpose of the network management process is to collect the latest network topology and resource views of the system; the purpose of the network control process is to select a path that meets the requirements according to the needs of the application, generate a flow table for it, and send it to the switch.
  • each calculation in the network control process requires the latest network topology and resource view from the network management process, and updates it afterwards.
  • in the resource reservation process, the control device delivers its resource decisions to each network node so that the node performs the resource reservation.
  • the data stream processing process is to identify the data stream, select the stream table for matching according to the level of the data stream, and then set the sending timer according to the timestamp. When the sending timer expires, the data stream is sent to the next hop.
  • Step 1 Automatically start the link discovery module after power on
  • Step 2 The control device (or called the controller) uses the Link Layer Discovery Protocol (LLDP) as the link discovery protocol.
  • LLDP Link Layer Discovery Protocol
  • the link discovery module encapsulates the related information of the control device (such as: main capabilities, management address, device identification, interface identification, etc.) in the LLDP.
  • Step 3 The control device sends the LLDP data packet to the connected network node 1 (a network node may also be referred to as a switch) through a packet-out message, and network node 1 saves it.
  • the function of the Packet-out message is to send the relevant data of the controller to the OpenFlow switch, which is a message that contains the data packet sending command.
  • Step 4 Network node 1 floods the message through all ports. If the neighboring network node 2 also supports OpenFlow forwarding, network node 2 executes its flow table.
  • Step 5 If there is no such flow table on the network node 2, the network node 2 makes a request to the control device through a packet_in message.
  • the OpenFlow switch continues to broadcast the packet to its neighbors. If a non-OpenFlow switch lies in between, the packet traverses it and reaches another OpenFlow switch, which uploads the first packet to the control device; from this the control device can learn that a non-OpenFlow switch is present, and likewise in the reverse direction.
  • the function of the Packet-in message is to send data packets arriving at the OpenFlow switch to the controller
  • the function of the Packet-out message is to send the relevant data of the controller to the OpenFlow switch, which is a message that contains the data packet sending command.
  • Step 6 The control device collects the packet_in message and sends the packet_in message to the topology management module, which can draw the network topology and resource view.
  • Step 7 After the topology is established, send periodic heartbeat messages to request the working state parameters of the switch.
  • Step 8 After the resource calculation matches successfully, update the above parameters for the next calculation.
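  • The sketch below mirrors steps 2 to 6 in simplified form: the controller injects a discovery payload at one node via packet-out, and every packet-in that carries that payload back reveals a link between the node that flooded it and the node that reported it. The message shapes are assumptions; this is not the OpenFlow/LLDP wire format.

```python
class LinkDiscovery:
    """Controller-side LLDP-style discovery (steps 2-6 of the management flow, simplified)."""

    def __init__(self, send_packet_out):
        self.send_packet_out = send_packet_out  # callable(node_id, payload)
        self.links: set[tuple[str, str]] = set()

    def probe(self, node_id: str) -> None:
        # Step 3: packet-out carrying the controller's identity; the node floods it on all ports.
        self.send_packet_out(node_id, {"lldp_origin": node_id, "controller": "ctrl-1"})

    def on_packet_in(self, reporting_node: str, payload: dict) -> None:
        # Steps 5-6: a neighbour without a matching flow entry returns the probe via packet-in,
        # which tells the controller that the origin and the reporting node are adjacent.
        origin = payload.get("lldp_origin")
        if origin and origin != reporting_node:
            self.links.add(tuple(sorted((origin, reporting_node))))

outbox = []
disc = LinkDiscovery(lambda node, payload: outbox.append((node, payload)))
disc.probe("s1")                       # controller -> s1 (packet-out)
disc.on_packet_in("s2", outbox[0][1])  # s2 reports the flooded probe (packet-in)
print(disc.links)                      # {('s1', 's2')}
```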
  • Step 1 The application device (application layer) sends a request to the service analysis module through the northbound interface.
  • the request can include one or more of the following: source end (core network entry eNodeB), destination end (corresponding optional gateway), data stream ID, service application type (open/cancel), and service category number (corresponding requirements).
  • Step 2 The service analysis module identifies the type of service applied for. If it is a resource application, it maps the service category number, according to a service model library established in advance, to the peak service packet rate, maximum data packet length, end-to-end delay upper limit, packet loss upper limit, and network bandwidth, and sends these together with the source end (core network entry eNodeB), destination end (corresponding optional gateway), data stream ID, service application type (open/cancel), and service category number (corresponding requirements) to the path calculation module.
  • Step 3 After receiving the request, the path calculation module obtains the current topology and resource conditions from the topology management module to perform path calculation.
  • Step 4 The path calculation module performs end-to-end path calculation based on the real-time information of the topology management module, and estimates the end-to-end delay of each path.
  • Step 5 The path calculation module sorts the set of paths that are below the maximum delay requirement of the data stream by the delay difference, from smallest to largest, and sends them to the resource calculation module (parameters: data flow ID, path ID (set of device IDs), end-to-end delay estimate).
  • Step 6 The resource calculation module reads the real-time information of the topology and equipment from the topology management module.
  • Step 7 The resource calculation module performs resource estimation point by point according to the path sequence sent by the path calculation module.
  • Step 8a If the resource calculation module selects a path, it sends the path information to the flow table generation module, which generates the flow table and sends it to the switching device (to improve usability, the interface between the control device and the switching device follows the OpenFlow principle, reducing modifications to the device itself); at the same time, the resource calculation module sends the calculation result to the topology management module, which updates in real time, and a success message is sent to the path analysis module;
  • Step 8b If there is no path that meets the requirements, notify the path analysis module of the result.
  • Step 9 The path analysis module feeds back the result to the application layer.
  • Step 10 If bearer cancellation occurs at the application layer, the data stream ID and service application type (open/cancel) are sent to the service analysis module.
  • Step 11 The service analysis module notifies the topology management module to release the relevant resources of the data stream.
  • Step 12 The flow table generation module is notified to delete the flow entries related to the data flow.
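  • The sketch below strings steps 4 to 8 together under simplifying assumptions: candidate paths are enumerated on a small graph, filtered by the delay upper limit, sorted by the difference between the requirement and the estimated delay (smallest first, as in step 5), and the first path on which every node still has enough remaining bandwidth is selected. The graph representation, the additive delay model, and the bandwidth-only resource check are all illustrative.

```python
def all_simple_paths(adj, src, dst, path=None):
    """Enumerate simple paths in a small topology (adjacency dict: node -> {neighbor: delay_ms})."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in adj.get(src, {}):
        if nxt not in path:
            yield from all_simple_paths(adj, nxt, dst, path)

def select_path(adj, node_bw, src, dst, max_delay_ms, required_bw_mbps):
    """Steps 4-8: filter by the delay bound, sort by delay margin, then check node resources."""
    candidates = []
    for p in all_simple_paths(adj, src, dst):
        delay = sum(adj[a][b] for a, b in zip(p, p[1:]))    # additive per-hop delay estimate
        if delay <= max_delay_ms:
            candidates.append((max_delay_ms - delay, delay, p))
    candidates.sort()                                       # smallest difference first (step 5)
    for _, delay, p in candidates:
        if all(node_bw[n] >= required_bw_mbps for n in p):  # resource estimation, node by node
            return p, delay
    return None, None                                       # no path meets the resource demand

adj = {"A": {"B": 1.0, "C": 3.0}, "B": {"D": 1.5}, "C": {"D": 0.5}, "D": {}}
node_bw = {"A": 100, "B": 100, "C": 20, "D": 100}
print(select_path(adj, node_bw, "A", "D", max_delay_ms=4.0, required_bw_mbps=50))
# (['A', 'B', 'D'], 2.5): A-C-D has the smaller delay difference but node C lacks bandwidth
```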
  • Step 1 The control device sends the generated flow tables to each relevant network node one by one;
  • Step 2 After receiving the flow table, the network node updates the multi-level flow table according to the data flow level, and inserts/deletes the forwarding path of this data flow in the flow table of the relevant level;
  • Step 3 After the network node receives the resource reservation information, it performs resource reservation/cancellation on the network node as required;
  • Step 4 The execution results of the resource reservation and the hierarchical flow table update are obtained at the network node;
  • Step 5 The network node notifies the result to the topology management module of the control device, and updates the network topology and resource view.
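  • A sketch of the node-side part of this process: the node applies received flow entries into its per-level flow tables, reserves or releases bandwidth per flow identifier, and reports each execution result back towards the control device. The data structures and result format are assumptions for illustration.

```python
class NodeResourceAgent:
    """Network-node side of flow-table delivery and resource reservation (illustrative)."""

    def __init__(self, allocatable_bw_mbps: float, notify_controller):
        self.remaining_bw = allocatable_bw_mbps
        self.reserved: dict[str, float] = {}        # flow_id -> reserved bandwidth
        self.flow_tables: dict[int, dict] = {}      # level -> {flow_id: forwarding path}
        self.notify_controller = notify_controller  # callable(result dict)

    def apply_flow_table(self, level: int, flow_id: str, path: list[str], insert: bool) -> None:
        table = self.flow_tables.setdefault(level, {})
        if insert:
            table[flow_id] = path
        else:
            table.pop(flow_id, None)
        self.notify_controller({"op": "flow_table", "flow": flow_id, "ok": True})

    def reserve(self, flow_id: str, bw_mbps: float) -> None:
        ok = bw_mbps <= self.remaining_bw
        if ok:
            self.remaining_bw -= bw_mbps
            self.reserved[flow_id] = bw_mbps
        self.notify_controller({"op": "reserve", "flow": flow_id, "ok": ok})

    def release(self, flow_id: str) -> None:
        self.remaining_bw += self.reserved.pop(flow_id, 0.0)
        self.notify_controller({"op": "release", "flow": flow_id, "ok": True})

agent = NodeResourceAgent(100.0, notify_controller=print)
agent.apply_flow_table(level=7, flow_id="f1", path=["s1", "s2", "s3"], insert=True)
agent.reserve("f1", 40.0)   # {'op': 'reserve', 'flow': 'f1', 'ok': True}
```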
  • Step 1 After the data source device starts to send the data stream, the stream reaches the network node, which analyzes the stream number and stream type;
  • Step 2a The network node judges whether it needs to be copied, and if necessary, copies each packet of the flow to form two data flows, and then transfers to the flow table for matching;
  • Step 2b If the identification does not need to be copied, then directly transfer to the flow table matching stage
  • Step 3 Select the flow table according to the level of the data flow and perform matching; according to the flow number, perform resource reservation on the device and use the buffer area;
  • Step 4 Determine whether this is the last hop; if it is the last hop, analyze whether the packet is a duplicate, and delete duplicate packets;
  • Step 5 Analyze the arrival time according to the category, and set the sending timer according to the timestamp
  • Step 6 When the sending timer expires, send it to the next hop.
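  • A sketch of steps 1 to 2 of this data processing flow: the ingress node decides from the flow identifier and/or flow type whether redundant transmission is needed and, if so, duplicates every packet into two copies (one per redundant path) before flow-table matching; elimination at the last hop was sketched earlier. The replication rule and the two-copy count are assumptions made for illustration.

```python
def needs_replication(flow_id: str, flow_type: str, replicated_flows: set[str]) -> bool:
    """Step 2a/2b: decide replication from the flow identifier and/or flow type (illustrative rule)."""
    return flow_type == "high-reliability" or flow_id in replicated_flows

def ingress_process(packet: dict, replicated_flows: set[str]) -> list[dict]:
    """Return the packet copies to hand to flow-table matching (one per redundant path)."""
    if not needs_replication(packet["flow_id"], packet["flow_type"], replicated_flows):
        return [packet]
    return [dict(packet, path_index=i) for i in range(2)]  # same sequence number on both copies

pkt = {"flow_id": "f1", "flow_type": "high-reliability", "seq": 42, "payload": b"x"}
copies = ingress_process(pkt, replicated_flows=set())
print(len(copies), [c.get("path_index") for c in copies])  # 2 [0, 1]
```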
  • an embodiment of the present disclosure provides a network node, and the network node 1000 includes:
  • the sending module 1001 is configured to send the working state parameter of the network node to the control device, so that the control device updates the network topology and resource view according to the working state parameter of the network node.
  • the working status parameters include one or more of the following: network device type; inherent bandwidth; allocatable bandwidth; best-effort bandwidth; allocated bandwidth; remaining allocated bandwidth; inherent buffer (BUFFER); allocatable Buffer; best-effort buffer; allocated buffer; remaining allocated buffer.
  • the sending module 1001 is further configured to send the working state parameters of the network node to the control device through periodic heartbeat messages.
  • the network node 1000 further includes:
  • the first processing module is configured to update the flow table according to the service level of the data flow after receiving the flow table from the control device, insert or delete the forwarding path of the data flow in the flow table of the relevant level, to obtain the hierarchical flow table Execution result; notifying the control device of the execution result of the hierarchical flow table.
  • the network node 1000 further includes:
  • the second processing module is configured to, after receiving the resource reservation information from the control device, perform resource reservation or cancellation according to the flow identifier to obtain the execution result of the resource reservation; and notify the control device of the execution result of the resource reservation equipment.
  • the network node 1000 further includes:
  • the third processing module is configured to select the flow table according to the level of the data flow after receiving the data flow from the data source device, and perform matching;
  • the network node 1000 further includes:
  • the fourth processing module is used to determine whether copying is required according to the stream identifier and/or stream type of the data stream; if copying is required, copy each packet of the data stream to form multiple data streams, and transfer Into the flow table for matching; if copying is not required, it is directly transferred to the flow table to match.
  • the network node 1000 further includes:
  • the fifth processing module is used to determine whether the network node is the last hop; if the network node is the last hop, analyze whether it is a duplicate packet according to the packet sequence number in the flow identifier, and if it is a duplicate packet, delete the duplicate Analyze the arrival time of the data stream according to the stream type, and set the sending timer according to the timestamp; if the sending timer expires, send the data stream to the next hop.
  • the network node provided by the embodiment of the present disclosure can execute the method embodiment shown in FIG. 3 above, and its implementation principles and technical effects are similar, and details are not described herein again in this embodiment.
  • the network node 1100 includes: a first transceiver 1101 and a first processor 1102;
  • the first transceiver 1101 sends and receives data under the control of the first processor 1102;
  • the first processor 1102 reads the program in the memory and executes the following operations: sending the working state parameter of the network node to the control device, so that the control device updates the network topology and resource view according to the working state parameter of the network node.
  • the working status parameters include one or more of the following: network device type; inherent bandwidth; allocatable bandwidth; best-effort bandwidth; allocated bandwidth; remaining allocated bandwidth; inherent buffer (BUFFER) ); Allocable buffer; best-effort buffer; allocated buffer; remaining allocated buffer.
  • the first processor 1102 reads the program in the memory and also performs the following operations: sending the working state parameter of the network node to the control device through a periodic heartbeat message.
  • the first processor 1102 reads the program in the memory and performs the following operations: after receiving the flow table from the control device, the flow table is updated according to the service level of the data flow, and the flow table is updated at the relevant level. Insert or delete the forwarding path of the data flow in the flow table to obtain the execution result of the hierarchical flow table; notify the control device of the execution result of the hierarchical flow table.
  • the first processor 1102 reads the program in the memory and performs the following operations: after receiving the resource reservation information from the control device, the resource reservation or cancellation is performed according to the flow identifier, and the resource reservation is obtained. Execution result; notifying the control device of the execution result of the resource reservation.
  • the first processor 1102 reads the program in the memory and performs the following operations: after receiving the data stream from the data source device, select the stream table according to the level of the data stream, and perform matching;
  • the first processor 1102 reads the program in the memory and performs the following operations: according to the stream identifier and/or stream type of the data stream, determine whether copying is required; if copying is required, perform the following operations: Each packet of the data stream is copied to form multiple data streams, which are transferred to the flow table for matching; if there is no need to copy, it is directly transferred to the flow table for matching.
  • the first processor 1102 reads the program in the memory and performs the following operations: determine whether the network node is the last hop; if the network node is the last hop, follow the grouping in the flow identifier Analyze whether the sequence number is a duplicate packet, if it is a duplicate packet, delete the duplicate packet; analyze the arrival time of the data stream according to the flow type, and set the sending timer according to the timestamp; if the sending timer expires , The data stream is sent to the next hop.
  • the network node provided by the embodiment of the present disclosure can execute the method embodiment shown in FIG. 3 above, and its implementation principles and technical effects are similar, and details are not described herein again in this embodiment.
  • control device 1200 includes:
  • the obtaining module 1201 is used to obtain the working state parameters of the network node
  • the update module 1202 is used to update the network topology and resource view according to the working state parameters of the network node.
  • the working status parameters include one or more of the following: network device type; inherent bandwidth; allocatable bandwidth; best-effort bandwidth; allocated bandwidth; remaining allocated bandwidth; inherent buffer (BUFFER) ); Allocable buffer; best-effort buffer; allocated buffer; remaining allocated buffer.
  • the obtaining module 1201 is further configured to: receive a periodic heartbeat message sent by the network node, where the periodic heartbeat message carries the working state parameter of the network node.
  • control device 1200 further includes:
  • the sixth processing module is configured to receive a first message from an application device, where the first message requests service analysis; generate a flow table according to the first message; and send the flow table to the network node.
  • the first message includes one or more of the following: source information, destination information, data stream information, service application type, and service application category identifier.
  • control device 1200 further includes: a service analysis module, a path calculation module, a resource calculation module, a topology management module, and a flow table generation module;
  • the service analysis module identifies the service type applied by the application device according to the first message; if the applied service type is an application resource, the service analysis module sends a second message to the path calculation module;
  • the path calculation module obtains the network topology and resource view and the reserved resources of the network node from the topology management module according to the second message;
  • the path calculation module performs path calculation according to the network topology and resource view and the reserved resources of the network node, and estimates the end-to-end delay of each path;
  • the path calculation module sends the path set less than the maximum delay of the data stream to the resource calculation module;
  • the resource calculation module obtains the network topology and resource view and the reserved resources of the network node from the topology management module, performs resource estimation on the paths in the path set, selects the paths that meet the resource requirements, and sends the path information to the flow table generation module; the flow table generation module generates a flow table according to the path information.
  • if there is no path that meets the resource requirements, the path calculation module notifies the service analysis module of the result;
  • the service analysis module feeds back the result to the application device.
  • the service analysis module receives a third message from the application device, the third message indicates bearer withdrawal, and the third message carries a data flow identifier;
  • the service analysis module notifies the topology management module to release the resources related to the data flow identifier, and updates the network topology and resource view;
  • the topology management module notifies the flow table generation module to delete the flow table entry related to the data flow identifier.
  • the path calculation module determines a path set that is less than the maximum delay of the data stream
  • the path calculation module determines the difference between the delay of each path in the path set and the maximum delay of the data stream
  • the path calculation module sorts the difference from small to large and sends it to the resource calculation module.
  • the service analysis module maps the service application category identifier, according to the established service model library, to one or more of the peak service packet rate, maximum data packet length, end-to-end delay upper limit, packet loss upper limit, and network bandwidth, and sends them to the path calculation module together with one or more of the source end, destination end, data stream identifier, service application type, and service application category identifier.
  • control device provided in the embodiment of the present disclosure can execute the method embodiment shown in FIG. 4, and its implementation principles and technical effects are similar, and the details are not described herein again in this embodiment.
  • an embodiment of the present disclosure provides a control device, which includes: a second transceiver 1301 and a second processor 1302;
  • the second transceiver 1301 sends and receives data under the control of the second processor 1302;
  • the second processor 1302 reads the program in the memory to perform the following operations: obtain the working state parameters of the network node; and update the network topology and resource view according to the working state parameters of the network node.
  • the working state parameters include one or more of the following: network device type; inherent bandwidth; allocatable bandwidth; best-effort bandwidth; allocated bandwidth; remaining allocated bandwidth; inherent buffer (BUFFER); allocatable buffer; best-effort buffer; allocated buffer; remaining allocated buffer.
  • the second processor 1302 reads the program in the memory to perform the following operations: receive a periodic heartbeat message sent by the network node, the periodic heartbeat message carrying the working state parameters of the network node.
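The heartbeat exchange and the resulting update of the network topology and resource view could be sketched as follows; the `WorkingState` fields, the JSON payload layout and the in-memory view are assumptions for illustration only.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Dict

@dataclass
class WorkingState:
    """Working-state parameters a network node reports in each heartbeat.
    Field names and units are illustrative."""
    device_type: str
    allocatable_bandwidth_mbps: float
    allocated_bandwidth_mbps: float
    remaining_bandwidth_mbps: float
    allocatable_buffer_pkts: int
    allocated_buffer_pkts: int
    remaining_buffer_pkts: int

def build_heartbeat(node_id: str, state: WorkingState) -> str:
    """Node side: wrap the working-state parameters in a periodic heartbeat."""
    return json.dumps({"node_id": node_id, "timestamp": time.time(),
                       "working_state": asdict(state)})

def on_heartbeat(message: str, resource_view: Dict[str, dict]) -> None:
    """Control-device side: update the network topology and resource view
    with the parameters carried in the heartbeat."""
    payload = json.loads(message)
    resource_view[payload["node_id"]] = payload["working_state"]

if __name__ == "__main__":
    view: Dict[str, dict] = {}
    hb = build_heartbeat("s1", WorkingState("switch", 800.0, 300.0, 500.0, 4000, 1500, 2500))
    on_heartbeat(hb, view)
    print(view["s1"]["remaining_buffer_pkts"])   # -> 2500
```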
  • the second processor 1302 reads the program in the memory to perform the following operations: receive a first message from an application device, the first message requesting service analysis; generate a flow table according to the first message; and send the flow table to the network node.
  • the first message includes one or more of the following: source information, destination information, data stream information, service application type, and service application category identifier.
  • the second processor 1302 reads the program in the memory to perform the following operations: identify, according to the first message, the type of service requested by the application device; if the requested service type is a resource request, send a second message to the path calculation module through the service analysis module; the path calculation module obtains the network topology and resource view and the reserved resources of the network node from the topology management module according to the second message; the path calculation module performs path calculation according to the network topology and resource view and the reserved resources of the network node, and estimates the end-to-end delay of each path; the path calculation module sends the set of paths whose delay is less than the maximum delay of the data stream to the resource calculation module; the resource calculation module obtains the network topology and resource view and the reserved resources of the network node from the topology management module, performs resource estimation on the paths in the path set, selects the paths that meet the resource requirements, and sends the information of the selected path to the flow table generation module; the flow table generation module generates a flow table according to the path information.
  • the second processor 1302 reads the program in the memory to perform the following operations: if there is no path that meets the resource requirements, the path calculation module notifies the service analysis module of the result; the service analysis module feeds back the result to the application device.
  • the second processor 1302 reads the program in the memory to perform the following operations: receive a third message from the application device through the service analysis module, the third message indicating bearer cancellation and carrying a data flow identifier; the service analysis module notifies the topology management module to release the resources related to the data flow identifier and to update the network topology and resource view; the topology management module notifies the flow table generation module to delete the flow table entry related to the data flow identifier.
  • the second processor 1302 reads the program in the memory to perform the following operations: determine, through the path calculation module, the set of paths whose delay is less than the maximum delay of the data stream; determine, through the path calculation module, the difference between the delay of each path in the path set and the maximum delay of the data stream; the path calculation module sorts the paths by this difference in ascending order and sends them to the resource calculation module.
  • the second processor 1302 reads the program in the memory to perform the following operations: map, through the service analysis module and according to the established service model library, the service application category identifier to one or more of the peak service packet rate, maximum data packet length, end-to-end delay upper limit, packet loss upper limit, and network bandwidth, and send them to the path calculation module together with one or more of the source end, destination end, data stream identifier, service application type, and service application category identifier.
  • control device provided in the embodiment of the present disclosure can execute the method embodiment shown in FIG. 4, and its implementation principles and technical effects are similar, and the details are not described herein again in this embodiment.
  • FIG. 14 is a structural diagram of a communication device applied in an embodiment of the present disclosure.
  • the communication device 1400 includes: a processor 1401, a transceiver 1402, a memory 1403, and a bus interface, where:
  • the communication device 1400 further includes: a program that is stored in the memory 1403 and can run on the processor 1401; when the program is executed by the processor 1401, the steps in the embodiments shown in FIGS. 3 to 4 are implemented.
  • the bus architecture may include any number of interconnected buses and bridges. Specifically, one or more processors represented by the processor 1401 and various circuits of the memory represented by the memory 1403 are linked together.
  • the bus architecture can also link various other circuits such as peripheral devices, voltage regulators, power management circuits, etc., which are all known in the art, and therefore, no further descriptions are provided herein.
  • the bus interface provides the interface.
  • the transceiver 1402 may be multiple elements, including a transmitter and a receiver, and provide a unit for communicating with various other devices on a transmission medium. It is understood that the transceiver 1402 is an optional component.
  • the processor 1401 is responsible for managing the bus architecture and general processing, and the memory 1403 can store data used by the processor 1401 when performing operations.
  • the communication device provided in the embodiment of the present disclosure can execute the method embodiments shown in FIG. 3 to FIG. 4, and its implementation principles and technical effects are similar, and details are not described herein again in this embodiment.
  • the steps of the method or algorithm described in conjunction with the disclosure of the present disclosure may be implemented in a hardware manner, or may be implemented in a manner in which a processor executes software instructions.
  • the software instructions can be composed of corresponding software modules, and the software modules can be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, hard disks, mobile hard disks, read-only optical disks, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor, so that the processor can read information from the storage medium and can write information to the storage medium.
  • the storage medium may also be an integral part of the processor.
  • the processor and the storage medium may be located in the ASIC.
  • the ASIC may be located in the core network interface device.
  • the processor and the storage medium may also exist as discrete components in the core network interface device.
  • the functions described in the present disclosure can be implemented by hardware, software, firmware, or any combination thereof.
  • these functions can be stored in a computer-readable medium or transmitted as one or more instructions or codes on the computer-readable medium.
  • the computer-readable medium includes a computer storage medium and a communication medium, where the communication medium includes any medium that facilitates the transfer of a computer program from one place to another.
  • the storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
  • the embodiments of the present disclosure can be provided as a method, a system, or a computer program product. Therefore, the embodiments of the present disclosure may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present disclosure may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
  • These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including the instruction device.
  • the instruction device implements the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing, and the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
  • the division of the above modules is only a division of logical functions, and may be fully or partially integrated into a physical entity in actual implementation, or may be physically separated.
  • these modules can all be implemented in the form of software called by processing elements; they can also be implemented in the form of hardware; some modules can be implemented in the form of calling software by processing elements, and some of the modules can be implemented in the form of hardware.
  • the determination module may be a separately established processing element, or it may be integrated into a certain chip of the above-mentioned device for implementation.
  • it may also be stored in the memory of the above-mentioned device in the form of program code, and a certain processing element of the above-mentioned device calls and executes the functions of the above determination module.
  • each step of the above method or each of the above modules can be completed by an integrated logic circuit of hardware in the processor element or instructions in the form of software.
  • each module, unit, sub-unit or sub-module may be one or more integrated circuits configured to implement the above method, for example: one or more application specific integrated circuits (ASIC), or one or more microprocessors (digital signal processor, DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, FPGA), etc.
  • the processing element may be a general-purpose processor, such as a central processing unit (CPU) or other processors that can call program codes.
  • these modules can be integrated together and implemented in the form of a system-on-a-chip (SOC).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Cardiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present disclosure provide a network control method and device. The method includes: sending working state parameters of a network node to a control device, so that the control device updates a network topology and resource view according to the working state parameters of the network node.

Description

网络控制方法及设备
相关申请的交叉引用
本申请主张在2020年5月15日在中国提交的中国专利申请号No.202010415264.9的优先权,其全部内容通过引用包含于此。
技术领域
本公开实施例涉及通信技术领域,具体涉及一种网络控制方法及设备。
背景技术
国际互联网工程任务组(The Internet Engineering Task Force,IETF)的DetNet工作组目前工作关注的整体架构、数据平台规范、数据流量信息模型、YANG模型;但没有对网络控制提出新的规范,只是沿用了IETF RFC7426中SDN的相关架构和控制。控制平面收集到网络系统的拓扑,管理平面监控网络的设备的故障及实时信息。控制平面根据网络系统的拓扑和管理平面的信息,进行路径计算,生成流表,整个过程中未考虑资源的占用,无法保证零丢包、零抖动、低延迟等确定性的性能。
发明内容
本公开实施例的一个目的在于提供一种网络控制方法及设备,解决由于未考虑资源的占用,无法保证零丢包、零抖动、低延迟等确定性的性能的问题。
本公开实施例提供一种网络控制方法,应用于网络节点,包括:
向控制设备发送所述网络节点的工作状态参数,以使所述控制设备根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
可选地,所述工作状态参数包括以下一项或多项:网络设备类型;固有带宽;可分配带宽;尽力服务带宽;已分配带宽;剩余分配带宽;固有缓冲区;可分配缓冲区;尽力服务缓冲区;已分配缓冲区;剩余分配缓冲区。
可选地,向控制设备发送所述网络节点的工作状态参数,包括:
通过周期性心跳消息,向所述控制设备发送所述网络节点的工作状态参数。
可选地,所述方法还包括:
在从所述控制设备接收到流表后,按照数据流的业务等级更新流表,在相关等级的流表中插入或删除数据流的转发路径,得到分级流表的执行结果;
将所述分级流表的执行结果通知给所述控制设备。
可选地,所述方法还包括:
在从控制设备接收资源预留信息后,按照流标识进行资源预留或取消,得到资源预留的执行结果;
将所述资源预留的执行结果通知给所述控制设备。
可选地,所述方法还包括:
在从数据源设备接收到数据流后,按照所述数据流的等级选取流表,并进行匹配;
根据所述数据流的流标识,在所述网络节点上执行资源预留。
可选地,在按照所述数据流的等级选取流表,并进行匹配之前,所述方法还包括:
根据所述数据流的流标识和/或流类型,判断是否需要复制;
如果需要复制,则对所述数据流的每个分组进行复制,形成多条数据流,转入流表进行匹配;
如果不需要复制,则直接转入流表匹配。
可选地,所述方法还包括:
判断所述网络节点是否为末跳;
如果所述网络节点为末跳,则按照流标识中的分组序列号分析是否为重复的分组,如果是重复分组,则删除重复的分组;
按照流类型分析所述数据流的到达时间,根据时间戳,设定发送定时器;
如果所述发送定时器到时,则将所述数据流发送给下一跳。
第二方面,本公开实施例提供一种网络控制方法,应用于控制设备,包括:
获取网络节点的工作状态参数;
根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
可选地,所述工作状态参数包括以下一项或多项:网络设备类型;固有带宽;可分配带宽;尽力服务带宽;已分配带宽;剩余分配带宽;固有缓冲区;可分配缓冲区;尽力服务缓冲区;已分配缓冲区;剩余分配缓冲区。
可选地,所述获取网络节点的工作状态参数,包括:
接收所述网络节点发送的周期性心跳消息,所述周期性心跳消息中携带所述网络节点的工作状态参数。
可选地,所述方法还包括:
从应用设备接收第一消息,所述第一消息请求业务解析;
根据所述第一消息,生成流表;
将所述流表发送给所述网络节点。
可选地,所述第一消息中包括以下一项或多项:源端的信息、目的端的信息、数据流的信息、业务申请类型和业务申请类别标识。
可选地,根据所述第一消息,生成流表,包括:
业务解析模块根据所述第一消息,对所述应用设备申请的业务类型进行识别;
如果所述申请的业务类型是申请资源,则所述业务解析模块向路径计算模块发送第二消息;
所述路径计算模块根据所述第二消息,从拓扑管理模块中获取网络拓扑和资源视图和网络节点的预留资源;
所述路径计算模块根据所述网络拓扑和资源视图和网络节点的预留资源,进行路径计算,并对每条路径进行端到端的延迟进行估计;
所述路径计算模块将小于数据流最大延迟的路径集发送给资源计算模块;
所述资源计算模块从拓扑管理模块中获取网络拓扑和资源视图和网络节点的预留资源,对所述路径集中的路径进行资源估计,从中选取满足资源要求的路径,并将所述路径的信息发送流表生成模块;
所述流表生成模块根据所述路径的信息,生成流表。
可选地,还包括:
如果没有满足所述资源要求的路径,则所述路径计算模块将结果通知所 述业务解析模块;
所述业务解析模块将所述结果反馈给所述应用设备。
可选地,还包括:
所述业务解析模块从所述应用设备接收第三消息,所述第三消息指示承载撤销,所述第三消息中携带数据流标识;
所述业务解析模块通知拓扑管理模块释放与所述数据流标识相关资源,并更新网络拓扑和资源视图;
所述拓扑管理模块通知所述流表生成模块删除所述数据流标识相关的流表项。
可选地,所述路径计算模块将小于数据流最大延迟的路径集发送给资源计算模块,包括:
所述路径计算模块确定小于数据流最大延迟的路径集;
所述路径计算模块确定所述路径集中的每条路径的延迟与数据流最大延迟的差值;
所述路径计算模块按照所述差值从小到大排序,并发送给所述资源计算模块。
可选地,所述业务解析模块向路径计算模块发送第二消息,包括:
所述业务解析模块根据建立的业务模型库,将业务申请类别标识映射为业务峰值包速、数据包最大长度、端到端延迟上限、丢包上限、网络带宽中的一项或多项,并与同源端、目的端、数据流标识、业务申请类型、业务申请类别标识中一项或多项一起发送给所述路径计算模块。
第三方面,本公开实施例提供一种网络节点,包括:
发送模块,用于向控制设备发送所述网络节点的工作状态参数,以使所述控制设备根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
第四方面,本公开实施例提供一种网络节点,包括:第一收发机和第一处理器;
所述第一收发机在所述第一处理器的控制下发送和接收数据;
所述第一处理器读取存储器中的程序执行以下操作:向控制设备发送所 述网络节点的工作状态参数,以使所述控制设备根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
第五方面,本公开实施例提供一种控制设备,包括:
获取模块,用于获取网络节点的工作状态参数;
更新模块,用于根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
第六方面,本公开实施例提供一种控制设备,包括:第二收发机和第二处理器;
所述第二收发机在所述第二处理器的控制下发送和接收数据;
所述第二处理器读取存储器中的程序执行以下操作:获取网络节点的工作状态参数;根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
第七方面,本公开实施例提供一种通信设备,包括:处理器、存储器及存储在所述存储器上并可在所述处理器上运行的程序,所述程序被所述处理器执行时实现包括如第一方面或第二方面所述的网络控制方法的步骤。
第八方面,本公开实施例提供一种计算机可读存储介质,所述计算机可读存储介质上存储有程序,所述程序被处理器执行时实现包括如第一方面或第二方面所述的网络控制方法的步骤。
在本公开实施例中,通过集中控制可以清楚了解全网的拓扑和资源情况,可以做出更合理的路径和资源预留决策。
附图说明
通过阅读下文可选实施方式的详细描述,各种其他的优点和益处对于本领域普通技术人员将变得清楚明了。附图仅用于示出可选实施方式的目的,而并不认为是对本公开的限制。而且在整个附图中,用相同的参考符号表示相同的部件。在附图中:
图1为SDN架构图;
图2为TSN在IEEE802.1标准框架中的示意图;
图3为本公开实施例的网络控制方法流程图之一;
图4为本公开实施例的网络控制方法流程图之二;
图5为本公开实施例的系统架构示意图;
图6为本公开实施例的网络管理流程示意图;
图7为本公开实施例的网络控制流程示意图;
图8为本公开实施例的资源预留流程示意图;
图9为本公开实施例的数据处理流程示意图;
图10为本公开实施例的网络节点的示意图之一;
图11为本公开实施例的网络节点的示意图之二;
图12为本公开实施例的控制设备的示意图之一;
图13为本公开实施例的控制设备的示意图之二;
图14为本公开实施例的通信设备的示意图。
具体实施方式
下为了便于理解本公开实施例,下面先介绍以下几个技术点:
1)时间敏感网络(Time-Sensitive Networking,TSN):
TSN使用标准以太网提供分布式时间同步和确定性通信。标准以太网的本质是一种非确定性网,但在工业领域必须要求确定性,一组数据包必须完整、实时、确定性的到达目的地。因此,新的TSN标准保持所有网络设备的时间同步,采用了中心控制,在数据链路层进行时隙规划、预留和容错保护,来实现确定性。TSN包括三个基本组成部分:时间同步,通信路径选择、预留和容错,调度和流量整形。
√时间同步:TSN网络中的时间从一个中央时间源通过网络本身传递给以太网设备,通过高频率的往返延迟测量,来保持网络设备与中央时钟源的时间高精度同步。也就是IEEE1588的精确时间协议。
√通信路径选择、预留和容错:TSN根据网络拓扑计算出通过网络的路径,并为数据流提供显式的路径控制和带宽余留,并根据网络拓扑为数据流提供冗余传输。
√调度和流量整形:TSN时间感知队列通过时间感知整形器(Time Aware Shaper,TAS)使TSN交换机能够来控制队列流量(queued traffic),以太网 帧被标识并指派给基于优先级的虚拟局域网(Virtual Local Area Network,VLAN)标签(Tag),每个队列在一个时间表中定义,然后这些数据队列报文在预定时间窗口在出口执行传输。其它队列将被锁定在规定时间窗口里。因此消除了周期性数据被非周期性数据所影响的结果。这意味着每个交换机的延迟是确定的,可知的。而在TSN网络的数据报文延时被得到保障。
2)确定性网络(Deterministic Networking,DetNet):
DetNet网络目标是第二层桥接和第三层路由段实现确定传输路径,这些路径可以提供延迟、丢包和抖动的最坏情况界限,控制并降低端到端时延的技术。DetNet将TSN开发的技术扩展从数据链路层扩展到路由。
国际互联网工程任务组(The Internet Engineering Task Force,IETF)的DetNet工作组目前工作关注的整体架构、数据平台规范、数据流量信息模型、YANG模型;但没有对网络控制提出新的规范,只是沿用了IETF RFC7426中软件定义网络(Software Defined Network,SDN)的控制方法。
参见图1,图中示意SDN架构,这里对相关模块及交互工作原理进行说明。SDN按照业务功能,将网络分为不同的平面,从上至下的平面介绍如下:
√应用平面(Application Plane):定义网络行为的应用程序和服务所在的平面。
√控制平面(Control Plane):决定一个或多个网络设备怎么转发数据包,并将这些决定以流表的方式下发给网络设备执行。这里控制平面主要与转发平面进行交互,较少关注设备的操作平面,除非控制平面想了解特定端口的当前状态和功能。
√管理平面(Management Plane):负责监控、配置和维护网络设备,例如,对网络设备的状态做出决策。管理平面主要与设备的操作平面进行交互。
√转发平面(Forwarding Plane):网络设备负责根据从控制平面接收到的指令处理数据路径中的数据包的功能模块。转发平面的操作包括但不限于转发、丢弃和更改数据包。
√操作平面(Operational Plane):操作平面负责管理所在网络设备的操作状态,例如,设备是处于活动状态还是非活动状态,可用端口的数量,每个端口的状态等。操作平面负责网络设备资源,如端口、内存等。
因此,原来的SDN网络从应用平面或者转发平面收到需要转发的数据包请求,由控制平面进行根据形成的网络拓扑进行路由计算,并生成流表,下发给设备的转发平面。转发平面的具体工作原理如下:
√匹配流表:包头域作为匹配域,包括入端口、源媒体接入控制(Media Access Control,MAC)、虚拟局域网的ID(VLANID)、网际互连协议(Internet Protocol,IP)地址等等组。按照优先级依次匹配本地保存的流表的表项,并以最高优先级的匹配表项作为匹配结果。多级流表可以降低开销,将流表特征进行提取,将匹配过程分解成几个步骤,形成流水线处理形式,降低流表记录条数。转发规则被组织在不同的流表中。同一个流表中的规则按照优先级进行匹配。按照次序从小到大跳转,更新统计数据,结束后进行修改和执行指令集合多流表流水线处理架构,额能够减少流表项条数,但匹配延迟增加了,同时,提升了数据流量生成和维护的算法复杂度。
√指令执行:匹配的流表项的指令作为转发执行集合,起初是个空集合,每匹配一次增加一项,多个动作不断累加,直到没有转向表(go to Table),停止,一起执行指令集合。指令为转发、丢弃、排队、修改域等。其中转发,可以指定端口,物理端口、逻辑端口、保留端口均可以;修改域,包括group利用组表处理数据包、数据包头值修改、修改TTL等,不同的处理组合会带来不同的延迟。
3)在端到端有多条路径,发送端对每条路径进行测量,为每条路径的丢包、延迟、抖动进行周期性的测量,通过周期累计,可为每个路径建立端到端延迟、端到端丢包的预估计模型。在每个发送端进行分组发送时,调度模块按照延迟和丢包的预估计模型进行估计,按照最短延迟/最小丢包/抖动最小等算法选择其中一条路径,作为本分组的发送路径。
4)通过SDN控制设备可以为特定业务寻找当前相对合适的路径,并为每个相关节点生成流表下发给交换机,数据流在逐点按照流表进行处理,保证数据流端到端的路由的确定,尽量保证延迟的确定。
5)发送端对每个数据流进行服务质量(Quality of Service,QoS)等级分配,一般分为8个等级。交换机在接收的分组时,查看其等级,并按照等级将分组插入相应的队列。交换机优先处理高优先级的分组;如优先级一致, 则按照进入的先后顺序进行处理。各分组按照优先级占用缓冲(BUFFER)资源。由于交换机中的BUFFER资源有限,如某个高优先级分组到来时,BUFFER已经被占满,交换机会选择最低优先级的分组丢弃,并将腾出来的BUFFER分配给新进来的高优先级的分组使用。尽量保证高优先级的分组的延迟低、抖动小。
6)相关技术中的数据面对丢包大多采用接收端反馈丢包,发送端重传的方式进行补发,也数倍于往返时延(Round-Trip Time,RTT)的大小增加延迟;或者在分组内增加前向纠错码(Forward Error Correction,FEC)冗余,在两端进行聚合编解码,引入一定的处理延迟。
相关技术的缺点:
1)TSN技术:
TSN将会为以太网协议的MAC层提供一套通用的时间敏感机制,在确保以太网数据通讯的时间确定性的同时,为不同协议网络之间的互操作提供了可能性。参见图2,TSN并非涵盖整个网络,TSN就仅仅是关于以太网通讯协议模型中的第二层,也就是数据链路层(更确切的说是MAC层)的协议标准。因此,TSN只支持桥接网络,端到端不支持需要路由器的数据流。
3)相关技术采用了优先级处理方法,确实提高高优先级数据流的性能。但如果高时敏数据流在使用链路时,背景流量中有更高级别的数据流或者同样级别的数据流在共享链路和交换机节点资源,某个分组是否会因为拥塞丢包严重依赖于与它共享交换机的资源的同级和高级的数据流的流量特征,那么该数据流中分组的端到端的延迟中的排队延迟则无法确定,某个分组的排队延迟严重依赖与它共享交换机的资源的其他数据流的流量特征;同样分组的延迟抖动会比较大。但如果优先级都很高,那么只能丢弃新进分组,这是引起拥塞丢包的主要原因。因此,原有技术无法保证数据流不拥塞丢包。
4)相关技术通过网络监测端到端的丢包率、延迟等参数,在进行路径选择时,进行延迟估计,以期望按照预期的端到端延迟到达接收端,但网络测量的参数是累计参数,代表过去某段时间的性能,而网络状况总是瞬间变化的。这种估计值是不准确的;并且相关技术中的控制器不对数据流的需要的资源进行计算,并在逐个点进行最大资源预留。因此,数据流实际的传输性 能严重依赖于当时背景流量的特征、级别等,因此无法保证数据流延迟低于某个特定值。
5)相关技术通过丢包反馈补偿和冗余编码方法,引入了不小的处理延迟,而高时敏数据流应用无法容忍长时间;尽管如此,相关技术依然无法保证链路丢包。
6)相关技术采用了专线的方法保证绝对的低延迟和近零丢包,无法做到路径资源和交换机资源的动态共享,因此时敏业务和非时敏业务的无法共存。
下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
本申请的说明书和权利要求书中的术语“包括”以及它的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。此外,说明书以及权利要求中使用“和/或”表示所连接对象的至少其中之一,例如A和/或B,表示包含单独A,单独B,以及A和B都存在三种情况。
在本公开实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本公开实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
本文所描述的技术不限于长期演进型(Long Time Evolution,LTE)/LTE的演进(LTE-Advanced,LTE-A)系统,并且也可用于各种无线通信系统,诸如码分多址(Code Division Multiple Access,CDMA)、时分多址(Time Division Multiple Access,TDMA)、频分多址(Frequency Division Multiple Access,FDMA)、正交频分多址(Orthogonal Frequency Division Multiple Access,OFDMA)、单载波频分多址(Single-carrier Frequency-Division Multiple Access,SC-FDMA)和其他系统。
术语“系统”和“网络”常被可互换地使用。CDMA系统可实现诸如 CDMA2000、通用地面无线电接入(Universal Terrestrial Radio Access,UTRA)等无线电技术。UTRA包括宽带CDMA(Wideband Code Division Multiple Access,WCDMA)和其他CDMA变体。TDMA系统可实现诸如全球移动通信系统(Global System for Mobile Communication,GSM)之类的无线电技术。OFDMA系统可实现诸如超移动宽带(Ultra Mobile Broadband,UMB)、演进型UTRA(Evolution-UTRA,E-UTRA)、IEEE 802.11(Wi-Fi)、IEEE 802.16(WiMAX)、IEEE 802.20、Flash-OFDM等无线电技术。UTRA和E-UTRA是通用移动电信系统(Universal Mobile Telecommunications System,UMTS)的部分。LTE和更高级的LTE(如LTE-A)是使用E-UTRA的新UMTS版本。UTRA、E-UTRA、UMTS、LTE、LTE-A以及GSM在来自名为“第三代伙伴项目”(3rd Generation Partnership Project,3GPP)的组织的文献中描述。CDMA2000和UMB在来自名为“第三代伙伴项目2”(3GPP2)的组织的文献中描述。本文所描述的技术既可用于以上提及的系统和无线电技术,也可用于其他系统和无线电技术。
参见图3,本公开实施例提供一种网络控制方法,该方法的执行主体为网络节点(或者称为转发设备、交换机等),该方法的步骤包括:步骤301。
步骤301:向控制设备发送所述网络节点的工作状态参数,以使所述控制设备根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
可选地,网络节点可以通过周期性心跳消息,向所述控制设备发送所述网络节点的工作状态参数。
所述工作状态参数包括以下一项或多项:网络设备类型;固有带宽;可分配带宽;尽力服务(best-effort)带宽;已分配带宽;剩余分配带宽;固有缓冲区(BUFFER);可分配缓冲区;尽力服务缓冲区;已分配缓冲区;剩余分配缓冲区。
在一些实施方式中,所述方法还包括:在从所述控制设备接收到流表后,按照数据流的业务等级更新流表,在相关等级的流表中插入或删除数据流的转发路径,得到分级流表的执行结果;将所述分级流表的执行结果通知给所述控制设备。
在一些实施方式中,所述方法还包括:在从控制设备接收资源预留信息 后,按照流标识进行资源预留或取消,得到资源预留的执行结果;将所述资源预留的执行结果通知给所述控制设备。
在一些实施方式中,所述方法还包括:在从数据源设备接收到数据流后,按照所述数据流的等级选取流表,并进行匹配;根据所述数据流的流标识,在所述网络节点上执行资源预留。
在一些实施方式中,在按照所述数据流的等级选取流表,并进行匹配之前,所述方法还包括:根据所述数据流的流标识和/或流类型,判断是否需要复制;如果需要复制,则对所述数据流的每个分组进行复制,形成多条数据流,转入流表进行匹配;如果不需要复制,则直接转入流表匹配。
在一些实施方式中,所述方法还包括:判断所述网络节点是否为末跳;如果所述网络节点为末跳,则按照流标识中的分组序列号分析是否为重复的分组,如果是重复分组,则删除重复的分组;按照流类型分析所述数据流的到达时间,根据时间戳,设定发送定时器;如果所述发送定时器到时,则将所述数据流发送给下一跳。
在本公开实施例中,通过集中控制可以清楚了解全网的拓扑和资源情况,可以做出更合理的路径和资源预留决策,进一步地,通过网络节点的资源预留,保障数据流不因为拥塞丢包;通过复制消除,保障数据流不因为链路丢包,从而保证端到端的丢包率几乎为零;进一步地,通过资源预留和路径规划,可以保证端到端延迟最差不低于预定值;进一步地,通过分组存储,消除端到端延迟抖动。进一步地,通过资源预留,可以留给普通业务的带宽,保障在不建设专网的情况下,可以实现高可靠业务。
参见图4,本公开实施例提供一种网络控制方法,该方法的执行主体可以为控制设备,包括:步骤401和步骤402。
步骤401:获取网络节点的工作状态参数;
比如,接收所述网络节点发送的周期性心跳消息,所述周期性心跳消息中携带所述网络节点的工作状态参数。
所述工作状态参数包括以下一项或多项:网络设备类型;固有带宽;可分配带宽;尽力服务(best-effort)带宽;已分配带宽;剩余分配带宽;固有缓冲区(BUFFER);可分配缓冲区;尽力服务缓冲区;已分配缓冲区;剩余 分配缓冲区。
步骤402:根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
在一些实施方式中,所述方法还包括:从应用设备接收第一消息,所述第一消息请求业务解析;根据所述第一消息,生成流表;将所述流表发送给所述网络节点。
在一些实施方式中,所述第一消息中包括以下一项或多项:源端的信息、目的端的信息、数据流的信息、业务申请类型和业务申请类别标识。
在一些实施方式中,根据所述第一消息,生成流表,包括:
业务解析模块根据所述第一消息,对所述应用设备申请的业务类型进行识别;如果所述申请的业务类型是申请资源,则所述业务解析模块向路径计算模块发送第二消息;所述路径计算模块根据所述第二消息,从拓扑管理模块中获取网络拓扑和资源视图以及网络节点的预留资源;所述路径计算模块根据所述网络拓扑和资源视图以及网络节点的预留资源,进行路径计算,并对每条路径进行端到端的延迟进行估计;所述路径计算模块将小于数据流最大延迟的路径集发送给资源计算模块;所述资源计算模块从拓扑管理模块中获取网络拓扑和资源视图以及网络节点的预留资源,对所述路径集中的路径进行资源估计,从中选取满足资源要求的路径,并将所述路径的信息发送流表生成模块;所述流表生成模块根据所述路径的信息,生成流表。
可以理解的是,上述预留资源即为使用占用,保证预留资源不被占用。
在一些实施方式中,还包括:如果没有满足所述资源要求的路径,则所述路径计算模块将结果通知所述业务解析模块;所述业务解析模块将所述结果反馈给所述应用设备。
在一些实施方式中,还包括:所述业务解析模块从所述应用设备接收第三消息,所述第三消息指示承载撤销,所述第三消息中携带数据流标识;所述业务解析模块通知拓扑管理模块释放与所述数据流标识相关资源,并更新网络拓扑和资源视图;所述拓扑管理模块通知所述流表生成模块删除所述数据流标识相关的流表项。
在一些实施方式中,所述路径计算模块将小于数据流最大延迟的路径集 发送给资源计算模块,包括:所述路径计算模块确定小于数据流最大延迟的路径集;所述路径计算模块确定所述路径集中的每条路径的延迟与数据流最大延迟的差值;所述路径计算模块按照所述差值从小到大排序,并发送给所述资源计算模块。
在一些实施方式中,所述业务解析模块向路径计算模块发送第二消息,包括:所述业务解析模块根据建立的业务模型库,将业务申请类别标识映射为业务峰值包速、数据包最大长度、端到端延迟上限、丢包上限、网络带宽中的一项或多项,并与同源端、目的端、数据流标识、业务申请类型、业务申请类别标识中一项或多项一起发送给所述路径计算模块。
在本公开实施例中,通过集中控制可以清楚了解全网的拓扑和资源情况,可以做出更合理的路径和资源预留决策,进一步地,通过网络节点的资源预留,保障数据流不因为拥塞丢包;通过复制消除,保障数据流不因为链路丢包,从而保证端到端的丢包率几乎为零;进一步地,通过资源预留和路径规划,可以保证端到端延迟最差不低于预定值;进一步地,通过分组存储,消除端到端延迟抖动。进一步地,通过资源预留,可以留给普通业务的带宽,保障在不建设专网的情况下,可以实现高可靠业务。
在本公开实施例中,可以将业务的申请转化成在端到端在某个时间区间内对网络指标(带宽、延迟、抖动、丢包)的需求,由控制设备按照网络指标的需求进行路径计算,生成流表;在路径计算前,使用一种确定性网络资源视图,整合原来SDN网络拓扑视图和网络管理系统,认定资源预留即为使用占用,保证预留资源不被抢占;在路径计算时,以延迟要求和计算延迟的差距最小的路径为最优路径,内生减少网络抖动;在路径决策流程上,综合考虑延迟和路径节点上的资源,保证同时有效。
参见图5,本网络系统分为应用设备、控制设备和网络节点。其中,应用设备包含各种应用需求,通过北向接口向网络控制提出需求;控制设备主要进行网络最新网络拓扑和资源视图的构建,并按照应用的需求,进行网络路径规划、控制、资源计算及预留,并将结果通知应用设备和网络节点层。控制设备包含链路发现、拓扑管理、业务解析、路径计算、资源管理、流表生成等不同的模块;网络节点,主要是包含按控制要求的数据流的分类处理及资源 的保障。包含流识别、分类流表、资源预留、分组复制、分组存储和分组消除等不同模块。
本系统工作主要分为四个的流程,网络管理流程、网络控制流程、资源预留、数据流处理。
网络管理流程目的是为了搜集系统的最新网络拓扑和资源视图;网络控制流程目的是为了根据应用的需求选择符合要求的路径,并为其生成流表,发送给交换机。网络控制流程的每次计算都需要网络管理流程的最新网络拓扑和资源视图,并会对其进行更新。资源预留流程是控制设备对各网络节点的资源决策进行资源预留。数据流处理流程是对数据流进行身份识别后,根据数据流的等级选取流表进行匹配,再根据时间戳设定发送定时器,发送定时器到时,将数据流发送给下一跳。
实施例一:
参见图6,图中示意网络管理流程。
步骤1:上电后自动启动链路发现模块;
步骤2:控制设备(或者称为控制器)使用链路层发现协议(Link Layer Discovery Protocol,LLDP)作为链路发现协议。链路发现模块将控制设备的相关信息(比如:主要能力、管理地址、设备标识、接口标识等)封装在LLDP里。
步骤3:控制设备通过packetout消息将LLDP数据包发送给与之相联的网络节点1(可以理解的是网络节点也可以称为交换机),网络节点1保存起来。
Packet-out消息的功能是:将控制器的相关数据发送到OpenFlow交换机,是包含数据包发送命令的消息。
步骤4:网络节点1通过所有端口扩散该消息。如果邻居网络节点2也是开放流(openflow)转发,那么该网络节点2执行流表。
步骤5:如果网络节点2上没有此流表,网络节点2通过packet_in消息向控制设备进行请求。openflow交换机继续向邻居广播包,如存在非openflow交换机,穿越后,到达另外一个openflow交换机,该交换机将首包上传给控制设备,则可知非openflow交换机,否则亦然。
Packet-in消息的功能是:将到达OpenFlow交换机的数据包发送到控制器
Packet-out消息的功能是:将控制器的相关数据发送到OpenFlow交换机,是包含数据包发送命令的消息。
步骤6:控制设备收集packet_in消息,并将packet_in消息发送给拓扑管理模块,可以画出网络拓扑和资源视图。
步骤7:拓扑建立后,发送周期性的心跳消息,请求交换机的工作状态参数。
表1:
Figure PCTCN2021092099-appb-000001
步骤8:资源计算匹配成功后,对以上参数进行更新,用于下一次计算。
实施例二:
参见图7,图中示意网络控制流程。
步骤1:应用设备(应用层)通过北向接口向业务解析模块发送请求。
请求可以包含以下一项或多项:源端(核心网入口E-NODEB)、目的端(对应的可选gate),数据流ID、业务申请类型(开通/取消)、业务类别号(对应要求)。
步骤2:业务解析模块进行申请的业务类型的识别,如果是申请资源,则根据提前建立的业务模型库,将业务类别号映射为业务峰值包速、数据包最大长度、端到端延迟上限、丢包上限、网络带宽,连同源端(核心网入口E-NODEB)、目的端(对应的可选gate),数据流ID、业务申请类型(开通/取消)、业务类别号(对应要求)一起发给路径计算模块。
步骤3:路径计算模块收到请求后,向拓扑管理模块取当前的拓扑和资源情况,进行路径计算。
步骤4:路径计算模块根据拓扑管理模块的实时信息,进行端到端的需求进行路径计算,并对每条路径进行端到端的延迟进行估计。
步骤5:路径计算模块将小于该数据流最大延迟的要求的路径集,按照差值从小到大排序,并发送给资源计算模块(参数为:数据流ID、路径ID(设备ID集合)、端到端延迟估计)。
步骤6:资源计算模块到拓扑管理模块读取拓扑和设备的实时信息。
步骤7:资源计算模块按照路径计算模块发送的路径顺序,逐点进行资源估计。
选定第一组设备ID集合,并与可分配的BUFFER比较,如果均满足,则输出;如遇一个不满足,则跳出进行下一条路径设备的比较;如有满足路径集合,则从剩下的路径中,选择节点重合度最小的路径作为备用路径。
步骤8a:如果资源计算模块选出其中的路径,则将路径信息发送给流表生成模块,生成流表,发送给交换设备(这里为了提高可用性,控制设备到交换设备的接口遵守openflow的原则,减少设备本身的修改);同时资源计算模块将计算结果发送给拓扑管理模块,拓扑管理进行实时更新,并发送成功消息给路径解析模块;
步骤8b:如果无满足要求的路径,则将结果通知路径解析模块。
步骤9:路径解析模块将结果反馈给应用层。
步骤10:应用层发生承载撤销,则将数据流ID和业务申请类型(开通/取消)发送给业务解析模块。
步骤11:业务解析模块通知拓扑管理模块将该项数据流相关资源释放。
步骤12:通知删除该项数据流相关流表项。
实施例三:
参见图8,图中示意资源预留流程。
步骤1:控制设备将生成的流表逐个发送给各相关网络节点;
步骤2:网络节点收到流表后,按照数据流等级更新多级流表,在相关等级的流表中插入/删除此项数据流的转发路径;
步骤3:网络节点收到资源预留信息后,按照要求在网络节点上进行资源预留/取消;
步骤4:资源预留和分级流表将执行结果通知给网络节点;
步骤5:网络节点将结果通知给控制设备的拓扑管理模块,更新网络拓扑和资源视图。
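A node-side sketch of this reservation flow (in Python, for illustration) is given below; the flow-table and reservation message layouts and the result callback are assumptions made for the example, not structures defined by the disclosure.

```python
from typing import Callable, Dict, List

# Per-service-class flow tables and per-flow buffer reservations kept on a node.
# Both structures are illustrative stand-ins.
flow_tables: Dict[int, List[dict]] = {}     # service class -> flow entries
reservations: Dict[str, int] = {}           # flow id -> reserved buffer (packets)

def apply_flow_table(entry: dict, report: Callable[[dict], None]) -> None:
    """Insert or delete a forwarding entry in the flow table of its service
    class and report the execution result back to the control device."""
    table = flow_tables.setdefault(entry["service_class"], [])
    if entry["op"] == "insert":
        table.append(entry)
    else:                                   # "delete"
        table[:] = [e for e in table if e["flow_id"] != entry["flow_id"]]
    report({"type": "flow_table_result", "flow_id": entry["flow_id"], "ok": True})

def apply_reservation(msg: dict, report: Callable[[dict], None]) -> None:
    """Reserve or cancel buffer for a flow identifier and report the result."""
    if msg["op"] == "reserve":
        reservations[msg["flow_id"]] = msg["buffer_pkts"]
    else:                                   # "cancel"
        reservations.pop(msg["flow_id"], None)
    report({"type": "reservation_result", "flow_id": msg["flow_id"], "ok": True})

if __name__ == "__main__":
    results = []
    apply_flow_table({"service_class": 1, "op": "insert",
                      "flow_id": "flow-1001", "next_hop": "s3"}, results.append)
    apply_reservation({"op": "reserve", "flow_id": "flow-1001",
                       "buffer_pkts": 50}, results.append)
    print(results)
```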
实施例四:
参见图9,图中示意数据处理流程。
步骤1:数据源设备开始发送数据流后,接入到网络节点,进行流号和流类型的分析
步骤2a:网络节点判断是否需要复制,如需要,则对该流的每个分组进行复制,形成两条数据流,然后转入流表进行匹配;
步骤2b:如识别不需要复制,则直接转入流表匹配阶段
步骤3:按照数据流的等级选取流表,并进行匹配;根据流号,在本设备上执行资源预留,使用缓冲区域;
步骤4:判断是否为末跳,如果为末跳,分析是否为重复的分组,删除重复的分组;
步骤5:按照类别分析到达时间,根据时间戳,设定发送定时器;
步骤6:发送定时器到时,发送给下一跳。
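The per-packet handling in steps 1-6 above could be sketched as follows; the packet layout, the replication policy, the fixed per-hop timing budget and the timer handling are simplified assumptions for illustration.

```python
import heapq
import time
from typing import Dict, List, Set, Tuple

def needs_replication(flow_id: str, flow_type: str) -> bool:
    """Illustrative policy: replicate only flows marked as high-reliability."""
    return flow_type == "high-reliability"

def process_packet(pkt: dict, is_last_hop: bool,
                   seen: Dict[str, Set[int]],
                   send_queue: List[Tuple[float, int, dict]]) -> None:
    """Handle one packet of a deterministic flow at a network node.

    `pkt` carries flow_id, flow_type, seq and a timestamp; `seen` remembers
    sequence numbers per flow for duplicate elimination at the last hop;
    `send_queue` is a heap ordered by the scheduled send time.
    """
    copies = [pkt, dict(pkt)] if needs_replication(pkt["flow_id"], pkt["flow_type"]) else [pkt]
    for p in copies:
        if is_last_hop:
            # eliminate duplicates by packet sequence number
            if p["seq"] in seen.setdefault(p["flow_id"], set()):
                continue
            seen[p["flow_id"]].add(p["seq"])
        # set the send timer from the timestamp (a fixed 2 ms budget is assumed here)
        send_at = p["timestamp"] + 0.002
        heapq.heappush(send_queue, (send_at, id(p), p))

def run_timers(send_queue: List[Tuple[float, int, dict]]) -> None:
    """Forward packets to the next hop once their send timer has expired."""
    while send_queue and send_queue[0][0] <= time.time():
        _, _, p = heapq.heappop(send_queue)
        print("forwarding", p["flow_id"], p["seq"])

if __name__ == "__main__":
    queue: List[Tuple[float, int, dict]] = []
    seen: Dict[str, Set[int]] = {}
    pkt = {"flow_id": "flow-1001", "flow_type": "high-reliability",
           "seq": 7, "timestamp": time.time() - 0.01}
    process_packet(pkt, is_last_hop=True, seen=seen, send_queue=queue)
    run_timers(queue)
```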
参见图10,本公开实施例提供一种网络节点,该网络节点1000包括:
发送模块1001,用于向控制设备发送所述网络节点的工作状态参数,以使所述控制设备根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
所述工作状态参数包括以下一项或多项:网络设备类型;固有带宽;可分配带宽;尽力服务(best-effort)带宽;已分配带宽;剩余分配带宽;固有缓冲区(BUFFER);可分配缓冲区;尽力服务缓冲区;已分配缓冲区;剩余分配缓冲区。
在一些实施方式中,发送模块1000进一步用于:通过周期性心跳消息,向所述控制设备发送所述网络节点的工作状态参数。
在一些实施方式中,网络节点1000还包括:
第一处理模块,用于在从所述控制设备接收到流表后,按照数据流的业务等级更新流表,在相关等级的流表中插入或删除数据流的转发路径,得到分级流表的执行结果;将所述分级流表的执行结果通知给所述控制设备。
在一些实施方式中,网络节点1000还包括:
第二处理模块,用于在从控制设备接收资源预留信息后,按照流标识进行资源预留或取消,得到资源预留的执行结果;将所述资源预留的执行结果通知给所述控制设备。
在一些实施方式中,网络节点1000还包括:
第三处理模块,用于在从数据源设备接收到数据流后,按照所述数据流的等级选取流表,并进行匹配;
在一些实施方式中,网络节点1000还包括:
第四处理模块,用于根据所述数据流的流标识和/或流类型,判断是否需要复制;如果需要复制,则对所述数据流的每个分组进行复制,形成多条数据流,转入流表进行匹配;如果不需要复制,则直接转入流表匹配。
在一些实施方式中,网络节点1000还包括:
第五处理模块,用于判断所述网络节点是否为末跳;如果所述网络节点为末跳,则按照流标识中的分组序列号分析是否为重复的分组,如果是重复分组,则删除重复的分组;按照流类型分析所述数据流的到达时间,根据时间戳,设定发送定时器;如果所述发送定时器到时,则将所述数据流发送给下一跳。
本公开实施例提供的网络节点,可以执行上述图3所示方法实施例,其实现原理和技术效果类似,本实施例此处不再赘述。
参见图11,本公开实施例提供一种网络节点,该网络节点1100包括:第一收发机1101和第一处理器1102;
所述第一收发机1101在所述第一处理器1102的控制下发送和接收数据;
所述第一处理器1102读取存储器中的程序执行以下操作:向控制设备发送所述网络节点的工作状态参数,以使所述控制设备根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
可选地,所述工作状态参数包括以下一项或多项:网络设备类型;固有 带宽;可分配带宽;尽力服务(best-effort)带宽;已分配带宽;剩余分配带宽;固有缓冲区(BUFFER);可分配缓冲区;尽力服务缓冲区;已分配缓冲区;剩余分配缓冲区。
在一些实施方式中,所述第一处理器1102读取存储器中的程序还执行以下操作:通过周期性心跳消息,向所述控制设备发送所述网络节点的工作状态参数。
在一些实施方式中,所述第一处理器1102读取存储器中的程序还执行以下操作:在从所述控制设备接收到流表后,按照数据流的业务等级更新流表,在相关等级的流表中插入或删除数据流的转发路径,得到分级流表的执行结果;将所述分级流表的执行结果通知给所述控制设备。
在一些实施方式中,所述第一处理器1102读取存储器中的程序还执行以下操作:在从控制设备接收资源预留信息后,按照流标识进行资源预留或取消,得到资源预留的执行结果;将所述资源预留的执行结果通知给所述控制设备。
在一些实施方式中,所述第一处理器1102读取存储器中的程序还执行以下操作:在从数据源设备接收到数据流后,按照所述数据流的等级选取流表,并进行匹配;
在一些实施方式中,所述第一处理器1102读取存储器中的程序还执行以下操作:根据所述数据流的流标识和/或流类型,判断是否需要复制;如果需要复制,则对所述数据流的每个分组进行复制,形成多条数据流,转入流表进行匹配;如果不需要复制,则直接转入流表匹配。
在一些实施方式中,所述第一处理器1102读取存储器中的程序还执行以下操作:判断所述网络节点是否为末跳;如果所述网络节点为末跳,则按照流标识中的分组序列号分析是否为重复的分组,如果是重复分组,则删除重复的分组;按照流类型分析所述数据流的到达时间,根据时间戳,设定发送定时器;如果所述发送定时器到时,则将所述数据流发送给下一跳。
本公开实施例提供的网络节点,可以执行上述图3所示方法实施例,其实现原理和技术效果类似,本实施例此处不再赘述。
参见图12,本公开实施例提供一种控制设备,该控制设备1200包括:
获取模块1201,用于获取网络节点的工作状态参数;
更新模块1202,用于根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
可选地,所述工作状态参数包括以下一项或多项:网络设备类型;固有带宽;可分配带宽;尽力服务(best-effort)带宽;已分配带宽;剩余分配带宽;固有缓冲区(BUFFER);可分配缓冲区;尽力服务缓冲区;已分配缓冲区;剩余分配缓冲区。
在一些实施方式中,获取模块1201进一步用于:接收所述网络节点发送的周期性心跳消息,所述周期性心跳消息中携带所述网络节点的工作状态参数。
在一些实施方式中,控制设备1200还包括:
第六处理模块,用于从应用设备接收第一消息,所述第一消息请求业务解析;根据所述第一消息,生成流表;将所述流表发送给所述网络节点。
在一些实施方式中,所述第一消息中包括以下一项或多项:源端的信息、目的端的信息、数据流的信息、业务申请类型和业务申请类别标识。
在一些实施方式中,控制设备1200还包括:业务解析模块、路径计算模块、资源计算模块、拓扑管理模块和流表生成模块;
其中,业务解析模块根据所述第一消息,对所述应用设备申请的业务类型进行识别;如果所述申请的业务类型是申请资源,则所述业务解析模块向路径计算模块发送第二消息;所述路径计算模块根据所述第二消息,从拓扑管理模块中获取网络拓扑和资源视图以及网络节点的预留资源;所述路径计算模块根据所述网络拓扑和资源视图以及网络节点的预留资源,进行路径计算,并对每条路径进行端到端的延迟进行估计;所述路径计算模块将小于数据流最大延迟的路径集发送给资源计算模块;所述资源计算模块从拓扑管理模块中获取网络拓扑和资源视图以及网络节点的预留资源,对所述路径集中的路径进行资源估计,从中选取满足资源要求的路径,并将所述路径的信息发送流表生成模块;所述流表生成模块根据所述路径的信息,生成流表。
在一些实施方式中,如果没有满足所述资源要求的路径,则所述路径计算模块将结果通知所述业务解析模块;
所述业务解析模块将所述结果反馈给所述应用设备。
在一些实施方式中,所述业务解析模块从所述应用设备接收第三消息,所述第三消息指示承载撤销,所述第三消息中携带数据流标识;
所述业务解析模块通知拓扑管理模块释放与所述数据流标识相关资源,并更新网络拓扑和资源视图;
所述拓扑管理模块通知所述流表生成模块删除所述数据流标识相关的流表项。
在一些实施方式中,所述路径计算模块确定小于数据流最大延迟的路径集;
所述路径计算模块确定所述路径集中的每条路径的延迟与数据流最大延迟的差值;
所述路径计算模块按照所述差值从小到大排序,并发送给所述资源计算模块。
在一些实施方式中,所述业务解析模块根据建立的业务模型库,将业务申请类别标识映射为业务峰值包速、数据包最大长度、端到端延迟上限、丢包上限、网络带宽中的一项或多项,并与同源端、目的端、数据流标识、业务申请类型、业务申请类别标识中一项或多项一起发送给所述路径计算模块。
本公开实施例提供的控制设备,可以执行上述图4所示方法实施例,其实现原理和技术效果类似,本实施例此处不再赘述。
参见图13,本公开实施例提供一种控制设备,该控制设备包括:第二收发机1301和第二处理器1302;
所述第二收发机1301在所述第二处理器1302的控制下发送和接收数据;
所述第二处理器1302读取存储器中的程序执行以下操作:获取网络节点的工作状态参数;根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
可选地,所述工作状态参数包括以下一项或多项:网络设备类型;固有带宽;可分配带宽;尽力服务(best-effort)带宽;已分配带宽;剩余分配带宽;固有缓冲区(BUFFER);可分配缓冲区;尽力服务缓冲区;已分配缓冲区;剩余分配缓冲区。
在一些实施方式中,所述第二处理器1302读取存储器中的程序执行以下操作:接收所述网络节点发送的周期性心跳消息,所述周期性心跳消息中携带所述网络节点的工作状态参数。
在一些实施方式中,在一些实施方式中,所述第二处理器802读取存储器中的程序执行以下操作:从应用设备接收第一消息,所述第一消息请求业务解析;根据所述第一消息,生成流表;将所述流表发送给所述网络节点。
在一些实施方式中,所述第一消息中包括以下一项或多项:源端的信息、目的端的信息、数据流的信息、业务申请类型和业务申请类别标识。
在一些实施方式中,所述第二处理器802读取存储器中的程序执行以下操作:根据所述第一消息,对应用设备申请的业务类型进行识别;如果所述申请的业务类型是申请资源,则通过业务解析模块向路径计算模块发送第二消息;所述路径计算模块根据所述第二消息,从拓扑管理模块中获取网络拓扑和资源视图以及网络节点的预留资源;所述路径计算模块根据所述网络拓扑和资源视图以及网络节点的预留资源,进行路径计算,并对每条路径进行端到端的延迟进行估计;所述路径计算模块将小于数据流最大延迟的路径集发送给资源计算模块;所述资源计算模块从拓扑管理模块中获取网络拓扑和资源视图以及网络节点的预留资源,对所述路径集中的路径进行资源估计,从中选取满足资源要求的路径,并将所述路径的信息发送流表生成模块;所述流表生成模块根据所述路径的信息,生成流表。
在一些实施方式中,所述第二处理器802读取存储器中的程序执行以下操作:如果没有满足所述资源要求的路径,则通过所述路径计算模块将结果通知所述业务解析模块;所述业务解析模块将所述结果反馈给所述应用设备。
在一些实施方式中,所述第二处理器802读取存储器中的程序执行以下操作:通过所述业务解析模块从所述应用设备接收第三消息,所述第三消息指示承载撤销,所述第三消息中携带数据流标识;通过所述业务解析模块通知拓扑管理模块释放与所述数据流标识相关资源,并更新网络拓扑和资源视图;所述拓扑管理模块通知所述流表生成模块删除所述数据流标识相关的流表项。
在一些实施方式中,所述第二处理器802读取存储器中的程序执行以下 操作:通过所述路径计算模块确定小于数据流最大延迟的路径集;通过所述路径计算模块确定所述路径集中的每条路径的延迟与数据流最大延迟的差值;所述路径计算模块按照所述差值从小到大排序,并发送给所述资源计算模块。
在一些实施方式中,所述第二处理器802读取存储器中的程序执行以下操作:通过业务解析模块根据建立的业务模型库,将业务申请类别标识映射为业务峰值包速、数据包最大长度、端到端延迟上限、丢包上限、网络带宽中的一项或多项,并与同源端、目的端、数据流标识、业务申请类型、业务申请类别标识中一项或多项一起发送给所述路径计算模块。
本公开实施例提供的控制设备,可以执行上述图4所示方法实施例,其实现原理和技术效果类似,本实施例此处不再赘述。
请参阅图14,图14是本公开实施例应用的通信设备的结构图,如图14所示,通信设备1400包括:处理器1401、收发机1402、存储器1403和总线接口,其中:
在本公开的一个实施例中,通信设备1400还包括:存储在存储器上1403并可在处理器1401上运行的程序,程序被处理器1401执行时实现图3~图4所示实施例中的步骤。
在图14中,总线架构可以包括任意数量的互联的总线和桥,具体由处理器1401代表的一个或多个处理器和存储器1403代表的存储器的各种电路链接在一起。总线架构还可以将诸如外围设备、稳压器和功率管理电路等之类的各种其他电路链接在一起,这些都是本领域所公知的,因此,本文不再对其进行进一步描述。总线接口提供接口。收发机1402可以是多个元件,即包括发送机和接收机,提供用于在传输介质上与各种其他装置通信的单元,可以理解的是,收发机1402为可选部件。
处理器1401负责管理总线架构和通常的处理,存储器1403可以存储处理器1401在执行操作时所使用的数据。
本公开实施例提供的通信设备,可以执行上述图3~图4所示方法实施例,其实现原理和技术效果类似,本实施例此处不再赘述。
结合本公开公开内容所描述的方法或者算法的步骤可以硬件的方式来实现,也可以是由处理器执行软件指令的方式来实现。软件指令可以由相应的 软件模块组成,软件模块可以被存放于RAM、闪存、ROM、EPROM、EEPROM、寄存器、硬盘、移动硬盘、只读光盘或者本领域熟知的任何其它形式的存储介质中。一种示例性的存储介质耦合至处理器,从而使处理器能从该存储介质读取信息,且可向该存储介质写入信息。当然,存储介质也可以是处理器的组成部分。处理器和存储介质可以位于ASIC中。另外,该ASIC可以位于核心网接口设备中。当然,处理器和存储介质也可以作为分立组件存在于核心网接口设备中。
本领域技术人员应该可以意识到,在上述一个或多个示例中,本公开所描述的功能可以用硬件、软件、固件或它们的任意组合来实现。当使用软件实现时,可以将这些功能存储在计算机可读介质中或者作为计算机可读介质上的一个或多个指令或代码进行传输。计算机可读介质包括计算机存储介质和通信介质,其中通信介质包括便于从一个地方向另一个地方传送计算机程序的任何介质。存储介质可以是通用或专用计算机能存取的任何可用介质。
以上所述的具体实施方式,对本公开的目的、技术方案和有益效果进行了进一步详细说明,所应理解的是,以上所述仅为本公开的具体实施方式而已,并不用于限定本公开的保护范围,凡在本公开的技术方案的基础之上,所做的任何修改、等同替换、改进等,均应包括在本公开的保护范围之内。
本领域内的技术人员应明白,本公开实施例可提供为方法、系统、或计算机程序产品。因此,本公开实施例可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本公开实施例可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本公开实施例是参照根据本公开实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实 现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
需要说明的是,应理解以上各个模块的划分仅仅是一种逻辑功能的划分,实际实现时可以全部或部分集成到一个物理实体上,也可以物理上分开。且这些模块可以全部以软件通过处理元件调用的形式实现;也可以全部以硬件的形式实现;还可以部分模块通过处理元件调用软件的形式实现,部分模块通过硬件的形式实现。例如,确定模块可以为单独设立的处理元件,也可以集成在上述装置的某一个芯片中实现,此外,也可以以程序代码的形式存储于上述装置的存储器中,由上述装置的某一个处理元件调用并执行以上确定模块的功能。其它模块的实现与之类似。此外这些模块全部或部分可以集成在一起,也可以独立实现。这里所述的处理元件可以是一种集成电路,具有信号的处理能力。在实现过程中,上述方法的各步骤或以上各个模块可以通过处理器元件中的硬件的集成逻辑电路或者软件形式的指令完成。
例如,各个模块、单元、子单元或子模块可以是被配置成实施以上方法的一个或多个集成电路,例如:一个或多个特定集成电路(Application Specific Integrated Circuit,ASIC),或,一个或多个微处理器(digital signal processor,DSP),或,一个或者多个现场可编程门阵列(Field Programmable Gate Array,FPGA)等。再如,当以上某个模块通过处理元件调度程序代码的形式实现时,该处理元件可以是通用处理器,例如中央处理器(Central Processing Unit,CPU)或其它可以调用程序代码的处理器。再如,这些模块可以集成在一起,以片上系统(system-on-a-chip,SOC)的形式实现。
本公开的说明书和权利要求书中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本公开的实施例,例如除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。此外,说明书以及权利要求中使用“和/或”表示所连接对象的至少其中之一,例如A和/或B和/或C,表示包含单独A,单独B,单独C,以及A和B都存在,B和C都存在,A和C都存在,以及A、B和C都存在的7种情况。类似地,本说明书以及权利要求中使用“A和B中的至少一个”应理解为“单独A,单独B,或A和B都存在”。
显然,本领域的技术人员可以对本公开实施例进行各种改动和变型而不脱离本公开的精神和范围。这样,倘若本公开实施例的这些修改和变型属于本公开权利要求及其等同技术的范围之内,则本公开也意图包含这些改动和变型在内。

Claims (24)

  1. 一种网络控制方法,应用于网络节点,包括:
    向控制设备发送所述网络节点的工作状态参数,以使所述控制设备根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
  2. 根据权利要求1所述的方法,其中,所述工作状态参数包括以下一项或多项:网络设备类型;固有带宽;可分配带宽;尽力服务带宽;已分配带宽;剩余分配带宽;固有缓冲区;可分配缓冲区;尽力服务缓冲区;已分配缓冲区;剩余分配缓冲区。
  3. 根据权利要求1所述的方法,其中,向控制设备发送所述网络节点的工作状态参数,包括:
    通过周期性心跳消息,向所述控制设备发送所述网络节点的工作状态参数。
  4. 根据权利要求1所述的方法,还包括:
    在从所述控制设备接收到流表后,按照数据流的业务等级更新流表,在相关等级的流表中插入或删除数据流的转发路径,得到分级流表的执行结果;
    将所述分级流表的执行结果通知给所述控制设备。
  5. 根据权利要求1所述的方法,还包括:
    在从控制设备接收资源预留信息后,按照流标识进行资源预留或取消,得到资源预留的执行结果;
    将所述资源预留的执行结果通知给所述控制设备。
  6. 根据权利要求1~5任意一项所述的方法,还包括:
    在从数据源设备接收到数据流后,按照所述数据流的等级选取流表,并进行匹配;
    根据所述数据流的流标识,在所述网络节点上执行资源预留。
  7. 根据权利要求6所述的方法,其中,在按照所述数据流的等级选取流表,并进行匹配之前,所述方法还包括:
    根据所述数据流的流标识和/或流类型,判断是否需要复制;
    如果需要复制,则对所述数据流的每个分组进行复制,形成多条数据流, 转入流表进行匹配;
    如果不需要复制,则直接转入流表匹配。
  8. 根据权利要求6或7所述的方法,还包括:
    判断所述网络节点是否为末跳;
    如果所述网络节点为末跳,则按照流标识中的分组序列号分析是否为重复的分组,如果是重复分组,则删除重复的分组;
    按照流类型分析所述数据流的到达时间,根据时间戳,设定发送定时器;
    如果所述发送定时器到时,则将所述数据流发送给下一跳。
  9. 一种网络控制方法,应用于控制设备,包括:
    获取网络节点的工作状态参数;
    根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
  10. 根据权利要求9所述的方法,其中,所述工作状态参数包括以下一项或多项:网络设备类型;固有带宽;可分配带宽;尽力服务带宽;已分配带宽;剩余分配带宽;固有缓冲区;可分配缓冲区;尽力服务缓冲区;已分配缓冲区;剩余分配缓冲区。
  11. 根据权利要求9所述的方法,其中,所述获取网络节点的工作状态参数,包括:
    接收所述网络节点发送的周期性心跳消息,所述周期性心跳消息中携带所述网络节点的工作状态参数。
  12. 根据权利要求9所述的方法,还包括:
    从应用设备接收第一消息,所述第一消息请求业务解析;
    根据所述第一消息,生成流表;
    将所述流表发送给所述网络节点。
  13. 根据权利要求12所述的方法,其中,所述第一消息中包括以下一项或多项:源端的信息、目的端的信息、数据流的信息、业务申请类型和业务申请类别标识。
  14. 根据权利要求12所述的方法,其中,根据所述第一消息,生成流表,包括:
    业务解析模块根据所述第一消息,对所述应用设备申请的业务类型进行 识别;
    如果所述申请的业务类型是申请资源,则所述业务解析模块向路径计算模块发送第二消息;
    所述路径计算模块根据所述第二消息,从拓扑管理模块中获取网络拓扑和资源视图和网络节点的预留资源;
    所述路径计算模块根据所述网络拓扑和资源视图和网络节点的预留资源,进行路径计算,并对每条路径进行端到端的延迟进行估计;
    所述路径计算模块将小于数据流最大延迟的路径集发送给资源计算模块;
    所述资源计算模块从拓扑管理模块中获取网络拓扑和资源视图和网络节点的预留资源,对所述路径集中的路径进行资源估计,从中选取满足资源要求的路径,并将所述路径的信息发送流表生成模块;
    所述流表生成模块根据所述路径的信息,生成流表。
  15. 根据权利要求14所述的方法,还包括:
    如果没有满足所述资源要求的路径,则所述路径计算模块将结果通知所述业务解析模块;
    所述业务解析模块将所述结果反馈给所述应用设备。
  16. 根据权利要求15所述的方法,还包括:
    所述业务解析模块从所述应用设备接收第三消息,所述第三消息指示承载撤销,所述第三消息中携带数据流标识;
    所述业务解析模块通知拓扑管理模块释放与所述数据流标识相关资源,并更新网络拓扑和资源视图;
    所述拓扑管理模块通知所述流表生成模块删除所述数据流标识相关的流表项。
  17. 根据权利要求14所述的方法,其中,所述路径计算模块将小于数据流最大延迟的路径集发送给资源计算模块,包括:
    所述路径计算模块确定小于数据流最大延迟的路径集;
    所述路径计算模块确定所述路径集中的每条路径的延迟与数据流最大延迟的差值;
    所述路径计算模块按照所述差值从小到大排序,并发送给所述资源计算 模块。
  18. 根据权利要求17所述的方法,其中,所述业务解析模块向路径计算模块发送第二消息,包括:
    所述业务解析模块根据建立的业务模型库,将业务申请类别标识映射为业务峰值包速、数据包最大长度、端到端延迟上限、丢包上限、网络带宽中的一项或多项,并与同源端、目的端、数据流标识、业务申请类型、业务申请类别标识中一项或多项一起发送给所述路径计算模块。
  19. 一种网络节点,包括:
    发送模块,用于向控制设备发送所述网络节点的工作状态参数,以使所述控制设备根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
  20. 一种网络节点,包括:第一收发机和第一处理器;
    所述第一收发机在所述第一处理器的控制下发送和接收数据;
    所述第一处理器读取存储器中的程序执行以下操作:向控制设备发送所述网络节点的工作状态参数,以使所述控制设备根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
  21. 一种控制设备,包括:
    获取模块,用于获取网络节点的工作状态参数;
    更新模块,用于根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
  22. 一种控制设备,包括:第二收发机和第二处理器;
    所述第二收发机在所述第二处理器的控制下发送和接收数据;
    所述第二处理器读取存储器中的程序执行以下操作:获取网络节点的工作状态参数;根据所述网络节点的工作状态参数,对网络拓扑和资源视图进行更新。
  23. 一种通信设备,包括:处理器、存储器及存储在所述存储器上并可在所述处理器上运行的程序,所述程序被所述处理器执行时实现包括如权利要求1至18中任一项所述的网络控制方法的步骤。
  24. 一种计算机可读存储介质,其中,所述计算机可读存储介质上存储有 程序,所述程序被处理器执行时实现包括如权利要求1至18中任一项所述的网络控制方法的步骤。
PCT/CN2021/092099 2020-05-15 2021-05-07 网络控制方法及设备 WO2021227947A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21803432.0A EP4152703A4 (en) 2020-05-15 2021-05-07 NETWORK CONTROL METHOD AND APPARATUS
US17/998,717 US20230388215A1 (en) 2020-05-15 2021-05-07 Network control method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010415264.9 2020-05-15
CN202010415264.9A CN113676412A (zh) 2020-05-15 2020-05-15 网络控制方法及设备

Publications (1)

Publication Number Publication Date
WO2021227947A1 true WO2021227947A1 (zh) 2021-11-18

Family

ID=78526440

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/092099 WO2021227947A1 (zh) 2020-05-15 2021-05-07 网络控制方法及设备

Country Status (4)

Country Link
US (1) US20230388215A1 (zh)
EP (1) EP4152703A4 (zh)
CN (1) CN113676412A (zh)
WO (1) WO2021227947A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086202A (zh) * 2022-04-14 2022-09-20 安世亚太科技股份有限公司 一种基于网络数字孪生体的时延分析方法及系统
CN115174370A (zh) * 2022-09-05 2022-10-11 杭州又拍云科技有限公司 一种分布式混合数据确定性传输装置及方法
CN115599638A (zh) * 2022-12-01 2023-01-13 浙江锐文科技有限公司(Cn) 一种在智能网卡/dpu内对多服务大流量功耗优化方法及装置
CN116232977A (zh) * 2023-01-12 2023-06-06 中国联合网络通信集团有限公司 一种基于链路和设备状态的网络负载均衡方法及装置

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116232902A (zh) * 2021-12-02 2023-06-06 大唐移动通信设备有限公司 网络拓扑获取方法、装置、控制器及核心网网元

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6118791A (en) * 1995-12-20 2000-09-12 Cisco Technology, Inc. Adaptive bandwidth allocation method for non-reserved traffic in a high-speed data transmission network, and system for implementing said method
CN1357998A (zh) * 2000-12-07 2002-07-10 阿尔卡塔尔加拿大公司 在通信网络中用于呼叫阻塞触发拓扑更新的方法和系统
US20090116404A1 (en) * 2007-11-01 2009-05-07 Telefonaktiebolaget Lm Ericsson (Publ) Topology discovery in heterogeneous networks
US20120087377A1 (en) * 2010-10-11 2012-04-12 Wai Sum Lai Methods and apparatus for hierarchical routing in communication networks
US20130322299A1 (en) * 2012-05-30 2013-12-05 Byung Kyu Choi Optimized spanning tree construction based on parameter selection
US20170244607A1 (en) * 2014-12-03 2017-08-24 Hewlett Packard Enterprise Development Lp Updating a virtual network topology based on monitored application data

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103346922B (zh) * 2013-07-26 2016-08-10 电子科技大学 基于sdn的确定网络状态的控制器及其确定方法
US9882828B1 (en) * 2014-11-11 2018-01-30 Amdocs Software Systems Limited System, method, and computer program for planning distribution of network resources in a network function virtualization (NFV) based communication network
CN105024853A (zh) * 2015-07-01 2015-11-04 中国科学院信息工程研究所 基于谣言传播机制的sdn资源匹配和服务路径发现方法
US10298488B1 (en) * 2016-09-30 2019-05-21 Juniper Networks, Inc. Path selection and programming of multiple label switched paths on selected paths of multiple computed paths
US11310128B2 (en) * 2017-05-30 2022-04-19 Zhejiang Gongshang University Software-definable network service configuration method
US10425829B1 (en) * 2018-06-28 2019-09-24 At&T Intellectual Property I, L.P. Dynamic resource partitioning for multi-carrier access for 5G or other next generation network
CN109714275B (zh) * 2019-01-04 2022-03-15 电子科技大学 一种用于接入业务传输的sdn控制器及其控制方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6118791A (en) * 1995-12-20 2000-09-12 Cisco Technology, Inc. Adaptive bandwidth allocation method for non-reserved traffic in a high-speed data transmission network, and system for implementing said method
CN1357998A (zh) * 2000-12-07 2002-07-10 阿尔卡塔尔加拿大公司 在通信网络中用于呼叫阻塞触发拓扑更新的方法和系统
US20090116404A1 (en) * 2007-11-01 2009-05-07 Telefonaktiebolaget Lm Ericsson (Publ) Topology discovery in heterogeneous networks
US20120087377A1 (en) * 2010-10-11 2012-04-12 Wai Sum Lai Methods and apparatus for hierarchical routing in communication networks
US20130322299A1 (en) * 2012-05-30 2013-12-05 Byung Kyu Choi Optimized spanning tree construction based on parameter selection
US20170244607A1 (en) * 2014-12-03 2017-08-24 Hewlett Packard Enterprise Development Lp Updating a virtual network topology based on monitored application data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4152703A4 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086202A (zh) * 2022-04-14 2022-09-20 安世亚太科技股份有限公司 一种基于网络数字孪生体的时延分析方法及系统
CN115174370A (zh) * 2022-09-05 2022-10-11 杭州又拍云科技有限公司 一种分布式混合数据确定性传输装置及方法
CN115174370B (zh) * 2022-09-05 2023-01-03 杭州又拍云科技有限公司 一种分布式混合数据确定性传输装置及方法
CN115599638A (zh) * 2022-12-01 2023-01-13 浙江锐文科技有限公司(Cn) 一种在智能网卡/dpu内对多服务大流量功耗优化方法及装置
CN116232977A (zh) * 2023-01-12 2023-06-06 中国联合网络通信集团有限公司 一种基于链路和设备状态的网络负载均衡方法及装置

Also Published As

Publication number Publication date
EP4152703A1 (en) 2023-03-22
EP4152703A4 (en) 2023-11-01
CN113676412A (zh) 2021-11-19
US20230388215A1 (en) 2023-11-30

Similar Documents

Publication Publication Date Title
WO2021227947A1 (zh) 网络控制方法及设备
US7636781B2 (en) System and method for realizing the resource distribution in the communication network
CN109412964B (zh) 报文控制方法及网络装置
US11616729B2 (en) Method and apparatus for processing low-latency service flow
US11722407B2 (en) Packet processing method and apparatus
WO2021180073A1 (zh) 报文传输方法、装置、网络节点及存储介质
US8265076B2 (en) Centralized wireless QoS architecture
CN112565068B (zh) 一种应用于tsn网络的冗余流调度方法
JP2009542113A (ja) フォルトトレラントQoSのための方法及びシステム
CN113630893A (zh) 基于无线信道信息的5g与tsn联合调度方法
US10652135B2 (en) Distributed constrained tree formation for deterministic multicast
JP2012182605A (ja) ネットワーク制御システム及び管理サーバ
US20120147748A1 (en) Computer readable storage medium storing congestion control program, information processing apparatus, and congestion control method
WO2023082815A1 (zh) 确定性路由的构建方法、装置和存储介质
US20120163398A1 (en) Communication apparatus, relay apparatus, and network system
CN114221912B (zh) 一种针对非周期时间触发业务流的时间敏感网络接入方法
WO2023123104A1 (zh) 一种报文传输方法及网络设备
WO2022242243A1 (zh) 通信方法、设备及系统
Kaur An overview of quality of service computer network
WO2023236832A1 (zh) 数据调度处理方法、设备、装置及存储介质
WO2023155802A1 (zh) 数据调度方法、装置、设备及存储介质
WO2023130744A1 (zh) 报文调度方法、网络设备、存储介质及计算机程序产品
Mousheng et al. Controllable network architecture based on SDN
CN115733808A (zh) 一种基于循环队列集群转发的数据传输方法、装置及设备
CN117118918A (zh) 跨域确定性网络、调度方法、装置及跨域控制编排器

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21803432

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 17998717

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2021803432

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2021803432

Country of ref document: EP

Effective date: 20221215

NENP Non-entry into the national phase

Ref country code: DE