WO2019085907A1 - Data sending method, apparatus, and system based on a software-defined network (一种基于软件定义网络的数据发送方法、装置及系统) - Google Patents


Info

Publication number
WO2019085907A1
WO2019085907A1 (PCT/CN2018/112756)
Authority
WO
WIPO (PCT)
Prior art keywords
switch
packet
flow
port
controller
Prior art date
Application number
PCT/CN2018/112756
Other languages
English (en)
French (fr)
Inventor
柳嘉强
李勇
金德鹏
曹龙雨
Original Assignee
华为技术有限公司
清华大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 and 清华大学
Publication of WO2019085907A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/64: Routing or path finding of packets in data switching networks using an overlay routing layer

Definitions

  • the embodiments of the present invention relate to the field of communications, and in particular, to a data sending method, apparatus, and system based on a software-defined network.
  • SDN: Software-Defined Networking
  • OF: OpenFlow
  • the controller and the switch communicate with each other through the OpenFlow protocol, and the switch forwards received data packets according to a flow table configured by the controller. If the switch finds no flow entry in the flow table matching a packet to be forwarded, the switch generates a packet-in message and sends it to the controller.
  • the controller calculates a new flow entry based on the packet-in message, configures the new flow entry on the switch with a flow-mod message, and uses a packet-out message to instruct the switch how to process the packet corresponding to the packet-in message.
  • while the switch is waiting for the new flow entry to be configured, it may continue to receive packets of the same data flow. Because there is still no matching flow entry, the switch keeps generating packet-in messages and sending them to the controller, occupying the bandwidth of the control link and the processing resources of the controller. Moreover, to reduce the number of packet-in messages it generates, the switch first writes the new flow entry according to the flow-mod message and only then processes the packet corresponding to the packet-in message; at that point, packets of the same data flow that arrived later may be forwarded before the earlier packet, causing packets of that flow to be out of order.
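  • The table-miss behaviour described above can be sketched as follows. This is a minimal illustrative model, not the patent's implementation; the class and attribute names are assumptions. It shows that a conventional switch generates one packet-in message for every unmatched packet of a flow.

```python
# Hypothetical sketch of conventional OpenFlow table-miss handling:
# every packet of a flow with no matching flow entry triggers its own
# packet-in message to the controller.

class ConventionalSwitch:
    def __init__(self):
        self.flow_table = {}      # (in_port, header) -> out port
        self.packet_in_count = 0  # packet-in messages sent to the controller

    def receive(self, in_port, header):
        entry = self.flow_table.get((in_port, header))
        if entry is None:
            # Table miss: generate a packet-in for *every* unmatched packet.
            self.packet_in_count += 1
            return "packet-in"
        return f"forward via port {entry}"

sw = ConventionalSwitch()
# Five packets of the same data stream arrive before the controller replies
# with a flow-mod: five packet-in messages are generated.
for _ in range(5):
    sw.receive(1, "flowA")
print(sw.packet_in_count)  # 5
```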
  • the prior art proposes to solve the problems of the large number of packet-in messages generated by the switch and of packet reordering by adding a Pi (packet-in) cache management module and a flow action pre-processing module to the switch.
  • the Pi cache management module buffers data packets and controls the generation of packet-in messages.
  • the flow action pre-processing module processes the flow-mod and packet-out messages sent by the controller and ensures that, for a given data flow, the packet corresponding to the packet-in message is processed first and the new flow entry is written afterwards.
  • however, adding new processing modules to the switch increases the complexity of implementing the switch and occupies more switch resources.
  • in addition, the switch must first store each data packet in the buffer area and only then process it, which increases the packet-processing delay.
  • therefore, when the switch finds no flow entry for a packet it needs to forward, how to reduce the number of packet-in messages generated by the switch, avoid reordering packets of the same data flow, and at the same time reduce the resource overhead of the switch is an urgent problem to be solved.
  • the embodiments of the present invention provide a data sending method, device, and system based on a software-defined network, which reduce the number of packet-in messages generated by the switch when no flow entry matches a packet to be forwarded, avoid reordering packets of the same data stream, and reduce the resource overhead of the switch.
  • a first aspect of the embodiments of the present application provides an SDN-based data sending method, including: first, a switch receives an i-th data packet from a first port, where the i-th data packet includes header information, i is a positive integer with a value from 2 to N, the header information of the N packets is identical, and the N packets belong to the same data stream. Then, the switch queries the flow table according to the port number of the first port or/and the header information to obtain a first flow entry.
  • the first flow entry instructs the switch to store the i-th data packet in the cache queue corresponding to cache queue identifier j. The first flow entry includes a first matching feature and a first processing mode, where the first matching feature includes a first entry number and a first matching field, the first entry number is the port number of the first port, and the first matching field includes all or part of the header information; the first processing mode is to store packets matching the first matching feature in the cache queue corresponding to cache queue identifier j.
  • the second flow entry includes a second matching feature and a second processing mode
  • the second matching feature includes a second entry number and a first matching field
  • the second entry number is a cache queue identifier j
  • the second processing mode is to forward, from the second port of the switch, the data packets stored in the cache queue corresponding to cache queue identifier j
  • the open-cache-port message is used to instruct the switch to forward the data packets stored in the cache queue corresponding to cache queue identifier j, and the priority of the cache queue corresponding to cache queue identifier j is higher than the priority of the ingress queue of the switch.
  • the switch forwards the data packet stored in the cache queue corresponding to the cache queue identifier j according to the second flow entry.
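  • The first-aspect method above can be sketched as follows. This is a hedged illustration under assumed names (CachingSwitch, the match-key layout, etc.), not the claimed implementation: a "first flow entry" redirects subsequent packets of the flow into cache queue j, and the later "second flow entry" plus the open-cache-port message release that queue through the second port, preserving arrival order.

```python
from collections import deque

# Illustrative sketch of the claimed mechanism: flow entries extended to
# forward into a cache queue, then drained on open-cache-port.

class CachingSwitch:
    def __init__(self):
        self.cache_queues = {}  # cache queue identifier j -> deque of packets
        self.flow_table = {}    # matching feature -> processing mode
        self.sent = []          # (sequence number, out port), forwarding order

    def install(self, match, action):
        self.flow_table[match] = action

    def receive(self, in_port, packet):
        action = self.flow_table.get((in_port, packet["header"]))
        if action and action[0] == "cache":
            # First flow entry: store the packet in cache queue j.
            self.cache_queues.setdefault(action[1], deque()).append(packet)
        elif action and action[0] == "output":
            self.sent.append((packet["seq"], action[1]))

    def open_cache_port(self, j):
        # open-cache-port message: release cache queue j. The second flow
        # entry's entry number is the cache queue identifier j itself.
        while self.cache_queues.get(j):
            pkt = self.cache_queues[j].popleft()
            action = self.flow_table[(("queue", j), pkt["header"])]
            self.sent.append((pkt["seq"], action[1]))

sw = CachingSwitch()
sw.install((1, "hdrA"), ("cache", 7))              # first flow entry
for seq in range(2, 6):                            # packets i = 2..5
    sw.receive(1, {"header": "hdrA", "seq": seq})
sw.install((("queue", 7), "hdrA"), ("output", 2))  # second flow entry
sw.open_cache_port(7)                              # open-cache-port message
print(sw.sent)  # [(2, 2), (3, 2), (4, 2), (5, 2)]: in order, via port 2
```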
  • in the SDN-based data sending method provided by this embodiment, the OpenFlow protocol is first extended so that the flow table supports forwarding to a specific cache queue, and flow entries are then used to control the buffering of data packets. This effectively reduces the number of packet-in messages generated when packets of the same data stream have no matching flow entry, and because no separate caching logic needs to be implemented, it saves switch resources.
  • the cache queue is used as an input of the OpenFlow pipeline.
  • the OpenFlow pipeline can directly process the cached data packets, and the logic of processing the cached data packets is not separately implemented, which can save the resources of the switch.
  • the ingress queue corresponding to a switch port and the cache queue are distinguished by priority to ensure that the OpenFlow pipeline processes the packets of the cache queue first, so that the pipeline processes and forwards packets of the same data stream in order, avoiding reordering within the stream.
  • in a possible implementation, by extending the OpenFlow protocol, the flow table can forward packets to a specific cache queue; that is, when the switch finds no flow entry for a packet it needs to forward, the first flow entry controls the caching of the packet.
  • the first flow entry can be implemented in different ways.
  • in a possible implementation, before the switch receives the i-th data packet from the first port, the method further includes: after the switch receives the first data packet from the first port, it determines, according to the port number of the first port or/and the header information included in the first data packet, that the flow table contains no flow entry for forwarding the first data packet; it then determines a cache queue whose cache queue identifier is j and stores the first data packet in that cache queue; it generates a first packet-in message including the header information and the cache queue identifier j; and it sends the first packet-in message to the controller in order to obtain from the controller a flow entry for forwarding the data packets. The first data packet and the N data packets belong to the same data flow.
  • the method further includes: the switch generates the first flow entry .
  • in this way each data stream generates only one packet-in message: a packet-in message is generated when the first packet of the stream matches no forwarding flow entry, and the first flow entry is generated at the same time.
  • subsequent packets of the same data stream (the i-th data packet) are forwarded to the cache queue according to the processing mode of the first flow entry, so the i-th data packet triggers no table-miss event and therefore no packet-in message, which effectively reduces the load on the control link and the controller.
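  • The "one packet-in per data stream" behaviour can be sketched as follows; the names are illustrative assumptions. The first table-miss packet is cached, a single packet-in carrying the header and cache queue identifier j is generated, and the switch itself installs the first flow entry, so packets 2 to N match it instead of missing.

```python
# Hypothetical sketch: only the first packet of a stream produces a
# packet-in; later packets match the locally installed caching entry.

class OnePacketInSwitch:
    def __init__(self):
        self.flow_table = {}   # (in_port, header) -> processing mode
        self.cache = {}        # cache queue identifier j -> cached packets
        self.packet_ins = []   # packet-in messages sent to the controller
        self.next_queue_id = 0

    def receive(self, in_port, header):
        action = self.flow_table.get((in_port, header))
        if action is None:
            # First packet of the flow: allocate cache queue j, cache the
            # packet, emit one packet-in, and install the first flow entry.
            j = self.next_queue_id
            self.next_queue_id += 1
            self.cache[j] = [header]
            self.packet_ins.append({"header": header, "queue": j})
            self.flow_table[(in_port, header)] = ("cache", j)
        elif action[0] == "cache":
            self.cache[action[1]].append(header)

sw = OnePacketInSwitch()
for _ in range(5):            # five packets of the same stream
    sw.receive(1, "hdrA")
print(len(sw.packet_ins))     # 1  (only the first packet misses)
print(len(sw.cache[0]))       # 5  (all five wait in cache queue 0)
```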
  • in another possible implementation, before the switch receives the i-th data packet from the first port, the method further includes: after the switch receives the first data packet from the first port, it determines, according to the port number of the first port or/and the header information included in the first data packet, that the flow table contains no flow entry for forwarding the first data packet, and it then generates a second packet-in message that includes the header information.
  • in this way each data stream still generates only one packet-in message: when the first packet of the stream matches no forwarding flow entry, a packet-in message is generated and sent to the controller, and the first flow entry is obtained from the controller before the i-th packet arrives.
  • subsequent packets of the same data stream (the i-th data packet) are forwarded to the cache queue according to the processing mode of the first flow entry, so the i-th data packet triggers no table-miss event and no packet-in message, which effectively reduces the load on the control link and the controller.
  • in a possible implementation, the method further includes: the switch receives a third flow-mod message sent by the controller, where the third flow-mod message includes a third flow entry and the third flow entry includes the first matching feature and the second processing mode, so that the switch forwards packets of the same data stream received from the ingress queue according to the third flow entry.
  • in a possible implementation, the method further includes: the switch deletes the first flow entry. After the switch obtains the third flow entry for packets of the same data stream received from the ingress queue, the first flow entry used to cache packets is no longer needed; deleting it frees storage space on the switch.
  • in a possible implementation, the method further includes: the switch deletes the second flow entry. After the switch has forwarded the packets stored in the cache queue according to the second flow entry, the second flow entry used to process cached packets is no longer needed; deleting it frees storage space on the switch.
  • a second aspect of the embodiments of the present application provides an SDN-based data sending method, including: first, a controller generates a second flow entry and a first flow-mod message, where the first flow-mod message includes the second flow entry.
  • the second flow entry includes a second matching feature and a second processing mode
  • the second matching feature includes a second entry number and a first matching field
  • the second entry number is a cache queue identifier j
  • the first matching field includes all or part of the header information
  • the header information is that included in the i-th packet received by the switch from the first port, where i is a positive integer with a value from 2 to N, the header information of the N packets is identical, and the N packets belong to the same data stream
  • the second processing mode is to forward, from the second port of the switch, the data packets stored in the cache queue corresponding to cache queue identifier j; then, the controller sends the first flow-mod message to the switch.
  • the controller generates an open-cache-port message and sends it to the switch, where the open-cache-port message instructs the switch to forward the data packets stored in the cache queue corresponding to cache queue identifier j.
  • the priority of the cache queue corresponding to cache queue identifier j is higher than the priority of the ingress queue of the switch.
  • in the SDN-based data sending method provided by this embodiment, the OpenFlow protocol is first extended so that the flow table supports forwarding to a specific cache queue, and flow entries are then used to control the buffering of data packets. This effectively reduces the number of packet-in messages generated when packets of the same data stream have no matching flow entry, and because no separate caching logic needs to be implemented, it saves switch resources.
  • the cache queue is used as an input of the OpenFlow pipeline.
  • the OpenFlow pipeline can directly process the cached data packets, and the logic of processing the cached data packets is not separately implemented, which can save the resources of the switch.
  • the ingress queue corresponding to a switch port and the cache queue are distinguished by priority to ensure that the OpenFlow pipeline processes the packets of the cache queue first, so that the pipeline processes and forwards packets of the same data stream in order, avoiding reordering within the stream.
  • in a possible implementation, the method further includes: the controller receives the first packet-in message sent by the switch, where the first packet-in message includes the header information and the cache queue identifier j, so that the controller can generate the second flow entry and the switch can process the packets stored in the cache queue.
  • in another possible implementation, the method further includes: the controller receives the second packet-in message sent by the switch, where the second packet-in message includes the header information and the port number of the first port; the controller determines the cache queue identifier j corresponding to the port number of the first port, the first port being the port on which the switch received the first data packet and the i-th data packet, and the first data packet and the N data packets belonging to the same data stream; the controller generates the first flow entry and a second flow-mod message that includes the first flow entry.
  • the first flow entry includes the first matching feature and the first processing mode, where the first matching feature includes a first entry number and the first matching field, the first entry number is the port number of the first port, and the first processing mode is to store packets matching the first matching feature in the cache queue corresponding to cache queue identifier j. The controller sends the second flow-mod message to the switch; the controller also generates a packet-out message instructing the switch to forward the first data packet from the second port, and sends the packet-out message to the switch so that the switch forwards the first data packet.
  • in a possible implementation, the method further includes: the controller calculates a forwarding path according to the header information. The forwarding path includes M switches, among them the switch that receives the i-th packet; the controller generates a corresponding flow entry for each of the M-1 other switches on the forwarding path.
  • in a possible implementation, the method further includes: the controller generates the third flow entry and a third flow-mod message.
  • the third flow-mod message includes the third flow entry, which includes the first matching feature and the second processing mode, where the first matching feature includes the first entry number and the first matching field and the first entry number is the port number of the first port; the controller sends the third flow-mod message to the switch, so that the switch can forward packets of the same data stream received from the ingress queue according to the third flow entry.
  • in a possible implementation, the method further includes: the controller deletes the first flow entry and the second flow entry, thereby freeing storage space on the controller.
  • a third aspect of the present application provides a switch, including: a receiving unit configured to receive an i-th data packet from a first port, where the i-th data packet includes header information, i is a positive integer with a value from 2 to N, the header information of the N packets is identical, and the N packets belong to the same data stream; and a processing unit configured to query the flow table according to the port number of the first port or/and the header information to obtain the first flow entry.
  • the first flow entry includes a first matching feature and a first processing mode, where the first matching feature includes a first entry number and a first matching field, the first entry number is the port number of the first port, and the first matching field includes all or part of the header information; the first processing mode is to store packets matching the first matching feature in the cache queue corresponding to cache queue identifier j.
  • the processing unit is further configured to store the i-th data packet in the cache queue corresponding to cache queue identifier j according to the first flow entry.
  • the receiving unit is further configured to receive the first flow-mod message sent by the controller, where the first flow-mod message includes the second flow entry; the second flow entry includes a second matching feature and a second processing mode, the second matching feature includes a second entry number and the first matching field, the second entry number is the cache queue identifier j, and the second processing mode is to forward, from the second port of the switch, the data packets stored in the cache queue corresponding to cache queue identifier j.
  • the receiving unit is further configured to receive an open-cache-port message sent by the controller, where the open-cache-port message instructs the switch to forward the data packets stored in the cache queue corresponding to cache queue identifier j.
  • the priority of the cache queue corresponding to cache queue identifier j is higher than the priority of the ingress queue of the switch.
  • the processing unit is further configured to forward the data packet stored in the cache queue corresponding to the cache queue identifier j according to the second flow entry.
  • a fourth aspect of the embodiments of the present application provides a controller, including: a processing unit configured to generate a second flow entry, where the second flow entry includes a second matching feature and a second processing mode, the second matching feature includes a second entry number and a first matching field, the second entry number is the cache queue identifier j, the first matching field includes all or part of the header information, and the header information is that included in the i-th packet received by the switch from the first port.
  • i is a positive integer
  • the value of i is 2 to N
  • the header information of N packets is the same
  • N packets belong to the same data stream
  • the second processing mode is to forward, from the second port of the switch, the data packets stored in the cache queue corresponding to cache queue identifier j.
  • the processing unit is further configured to generate a first flow-mod message, where the first flow-mod message includes the second flow entry; the sending unit is configured to send the first flow-mod message to the switch; the processing unit is further configured to generate an open-cache-port message, where the open-cache-port message instructs the switch to forward the data packets stored in the cache queue corresponding to cache queue identifier j.
  • the priority of the cache queue corresponding to cache queue identifier j is higher than the priority of the ingress queue of the switch; the sending unit is further configured to send the open-cache-port message to the switch.
  • the functional modules of the foregoing third and fourth aspects may be implemented by hardware or by software.
  • the hardware or software includes one or more modules corresponding to the functions described above.
  • for example, a communication interface performs the functions of the receiving unit and the sending unit, a processor performs the functions of the processing unit, and a memory stores program instructions for the processor to execute the SDN-based data sending method of the embodiments of the present application.
  • the processor, communication interface, and memory are connected by a bus and communicate with each other.
  • for the functions of the switch, reference may be made to the behavior of the switch in the SDN-based data sending method provided by the first aspect; for the functions of the controller, reference may be made to the SDN-based data sending method provided by the second aspect.
  • a fifth aspect of the embodiments of the present application provides a switch, where the switch may include: at least one processor, a memory, a communication interface, and a communication bus; the at least one processor is connected to the memory and the communication interface through the communication bus, and the memory is used for storing computer-executable instructions. When the processor runs, it executes the computer-executable instructions stored in the memory, causing the switch to perform the SDN-based data sending method of the first aspect or any possible implementation of the first aspect.
  • a sixth aspect of the embodiments of the present application provides a controller, where the controller may include: at least one processor, a memory, a communication interface, and a communication bus; the at least one processor is connected to the memory and the communication interface through the communication bus, and the memory is used for storing computer-executable instructions. When the processor runs, it executes the computer-executable instructions stored in the memory, causing the controller to perform the SDN-based data sending method of the second aspect or any possible implementation of the second aspect.
  • a seventh aspect of the embodiments of the present application provides a software-defined network, comprising: the switch of the third aspect or the fifth aspect, and the controller of the fourth aspect or the sixth aspect.
  • an eighth aspect of the embodiments of the present application provides a computer-readable storage medium storing computer software instructions for the switch; when the computer software instructions are executed by a processor, the switch can perform the method of any of the foregoing aspects.
  • a ninth aspect of the embodiments of the present application provides a computer-readable storage medium storing computer software instructions for the controller; when the computer software instructions are executed by a processor, the controller can perform the method of any of the foregoing aspects.
  • a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any of the above aspects.
  • the names of the switch and the controller do not limit the devices themselves; in actual implementations these devices may appear under other names. As long as the functions of the respective devices are similar to those in the embodiments of the present application, they fall within the scope of the claims and their equivalents.
  • FIG. 1 is a schematic diagram of a software defined network according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a data transmission process based on a Pi cache management module and a flow action preprocessing module provided by the prior art
  • FIG. 4 is a schematic structural diagram of a Pi cache table provided by the prior art
  • FIG. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application.
  • FIG. 6 is a flowchart of a data sending method based on a software-defined network according to an embodiment of the present application
  • FIG. 7 is a flowchart of another method for sending data based on a software-defined network according to an embodiment of the present application.
  • FIG. 8 is a flowchart of still another method for sending data based on a software-defined network according to an embodiment of the present application
  • FIG. 9 is a flowchart of still another method for sending data based on a software-defined network according to an embodiment of the present application.
  • FIG. 10 is a flowchart of still another method for sending data based on a software-defined network according to an embodiment of the present application
  • FIG. 11 is a schematic structural diagram of a switch according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic structural diagram of another switch according to an embodiment of the present disclosure.
  • FIG. 13 is a schematic structural diagram of a controller according to an embodiment of the present disclosure.
  • FIG. 14 is a schematic structural diagram of another controller according to an embodiment of the present application.
  • as shown in FIG. 1, a software-defined network provided in an embodiment of the present application includes a controller 101, a switch 102, a switch 103, a switch 104, a switch 105, a switch 106, and a terminal device 107.
  • the controller knows all the network information, calculates the forwarding path of the data packet, manages the flow table, and directs the work of all the switches; the switch does not know any network information, and only works according to the command of the controller.
  • the controller and the switch use the OpenFlow protocol to communicate through the control link.
  • the switch can also be called an OpenFlow switch.
  • the switch forwards received data packets over the data link according to the flow table configured by the controller. The software-defined network can therefore be considered as separating the data plane from the control plane, with a centralized control plane, which greatly improves the flexibility of the network.
  • the terminal device 107 can be a mobile phone, a tablet computer, a notebook computer, an Ultra-mobile Personal Computer (UMPC), a netbook, a Personal Digital Assistant (PDA), or the like.
  • each network device included in the network architecture of the embodiment of the present application may be implemented as a computer device.
  • the flow table is a forwarding table for the switch to forward the received data packets.
  • the flow table includes at least one flow entry, and each flow entry includes a matching feature, a processing manner, and statistical information.
  • Matching features include an entry number and a matching field.
  • the entry number is the port number of the port on which the switch receives the data packet: whichever port the packet arrives on, that port's number is the entry number.
  • the matching field includes all or part of the header information of the data packet received by the switch; for example, the matching field may include the source Internet Protocol (IP) address (IP src) and the destination IP address (IP dst).
  • more matching fields may be included in actual applications, for example the ingress port, the source Media Access Control (MAC) address (Ether src), the destination MAC address (Ether dst), the Ethernet type (Ether Type), the Virtual Local Area Network (VLAN) tag (VLAN ID), the VLAN priority, the IP protocol field (IP proto), the IP type of service (IP ToS bits), the Transmission Control Protocol (TCP)/User Datagram Protocol (UDP) source port number (TCP/UDP src port), and the TCP/UDP destination port number (TCP/UDP dst port).
  • the processing mode specifies how a data packet that successfully matches the flow entry is handled: forwarding the packet from a port of the switch, discarding it, or modifying the values of some fields in the header information.
  • the statistics record information such as the number of packets processed and the total number of bytes processed according to the flow entry.
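  • The flow-entry structure described above (matching feature, processing mode, statistics) can be modeled as follows. The field names follow the text but are otherwise assumptions, not the patent's data layout.

```python
from dataclasses import dataclass

# Illustrative model of a flow entry: a matching feature (entry number plus
# matching fields), a processing mode, and statistics.

@dataclass
class FlowEntry:
    in_port: int           # entry number: port on which the packet arrives
    match_fields: dict     # matching fields, e.g. {"ip_src": ..., "ip_dst": ...}
    action: tuple          # processing mode, e.g. ("output", 2) or ("drop",)
    priority: int = 0
    packet_count: int = 0  # statistics
    byte_count: int = 0

    def matches(self, in_port, header):
        if in_port != self.in_port:
            return False
        # Every matching field must equal the corresponding header field.
        return all(header.get(k) == v for k, v in self.match_fields.items())

entry = FlowEntry(1, {"ip_src": "10.0.0.1", "ip_dst": "10.0.0.2"},
                  ("output", 2), priority=10)
print(entry.matches(1, {"ip_src": "10.0.0.1", "ip_dst": "10.0.0.2"}))  # True
print(entry.matches(3, {"ip_src": "10.0.0.1", "ip_dst": "10.0.0.2"}))  # False
```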
  • the flow of the SDN-based data transmission method in the prior art is briefly described below with reference to FIG. 2, as shown in FIG. 2, the method includes:
  • the switch receives the data packet from the first port, and parses the data packet to obtain the packet header information.
  • the switch queries the flow table according to the port number of the first port or/and the header information of the data packet, comparing against all flow entries in the flow table to determine whether any flow entry matches the port number or/and the header information.
  • if a matching flow entry is found, step 203 is performed; if no flow entry matches the port number or/and the header information of the first port, the switch must send a packet-in message to the controller and obtain a new flow entry from the controller to forward the packet. As shown in FIG. 2, the method then further includes steps 204 to 213.
  • the switch processes the data packet according to a processing manner of the flow entry.
  • for example, suppose the first flow entry matches only the entry number while the second flow entry matches both the entry number and the IP addresses. Since the entry numbers of the two flow entries are the same, both entries match simultaneously; the flow entry with the highest priority is then selected as the matching result.
  • the priority of a flow entry can be set according to the length of the forwarding path, for example giving a lower priority to entries corresponding to longer forwarding paths.
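  • The priority rule above can be sketched as follows; the table layout and field names are illustrative assumptions. When several flow entries match the same packet, the one with the highest priority wins.

```python
# Sketch of priority-based selection among multiple matching flow entries.

def lookup(flow_entries, in_port, header):
    """Return the highest-priority matching entry, or None on a table miss."""
    candidates = [e for e in flow_entries
                  if e["in_port"] == in_port
                  and all(header.get(k) == v for k, v in e["match"].items())]
    return max(candidates, key=lambda e: e["priority"], default=None)

table = [
    # Matches only the entry number (empty matching fields).
    {"in_port": 1, "match": {}, "priority": 1, "action": ("output", 3)},
    # Matches the entry number and the destination IP address.
    {"in_port": 1, "match": {"ip_dst": "10.0.0.2"}, "priority": 10,
     "action": ("output", 2)},
]
pkt = {"ip_src": "10.0.0.1", "ip_dst": "10.0.0.2"}
print(lookup(table, 1, pkt)["action"])  # ('output', 2): both match, higher priority wins
```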
  • the switch generates a packet-in message.
  • the packet-in message can carry different content according to whether the switch can cache the data packet locally.
  • if the switch cannot cache packets locally, the switch encapsulates the complete packet in a packet-in message and sends it to the controller for caching. After the controller parses the data packet to obtain the packet header information, it calculates a new flow entry according to the packet header information, and then sends the complete data packet back to the switch for processing by means of a packet-out message.
  • if the switch can cache packets locally, the switch stores the packet locally and generates a unique buffer number (Buffer-id), and then encapsulates only the packet header information and the buffer number in the packet-in message; the corresponding packet-out message likewise carries only the buffer number.
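The two packet-in variants above can be sketched as follows; `NO_BUFFER`, the message field names, and the cache layout are illustrative assumptions, not the OpenFlow encoding:

```python
import itertools

NO_BUFFER = 0xFFFFFFFF           # sentinel meaning "full packet carried in the message"
_buffer_ids = itertools.count(1) # generator of unique buffer numbers
local_cache = {}                 # buffer_id -> full packet, when the switch can cache

def make_packet_in(pkt: bytes, header: dict, can_cache: bool) -> dict:
    if can_cache:
        # switch caches the packet locally and sends only header + buffer number
        buf_id = next(_buffer_ids)
        local_cache[buf_id] = pkt
        return {"buffer_id": buf_id, "header": header, "data": b""}
    # switch cannot cache: the complete packet travels to the controller
    return {"buffer_id": NO_BUFFER, "header": header, "data": pkt}

msg_cached = make_packet_in(b"\x00" * 64, {"ipv4_dst": "10.0.0.2"}, can_cache=True)
msg_full = make_packet_in(b"\x00" * 64, {"ipv4_dst": "10.0.0.2"}, can_cache=False)
```

In the cached case the later packet-out only needs to quote the buffer number; in the uncached case the controller must return the full packet.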
  • the switch sends a packet-in message to the controller.
  • the controller receives a packet-in message sent by the switch.
  • the controller calculates a new flow entry.
  • the controller sends a flow-mod message to the switch, where the flow-mod message includes a new flow entry.
  • the switch receives a flow-mod message sent by the controller.
  • the controller sends a packet-out message to the switch, where the packet-out message is used to instruct the switch to forward the data packet from the second port.
  • the second port is any physical port of the switch other than the first port.
  • the switch receives a packet-out message sent by the controller.
  • the switch forwards the data packet through the second port.
  • First, after the switch sends the first packet-in message to the controller and before the switch receives the new flow entry, that is, between S204 and S210, the switch may continue to receive data packets of the same data stream. Because there is still no matching flow entry, the switch will keep generating packet-in messages and sending them to the controller, occupying the bandwidth of the control link and the processing resources of the controller. Second, in order to reduce the number of packet-in messages generated by the switch, the switch first writes the new flow entry according to the flow-mod message, and only then processes the data packet corresponding to the packet-in message.
  • As a result, data packets of the same data stream that arrive at the switch after the new flow entry is written can be processed and forwarded according to the new flow entry before the cached data packet is processed and forwarded, causing the switch to forward data packets in an order different from the order in which they were received, so the data packets of the same data stream are out of order.
  • the prior art proposes a solution that adds a Pi (packet-in) cache management module and a flow action pre-processing module in the switch to reduce the number of packet-in messages generated by the switch and to avoid packet disorder.
  • FIG. 3 is a schematic diagram of a data transmission process based on a Pi cache management module and a flow action pre-processing module provided by the prior art, wherein the Pi cache management module is configured to cache a data packet and control generation of a packet-in message.
  • the flow action pre-processing module is configured to process the flow-mod message and the packet-out message sent by the controller. For the same data stream, it ensures that the data packets corresponding to the packet-in message are processed first, and that only then is the new flow entry written in the OpenFlow pipeline.
  • the Pi cache management module manages the cache using the Pi Buffer Table (PiBT).
  • FIG. 4 is a schematic structural diagram of a Pi cache table provided by the prior art.
  • the Pi cache table includes five parts: a matching field, a buffer first address, a current buffer address, a buffered message count, and a timeout period.
  • Each entry represents a cache rule for packets of the same data stream.
  • the matching field is the value of each field in the packet header information included in the data packet, for example, the source MAC address, the destination MAC address, the source IP address, the destination IP address, and the like;
  • the buffer first address represents the first address of the buffer queue that caches the data packets of the same data stream; the current buffer address represents the address of the last cached packet of the same data stream;
  • the cache message count indicates the number of packets that have been cached;
  • the timeout period indicates the time when the cache rule expires.
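One way to model a Pi cache table entry with the five fields above is the following sketch; a Python `deque` stands in for the buffer-address bookkeeping, so this illustrates the bookkeeping only, not the prior-art implementation:

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class PiBufferEntry:
    """One Pi cache table entry: a cache rule for packets of one data stream."""
    match: dict                                   # header fields identifying the stream
    queue: deque = field(default_factory=deque)   # stands in for first/current buffer address
    cached_count: int = 0                         # buffered message count
    timeout: float = 5.0                          # seconds until the cache rule expires

    def cache(self, pkt: bytes) -> None:
        self.queue.append(pkt)                    # append at the "current buffer address"
        self.cached_count += 1

entry = PiBufferEntry(match={"ipv4_src": "10.0.0.1", "ipv4_dst": "10.0.0.2"})
entry.cache(b"p1")
entry.cache(b"p2")
```

Each new packet of the same stream is appended at the tail, so the buffered message count and the queue contents stay consistent.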
  • the data packet enters the OpenFlow pipeline.
  • the packet parsing module in the OpenFlow pipeline parses the data packet to obtain the packet header information included in the data packet.
  • the flow table search module in the OpenFlow pipeline queries the flow table for a flow entry for forwarding the data packet according to the packet header information of the data packet. If no matching flow entry is found, the data packet is sent to the Pi cache management module.
  • the Pi cache management module searches the matching fields in the Pi cache table according to the header information included in the data packet. If a matching item is found, the data packet is inserted into the corresponding cache queue; otherwise, a new cache queue is created to cache the data packet, and a packet-in message is generated and sent to the controller.
  • the packet-in message includes the packet header information of the packet and the buffer first address of the newly created queue. In this way, it is guaranteed that only one packet-in message is generated for each data stream.
  • the controller calculates a new flow entry for forwarding the data packet according to the packet header information in the packet-in message, and sends the new flow entry to the switch through a flow-mod message that carries the new flow entry to be written and the buffer first address; the controller also needs to send a packet-out message to the switch instructing the switch to process the data packets according to the new flow entry, and the packet-out message carries the buffer first address (Buffer_id).
  • after receiving the flow-mod message and the packet-out message, the flow action pre-processing module first finds the cache queue according to the buffer first address, and processes the data packets in the cache queue one by one according to the processing action in the new flow entry. Only after the data packets in the cache queue have been processed is the new flow entry written in the OpenFlow pipeline. This ensures that the data packets in each data stream are forwarded in order, preventing the data packets of the same data stream from becoming out of order.
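The ordering guarantee of the flow action pre-processing module, where the cache queue is drained first and the new flow entry is installed only afterwards, can be sketched as follows (data structures are illustrative):

```python
def apply_flow_mod(cache_queues: dict, flow_table: list,
                   buffer_addr: int, new_entry: dict) -> list:
    """Process every cached packet with the new entry's action, then install it."""
    forwarded = []
    queue = cache_queues.pop(buffer_addr, [])
    for pkt in queue:                      # drain the cache queue in FIFO order first...
        forwarded.append((new_entry["actions"], pkt))
    flow_table.append(new_entry)           # ...and only then write the new flow entry,
    return forwarded                       # so later packets cannot overtake cached ones

queues = {7: [b"p1", b"p2"]}               # buffer first address 7 holds two cached packets
table = []
out = apply_flow_mod(queues, table, 7,
                     {"match": {}, "priority": 1, "actions": [("output", 2)]})
```

Because the entry is appended only after the drain loop, a packet that arrives while the drain runs still misses the flow table and cannot be forwarded ahead of the cached ones.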
  • the above data transmission process based on the Pi cache management module and the flow action pre-processing module also has disadvantages. First, it adds two new processing modules, Pi cache management and flow action pre-processing, to the switch. Second, a packet needs to be searched and matched twice before being cached: the first time is the lookup matching of the OpenFlow pipeline, and the second is the lookup matching of the Pi cache table; the two lookups increase the computation and storage resource overhead of the switch for processing one packet. Third, while the flow action pre-processing module is processing the data packets in the buffer area, new packets keep arriving, and before the new flow entry is written no flow entry matches these data packets, so they must also be buffered.
  • the embodiment of the present application provides an SDN-based data transmission method, the basic principle of which is as follows: first, the switch receives the i-th data packet from the first port, where the i-th data packet includes packet header information, i is a positive integer with a value from 2 to N, the header information of the N packets is the same, and the N packets belong to the same data stream; then, the switch queries the flow table according to the port number or/and header information of the first port to obtain a first flow entry, where the first flow entry includes a first matching feature and a first processing mode.
  • the first matching feature includes a first entry number and a first matching field, where the first entry number is the port number of the first port, and the first matching field includes all or part of the header information.
  • the first processing mode is that data packets matching the first matching feature are stored in the cache queue corresponding to cache queue identifier j; the switch receives the first flow-mod message and the open-cache-port message sent by the controller, where the first flow-mod message includes a second flow entry, the second flow entry includes a second matching feature and a second processing mode, the second matching feature includes a second entry number and the first matching field, and the second entry number is cache queue identifier j.
  • the second processing mode is to forward the data packet stored in the cache queue corresponding to the cache queue identifier j from the second port of the switch, and the open-cache-port message is used to instruct the switch to forward the data stored in the cache queue corresponding to the cache queue identifier j.
  • the priority of the cache queue corresponding to cache queue identifier j is higher than the priority of the ingress queue of the switch.
  • the switch forwards the data packet stored in the cache queue corresponding to the cache queue identifier j according to the second flow entry.
  • in the SDN-based data sending method provided by the embodiment of the present application, first, the OpenFlow protocol is extended so that the flow table supports forwarding to a certain cache queue, and the flow table is then used to control the buffering of data packets. This effectively reduces the number of packet-in messages generated when the data packets of the same data flow fail to match a flow entry, and the buffering logic does not need to be implemented separately, which saves the resources of the switch.
  • the cache queue is used as an input of the OpenFlow pipeline.
  • the OpenFlow pipeline is used to process the cached data packets, and the logic of processing the cached data packets is not separately implemented, which can save the resources of the switch.
  • the ingress queue corresponding to the switch port and the cache queue are distinguished by priority to ensure that the OpenFlow pipeline preferentially processes the data packets of the cache queue, so that the OpenFlow pipeline processes and forwards the data packets of the same data stream in order, thereby avoiding out-of-order packets of the same data stream.
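The priority scheme above, where the OpenFlow pipeline always drains the higher-priority cache queue before touching the ingress queue, can be sketched as follows (priorities 5 and 10 are the example values used later in the text):

```python
from collections import deque

def next_packet(queues: list):
    """queues: list of (priority, deque). Always serve the highest-priority
    non-empty queue, so cached packets leave before newly arrived ones."""
    for _, q in sorted(queues, key=lambda pq: pq[0], reverse=True):
        if q:
            return q.popleft()
    return None

ingress = deque([b"new1"])           # ingress queue, priority 5
cache_j = deque([b"old1", b"old2"])  # cache queue j, priority 10
queues = [(5, ingress), (10, cache_j)]
order = [next_packet(queues) for _ in range(3)]
```

The three reads return the two cached packets first and only then the newly arrived one, which is exactly the ordering guarantee described above.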
  • FIG. 5 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure.
  • the computer device may include at least one processor 51, a memory 52, a communication interface 53, and a communication bus 54.
  • the processor 51 is a control center of the computer device, and may be a processor or a collective name of a plurality of processing elements.
  • the processor 51 may include a central processing unit (CPU) or a plurality of CPUs, such as CPU0 and CPU1 shown in FIG. 5.
  • the processor 51 may also be an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, for example, one or more digital signal processors (DSP), or one or more Field Programmable Gate Arrays (FPGA).
  • the processor 51 can perform various functions of the computer device by running or executing a software program stored in the memory 52 and calling data stored in the memory 52.
  • a computer device can include multiple processors, such as processor 51 and processor 55 shown in FIG. 5. Each of these processors can be a single-core processor (single-CPU) or a multi-core processor (multi-CPU).
  • processors herein may refer to one or more devices, circuits, and/or processing cores for processing data, such as computer program instructions.
  • the computer device may be a switch
  • the processor 51 is configured to query the flow table according to the port number or/and the header information of the first port to obtain the first flow entry, store the i-th packet in the cache queue corresponding to cache queue identifier j according to the first flow entry, and forward the data packets stored in the cache queue corresponding to cache queue identifier j according to the second flow entry.
  • the computer device may be a controller, and the processor 51 is mainly configured to generate a second flow entry, a first flow-mod message, and an open-cache-port message.
  • the memory 52 can be a Read-Only Memory (ROM) or other type of static storage device that can store static information and instructions, or a Random Access Memory (RAM) or other type of dynamic storage device that can store information and instructions; it can also be an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed, but is not limited thereto.
  • Memory 52 may be present independently and coupled to processor 51 via communication bus 54. The memory 52 can also be integrated with the processor 51.
  • the memory 52 is used to store a software program that executes the solution of the present application, and is controlled by the processor 51 for execution.
  • the memory 52 is used to store data packets.
  • the communication interface 53 is configured to communicate with other devices or communication networks, such as an Ethernet, a Radio Access Network (RAN), a Wireless Local Area Networks (WLAN), and the like.
  • the communication interface 53 may include a receiving unit that implements a receiving function, and a transmitting unit that implements a transmitting function.
  • the communication bus 54 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in Figure 5, but it does not mean that there is only one bus or one type of bus.
  • the device structure shown in Figure 5 does not constitute a limitation of a computer device, and may include more or fewer components than those illustrated, or some components may be combined, or different component arrangements.
  • FIG. 6 is a flowchart of a method for sending data based on a software-defined network according to an embodiment of the present disclosure. As shown in FIG. 6, the method may include:
  • the switch receives the ith data packet from the first port.
  • the switch receives N data packets from the first port, where each data packet includes header information and data; the N data packets include different data but the same header information. For the switch, data packets with the same header information belong to the same data stream, so the N data packets belong to the same data stream.
  • the first port can be any physical port of the switch; that is, the physical port from which the switch receives a data packet is not limited, and the data packet can be received from any physical port, whereas the physical port from which the switch forwards a data packet must be indicated to the switch by the controller. Here, i is a positive integer with a value from 2 to N. N indicates the number of data packets received by the switch before the flow entry for forwarding the data packets is obtained; N is less than or equal to the number of data packets included in the data flow, and different data streams include different numbers of data packets.
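The notion that packets with identical header information belong to the same data stream can be sketched as a flow-key function; the chosen header fields are illustrative:

```python
def flow_key(header: dict) -> tuple:
    """Packets with the same values in these header fields map to the same
    key, i.e. the same data stream (field names are illustrative)."""
    fields = ("eth_src", "eth_dst", "ipv4_src", "ipv4_dst", "tp_src", "tp_dst")
    return tuple(header.get(f) for f in fields)

# two packets of the same stream, one of a different stream
h1 = {"ipv4_src": "10.0.0.1", "ipv4_dst": "10.0.0.2", "tp_dst": 80}
h2 = {"ipv4_src": "10.0.0.1", "ipv4_dst": "10.0.0.2", "tp_dst": 80}
h3 = {"ipv4_src": "10.0.0.1", "ipv4_dst": "10.0.0.3", "tp_dst": 80}
```

`h1` and `h2` share a key and are therefore treated as one stream; `h3` differs in the destination IP and falls into another stream.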
  • the switch queries the flow table according to the port number or/and the packet header information of the first port to obtain the first flow entry.
  • after the switch receives the i-th packet from the first port, the i-th packet enters the ingress queue for processing.
  • the switch knows the port number of the first port, and the switch can query the flow table according to the packet header information, or the switch queries the flow table according to the port number of the first port, or the switch queries the flow table according to the port number and the packet header information of the first port to obtain the first flow. Entry.
  • the first flow entry is used to forward the ith data packet to the cache queue for storage.
  • otherwise, if the switch did not find a flow entry for the i-th data packet, the switch would generate a packet-in message and send it to the controller to request a flow entry for forwarding the i-th packet, occupying the bandwidth of the control link and the processing resources of the controller.
  • the first flow entry includes a first matching feature and a first processing mode.
  • the first matching feature includes a first entry number and a first matching field, where the first entry number is the port number of the first port, and the first matching field includes all or part of the header information; for the specific content of the first matching field, reference may be made to the foregoing technical explanation, which is not repeated here in the embodiments of the present application.
  • the first processing mode is that the data packet matched with the first matching feature is stored in the cache queue corresponding to the cache queue identifier j.
  • the first flow entry is used by the switch to store the data packet with the same header information received from the first port to the cache queue corresponding to the cache queue identifier j.
  • the switch stores the ith data packet in the cache queue corresponding to the cache queue identifier j according to the first flow entry.
  • the switch queries the flow table according to the port number or/and the packet header information of the first port, and obtains the first flow entry, and stores the i-th data packet to the cache queue corresponding to the cache queue identifier j according to the first processing manner in the first flow entry. .
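Steps S601 to S603 above can be sketched as a lookup whose matched action redirects the packet into cache queue j instead of forwarding it; the action name `enqueue_cache` and the queue identifier 7 are illustrative assumptions:

```python
from collections import deque

cache_queues = {7: deque()}   # cache queue identifier j = 7 (illustrative)

# first flow entry: match on the first port's number, action = enqueue to queue j
first_entry = {"match": {"in_port": 1}, "priority": 20,
               "actions": [("enqueue_cache", 7)]}

def handle_packet(entry: dict, pkt: dict) -> str:
    for action in entry["actions"]:
        if action[0] == "enqueue_cache":
            cache_queues[action[1]].append(pkt)   # store instead of forwarding
            return "cached"
        if action[0] == "output":
            return "forwarded"
    return "dropped"

result = handle_packet(first_entry, {"in_port": 1, "payload": b"i-th packet"})
```

Every later packet of the same stream hits the same entry and lands in the same cache queue, which is why no further packet-in messages are generated.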
  • the controller generates a second flow entry.
  • the second flow entry is used by the switch to forward the data packets stored in the cache queue corresponding to cache queue identifier j.
  • the second flow entry includes a second matching feature and a second processing mode.
  • the second matching feature includes a second entry number and the first matching field, where the second entry number is cache queue identifier j, and the first matching field includes all or part of the header information; the header information is the header information included in the data packets received by the switch from the first port.
  • the second processing mode is to forward, from the second port of the switch, the data packets stored in the cache queue corresponding to cache queue identifier j.
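The second flow entry reuses the ordinary flow-entry format but keys its match on the cache queue identifier instead of a physical port number. A minimal sketch, with the identifier values as illustrative assumptions:

```python
CACHE_QUEUE_J = 7   # illustrative cache queue identifier j
SECOND_PORT = 2     # illustrative second port chosen by the controller

# second matching feature: entry number = cache queue id j, plus the header fields
second_entry = {
    "match": {"in_port": CACHE_QUEUE_J, "ipv4_dst": "10.0.0.2"},
    "priority": 20,
    "actions": [("output", SECOND_PORT)],   # second processing mode
}

def matches(entry: dict, pkt: dict) -> bool:
    return all(pkt.get(k) == v for k, v in entry["match"].items())

# a packet re-injected from cache queue j carries j as its logical ingress,
# while a fresh packet still carries the physical first port
cached_pkt = {"in_port": CACHE_QUEUE_J, "ipv4_dst": "10.0.0.2"}
fresh_pkt = {"in_port": 1, "ipv4_dst": "10.0.0.2"}
```

Only the re-injected cached packet matches the second entry, so the entry drains exactly the cache queue it was created for.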
  • the controller generates a first flow-mod message, where the first flow-mod message includes a second flow table entry.
  • after the controller generates the second flow entry, the first flow-mod message is generated.
  • the controller sends a first flow-mod message to the switch.
  • the controller sends the second flow entry to the switch by using the first flow-mod message.
  • the switch receives the first flow-mod message sent by the controller.
  • the controller generates an open-cache-port message.
  • the open-cache-port message is used to instruct the switch to forward the data packets stored in the cache queue corresponding to cache queue identifier j; the cache queue corresponding to cache queue identifier j has a higher priority than the switch's ingress queue. For example, suppose the switch ingress queue has a priority of 5 and the cache queue has a priority of 10.
  • the switch preferentially reads the data packets of the high priority queue for processing, so as to ensure that the buffered data packets are forwarded preferentially, thereby ensuring the sequential forwarding of the data packets of the same data stream.
  • at the same time, the switch does not read the data packets of the switch's ingress queue for processing, so no new data packets are forwarded into the cache queue. This avoids the problem in the prior art where, when the new packet arrival rate is greater than the rate of processing packets in the buffer area, packets keep accumulating in the buffer area, causing the switch to first save each data packet to the buffer area and then process it, increasing the packet processing delay.
  • the sequence of the steps of the software-defined-network-based data transmission method may be appropriately adjusted. For example, the order between S606, in which the controller sends the first flow-mod message to the switch, and S608, in which the controller generates the open-cache-port message, may be interchanged; that is, S608 may be performed before S606, or S606 and S608 may be performed simultaneously.
  • the solution described in the embodiment of the present application is an exemplary implementation of the software-defined-network-based data transmission method. Any variation readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention and is therefore not described again.
  • the controller sends an open-cache-port message to the switch.
  • S610 The switch receives an open-cache-port message sent by the controller.
  • the switch receives the open-cache-port message sent by the controller, obtains the second flow entry, and forwards the data packets stored in the cache queue corresponding to cache queue identifier j; these are the data packets that the switch received and stored in the cache queue corresponding to cache queue identifier j according to the first flow entry.
  • it should be noted that during the period in which the switch executes S601 to S603, the controller also generates the second flow entry and then configures the second flow entry to the switch; that is, the controller can simultaneously execute S604, S605, S606, S608, and S609.
  • the switch forwards the data packet stored in the cache queue corresponding to the cache queue identifier j according to the second flow entry.
  • after receiving the open-cache-port message sent by the controller, the switch processes the data packets stored in the cache queue corresponding to cache queue identifier j according to the indication of the open-cache-port message.
  • specifically, the switch queries the flow table according to cache queue identifier j or/and the packet header information of the data packet. Because the switch has obtained the second flow entry from the first flow-mod message, a data packet in the cache queue corresponding to cache queue identifier j can match the second entry number (cache queue identifier j) of the second flow entry, or/and the header information of the data packet can match the first matching field (all or part of the header information) of the second flow entry.
  • after the data packet is forwarded to other switches in the software-defined network, those switches still do not find a flow entry for forwarding the data packets of the same data flow, and would send packet-in messages to the controller.
  • Therefore, the controller also needs to send the flow entries to be used by the other switches on the forwarding path, as shown in FIG. The example further includes the following steps:
  • the controller calculates a forwarding path according to the packet header information.
  • the forwarding path includes M switches, and the M switches include switches that receive the i-th packet.
  • the controller generates a flow entry to be used by each switch of the M-1 other switches except the switch that receives the i-th packet on the forwarding path.
  • the controller sends, to each switch of the M-1 other switches except the switch that receives the i-th packet, the flow entry to be used.
  • by extending the OpenFlow protocol, a flow entry can forward data packets to a cache queue; that is, when the switch does not find a flow entry for forwarding a data packet, the first flow entry is used to control the buffering of the data packet.
  • the first flow entry may be generated by the switch itself or by the controller.
  • the embodiment of the present application further includes the following steps:
  • the switch receives the first data packet from the first port.
  • the first data packet includes packet header information, and the first data packet includes header information that is the same as the header information included in the i-th data packet, and the first data packet and the i-th data packet belong to the same data stream.
  • the switch determines that the flow entry for forwarding the first data packet is not queried in the flow table according to the port number or/and the packet header information of the first port.
  • after receiving the first data packet from the first port, the switch queries the flow table for a flow entry for forwarding the first data packet according to the port number or/and the packet header information of the first port. If the switch determines that no flow entry for forwarding the first data packet is found in the flow table according to the port number or/and the header information of the first port, S617 is performed; if the switch finds a flow entry for forwarding the first data packet in the flow table according to the port number or/and the header information of the first port, the switch forwards the first data packet according to the queried flow entry.
  • the switch determines a cache queue corresponding to the cache queue identifier j.
  • after the switch determines that no flow entry for forwarding the first data packet is found in the flow table according to the port number or/and the header information of the first port, the switch selects a cache queue from the empty cache queue list; if there is no empty cache queue, the switch creates a cache queue locally. The embodiment of the present application assumes that the determined empty cache queue is the cache queue corresponding to cache queue identifier j.
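Choosing a cache queue in S617, that is, taking an empty queue from the list and otherwise creating one locally, can be sketched as follows (the identifier scheme is an illustrative assumption):

```python
from collections import deque

def pick_cache_queue(cache_queues: dict, free_ids: list) -> int:
    """Return the identifier of an empty cache queue, creating a new queue
    locally when the free list is exhausted."""
    if free_ids:
        return free_ids.pop(0)            # reuse an existing empty queue
    new_id = max(cache_queues, default=0) + 1
    cache_queues[new_id] = deque()        # no empty queue: create one locally
    return new_id

queues = {1: deque(), 2: deque([b"busy"])}
free = [1]                                # only queue 1 is currently empty
j1 = pick_cache_queue(queues, free)       # takes queue 1 from the free list
j2 = pick_cache_queue(queues, free)       # free list empty -> creates queue 3
```

The same allocation logic applies on the controller side in the variant where the controller, rather than the switch, selects the empty cache queue.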
  • the switch stores the first data packet to a cache queue corresponding to the cache queue identifier j.
  • after the switch determines the cache queue corresponding to cache queue identifier j, the first data packet is stored in the cache queue corresponding to cache queue identifier j.
  • the switch generates a first flow entry.
  • after the switch generates the first flow entry, the switch stores subsequently received data packets of the same data stream in the cache queue corresponding to cache queue identifier j until the second flow entry is obtained, so as to avoid generating packet-in messages that occupy the bandwidth of the control link and the processing resources of the controller.
  • the sequence of the steps of the software-defined-network-based data transmission method may be appropriately adjusted. For example, the order between S618 and S619 may be interchanged; that is, S619 may be performed before S618, or S618 and S619 may be performed simultaneously.
  • the solution described in the embodiment of the present application is an exemplary implementation of the software-defined-network-based data transmission method, and any variation readily conceivable by a person skilled in the art within the scope of the present invention is intended to be included in the protection scope of the present invention and is therefore not described again.
  • the switch generates a first packet-in message, where the first packet-in message includes the packet header information and cache queue identifier j.
  • after the switch determines the cache queue corresponding to cache queue identifier j, the first packet-in message is generated.
  • the switch sends a first packet-in message to the controller.
  • after the switch generates the first packet-in message, it sends the first packet-in message to the controller.
  • the controller receives the first packet-in message sent by the switch.
  • the first packet-in message includes the header information and cache queue identifier j, so that the controller can generate the second flow entry.
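The first packet-in message of S620 carries only the header information and cache queue identifier j, with no packet payload. A sketch of building the message and of the controller deriving the second flow entry from it; all field names are illustrative assumptions:

```python
def make_first_packet_in(header: dict, cache_queue_id: int) -> dict:
    # no payload is carried: the data waits in cache queue j on the switch
    return {"type": "packet-in", "header": header,
            "cache_queue_id": cache_queue_id}

def controller_build_second_entry(msg: dict, out_port: int) -> dict:
    # the controller keys the new entry's match on the cache queue identifier
    match = dict(msg["header"])
    match["in_port"] = msg["cache_queue_id"]   # second entry number = j
    return {"match": match, "priority": 20,
            "actions": [("output", out_port)]}  # forward from the second port

msg = make_first_packet_in({"ipv4_dst": "10.0.0.2"}, cache_queue_id=7)
entry = controller_build_second_entry(msg, out_port=2)
```

Because the cache queue identifier travels in the packet-in message, the controller can produce a second flow entry that matches exactly the packets buffered in that queue.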
  • the embodiment of the present application further includes the following steps:
  • the switch receives the first data packet from the first port.
  • the switch determines that the flow entry that forwards the first data packet is not queried in the flow table according to the port number or/and the packet header information of the first port.
  • the switch determines that the flow entry that forwards the first data packet is not queried in the flow table according to the port number or/and the header information of the first port, and executes S625.
  • the switch generates a second packet-in message, where the second packet-in message includes the packet header information and the port number of the first port.
  • the switch determines that the flow entry of the first data packet is not queried in the flow table according to the port number or/and the header information of the first port, and generates a second packet-in message.
  • the switch sends a second packet-in message to the controller.
  • after the switch generates the second packet-in message, it sends the second packet-in message to the controller.
  • the controller receives a second packet-in message sent by the switch.
  • the switch sends a second packet-in message to the controller, and the controller receives the second packet-in message sent by the switch.
  • the controller determines a cache queue corresponding to the cache queue identifier j and a port number of the first port.
  • the controller parses the second packet-in message to obtain the header information, and can determine the port number of the first port.
  • the port number of the first port is the port number corresponding to the port through which the switch receives the first data packet and the i-th data packet, and the first data packet and the N data packets belong to the same data stream.
  • the controller also knows which buffer queue in the switch is empty, that is, selects a cache queue from the empty cache queue list. If there is no empty cache queue, the controller can create a cache queue. The embodiment of the present application assumes that the determined empty cache queue is the cache queue corresponding to the cache queue identifier j.
  • the controller generates a first flow entry.
  • the first flow entry is generated, where the first flow entry includes the first matching feature and the first processing mode, the first matching feature includes the first entry number and the first matching field, the first entry number is the port number of the first port, and the first processing mode is that data packets matching the first matching feature are stored in the cache queue corresponding to cache queue identifier j.
  • the controller generates a second flow-mod message, where the second flow-mod message includes the first flow entry.
  • the controller sends a second flow-mod message to the switch.
  • the controller sends the first flow entry to the switch by using the second flow-mod message.
  • the switch receives a second flow-mod message sent by the controller.
  • after the controller sends the second flow-mod message, the switch receives it and obtains the first flow entry.
  • the controller generates a packet-out message.
  • the packet-out message is used to instruct the switch to forward the first data packet from the second port.
  • the sequence of the steps of the software-defined-network-based data sending method may be adjusted as appropriate.
  • for example, the order of S633 and S631 may be interchanged, that is, S633 may be executed before S631, or S633 and S631 may be executed simultaneously.
  • the solution described in the embodiments of this application is only an exemplary implementation of the software-defined-network-based data sending method; any variation readily conceived by those skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application, and details are not described again.
  • the controller sends a packet-out message to the switch.
  • the switch receives a packet-out message sent by the controller.
  • the switch forwards the first data packet by using the second port.
  • after receiving the packet-out message sent by the controller, the switch forwards the first data packet through the second port according to the indication of the packet-out message.
  • the switch can also obtain a third flow entry so that, after the switch receives subsequent data packets of the same data flow from the ingress queue, it forwards them according to the third flow entry.
  • the embodiment of the present application further includes the following detailed steps:
  • the controller generates a third flow entry.
  • the third flow entry is used by the switch to forward data packets in the ingress queue.
  • the third flow entry includes a first matching feature and a second processing mode.
  • the first matching feature includes a first entry number and a first matching field, and the first entry number is a port number of the first port.
  • the second processing mode is to forward the data packet stored in the cache queue corresponding to the cache queue identifier j from the second port of the switch.
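The relationship among the three flow entries above can be sketched as follows. This is an illustrative model only, not the OpenFlow encoding; the tuple representation and all names are hypothetical:

```python
# Illustrative sketch of how the three flow entries relate (hypothetical model):
# - first entry:  match (first port, header)    -> enqueue to cache queue j
# - second entry: match (cache queue j, header) -> output from second port
# - third entry:  match (first port, header)    -> output from second port
first_match  = ("S1-1", "hdr")          # first entry number + first matching field
second_match = ("queue-j", "hdr")       # second entry number is cache queue id j
enqueue_j    = ("enqueue", "queue-j")   # first processing mode
output_p2    = ("output", "S1-2")       # second processing mode

first_entry  = (first_match,  enqueue_j)
second_entry = (second_match, output_p2)
third_entry  = (first_match,  output_p2)

# The third entry reuses the first entry's match and the second entry's action,
# so packets of the flow arriving on the ingress queue are forwarded directly.
assert third_entry == (first_entry[0], second_entry[1])
```

Once the third entry is installed, the first and second entries become redundant, which is why the later steps delete them.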
  • the controller generates a third flow-mod message, where the third flow-mod message includes a third flow table entry.
  • the controller sends a third flow-mod message to the switch.
  • the controller sends the third flow entry to the switch through the third flow-mod message.
  • the switch receives a third flow-mod message sent by the controller.
  • the switch receives the third flow-mod message sent by the controller, and obtains the third flow entry, so as to forward the data packet received from the ingress queue according to the third flow entry.
  • after the switch receives the third flow-mod message sent by the controller, the switch can perform S641 and S642.
  • once the switch has acquired the third flow entry for forwarding data packets of the same data flow received from the ingress queue, the first flow entry used to cache data packets is no longer needed; deleting the first flow entry frees storage space on the switch.
  • similarly, after the cached data packets have been forwarded, the second flow entry used to process them is no longer needed; deleting the second flow entry likewise frees storage space on the switch.
  • S643 may also be performed.
  • the controller deletes the first flow entry and the second flow entry.
  • the source IP address of the first packet is 102.224.112.01
  • the destination IP address is 126.136.134.221.
  • the switch 102 receives the first data packet from the first port (S1-1), where the first data packet includes header information and data; the header information includes a source IP address and a destination IP address, and the port number of the first port is S1-1.
  • the source IP address is 102.224.112.01
  • the destination IP address is 126.136.134.221.
  • the switch 102 determines, according to the port number of the first port and/or the header information, that no flow entry for forwarding the first data packet is found in the flow table.
  • the switch 102 determines that the cache queue corresponding to the cache queue identifier j is empty, and the switch 102 stores the first data packet in the cache queue corresponding to the cache queue identifier j, and generates a first flow entry, as shown in Table 1.
  • this embodiment assumes that the first flow entry is generated by the switch 102, but this is not limiting: per the foregoing embodiment, the first flow entry may also be generated by the controller 101 and received by the switch 102 from the controller 101.
  • the switch 102 generates a first packet-in message, where the first packet-in message includes the header information and cache queue identifier j, and sends the first packet-in message to the controller 101.
  • after receiving the first packet-in message, the controller 101 generates a second flow entry, as shown in Table 2.
  • the controller 101 generates a first flow-mod message containing the second flow entry and sends it to the switch 102; it also generates an open-cache-port message and sends it to the switch 102. The open-cache-port message instructs the switch 102 to forward the data packets stored in the cache queue corresponding to cache queue identifier j, and the priority of that cache queue is higher than the priority of the ingress queue of the switch 102.
  • after receiving the first packet-in message, the controller 101 also needs to calculate the forwarding path according to the header information.
  • this embodiment assumes that the forwarding path is switch 102 to switch 103 and then to switch 106, though this is not limiting; the forwarding path could also be switch 102 - switch 104 - switch 105 - switch 106. The controller then generates flow entries for forwarding the data packets for all switches on the forwarding path other than the switch 102, that is, the switch 103 and the switch 106, and sends those flow entries to the switch 103 and the switch 106.
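As a hedged sketch of this path-installation step (the switch numbers are taken from the example; the action tuple is hypothetical), the controller installs forwarding entries on every switch on the path except the one that reported the packet:

```python
# Illustrative sketch: for the assumed path 102 -> 103 -> 106, the controller
# installs forwarding entries on every switch on the path except the switch
# that generated the packet-in message (switch 102).
path = [102, 103, 106]
reporting_switch = 102
installed = {sw: ("output", "next-hop") for sw in path if sw != reporting_switch}
assert set(installed) == {103, 106}
```

This avoids further packet-in messages from downstream switches that would otherwise also miss in their flow tables.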
  • the switch 102 may further receive the i-th data packet from the first port (S1-1), query the flow table according to the port number of the first port and/or the header information to obtain the first flow entry, and store the i-th data packet into the cache queue corresponding to cache queue identifier j according to the first flow entry.
  • after receiving the first flow-mod message and the open-cache-port message, the switch 102 forwards the data packets stored in the cache queue corresponding to cache queue identifier j according to the second flow entry.
  • controller 101 generates a third flow entry, as shown in Table 3.
  • the controller 101 generates a third flow-mod message, and sends a third flow-mod message to the switch 102, where the third flow-mod message includes a third flow table entry. Then, the controller 101 deletes the first flow entry and the second flow entry.
  • the switch 102 receives the third flow-mod message sent by the controller 101 and obtains the third flow entry. Then, when a data packet of the same data flow is received on the first port (S1-1), it is forwarded according to the third flow entry.
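The ordering guarantee in the example above can be sketched as follows, assuming a simple two-queue scheduler in which the cache queue outranks the ingress queue. This is an illustrative model, not switch code, and the packet numbering is hypothetical:

```python
from collections import deque

# Hedged sketch of the ordering argument: the cache queue has higher priority
# than the ingress queue, so the OpenFlow pipeline drains cached packets of the
# flow before any same-flow packets that arrived later on the ingress queue.
cache_queue = deque([1, 2, 3])  # packets cached while the controller computed the path
ingress_queue = deque([4, 5])   # same-flow packets arriving after open-cache-port

forwarded = []
while cache_queue or ingress_queue:
    # the higher-priority cache queue is always served first
    q = cache_queue if cache_queue else ingress_queue
    forwarded.append(q.popleft())

assert forwarded == [1, 2, 3, 4, 5]  # in-order delivery of the flow
```

Because the pipeline only falls back to the ingress queue once the cache queue is empty, packets of the same flow cannot overtake earlier, cached packets.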
  • the embodiments of the present application are applicable to any network based on a software-defined network architecture, including a wired network and a wireless network.
  • the switch in this embodiment of the present application is not limited to an OpenFlow switch, and may be any programmable switch that supports a “match-forward” operation.
  • the solution provided by the embodiment of the present application is mainly introduced from the perspective of interaction between the network elements.
  • it can be understood that, to implement the above functions, network elements such as the switch and the controller include corresponding hardware structures and/or software modules for performing each function.
  • the present application can be implemented in hardware or a combination of hardware and software in combination with the algorithmic steps of the various examples described in the embodiments disclosed herein. Whether a function is implemented in hardware or computer software to drive hardware depends on the specific application and design constraints of the solution. A person skilled in the art can use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present application.
  • in the embodiments of this application, the switch and the controller may be divided into function modules according to the foregoing method examples.
  • each function module may be divided according to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of the modules in the embodiment of the present application is schematic, and is only a logical function division, and may be further divided in actual implementation.
  • FIG. 11 is a schematic diagram of a possible composition of the switch involved in the foregoing embodiments.
  • as shown in FIG. 11, the switch may include: a receiving unit 1101 and a processing unit 1102.
  • the receiving unit 1101 is configured to support the switch in executing S601, S607, and S610 in the software-defined-network-based data sending method shown in FIG. 6; S601 and S607 in the method shown in FIG. 7; S601, S607, S610, and S615 in the method shown in FIG. 8; S601, S607, S610, and S623 in the method shown in FIG. 9; and S632, S635, and S640 in the method shown in FIG. 10.
  • the processing unit 1102 is configured to support the switch in executing S602, S603, and S611 in the software-defined-network-based data sending method shown in FIG. 6; S602, S603, and S611 in the method shown in FIG. 7; S616, S617, S618, S619, S620, S602, S603, and S611 in the method shown in FIG. 8; S624, S625, S636, S602, S603, and S611 in the method shown in FIG. 9; and S641 and S642 in the method shown in FIG. 10.
  • the switch may further include: a sending unit 1103.
  • the sending unit 1103 is configured to support the switch in executing S621 in the software-defined-network-based data sending method shown in FIG. 8, and S626 in the method shown in FIG. 9.
  • the switch provided by the embodiment of the present application is configured to execute the foregoing software-defined network-based data transmission method, so that the same effect as the above-described software-defined network-based data transmission method can be achieved.
  • FIG. 12 shows another possible composition diagram of the switch involved in the above embodiment.
  • the switch includes a processing module 1201 and a communication module 1202.
  • the processing module 1201 is configured to control and manage the action of the switch.
  • for example, the processing module 1201 is configured to support the switch in executing S602, S603, and S611 shown in FIG. 6; S602, S603, and S611 shown in FIG. 7; S616, S617, S618, S619, S620, S602, S603, and S611 shown in FIG. 8; S624, S625, S636, S602, S603, and S611 shown in FIG. 9; S641 and S642 shown in FIG. 10; and/or other processes for the techniques described herein.
  • Communication module 1202 is used to support communication between the switch and other network entities, such as with the functional modules or network entities illustrated in Figures 6, 7, 8, 9, and 10.
  • the switch can also include a storage module 1203 for storing program code and data of the switch.
  • the processing module 1201 may be a processor or a controller, which may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of this application.
  • the processor may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor, and the like.
  • the communication module 1202 can be a transceiver, a transceiver circuit, a communication interface, or the like.
  • the storage module 1203 may be a memory.
  • when the processing module 1201 is a processor, the communication module 1202 is a communication interface, and the storage module 1203 is a memory, the switch in this embodiment of this application may be the computer device shown in FIG. 5.
  • FIG. 13 is a schematic diagram of a possible composition of the controller involved in the foregoing embodiments.
  • as shown in FIG. 13, the controller may include: a processing unit 1301 and a sending unit 1302.
  • the processing unit 1301 is configured to support the controller in executing S604, S605, and S608 in the software-defined-network-based data sending method shown in FIG. 6; S604, S605, S608, S612, and S613 in the method shown in FIG. 7; S604, S612, S613, S605, and S608 in the method shown in FIG. 8; S628, S629, S633, S604, S612, S613, S605, and S608 in the method shown in FIG. 9; and S637, S638, and S643 in the method shown in FIG. 10.
  • the sending unit 1302 is configured to support the controller in executing S606 and S609 in the software-defined-network-based data sending method shown in FIG. 6; S606, S609, and S614 in the method shown in FIG. 7; S614, S606, and S609 in the method shown in FIG. 8; S614, S606, S609, S631, and S634 in the method shown in FIG. 9; and S639 in the method shown in FIG. 10.
  • the controller may further include: a receiving unit 1303.
  • the receiving unit 1303 is configured to support the controller in executing S622 in the software-defined-network-based data sending method shown in FIG. 8, and S627 in the method shown in FIG. 9.
  • the controller provided by the embodiment of the present application is configured to execute the foregoing software-defined network-based data transmission method, so that the same effect as the above-described software-defined network-based data transmission method can be achieved.
  • Fig. 14 shows another possible composition diagram of the controller involved in the above embodiment.
  • the controller includes a processing module 1401 and a communication module 1402.
  • the processing module 1401 is configured to control and manage the actions of the controller.
  • for example, the processing module 1401 is configured to support the controller in executing S604, S605, and S608 shown in FIG. 6; S604, S612, S613, S605, and S608 shown in FIG. 7; S604, S612, S613, S605, and S608 shown in FIG. 8; S628, S629, S630, S633, S604, S612, S613, S605, and S608 shown in FIG. 9; S637, S638, and S643 shown in FIG. 10; and/or other processes for the techniques described herein.
  • Communication module 1402 is used to support communication of the controller with other network entities, such as with the functional modules or network entities illustrated in Figures 6, 7, 8, 9, and 10.
  • the controller may also include a storage module 1403 for storing program code and data of the controller.
  • the processing module 1401 may be a processor or a controller, which may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of this application.
  • the processor may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor, and the like.
  • the communication module 1402 can be a transceiver, a transceiver circuit, a communication interface, or the like.
  • the storage module 1403 may be a memory.
  • the controller involved in the embodiment of the present application may be the computer device shown in FIG. 5.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative. For example, the division into modules or units is only a division by logical function; in actual implementation there may be another division manner, for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separate, and components displayed as units may be one physical unit or multiple physical units, that is, they may be located in one place or distributed across multiple places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a readable storage medium.
  • the technical solutions of the embodiments of this application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium.
  • a number of instructions are included to cause a device (which may be a microcontroller, chip, etc.) or a processor to perform all or part of the steps of the methods described in various embodiments of the present application.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

The embodiments of this application disclose a data sending method, apparatus, and system based on a software-defined network, relating to the field of communications, and solve the problem of how, when a switch cannot find a flow entry for forwarding a data packet, to both reduce the number of packet-in messages generated by the switch and avoid out-of-order delivery of packets of the same data flow, while also reducing the resource overhead of the switch. The solution is: the switch receives an i-th data packet, queries the flow table to obtain a first flow entry, stores the i-th data packet into a cache queue according to the first flow entry, receives a second flow entry and an open-cache-port message sent by the controller, and forwards, according to the second flow entry, the data packets stored in the cache queue corresponding to cache queue identifier j. The embodiments of this application are used in the data sending process.

Description

A Data Sending Method, Apparatus, and System Based on a Software-Defined Network
This application claims priority to Chinese Patent Application No. 201711035868.5, filed with the Chinese Patent Office on October 30, 2017 and entitled "A Data Sending Method, Apparatus, and System Based on a Software-Defined Network", which is incorporated herein by reference in its entirety.
Technical Field
The embodiments of this application relate to the field of communications, and in particular to a data sending method, apparatus, and system based on a software-defined network.
Background
Software-Defined Networking (SDN) is a network that controls and forwards data packets based on the OpenFlow (OF) protocol, and includes a controller and switches. The controller and the switches communicate with each other via the OpenFlow protocol, and a switch forwards received data packets according to a flow table configured by the controller. If the switch does not find a flow entry matching a packet to be forwarded in the flow table, the switch generates a packet-in message and sends it to the controller; the controller computes a new flow entry based on the packet-in message, configures the new flow entry on the switch through a flow-mod message, and instructs the switch, through a packet-out message, to process the data packet corresponding to the packet-in message. However, during the period in which the new flow entry is being configured on the switch, the switch may continue to receive data packets of the same data flow; because there is no matching flow entry, the switch keeps generating packet-in messages and sending them to the controller, occupying the bandwidth of the control link and the processing resources of the controller. Moreover, to reduce the number of packet-in messages generated by the switch, the switch first writes the new flow entry according to the flow-mod message and then processes the data packet corresponding to the packet-in message; in this case, the switch may first process data packets of the same data flow that were received later, causing the packets of the same data flow to be delivered out of order.
To address the above shortcomings, the prior art proposes a solution that adds a Pi (packet-in) buffer management module and a flow-action preprocessing module to the switch to solve the problems of the switch generating a large number of packet-in messages and packets arriving out of order. The Pi buffer management module buffers data packets and controls the generation of packet-in messages; the flow-action preprocessing module processes the flow-mod and packet-out messages sent by the controller and, for each data flow, ensures that the data packet corresponding to the packet-in message is processed before the new flow entry is written. However, adding new processing modules to the switch increases the complexity of implementing the switch and consumes more switch resources; and when the arrival rate of new data packets exceeds the rate at which the packets corresponding to packet-in messages are processed, the switch must first store data packets in the buffer and then process them, increasing the packet-processing latency.
Therefore, when a switch cannot find a flow entry for forwarding a data packet, how to reduce the number of packet-in messages generated by the switch and avoid out-of-order delivery of packets of the same data flow, while also reducing the resource overhead of the switch, is a problem to be solved urgently.
Summary
The embodiments of this application provide a data sending method, apparatus, and system based on a software-defined network, which solve the problem of how, when a switch cannot find a flow entry for forwarding a data packet, to both reduce the number of packet-in messages generated by the switch and avoid out-of-order delivery of packets of the same data flow, while also reducing the resource overhead of the switch.
To achieve the above objective, the embodiments of this application adopt the following technical solutions:
According to a first aspect of the embodiments of this application, an SDN-based data sending method is provided, including: first, a switch receives an i-th data packet from a first port, where the i-th data packet includes header information, i is a positive integer ranging from 2 to N, the N data packets have the same header information, and the N data packets belong to the same data flow; then, the switch queries a flow table according to the port number of the first port and/or the header information to obtain a first flow entry, and stores the i-th data packet into the cache queue corresponding to cache queue identifier j according to the first flow entry, where the first flow entry includes a first matching feature and a first processing mode, the first matching feature includes a first entry number and a first matching field, the first entry number is the port number of the first port, the first matching field includes all or part of the header information, and the first processing mode is to store data packets matching the first matching feature into the cache queue corresponding to cache queue identifier j; the switch receives a first flow-mod message and an open-cache-port message sent by a controller, where the first flow-mod message includes a second flow entry, the second flow entry includes a second matching feature and a second processing mode, the second matching feature includes a second entry number and the first matching field, the second entry number is cache queue identifier j, the second processing mode is to forward, from a second port of the switch, the data packets stored in the cache queue corresponding to cache queue identifier j, the open-cache-port message instructs the switch to forward the data packets stored in the cache queue corresponding to cache queue identifier j, and the priority of the cache queue corresponding to cache queue identifier j is higher than the priority of the ingress queue of the switch; finally, the switch forwards, according to the second flow entry, the data packets stored in the cache queue corresponding to cache queue identifier j. The SDN-based data sending method provided by the embodiments of this application has three benefits. First, by extending the OpenFlow protocol so that the flow table supports an action of forwarding to a cache queue, and then using flow entries to control the buffering of data packets, the method effectively reduces the number of packet-in messages generated when packets of the same data flow find no matching flow entry, and no dedicated buffering logic needs to be implemented, saving switch resources. Second, the cache queue is used as an input of the OpenFlow pipeline, so the cached data packets are processed directly by the OpenFlow pipeline and no dedicated logic for processing cached packets is needed, again saving switch resources. Third, the ingress queues of the switch ports and the cache queues are distinguished by priority, guaranteeing that the OpenFlow pipeline processes the packets in the cache queue first, so that it processes and forwards the packets of the same data flow in order and out-of-order delivery within a data flow is avoided.
In the embodiments of this application, the OpenFlow protocol can be extended so that the flow table supports forwarding to a cache queue; that is, when the switch cannot find a flow entry for forwarding a data packet, the first flow entry is used to control the buffering of the data packet. The first flow entry can be implemented in the following different ways.
With reference to the first aspect, in a possible implementation, before the switch receives the i-th data packet from the first port, the method further includes: after receiving a first data packet from the first port, the switch determines, according to the port number of the first port and/or the header information included in the first data packet, that no flow entry for forwarding the first data packet is found in the flow table; the switch then determines a cache queue, where the cache queue identifier corresponding to this cache queue is j and the cache queue is empty, and stores the first data packet into the cache queue corresponding to cache queue identifier j; the switch generates a first packet-in message including the header information and cache queue identifier j, and sends the first packet-in message to the controller to obtain from the controller the flow entry for forwarding the data packets, where the first data packet and the N data packets belong to the same data flow.
With reference to the first aspect and the foregoing possible implementation, in another possible implementation, after the switch stores the first data packet into the cache queue corresponding to cache queue identifier j, the method further includes: the switch generates the first flow entry.
In this way, only one packet-in message is generated per data flow: a packet-in message is generated when the first data packet of the flow matches no forwarding flow entry, and the first flow entry is generated at that point; subsequent data packets of the same flow (the i-th data packet) are forwarded to the cache queue for storage according to the processing mode of the first flow entry. Therefore, the i-th data packet never triggers a table-miss event and no further packet-in message is generated, effectively reducing the load on the control link and the controller.
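The "one packet-in per flow" argument can be sketched as follows; this is an illustrative model only, with a hypothetical flow-table dictionary rather than real switch state:

```python
# Sketch of why only one packet-in is generated per flow: after the miss on
# the first packet installs the (hypothetical) caching entry, later packets
# of the flow match it and are enqueued instead of triggering packet-in.
flow_table = {}          # match key -> action
packet_in_count = 0

def receive(flow_key):
    global packet_in_count
    if flow_key not in flow_table:
        packet_in_count += 1                     # table miss: one packet-in
        flow_table[flow_key] = ("enqueue", "j")  # install first flow entry
        return ("enqueue", "j")
    return flow_table[flow_key]                  # later packets: no packet-in

for _ in range(5):                               # five packets of the same flow
    receive(("S1-1", "flowA"))
assert packet_in_count == 1
```

However many packets of the flow arrive during the controller round trip, the miss fires only once, which is the load reduction described above.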
With reference to the first aspect, in another possible implementation, before the switch receives the i-th data packet from the first port, the method further includes: after receiving a first data packet from the first port, the switch determines, according to the port number of the first port and/or the header information included in the first data packet, that no flow entry for forwarding the first data packet is found in the flow table; the switch then generates a second packet-in message including the header information and the port number of the first port, and sends the second packet-in message to the controller; the switch receives a second flow-mod message sent by the controller, where the second flow-mod message includes the first flow entry; the switch receives a packet-out message sent by the controller, where the packet-out message instructs the switch to forward the first data packet from the second port, and the switch forwards the first data packet through the second port, where the first data packet and the N data packets belong to the same data flow. In this way, only one packet-in message is generated per data flow: a packet-in message is generated when the first data packet of the flow matches no forwarding flow entry, a request is sent to the controller to obtain the first flow entry, and subsequent data packets of the same flow (the i-th data packet) are forwarded to the cache queue for storage according to the processing mode of the first flow entry. Therefore, the i-th data packet never triggers a table-miss event and no further packet-in message is generated, effectively reducing the load on the control link and the controller.
With reference to the first aspect and the foregoing possible implementations, in another possible implementation, after the switch receives the open-cache-port message sent by the controller, the method further includes: the switch receives a third flow-mod message sent by the controller, where the third flow-mod message includes a third flow entry, and the third flow entry includes the first matching feature and the second processing mode, so that the switch can forward, according to the third flow entry, the data packets of the same data flow received from the ingress queue.
With reference to the first aspect and the foregoing possible implementations, in another possible implementation, after the switch receives the third flow-mod message sent by the controller, the method further includes: the switch deletes the first flow entry. Once the switch has obtained the third flow entry for forwarding the data packets of the same data flow received from the ingress queue, the first flow entry used for buffering data packets is no longer needed; deleting the first flow entry then frees storage space on the switch.
With reference to the first aspect and the foregoing possible implementations, in another possible implementation, after the switch forwards, according to the second flow entry, the data packets stored in the cache queue corresponding to cache queue identifier j, the method further includes: the switch deletes the second flow entry. Once the switch has finished forwarding the data packets stored in the cache queue by means of the second flow entry, the second flow entry used for processing the cached data packets is no longer needed; deleting the second flow entry then frees storage space on the switch.
According to a second aspect of the embodiments of this application, an SDN-based data sending method is provided, including: first, a controller generates a second flow entry and a first flow-mod message, where the first flow-mod message includes the second flow entry, the second flow entry includes a second matching feature and a second processing mode, the second matching feature includes a second entry number and a first matching field, the second entry number is cache queue identifier j, the first matching field includes all or part of header information, and the header information is the header information included in the i-th data packet received by the switch from the first port, where i is a positive integer ranging from 2 to N, the N data packets have the same header information and belong to the same data flow, and the second processing mode is to forward, from a second port of the switch, the data packets stored in the cache queue corresponding to cache queue identifier j; then, the controller sends the first flow-mod message to the switch; the controller further generates an open-cache-port message and sends it to the switch, where the open-cache-port message instructs the switch to forward the data packets stored in the cache queue corresponding to cache queue identifier j, and the priority of the cache queue corresponding to cache queue identifier j is higher than the priority of the ingress queue of the switch. The SDN-based data sending method provided by the embodiments of this application has three benefits. First, by extending the OpenFlow protocol so that the flow table supports an action of forwarding to a cache queue, and then using flow entries to control the buffering of data packets, the method effectively reduces the number of packet-in messages generated when packets of the same data flow find no matching flow entry, and no dedicated buffering logic needs to be implemented, saving switch resources. Second, the cache queue is used as an input of the OpenFlow pipeline, so the cached data packets are processed directly by the OpenFlow pipeline and no dedicated logic for processing cached packets is needed, again saving switch resources. Third, the ingress queues of the switch ports and the cache queues are distinguished by priority, guaranteeing that the OpenFlow pipeline processes the packets in the cache queue first, so that it processes and forwards the packets of the same data flow in order and out-of-order delivery within a data flow is avoided.
With reference to the second aspect, in a possible implementation, before the controller generates the second flow entry, the method further includes: the controller receives a first packet-in message sent by the switch, where the first packet-in message includes the header information and cache queue identifier j, so that the controller can generate the second flow entry and have the switch process the data packets stored in the cache queue.
With reference to the second aspect, in a possible implementation, before the controller generates the second flow entry, the method further includes: the controller receives a second packet-in message sent by the switch, where the second packet-in message includes the header information and the port number of the first port; the controller determines the cache queue corresponding to cache queue identifier j and the port number of the first port, where the port number of the first port is the port number of the switch port that receives the first data packet and the i-th data packet, and the first data packet and the N data packets belong to the same data flow; the controller generates a first flow entry and a second flow-mod message, where the second flow-mod message includes the first flow entry, the first flow entry includes a first matching feature and a first processing mode, the first matching feature includes a first entry number and the first matching field, the first entry number is the port number of the first port, and the first processing mode is to store data packets matching the first matching feature into the cache queue corresponding to cache queue identifier j; the controller sends the second flow-mod message to the switch; the controller generates a packet-out message instructing the switch to forward the first data packet from the second port; and the controller sends the packet-out message to the switch so that the switch forwards the first data packet.
With reference to the foregoing possible implementations, in another possible implementation, after the controller receives the first packet-in message or the second packet-in message sent by the switch, the method further includes: the controller calculates a forwarding path according to the header information, where the forwarding path includes M switches, including the switch that received the i-th data packet; the controller generates the flow entry needed by each of the M-1 other switches on the forwarding path besides the switch that received the i-th data packet; and the controller sends to each of those M-1 other switches the flow entry it needs. This avoids packet-in messages being generated when other switches on the forwarding path of the same data flow likewise fail to find a forwarding flow entry.
With reference to the foregoing possible implementations, in another possible implementation, after the controller sends the open-cache-port message to the switch, the method further includes: the controller generates a third flow entry and a third flow-mod message, where the third flow-mod message includes the third flow entry, the third flow entry includes the first matching feature and the second processing mode, the first matching feature includes the first entry number and the first matching field, and the first entry number is the port number of the first port; the controller sends the third flow-mod message to the switch, so that the switch can forward, according to the third flow entry, the data packets of the same data flow received from the ingress queue.
With reference to the foregoing possible implementations, in another possible implementation, after the controller sends the third flow-mod message to the switch, the method further includes: the controller deletes the first flow entry and the second flow entry, thereby freeing storage space on the controller.
According to a third aspect of the embodiments of this application, a switch is provided, including: a receiving unit, configured to receive an i-th data packet from a first port, where the i-th data packet includes header information, i is a positive integer ranging from 2 to N, the N data packets have the same header information, and the N data packets belong to the same data flow; a processing unit, configured to query a flow table according to the port number of the first port and/or the header information to obtain a first flow entry, where the first flow entry includes a first matching feature and a first processing mode, the first matching feature includes a first entry number and a first matching field, the first entry number is the port number of the first port, the first matching field includes all or part of the header information, and the first processing mode is to store data packets matching the first matching feature into the cache queue corresponding to cache queue identifier j; the processing unit is further configured to store the i-th data packet into the cache queue corresponding to cache queue identifier j according to the first flow entry; the receiving unit is further configured to receive a first flow-mod message sent by the controller, where the first flow-mod message includes a second flow entry, the second flow entry includes a second matching feature and a second processing mode, the second matching feature includes a second entry number and the first matching field, the second entry number is cache queue identifier j, and the second processing mode is to forward, from a second port of the switch, the data packets stored in the cache queue corresponding to cache queue identifier j; the receiving unit is further configured to receive an open-cache-port message sent by the controller, where the open-cache-port message instructs the switch to forward the data packets stored in the cache queue corresponding to cache queue identifier j, and the priority of the cache queue corresponding to cache queue identifier j is higher than the priority of the ingress queue of the switch; and the processing unit is further configured to forward, according to the second flow entry, the data packets stored in the cache queue corresponding to cache queue identifier j.
According to a fourth aspect of the embodiments of this application, a controller is provided, including: a processing unit, configured to generate a second flow entry, where the second flow entry includes a second matching feature and a second processing mode, the second matching feature includes a second entry number and a first matching field, the second entry number is cache queue identifier j, the first matching field includes all or part of header information, and the header information is the header information included in the i-th data packet received by the switch from the first port, where i is a positive integer ranging from 2 to N, the N data packets have the same header information and belong to the same data flow, and the second processing mode is to forward, from a second port of the switch, the data packets stored in the cache queue corresponding to cache queue identifier j; the processing unit is further configured to generate a first flow-mod message, where the first flow-mod message includes the second flow entry; a sending unit, configured to send the first flow-mod message to the switch; the processing unit is further configured to generate an open-cache-port message, where the open-cache-port message instructs the switch to forward the data packets stored in the cache queue corresponding to cache queue identifier j, and the priority of the cache queue corresponding to cache queue identifier j is higher than the priority of the ingress queue of the switch; and the sending unit is further configured to send the open-cache-port message to the switch.
It should be noted that the functional modules of the third and fourth aspects may be implemented in hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions. For example, a communication interface performs the functions of the receiving unit and the sending unit, a processor performs the functions of the processing unit, and a memory stores the program instructions with which the processor executes the SDN-based data sending method of the embodiments of this application. The processor, the communication interface, and the memory are connected through a bus and communicate with one another. For details, refer to the functions of the switch's behavior in the SDN-based data sending method provided by the first aspect, and the functions of the controller's behavior in the SDN-based data sending method provided by the second aspect.
According to a fifth aspect of the embodiments of this application, a switch is provided, which may include: at least one processor, a memory, a communication interface, and a communication bus; the at least one processor is connected to the memory and the communication interface through the communication bus, and the memory stores computer-executable instructions; when the processor runs, it executes the computer-executable instructions stored in the memory, so that the switch performs the SDN-based data sending method of the first aspect or any possible implementation of the first aspect.
According to a sixth aspect of the embodiments of this application, a controller is provided, which may include: at least one processor, a memory, a communication interface, and a communication bus; the at least one processor is connected to the memory and the communication interface through the communication bus, and the memory stores computer-executable instructions; when the processor runs, it executes the computer-executable instructions stored in the memory, so that the controller performs the SDN-based data sending method of the second aspect or any possible implementation of the second aspect.
According to a seventh aspect of the embodiments of this application, a software-defined network is provided, including the switch of the third or fifth aspect and the controller of the fourth or sixth aspect.
According to an eighth aspect of the embodiments of this application, a computer-readable storage medium is provided, configured to store computer software instructions for the above switch; when the computer software instructions are executed by a processor, the switch can perform the method of any of the above aspects.
According to a ninth aspect of the embodiments of this application, a computer-readable storage medium is provided, configured to store computer software instructions for the above controller; when the computer software instructions are executed by a processor, the controller can perform the method of any of the above aspects.
According to a tenth aspect of the embodiments of this application, a computer program product containing instructions is provided; when it runs on a computer, the computer can perform the method of any of the above aspects.
In addition, for the technical effects of any design of the third to tenth aspects, refer to the technical effects of the different designs of the first and second aspects; details are not repeated here.
In the embodiments of this application, the names of the switch and the controller do not limit the devices themselves; in actual implementation these devices may appear under other names. As long as the functions of each device are similar to those in the embodiments of this application, they fall within the scope of the claims of this application and their equivalents.
These and other aspects of the embodiments of this application will be more concise and comprehensible in the description of the following embodiments.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a software-defined network according to an embodiment of this application;
FIG. 2 is a flowchart of an SDN-based data sending method in the prior art;
FIG. 3 is a schematic diagram of a prior-art data sending process based on a Pi buffer management module and a flow-action preprocessing module;
FIG. 4 is a schematic structural diagram of a prior-art Pi buffer table;
FIG. 5 is a schematic structural diagram of a computer device according to an embodiment of this application;
FIG. 6 is a flowchart of a software-defined-network-based data sending method according to an embodiment of this application;
FIG. 7 is a flowchart of another software-defined-network-based data sending method according to an embodiment of this application;
FIG. 8 is a flowchart of yet another software-defined-network-based data sending method according to an embodiment of this application;
FIG. 9 is a flowchart of still another software-defined-network-based data sending method according to an embodiment of this application;
FIG. 10 is a flowchart of still another software-defined-network-based data sending method according to an embodiment of this application;
FIG. 11 is a schematic structural diagram of a switch according to an embodiment of this application;
FIG. 12 is a schematic structural diagram of another switch according to an embodiment of this application;
FIG. 13 is a schematic structural diagram of a controller according to an embodiment of this application;
FIG. 14 is a schematic structural diagram of another controller according to an embodiment of this application.
Detailed Description
For clarity and brevity of the following description of the embodiments, a brief introduction of the related technology is given first:
Software-Defined Networking (SDN) is a new type of network architecture that controls and forwards data packets based on the OpenFlow (OF) protocol. FIG. 1 is a schematic diagram of a software-defined network according to an embodiment of this application, where the software-defined network includes a controller 101, a switch 102, a switch 103, a switch 104, a switch 105, a switch 106, and a terminal device 107. The controller knows all the network information; it computes the forwarding paths of data packets, manages the flow tables, and directs the work of all the switches. A switch knows no network information and works only as directed by the controller. The controller and the switches communicate over a control link using the OpenFlow protocol, so a switch may also be called an OpenFlow switch; a switch forwards received data packets over data links according to the flow table configured by the controller. The software-defined network can thus be regarded as divided into a data plane and a control plane, and its centralized control plane greatly improves the flexibility of the network. In a specific implementation, the terminal device 107 may be a mobile phone, a tablet computer, a laptop computer, an Ultra-mobile Personal Computer (UMPC), a netbook, a Personal Digital Assistant (PDA), or the like. As an embodiment, as shown in FIG. 1, the terminal device 107 included in the network architecture of this embodiment is a computer.
A flow table is the forwarding table according to which a switch forwards received data packets. The flow table includes at least one flow entry, and each flow entry includes a matching feature, a processing mode, and statistics.
The matching feature includes an entry number and matching fields. The entry number is the port number of the port through which the switch receives the data packet: whichever port the packet arrives on, that port's number is the entry number. The matching fields include all or part of the header information of the received data packet, and may include the source Internet Protocol (IP) address (IP src) and the destination IP address (IP dst). Of course, more matching fields may be included in practice, for example, the switch ingress port (Ingress Port), source Media Access Control (MAC) address (Ether source), destination MAC address (Ether dst), Ethernet type (Ether Type), Virtual Local Area Network (VLAN) tag (VLAN ID), VLAN priority, IP protocol field (IP proto), IP type of service (IP ToS bits), Transmission Control Protocol (TCP)/User Datagram Protocol (UDP) source port number (TCP/UDP src port), and TCP/UDP destination port number (TCP/UDP dst port).
The processing mode specifies how packets that successfully match the flow entry are handled, including forwarding the packet from a certain port of the switch, dropping it, or modifying the values of some fields in the header information.
The statistics record information such as the number of data packets processed according to the flow entry and the total number of bytes.
The following briefly describes, with reference to FIG. 2, the flow of a prior-art SDN-based data sending method. As shown in FIG. 2, the method includes:
S201. The switch receives a data packet from a first port and parses it to obtain header information.
S202. The switch queries the flow table according to the port number of the first port and/or the header information of the data packet, compares them against all flow entries in the flow table, and determines whether a flow entry matching the port number of the first port and/or the header information is obtained.
If a matching flow entry is obtained, step S203 is performed; if no matching flow entry is obtained, the switch needs to send a packet-in message to the controller and obtain a new flow entry from the controller to forward the data packet. As shown in FIG. 2, the method further includes steps S204 to S213.
S203. The switch processes the data packet according to the processing mode of the flow entry.
It should be noted that if multiple flow entries match simultaneously, the one with the highest priority is selected as the matching result. For example, the first flow entry matches only the entry number while the second matches both the entry number and the IP address; since the two entries have the same entry number, both match at the same time. The entry with the highest priority is then selected as the matching result. For example, flow entry priorities may be set according to the length of the forwarding path: a longer forwarding path gets a lower priority and a shorter one a higher priority.
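The priority-based selection among simultaneously matching entries can be sketched as follows; this is an illustrative model, and the flow table contents, priorities, and action labels are hypothetical:

```python
# Illustrative sketch of S202/S203: when several flow entries match a packet,
# the entry with the highest priority wins (all values are hypothetical).
flow_table = [
    {"match": {"in_port": "S1-1"},
     "priority": 10, "action": "long-path"},
    {"match": {"in_port": "S1-1", "ip_dst": "126.136.134.221"},
     "priority": 20, "action": "short-path"},
]

def lookup(packet):
    # collect every entry whose match fields are all satisfied by the packet
    hits = [e for e in flow_table
            if all(packet.get(k) == v for k, v in e["match"].items())]
    # highest priority wins; None models a table miss (-> packet-in)
    return max(hits, key=lambda e: e["priority"]) if hits else None

pkt = {"in_port": "S1-1", "ip_dst": "126.136.134.221"}
assert lookup(pkt)["action"] == "short-path"  # both entries match; priority 20 wins
```

A packet that matches no entry returns `None`, which corresponds to the table-miss case that triggers a packet-in message in S204.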
S204. The switch generates a packet-in message.
The packet-in message can carry different content depending on whether the switch can buffer the data packet locally.
If the switch cannot buffer the data packet locally, it encapsulates the complete data packet in the packet-in message and sends it to the controller for buffering. After parsing the data packet to obtain the header information, the controller computes a new flow entry according to the header information, and then sends the complete data packet back to the switch for processing via a packet-out message.
If the switch can buffer the data packet locally, it stores the data packet locally, generates a unique buffer number (Buffer-id), and encapsulates the header information of the data packet and the buffer number in the packet-in message; the packet-out message then carries only the buffer number.
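The two packet-in variants just described can be sketched as follows; the dictionary encoding is purely illustrative and does not reflect the OpenFlow wire format:

```python
# Sketch of the two packet-in variants (hypothetical encoding):
# without local buffering, the whole packet travels to the controller;
# with local buffering, only the header and a unique Buffer-id are sent.
def make_packet_in(packet, can_buffer, buffer_id=None):
    if not can_buffer:
        return {"type": "packet-in", "data": packet}          # full packet
    return {"type": "packet-in",
            "header": packet["header"],                       # header only
            "buffer_id": buffer_id}

pkt = {"header": {"ip_src": "10.0.0.1"}, "payload": b"\x00" * 64}
msg = make_packet_in(pkt, can_buffer=True, buffer_id=7)
assert msg["buffer_id"] == 7 and "payload" not in msg
assert make_packet_in(pkt, can_buffer=False)["data"] is pkt
```

The buffered variant keeps the payload off the control link, which is why it is preferred when the switch has local storage.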
S205. The switch sends the packet-in message to the controller.
S206. The controller receives the packet-in message sent by the switch.
S207. The controller computes a new flow entry.
S208. The controller sends a flow-mod message to the switch, where the flow-mod message includes the new flow entry.
S209. The switch receives the flow-mod message sent by the controller.
S210. The switch writes the new flow entry.
S211. The controller sends a packet-out message to the switch, instructing the switch to forward the data packet from a second port.
The second port is any physical port of the switch other than the first port.
S212. The switch receives the packet-out message sent by the controller.
S213. The switch forwards the data packet through the second port.
However, the above data sending process specified by the OpenFlow protocol has two shortcomings. First, between the time the switch sends the first packet-in message to the controller and the time the switch receives the new flow entry, that is, between S204 and S210, the switch may keep receiving data packets of the same data flow; because there is no matching flow entry, the switch keeps generating packet-in messages and sending them to the controller, occupying control-link bandwidth and controller processing resources. Second, to reduce the number of packet-in messages generated by the switch, the switch first writes the new flow entry according to the flow-mod message and then processes the data packet corresponding to the packet-in message; at this point, packets of the same data flow that arrive at the switch later can be processed and forwarded according to the new flow entry before the buffered packets are, so the order in which the switch forwards the packets differs from the order in which it received them, and the packets of the same data flow are delivered out of order.
To address these shortcomings of the data sending process specified by the OpenFlow protocol, the prior art proposes a solution that adds a Pi (packet-in) buffer management module and a flow-action preprocessing module to the switch to solve the problems of the switch generating a large number of packet-in messages and packets arriving out of order.
FIG. 3 is a schematic diagram of a prior-art data sending process based on the Pi buffer management module and the flow-action preprocessing module. The Pi buffer management module buffers data packets and controls the generation of packet-in messages; the flow-action preprocessing module processes the flow-mod and packet-out messages sent by the controller and, for each data flow, ensures that the data packet corresponding to the packet-in message is processed before the new flow entry is written into the OpenFlow pipeline. The Pi buffer management module manages the buffer using a Pi Buffer Table (PiBT). FIG. 4 is a schematic structural diagram of a prior-art Pi buffer table. The Pi buffer table includes matching fields, a buffer area start address, a current buffer area address, a buffered packet count, and a timeout. Each entry represents a caching rule for the packets of one data flow. The matching fields are the values of the fields in the packet's header information, for example, the source MAC address, destination MAC address, source IP address, and destination IP address; the buffer area start address is the start address of the buffer queue caching the packets of the flow; the current buffer area address is the address of the last packet buffered for the flow; the buffered packet count is the number of packets already buffered; and the timeout is the time at which the caching rule expires.
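One entry (one caching rule) of the prior-art Pi buffer table can be sketched as a record like the following; the class name, field names, and example values are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sketch of one entry ("caching rule") of the prior-art
# Pi Buffer Table (PiBT); field names mirror the four-column description.
@dataclass
class PiBufferEntry:
    match_fields: dict  # header-field values identifying the flow
    buffer_head: int    # start address of the flow's buffer queue
    buffer_tail: int    # address of the last buffered packet
    packet_count: int   # number of packets already buffered
    timeout: float      # time at which this caching rule expires

entry = PiBufferEntry(
    match_fields={"ip_src": "10.0.0.1", "ip_dst": "10.0.0.2"},
    buffer_head=0x1000, buffer_tail=0x1040, packet_count=3, timeout=30.0)

entry.packet_count += 1  # one more packet buffered for this flow
assert entry.packet_count == 4
```

Looking up a flow in this table is the second match pass criticized later in this section: the packet is matched once in the OpenFlow pipeline and again against `match_fields` here.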
下面结合图3简要介绍下现有技术中基于Pi缓存管理模块和流动作预处理模块进行数据发送的过程,包括:
(1)交换机接收到数据包后,数据包进入OpenFlow流水线,OpenFlow流水线中的报文解析模块对数据包进行解析得到数据包包括的包头信息,然后,OpenFlow流水线中的流表查找模块根据数据包的包头信息在流表中查询转发数据包的流表项,如果没有查询到匹配的流表项,将数据包发送至Pi缓存管理模块。
(2)Pi缓存管理模块根据数据包包括的包头信息,查找Pi缓存表中的匹配字段,如果找到匹配项,则将数据包插入对应的缓存队列;否则,新建缓存队列,将数据包缓存至新建的缓存队列,并生成packet-in消息发送到控制器,packet-in消息中包括数据包的包头信息以及新建队列的缓存区首地址。从而,保证了每条数据流只会生成一个packet-in消息。
(3)控制器根据packet-in消息中的包头信息计算转发数据包的新的流表项,并通过flow-mod消息向交换机发送该新的流表项,flow-mod消息中携带了需要写入的新的流表项和缓存区首地址;并且,控制器还需要向交换机发送packet-out消息,指示交换机根据新的流表项处理数据包,packet-out消息携带了缓存区首地址(Buffer_id)。
(4)交换机接收到flow-mod消息和packet-out消息后,流动作预处理模块先根据缓存区首地址找到缓存队列,根据新的流表项中的处理动作逐个处理缓存队列中的数据包;缓存队列中的数据包处理完之后,再在OpenFlow流水线中写入新的流表项。从而,保证按序转发每条数据流中的数据包,避免同一数据流的数据包发生乱序。
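上述(2)中Pi缓存表"命中匹配字段则入队、未命中则新建缓存队列并只生成一次packet-in"的逻辑,可以用如下简化片段示意(数据结构为说明而作的假设,超时时间用到期时间戳近似):

```python
import time

class PiBufferTable:
    """图4所示Pi缓存表的简化示意:每条规则含匹配字段、缓存队列与超时时间。"""
    def __init__(self):
        self.rules = []

    def insert(self, headers, packet, timeout=5.0):
        """返回(是否新建规则, 规则)。命中则只入队,不再生成packet-in。"""
        now = time.monotonic()
        for rule in self.rules:
            if rule["match"] == headers and rule["expire"] > now:
                rule["queue"].append(packet)
                return False, rule
        rule = {"match": dict(headers), "queue": [packet], "expire": now + timeout}
        self.rules.append(rule)
        return True, rule        # 新建规则:此时才需要生成一个packet-in

table = PiBufferTable()
headers = {"ip_src": "1.1.1.1", "ip_dst": "2.2.2.2"}
is_new1, _ = table.insert(headers, "pkt1")     # 首包:新建规则
is_new2, rule = table.insert(headers, "pkt2")  # 同流后续包:只入队
```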
但是,上述基于Pi缓存管理模块和流动作预处理模块的数据发送过程中也存在不足。第一,需要在已有的OpenFlow流水线之外增加新的处理模块(Pi缓存管理和流动作预处理),会增加实现交换机的复杂度,并且占用更多的交换机资源;第二,被缓存的数据包在缓存之前需要经过两次查找匹配的过程,第一次是OpenFlow流水线的查找匹配,第二次是Pi缓存表的查找匹配,两次查找匹配会增加交换机处理一个数据包的计算和存储资源开销;第三,流动作预处理模块处理缓存区中数据包的同时,会有新的数据包到达,并且在写入新的流表项之前,仍然没有流表项与这些数据包匹配,因此它们会继续被存储到缓存区,当新的数据包到达速率大于处理缓存区中数据包的速率时,缓存区中将一直有数据包积累,流动作预处理会持续很长时间,许多数据包都要先在缓存区缓存,然后再进行处理,增加了处理数据包的延时。
为了解决在交换机转发数据包时没有查询到转发数据包的流表项的情况下,既要保证减少交换机生成的packet-in消息的数量以及避免同一数据流的数据包发生乱序,又要降低交换机的资源开销的问题,本申请实施例提供一种基于SDN的数据发送方法,其基本原理是:首先,交换机从第一端口接收第i数据包,第i数据包包括包头信息,其中,i为正整数,i的取值为2到N,N个数据包的包头信息相同,N个数据包属于同一个数据流的数据包;然后,交换机根据第一端口的端口号或/和包头信息查询流表,得到第一流表项,根据第一流表项将第i数据包存储到缓存队列标识j对应的缓存队列,其中,第一流表项包括第一匹配特征和第一处理方式,第一匹配特征包括第一入口号和第一匹配字段,第一入口号为第一端口的端口号,第一匹配字段包括全部或部分包头信息,第一处理方式为与第一匹配特征匹配的数据包存储到缓存队列标识j对应的缓存队列;交换机接收到控制器发送的第一flow-mod消息和open-cache-port消息,其中,第一flow-mod消息包括第二流表项,第二流表项包括第二匹配特征和第二处理方式,第二匹配特征包括第二入口号和第一匹配字段,第二入口号为缓存队列标识j,第二处理方式为从交换机的第二端口转发缓存队列标识j对应的缓存队列中存储的数据包,open-cache-port消息用于指示交换机转发缓存队列标识j对应的缓存队列中存储的数据包,缓存队列标识j对应的缓存队列的优先级高于交换机的入口队列的优先级;最后,交换机根据第二流表项转发缓存队列标识j对应的缓存队列中存储的数据包。本申请实施例提供的基于SDN的数据发送方法,第一、通过扩展OpenFlow协议使流表支持转发至某个缓存队列的操作,然后利用流表控制数据包的缓存,能够有效地减少针对同一数据流包括的数据包未查询到流表项时而生成的Packet-in消息的数量,不用单独实现用于缓存的逻辑,可以节省交换机的资源;第二、将缓存队列作为OpenFlow流水线的一个输入,直接利用OpenFlow流水线处理缓存的数据包,也不用单独实现处理缓存的数据包的逻辑,可以节省交换机的资源;第三、通过优先级的方式区分交换机端口对应的入口队列和缓存队列,保证OpenFlow流水线优先处理缓存队列的数据包,使OpenFlow流水线按序处理和转发同一数据流的数据包,避免同一数据流的数据包发生乱序。
下面将结合附图对本申请实施例的实施方式进行详细描述。
本申请实施例的系统架构可以参考图1所示的软件定义网络的示意及详述,本申请实施例在此不再赘述。
图5为本申请实施例提供的一种计算机设备的结构示意图,如图5所示,计算机设备可以包括至少一个处理器51,存储器52、通信接口53、通信总线54。
下面结合图5对计算机设备的各个构成部件进行具体的介绍:
处理器51是计算机设备的控制中心,可以是一个处理器,也可以是多个处理元件的统称。在具体的实现中,作为一种实施例,处理器51可以包括一个中央处理器(Central Processing Unit,CPU)或多个CPU,例如图5中所示的CPU0和CPU1。处理器51也可以是专用集成电路(Application Specific Integrated Circuit,ASIC),或者是被配置成实施本申请实施例的一个或多个集成电路,例如:一个或多个数字信号处理器(Digital Signal Processor,DSP),或,一个或者多个现场可编程门阵列(Field Programmable Gate Array,FPGA)。
其中,以处理器51是一个或多个CPU为例,处理器51可以通过运行或执行存储在存储器52内的软件程序,以及调用存储在存储器52内的数据,执行计算机设备的各种功能。
在具体实现中,作为一种实施例,计算机设备可以包括多个处理器,例如图5中所示的处理器51和处理器55。这些处理器中的每一个可以是一个单核处理器(single-CPU),也可以是一个多核处理器(multi-CPU)。这里的处理器可以指一个或多个设备、电路、和/或用于处理数据(例如计算机程序指令)的处理核。
在本申请实施例的一种可实现的方式中,计算机设备可以是交换机,处理器51主要用于根据第一端口的端口号或/和包头信息查询流表,得到第一流表项,根据第一流表项将第i数据包存储到缓存队列标识j对应的缓存队列,根据第二流表项转发缓存队列标识j对应的缓存队列中存储的数据包。
在本申请实施例的另一种可实现的方式中,计算机设备可以是控制器,处理器51主要用于生成第二流表项、第一flow-mod消息和open-cache-port消息。
存储器52可以是只读存储器(Read-Only Memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(Random Access Memory,RAM)或者可存储信息和指令的其他类型的动态存储设备,也可以是电可擦可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。存储器52可以是独立存在,通过通信总线54与处理器51相连接。存储器52也可以和处理器51集成在一起。
其中,所述存储器52用于存储执行本申请方案的软件程序,并由处理器51来控制执行。存储器52用于存储数据包。
通信接口53,用于与其他设备或通信网络通信,如以太网,无线接入网(Radio Access Network,RAN),无线局域网(Wireless Local Area Networks,WLAN)等。通信接口53可以包括接收单元实现接收功能,以及发送单元实现发送功能。
通信总线54,可以是工业标准体系结构(Industry Standard Architecture,ISA)总线、外部设备互连(Peripheral Component Interconnect,PCI)总线或扩展工业标准体系结构(Extended Industry Standard Architecture,EISA)总线等。该总线可以分为地址总线、数据总线、控制总线等。为便于表示,图5中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
图5中示出的设备结构并不构成对计算机设备的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
图6为本申请实施例提供的一种基于软件定义网络的数据发送方法的流程图,如图6所示,该方法可以包括:
S601、交换机从第一端口接收第i数据包。
交换机从第一端口接收到了N个数据包,每个数据包都包括了包头信息和数据,N个数据包包括的数据不同,N个数据包包括的包头信息相同,包头信息相同的数据包属于同一个数据流,N个数据包属于同一个数据流的数据包。第一端口可以是交换机的任意一个物理端口,即交换机从哪个物理端口接收数据包是不作限定的,可以从任意一个物理端口接收数据包,而交换机从哪个物理端口转发数据包是需要控制器指示交换机的。其中,i为正整数,i的取值为2到N。N表示交换机获取到转发数据包的流表项之前接收到的数据包的数量,N小于等于数据流包括的数据包的数量。不同的数据流包括不同数量的数据包。
S602、交换机根据第一端口的端口号或/和包头信息查询流表,得到第一流表项。
交换机从第一端口接收到第i数据包后,第i数据包进入入口队列等待处理。交换机知道第一端口的端口号,交换机可以根据包头信息查询流表,或者交换机根据第一端口的端口号查询流表,或者交换机根据第一端口的端口号和包头信息查询流表,得到第一流表项。第一流表项用于将第i数据包转发到缓存队列进行存储,避免交换机未查询到转发第i数据包的流表项时,生成packet-in消息,向控制器发送packet-in消息来获取转发第i数据包的流表项,而占用控制链路的带宽和控制器的处理资源。
其中,第一流表项包括第一匹配特征和第一处理方式。第一匹配特征包括第一入口号和第一匹配字段,第一入口号为第一端口的端口号,第一匹配字段包括全部或部分包头信息,第一匹配字段的具体的内容可以参考上述现有技术的阐述,本申请实施例在此不再赘述。第一处理方式为与第一匹配特征匹配的数据包存储到缓存队列标识j对应的缓存队列。第一流表项用于交换机将从第一端口接收到的具有相同包头信息的数据包存储到缓存队列标识j对应的缓存队列。
S603、交换机根据第一流表项将第i数据包存储到缓存队列标识j对应的缓存队列。
交换机根据第一端口的端口号或/和包头信息查询流表,得到第一流表项后,根据第一流表项中的第一处理方式将第i数据包存储到缓存队列标识j对应的缓存队列。
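S601至S603的处理可以用如下片段示意(流表项与缓存队列均用Python内置结构近似,字段名与端口号均为说明而作的假设):

```python
# S601~S603 的示意:命中第一流表项的同流数据包依次存入缓存队列j
cache_queues = {"j": []}                 # 缓存队列标识j对应的缓存队列
first_flow_entry = {
    "match": {"in_port": "S1-1", "ip_dst": "126.136.134.221"},
    "priority": 1,
    "action": ("enqueue_cache", "j"),    # 第一处理方式:存入缓存队列标识j对应的队列
}

def apply_action(entry, packet):
    kind, queue_id = entry["action"]
    if kind == "enqueue_cache":
        cache_queues[queue_id].append(packet)

for i in range(2, 5):                    # 第2到第4个到达的同流数据包
    apply_action(first_flow_entry, f"pkt{i}")
```

可以看到,在获取到第二流表项之前,同流数据包不再触发packet-in,而是全部转入缓存队列。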
S604、控制器生成第二流表项。
第二流表项用于交换机转发存储在缓存队列标识j对应的缓存队列里的数据包,第二流表项包括第二匹配特征和第二处理方式,第二匹配特征包括第二入口号和第一匹配字段,第二入口号为缓存队列标识j,第一匹配字段包括全部或部分包头信息,包头信息为交换机从第一端口接收到的数据包包括的包头信息,第二处理方式为从交换机的第二端口转发缓存队列标识j对应的缓存队列中存储的数据包。
S605、控制器生成第一flow-mod消息,第一flow-mod消息包括第二流表项。
控制器生成第二流表项后,生成第一flow-mod消息。
S606、控制器向交换机发送第一flow-mod消息。
控制器通过第一flow-mod消息将第二流表项发送给交换机。
S607、交换机接收控制器发送的第一flow-mod消息。
S608、控制器生成open-cache-port消息。
控制器生成第一flow-mod消息后,生成open-cache-port消息,open-cache-port消息用于指示交换机转发缓存队列标识j对应的缓存队列中存储的数据包,且缓存队列标识j对应的缓存队列的优先级高于交换机的入口队列的优先级。例如,假设交换机入口队列的优先级为5,将缓存队列的优先级设置为10。
因此,通过控制器的指示,交换机优先读取高优先级队列的数据包进行处理,保证缓存的数据包被优先转发,从而保证了同一数据流的数据包顺序转发。另外,在缓存队列中的数据包处理完之前,交换机不会读取交换机的入口队列的数据包进行处理,从而,不会有新的数据包被转发至缓存队列,避免了现有技术中,当新的数据包到达速率大于处理缓存区中数据包的速率时,缓存区中将一直有数据包积累,导致交换机需要先将数据包存到缓存区,然后再处理数据包,增加了处理数据包的时延的问题。
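按优先级先清空缓存队列、再读入口队列的调度策略,可以用如下片段示意(调度函数为说明而作的简化,实际交换机中由流水线的读取机制实现):

```python
# 缓存队列优先级高于入口队列:高优先级队列清空前不读低优先级队列
def drain_in_priority_order(queues):
    """queues: (priority, 队列列表) 的列表,按优先级从高到低依次清空。"""
    order = []
    for _, q in sorted(queues, key=lambda t: t[0], reverse=True):
        while q:
            order.append(q.pop(0))
    return order

ingress_queue = ["new1", "new2"]    # 入口队列,优先级5
cache_queue_j = ["old1", "old2"]    # 缓存队列j,优先级10
processed = drain_in_priority_order([(5, ingress_queue), (10, cache_queue_j)])
```

先缓存的数据包先被处理,从而保证同一数据流的数据包按序转发。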
需要说明的是,本申请实施例提供的基于软件定义网络的数据发送方法步骤的先后顺序可以进行适当调整。示例的,如S606控制器向交换机发送第一flow-mod消息和S608控制器生成open-cache-port消息之间的前后顺序可以互换,即可先执行S606再执行S608,或S606和S608同时执行,本申请实施例所述的方案是对基于软件定义网络的数据发送方法的一种示例性的实现方式,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化的方法,都应涵盖在本发明的保护范围之内,因此不再赘述。
S609、控制器向交换机发送open-cache-port消息。
S610、交换机接收控制器发送的open-cache-port消息。
需要说明的是,在交换机接收控制器发送的open-cache-port消息,获取到第二流表项,转发缓存队列标识j对应的缓存队列中存储的数据包之前,交换机接收到的数据包根据第一流表项将数据包存储到缓存队列标识j对应的缓存队列,此时,控制器也在生成第二流表项,再向交换机配置第二流表项,即交换机执行S601至S603的期间,控制器可以同时执行S604、S605、S606、S608、S609。
S611、交换机根据第二流表项转发缓存队列标识j对应的缓存队列中存储的数据包。
交换机接收到控制器发送的open-cache-port消息后,根据open-cache-port消息的指示处理缓存队列标识j对应的缓存队列中存储的数据包。交换机根据缓存队列标识j或/和数据包的包头信息查询流表,由于交换机通过第一flow-mod消息获取到第二流表项,数据包是存在缓存队列标识j对应的缓存队列中的,能够与第二流表项的第二入口号(缓存队列标识j)匹配,或/和数据包的包头信息也能够与第二流表项的第一匹配字段(全部或部分包头信息)匹配,即确定根据缓存队列标识j或/和包头信息在流表中查询到转发存储在缓存队列标识j对应的缓存队列中的数据包的流表项,交换机根据第二流表项转发缓存队列标识j对应的缓存队列中存储的数据包。
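S611中"以缓存队列标识j作为入口号重新进入流水线匹配第二流表项"的过程,可以用如下片段示意(cache:j、port-2等标识均为举例假设):

```python
# S611 的示意:缓存队列标识j充当"入口号",在流水线中命中第二流表项
second_flow_entry = {
    "match": {"in_port": "cache:j", "ip_dst": "126.136.134.221"},
    "priority": 1,
    "action": ("output", "port-2"),   # 第二处理方式:从第二端口转发(端口名为假设)
}

def entry_matches(entry, in_port, headers):
    m = entry["match"]
    return m["in_port"] == in_port and all(
        headers.get(k) == v for k, v in m.items() if k != "in_port")

forwarded = [second_flow_entry["action"]
             for h in [{"ip_dst": "126.136.134.221"}] * 2
             if entry_matches(second_flow_entry, "cache:j", h)]
```

缓存的数据包由此直接复用OpenFlow流水线得到转发,无需单独的缓存处理逻辑。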
为了避免交换机根据第二流表项转发缓存队列标识j对应的缓存队列中存储的数据包后,数据包转发到软件定义网络中其他交换机,而其他交换机仍然未查询到转发同一数据流的数据包的流表项,向控制器发送packet-in消息的情况,在S604之后,控制器还需要向转发路径上的其他交换机发送所需要使用的流表项,如图7所示,本申请实施例还包括以下步骤:
S612、控制器根据包头信息计算转发路径。
转发路径上包括M个交换机,M个交换机包括接收第i数据包的交换机。
S613、控制器生成转发路径上除了接收第i数据包的交换机之外的M-1个其他交换机中每个交换机所需要使用的流表项。
S614、控制器向转发路径上除了接收第i数据包的交换机之外的M-1个其他交换机中每个交换机发送所需要使用的流表项。
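S612至S614中控制器计算转发路径并向路径上其余M-1个交换机下发流表项的过程,可以用如下片段示意(拓扑、交换机编号与BFS最短路径算法均为举例假设,实际控制器可采用任意路由算法):

```python
from collections import deque

def shortest_path(links, src, dst):
    """在无权拓扑上用BFS计算一条转发路径。"""
    prev, seen = {}, {src}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in links.get(u, []):
            if v not in seen:
                seen.add(v); prev[v] = u; q.append(v)
    path, node = [dst], dst
    while node != src:
        node = prev[node]; path.append(node)
    return path[::-1]

# 图1示例中的部分拓扑(交换机编号为假设)
links = {"s102": ["s103", "s104"], "s103": ["s106"],
         "s104": ["s105"], "s105": ["s106"]}
path = shortest_path(links, "s102", "s106")
# 除接收数据包的首交换机外,其余M-1个交换机各生成并下发一条转发流表项
entries = {sw: {"match": {"ip_dst": "126.136.134.221"}, "action": "output"}
           for sw in path[1:]}
```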
需要说明的是,在本申请实施例中,可以通过扩展OpenFlow协议使流表支持转发至某个缓存队列的操作,即在交换机转发数据包时没有查询到转发数据包的流表项的情况下,使用第一流表项控制数据包的缓存,第一流表项可以通过交换机自身生成也可以由控制器生成,下面结合附图详细介绍具体的实现方式。
下面结合图8对交换机自身生成第一流表项的方式进行详细说明。如图8所示,在交换机从第一端口接收第i数据包之前,本申请实施例还包括以下步骤:
S615、交换机从第一端口接收第一数据包。
第一数据包包括包头信息,第一数据包包括的包头信息与第i数据包包括的包头信息相同,第一数据包与第i数据包属于同一个数据流的数据包。
S616、交换机确定根据第一端口的端口号或/和包头信息在流表中未查询到转发第一数据包的流表项。
交换机从第一端口接收到第一数据包后,根据第一端口的端口号或/和包头信息在流表中查询转发第一数据包的流表项。若交换机确定根据第一端口的端口号或/和包头信息在流表中未查询到转发第一数据包的流表项,则执行S617;若交换机确定根据第一端口的端口号或/和包头信息在流表中查询到转发第一数据包的流表项,则根据查询到的转发第一数据包的流表项对第一数据包进行转发。
S617、交换机确定缓存队列标识j对应的缓存队列。
交换机确定根据第一端口的端口号或/和包头信息在流表中未查询到转发第一数据包的流表项后,从空的缓存队列列表中选择一个缓存队列,如果没有空的缓存队列,交换机在本地创建一个缓存队列。本申请实施例假设确定的空的缓存队列为缓存队列标识j对应的缓存队列。
S618、交换机将第一数据包存储到缓存队列标识j对应的缓存队列。
交换机确定缓存队列标识j对应的缓存队列后,将第一数据包存储到缓存队列标识j对应的缓存队列。
S619、交换机生成第一流表项。
交换机生成第一流表项,以便于在获取到第二流表项之前,将接收到的同一数据流的数据包存储到缓存队列标识j对应的缓存队列,避免生成packet-in消息,占用控制链路的带宽和控制器的处理资源。
需要说明的是,本申请实施例提供的基于软件定义网络的数据发送方法步骤的先后顺序可以进行适当调整。示例的,如S618和S619之间的前后顺序可以互换,即可先执行S619再执行S618,或S618和S619同时执行,本申请实施例所述的方案是对基于软件定义网络的数据发送方法的一种示例性的实现方式,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化的方法,都应涵盖在本发明的保护范围之内,因此不再赘述。
S620、交换机生成第一packet-in消息,第一packet-in消息包括包头信息和缓存队列标识j。
交换机确定缓存队列标识j对应的缓存队列后,生成第一packet-in消息。
S621、交换机向控制器发送第一packet-in消息。
交换机生成第一packet-in消息之后,向控制器发送第一packet-in消息。
S622、控制器接收交换机发送的第一packet-in消息。
第一packet-in消息包括包头信息和缓存队列标识j,以便于控制器生成第二流表项。
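图8所示交换机侧S615至S621的整体流程,可以用如下片段示意(函数名与数据结构均为说明而作的假设):

```python
# 图8 交换机侧示意:未命中流表时选定空缓存队列j,缓存数据包,
# 生成第一流表项,并构造含包头信息与缓存队列标识j的第一packet-in消息
def on_table_miss(packet, headers, free_queue_ids, cache_queues, flow_table):
    queue_id = free_queue_ids.pop() if free_queue_ids else f"q{len(cache_queues)}"
    cache_queues.setdefault(queue_id, []).append(packet)
    flow_table.append({
        "match": dict(headers),
        "priority": 1,
        "action": ("enqueue_cache", queue_id),  # 后续同流数据包直接入队,不再packet-in
    })
    return {"type": "packet-in", "header": dict(headers), "cache_queue_id": queue_id}

flow_table, cache_queues = [], {}
msg = on_table_miss("pkt1", {"ip_dst": "2.2.2.2"}, ["j"], cache_queues, flow_table)
```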
下面结合图9对控制器生成第一流表项,并发送给交换机的方式进行详细说明。如图9所示,在交换机从第一端口接收第i数据包之前,本申请实施例还包括以下步骤:
S623、交换机从第一端口接收第一数据包。
详细解释可以参考S615,本申请实施例在此不再赘述。
S624、交换机确定根据第一端口的端口号或/和包头信息在流表中未查询到转发第一数据包的流表项。
详细解释可以参考S616,本申请实施例在此不再赘述。若交换机确定根据第一端口的端口号或/和包头信息在流表中未查询到转发第一数据包的流表项,则执行S625。
S625、交换机生成第二packet-in消息,第二packet-in消息包括包头信息和第一端口的端口号。
交换机确定根据第一端口的端口号或/和包头信息在流表中未查询到转发第一数据包的流表项后,生成第二packet-in消息。
S626、交换机向控制器发送第二packet-in消息。
交换机生成第二packet-in消息后,向控制器发送第二packet-in消息。
S627、控制器接收交换机发送的第二packet-in消息。
交换机向控制器发送第二packet-in消息,控制器接收交换机发送的第二packet-in消息。
S628、控制器确定缓存队列标识j对应的缓存队列和第一端口的端口号。
由于控制器知道软件定义网络中的所有网络信息,因此,控制器接收到交换机发送的第二packet-in消息后,解析第二packet-in消息得到包头信息,可以确定第一端口的端口号,第一端口的端口号为交换机接收第一数据包和第i数据包的端口对应的端口号,第一数据包与N个数据包属于同一个数据流的数据包。另外,控制器还知道交换机中的哪个缓存队列为空,即从空的缓存队列列表中选择一个缓存队列,如果没有空的缓存队列,控制器可以创建一个缓存队列。本申请实施例假设确定的空的缓存队列为缓存队列标识j对应的缓存队列。
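图9中控制器收到第二packet-in消息后确定缓存队列与端口并生成第一流表项(S627至S629)的过程,可以用如下片段示意(控制器掌握的全网信息此处用字典近似,字段名均为假设):

```python
# 控制器侧示意:由第二packet-in确定入口端口与空缓存队列,生成第一流表项
def on_second_packet_in(msg, free_queues_by_switch):
    switch, in_port, headers = msg["switch"], msg["in_port"], msg["header"]
    free = free_queues_by_switch[switch]
    queue_id = free.pop() if free else "new-queue"   # 无空队列时创建一个
    first_entry = {
        "match": {"in_port": in_port, **headers},
        "priority": 1,
        "action": ("enqueue_cache", queue_id),       # 第一处理方式:存入缓存队列
    }
    return queue_id, first_entry

queue_id, entry = on_second_packet_in(
    {"switch": "s102", "in_port": "S1-1",
     "header": {"ip_dst": "126.136.134.221"}},
    {"s102": ["j"]})
```

随后控制器即可将该第一流表项封装在第二flow-mod消息中下发给交换机。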
S629、控制器生成第一流表项。
控制器确定缓存队列标识j对应的缓存队列和第一端口的端口号后,生成第一流表项,第一流表项包括第一匹配特征和第一处理方式,第一匹配特征包括第一入口号和第一匹配字段,第一入口号为第一端口的端口号,第一处理方式为与第一匹配特征匹配的数据包存储到缓存队列标识j对应的缓存队列。
S630、控制器生成第二flow-mod消息,第二flow-mod消息包括第一流表项。
控制器生成第一流表项后,生成第二flow-mod消息。
S631、控制器向交换机发送第二flow-mod消息。
控制器通过第二flow-mod消息,将第一流表项发送给交换机。
S632、交换机接收控制器发送的第二flow-mod消息。
控制器向交换机发送第二flow-mod消息之后,交换机接收控制器发送的第二flow-mod消息,获取第一流表项。
S633、控制器生成packet-out消息。
packet-out消息用于指示交换机从第二端口转发第一数据包。
需要说明的是,本申请实施例提供的基于软件定义网络的数据发送方法步骤的先后顺序可以进行适当调整。示例的,如S633和S631之间的前后顺序可以互换,即可先执行S633再执行S631,或S633和S631同时执行,本申请实施例所述的方案是对基于软件定义网络的数据发送方法的一种示例性的实现方式,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化的方法,都应涵盖在本发明的保护范围之内,因此不再赘述。
S634、控制器向交换机发送packet-out消息。
S635、交换机接收控制器发送的packet-out消息。
S636、交换机通过第二端口转发第一数据包。
交换机接收控制器发送的packet-out消息后,根据packet-out消息的指示,通过第二端口转发第一数据包。
进一步的,在交换机获取到第二流表项,根据第二流表项转发缓存队列标识j对应的缓存队列中存储的数据包,即S611之后,交换机还可以获取第三流表项,以便于交换机从入口队列获取到同一数据流的数据包后,根据第三流表项转发数据包。如图10所示,本申请实施例还包括以下详细步骤:
S637、控制器生成第三流表项。
第三流表项用于交换机转发入口队列里的数据包。第三流表项包括第一匹配特征和第二处理方式。第一匹配特征包括第一入口号和第一匹配字段,第一入口号为第一端口的端口号。第二处理方式为从交换机的第二端口转发缓存队列标识j对应的缓存队列中存储的数据包。
S638、控制器生成第三flow-mod消息,第三flow-mod消息包括第三流表项。
S639、控制器向交换机发送第三flow-mod消息。
控制器通过第三flow-mod消息,将第三流表项发送给交换机。
S640、交换机接收控制器发送的第三flow-mod消息。
交换机接收控制器发送的第三flow-mod消息,获取到第三流表项后,以便于根据第三流表项转发从入口队列接收到的数据包。
在交换机接收控制器发送的第三flow-mod消息之后,交换机可以执行S641和S642。
S641、交换机删除第一流表项。
从而,在交换机获取到转发从入口队列接收到的同一数据流的数据包的第三流表项后,用于缓存数据包的第一流表项就失去了存在的意义了,此时,通过删除第一流表项,增加交换机的存储空间。
S642、交换机删除第二流表项。
从而,在交换机通过第二流表项将存储到缓存队列的数据包转发完成后,用于处理缓存数据包的第二流表项就失去了存在的意义了,此时,通过删除第二流表项,增加交换机的存储空间。可选的,也可以在交换机根据第二流表项转发缓存队列标识j对应的缓存队列中存储的数据包之后,执行S642。
另外,控制器向交换机发送第三flow-mod消息之后,还可以执行S643。
S643、控制器删除第一流表项和第二流表项。
以便于增加控制器的存储空间。
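S641至S643中删除已无用的第一、第二流表项并启用第三流表项的清理动作,可以用如下片段示意(动作名enqueue_cache、output_cache等均为举例假设):

```python
# 第三流表项写入后,删除用于缓存的第一流表项与处理缓存的第二流表项
def cleanup_after_third_entry(flow_table, third_entry):
    flow_table[:] = [e for e in flow_table
                     if e["action"][0] not in ("enqueue_cache", "output_cache")]
    flow_table.append(third_entry)

flow_table = [
    {"match": {"in_port": "S1-1"}, "priority": 1,
     "action": ("enqueue_cache", "j")},     # 第一流表项
    {"match": {"in_port": "cache:j"}, "priority": 1,
     "action": ("output_cache", "j")},      # 第二流表项
]
third_entry = {"match": {"in_port": "S1-1"}, "priority": 1,
               "action": ("output", "port-2")}
cleanup_after_third_entry(flow_table, third_entry)
```

后续从入口队列到达的同流数据包即由保留下来的第三流表项直接转发。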
为了更清楚地理解本申请实施例所述的基于软件定义网络的数据发送方法,下面基于图1所示的软件定义网络,对本申请实施例所述的基于软件定义网络的数据发送方法进行举例说明。
示例的,假设需要将数据包从交换机102传输到交换机106,第一数据包的源IP地址为102.224.112.01,目的IP地址为126.136.134.221。交换机102从第一端口(S1-1)接收到第一数据包,第一数据包包括包头信息和数据,包头信息包括源IP地址和目的IP地址,第一端口的端口号为S1-1,源IP地址为102.224.112.01,目的IP地址为126.136.134.221,然后,交换机102确定根据第一端口的端口号或/和包头信息在流表中未查询到转发第一数据包的流表项,交换机102确定缓存队列标识j对应的缓存队列为空,交换机102将第一数据包存储到缓存队列标识j对应的缓存队列,同时,生成第一流表项,如表1所示。本申请实施例假设第一流表项由交换机102自己生成,这里只是示例性说明,对此不作限定,当然,第一流表项也可以如上述实施例所述由控制器101生成,由交换机102接收控制器101发送的第一流表项。
表1第一流表项
入口号:S1-1;匹配字段:源IP地址=102.224.112.01,目的IP地址=126.136.134.221;处理方式:存储到缓存队列标识j对应的缓存队列
同时,交换机102生成第一packet-in消息,第一packet-in消息包括包头信息和缓存队列标识j,向控制器101发送第一packet-in消息。
控制器101接收到第一packet-in消息后,生成第二流表项,如表2所示。
表2第二流表项
入口号:缓存队列标识j;匹配字段:源IP地址=102.224.112.01,目的IP地址=126.136.134.221;处理方式:从交换机102的第二端口转发缓存队列标识j对应的缓存队列中存储的数据包
随后,控制器101生成第一flow-mod消息,第一flow-mod消息包括第二流表项,向交换机102发送第一flow-mod消息,生成open-cache-port消息,open-cache-port消息用于指示交换机102转发缓存队列标识j对应的缓存队列中存储的数据包,缓存队列标识j对应的缓存队列的优先级高于交换机102的入口队列的优先级,向交换机102发送open-cache-port消息。
需要说明的是,控制器101接收到第一packet-in消息后,还需要根据包头信息计算转发路径,本申请实施例假设转发路径为交换机102到交换机103,再到交换机106,这里只是示例性说明,对此不作限定,当然,转发路径也可以为交换机102-交换机104-交换机105-交换机106。然后,针对转发路径上除了交换机102之外的所有交换机,即交换机103和交换机106生成转发数据包的流表项,并向交换机103和交换机106发送转发数据包的流表项。
同时,在控制器101生成第二流表项的期间,交换机102还可以从第一端口(S1-1)接收第i数据包,根据第一端口的端口号或/和包头信息查询流表,得到第一流表项,交换机102根据第一流表项将第i数据包存储到缓存队列标识j对应的缓存队列。
在交换机102接收到第一flow-mod消息和open-cache-port消息后,根据第二流表项转发缓存队列标识j对应的缓存队列中存储的数据包。
进一步的,控制器101生成第三流表项,如表3所示。
表3第三流表项
入口号:S1-1;匹配字段:源IP地址=102.224.112.01,目的IP地址=126.136.134.221;处理方式:从交换机102的第二端口转发数据包
控制器101生成第三flow-mod消息,向交换机102发送第三flow-mod消息,第三flow-mod消息包括第三流表项。然后,控制器101删除第一流表项和第二流表项。
交换机102接收控制器101发送的第三flow-mod消息,获取第三流表项。之后,再在第一端口(S1-1)上接收到同一数据流的数据包时,根据第三流表项进行转发。
本申请实施例可应用于任何基于软件定义网络架构的网络中,包括有线网络和无线网络。另外,本申请实施例中的交换机不限于OpenFlow交换机,可以是任何支持“匹配-转发”操作的可编程交换机。
上述主要从各个网元之间交互的角度对本申请实施例提供的方案进行了介绍。可以理解的是,各个网元,例如交换机和控制器为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的算法步骤,本申请能够以硬件或硬件和软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例对交换机和控制器进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下,图11示出了上述实施例中涉及的交换机的一种可能的组成示意图,如图11所示,该交换机可以包括:接收单元1101和处理单元1102。
其中,接收单元1101,用于支持交换机执行图6所示的基于软件定义网络的数据发送方法中的S601、S607、S610,图7所示的基于软件定义网络的数据发送方法中的S601、S607、S610,图8所示的基于软件定义网络的数据发送方法中的S601、S607、S610、S615,图9所示的基于软件定义网络的数据发送的方法中的S601、S607、S610、S623、S632、S635,图10所示的基于软件定义网络的数据发送的方法中的S640。
处理单元1102,用于支持交换机执行图6所示的基于软件定义网络的数据发送方法中的S602、S603、S611,图7所示的基于软件定义网络的数据发送方法中的S602、S603、S611,图8所示的基于软件定义网络的数据发送方法中的S616、S617、S618、S619、S620、S602、S603、S611,图9所示的基于软件定义网络的数据发送的方法中的S624、S625、S636、S602、S603、S611,图10所示的基于软件定义网络的数据发送的方法中的S641、S642。
在本申请实施例中,进一步的,如图11所示,该交换机还可以包括:发送单元1103。
发送单元1103,用于支持交换机执行图8所示的基于软件定义网络的数据发送方法中的S621,图9所示的基于软件定义网络的数据发送的方法中的S626。
需要说明的是,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
本申请实施例提供的交换机,用于执行上述基于软件定义网络的数据发送方法,因此可以达到与上述基于软件定义网络的数据发送方法相同的效果。
在采用集成的单元的情况下,图12示出了上述实施例中所涉及的交换机的另一种可能的组成示意图。如图12所示,该交换机包括:处理模块1201和通信模块1202。
处理模块1201用于对交换机的动作进行控制管理,例如,处理模块1201用于支持交换机执行图6所示的S602、S603、S611,图7所示的S602、S603、S611,图8所示的S616、S617、S618、S619、S620、S602、S603、S611,图9所示的S624、S625、S636、S602、S603、S611,图10所示的S641、S642、和/或用于本文所描述的技术的其它过程。通信模块1202用于支持交换机与其他网络实体的通信,例如与图6、图7、图8、图9和图10中示出的功能模块或网络实体之间的通信。交换机还可以包括存储模块1203,用于存储交换机的程序代码和数据。
其中,处理模块1201可以是处理器或控制器。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。通信模块1202可以是收发器、收发电路或通信接口等。存储模块1203可以是存储器。
当处理模块1201为处理器,通信模块1202为通信接口,存储模块1203为存储器时,本申请实施例所涉及的交换机可以为图5所示的计算机设备。
在采用对应各个功能划分各个功能模块的情况下,图13示出了上述实施例中涉及的控制器的一种可能的组成示意图,如图13所示,该控制器可以包括:处理单元1301和发送单元1302。
其中,处理单元1301,用于支持控制器执行图6所示的基于软件定义网络的数据发送方法中的S604、S605、S608,图7所示的基于软件定义网络的数据发送方法中的S604、S605、S608、S612、S613,图8所示的基于软件定义网络的数据发送方法中的S604、S612、S613、S605、S608,图9所示的基于软件定义网络的数据发送的方法中的S628、S629、S630、S633、S604、S612、S613、S605、S608,图10所示的基于软件定义网络的数据发送的方法中的S637、S638、S643。
发送单元1302,用于支持控制器执行图6所示的基于软件定义网络的数据发送方法中的S606、S609,图7所示的基于软件定义网络的数据发送方法中的S606、S609、S614,图8所示的基于软件定义网络的数据发送方法中的S614、S606、S609,图9所示的基于软件定义网络的数据发送的方法中的S614、S606、S609、S631、S634,图10所示的基于软件定义网络的数据发送的方法中的S639。
在本申请实施例中,进一步的,如图13所示,该控制器还可以包括:接收单元1303。
接收单元1303,用于支持控制器执行图8所示的基于软件定义网络的数据发送方法中的S622,图9所示的基于软件定义网络的数据发送的方法中的S627。
需要说明的是,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
本申请实施例提供的控制器,用于执行上述基于软件定义网络的数据发送方法,因此可以达到与上述基于软件定义网络的数据发送方法相同的效果。
在采用集成的单元的情况下,图14示出了上述实施例中所涉及的控制器的另一种可能的组成示意图。如图14所示,该控制器包括:处理模块1401和通信模块1402。
处理模块1401用于对控制器的动作进行控制管理。例如,处理模块1401用于支持控制器执行图6所示的S604、S605、S608,图7所示的S604、S612、S613、S605、S608,图8所示的S604、S612、S613、S605、S608,图9所示的S628、S629、S630、S633、S604、S612、S613、S605、S608,图10所示的S637、S638、S643、和/或用于本文所描述的技术的其它过程。通信模块1402用于支持控制器与其他网络实体的通信,例如与图6、图7、图8、图9和图10中示出的功能模块或网络实体之间的通信。控制器还可以包括存储模块1403,用于存储控制器的程序代码和数据。
其中,处理模块1401可以是处理器或控制器。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。通信模块1402可以是收发器、收发电路或通信接口等。存储模块1403可以是存储器。
当处理模块1401为处理器,通信模块1402为收发器,存储模块1403为存储器时,本申请实施例所涉及的控制器可以为图5所示的计算机设备。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机、芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (32)

  1. 一种基于软件定义网络SDN的数据发送方法,其特征在于,包括:
    交换机从第一端口接收第i数据包,所述第i数据包包括包头信息,其中,i为正整数,i的取值为2到N,N个数据包的包头信息相同,所述N个数据包属于同一个数据流的数据包;
    所述交换机根据所述第一端口的端口号或/和所述包头信息查询流表,得到第一流表项,所述第一流表项包括第一匹配特征和第一处理方式,所述第一匹配特征包括第一入口号和第一匹配字段,所述第一入口号为所述第一端口的端口号,所述第一匹配字段包括全部或部分所述包头信息,所述第一处理方式为与所述第一匹配特征匹配的数据包存储到缓存队列标识j对应的缓存队列;
    所述交换机根据所述第一流表项将所述第i数据包存储到所述缓存队列标识j对应的缓存队列;
    所述交换机接收控制器发送的第一流配置flow-mod消息,所述第一flow-mod消息包括第二流表项,所述第二流表项包括第二匹配特征和第二处理方式,所述第二匹配特征包括第二入口号和所述第一匹配字段,所述第二入口号为所述缓存队列标识j,所述第二处理方式为从所述交换机的第二端口转发所述缓存队列标识j对应的缓存队列中存储的数据包;
    所述交换机接收所述控制器发送的开缓存口open-cache-port消息,所述open-cache-port消息用于指示所述交换机转发所述缓存队列标识j对应的缓存队列中存储的数据包,所述缓存队列标识j对应的缓存队列的优先级高于所述交换机的入口队列的优先级;
    所述交换机根据所述第二流表项转发所述缓存队列标识j对应的缓存队列中存储的数据包。
  2. 根据权利要求1所述的方法,其特征在于,在所述交换机从第一端口接收第i数据包之前,所述方法还包括:
    所述交换机从所述第一端口接收第一数据包,所述第一数据包包括所述包头信息,所述第一数据包与所述N个数据包属于同一个数据流的数据包;
    所述交换机确定根据所述第一端口的端口号或/和所述包头信息在所述流表中未查询到转发所述第一数据包的流表项;
    所述交换机确定所述缓存队列标识j对应的缓存队列;
    所述交换机将所述第一数据包存储到所述缓存队列标识j对应的缓存队列;
    所述交换机生成第一入包packet-in消息,所述第一packet-in消息包括所述包头信息和所述缓存队列标识j;
    所述交换机向所述控制器发送所述第一packet-in消息。
  3. 根据权利要求2所述的方法,其特征在于,在所述交换机将所述第一数据包存储到所述缓存队列标识j对应的缓存队列之后,所述方法还包括:
    所述交换机生成所述第一流表项。
  4. 根据权利要求1所述的方法,其特征在于,在所述交换机从第一端口接收第i数据包之前,所述方法还包括:
    所述交换机从所述第一端口接收第一数据包,所述第一数据包包括所述包头信息,所述第一数据包与所述N个数据包属于同一个数据流的数据包;
    所述交换机确定根据所述第一端口的端口号或/和所述包头信息在所述流表中未查询到转发所述第一数据包的流表项;
    所述交换机生成第二packet-in消息,所述第二packet-in消息包括所述包头信息和所述第一端口的端口号;
    所述交换机向所述控制器发送所述第二packet-in消息;
    所述交换机接收所述控制器发送的第二flow-mod消息,所述第二flow-mod消息包括所述第一流表项;
    所述交换机接收所述控制器发送的出包packet-out消息,所述packet-out消息用于指示所述交换机从所述第二端口转发所述第一数据包;
    所述交换机通过所述第二端口转发所述第一数据包。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,在所述交换机接收所述控制器发送的开缓存口open-cache-port消息之后,所述方法还包括:
    所述交换机接收所述控制器发送的第三flow-mod消息,所述第三flow-mod消息包括第三流表项,所述第三流表项包括所述第一匹配特征和所述第二处理方式。
  6. 根据权利要求5所述的方法,其特征在于,在所述交换机接收所述控制器发送的第三flow-mod消息之后,所述方法还包括:
    所述交换机删除所述第一流表项。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,在所述交换机根据所述第二流表项转发所述缓存队列标识j对应的缓存队列中存储的数据包之后,所述方法还包括:
    所述交换机删除所述第二流表项。
  8. 一种基于软件定义网络SDN的数据发送方法,其特征在于,包括:
    控制器生成第二流表项,所述第二流表项包括第二匹配特征和第二处理方式,所述第二匹配特征包括第二入口号和第一匹配字段,所述第二入口号为缓存队列标识j,所述第一匹配字段包括全部或部分包头信息,所述包头信息为交换机从第一端口接收到的第i数据包包括的包头信息,其中,i为正整数,i的取值为2到N,N个数据包的包头信息相同,所述N个数据包属于同一个数据流的数据包,所述第二处理方式为从所述交换机的第二端口转发所述缓存队列标识j对应的缓存队列中存储的数据包;
    所述控制器生成第一流配置flow-mod消息,所述第一flow-mod消息包括所述第二流表项;
    所述控制器向所述交换机发送所述第一flow-mod消息;
    所述控制器生成开缓存口open-cache-port消息,所述open-cache-port消息用于指示所述交换机转发所述缓存队列标识j对应的缓存队列中存储的数据包,所述缓存队列标识j对应的缓存队列的优先级高于所述交换机的入口队列的优先级;
    所述控制器向所述交换机发送所述open-cache-port消息。
  9. 根据权利要求8所述的方法,其特征在于,在所述控制器生成第二流表项之前,所述方法还包括:
    所述控制器接收所述交换机发送的第一入包packet-in消息,所述第一packet-in消息包括所述包头信息和所述缓存队列标识j。
  10. 根据权利要求8所述的方法,其特征在于,在所述控制器生成第二流表项之前,所述方法还包括:
    所述控制器接收所述交换机发送的第二packet-in消息,所述第二packet-in消息包括所述包头信息和所述第一端口的端口号;
    所述控制器确定所述缓存队列标识j对应的缓存队列和所述第一端口的端口号,所述第一端口的端口号为所述交换机接收第一数据包和所述第i数据包的端口对应的端口号,所述第一数据包与所述N个数据包属于同一个数据流的数据包;
    所述控制器生成第一流表项,所述第一流表项包括第一匹配特征和第一处理方式,所述第一匹配特征包括第一入口号和所述第一匹配字段,所述第一入口号为所述第一端口的端口号,所述第一处理方式为与所述第一匹配特征匹配的数据包存储到所述缓存队列标识j对应的缓存队列;
    所述控制器生成第二flow-mod消息,所述第二flow-mod消息包括所述第一流表项;
    所述控制器向所述交换机发送所述第二flow-mod消息;
    所述控制器生成出包packet-out消息,所述packet-out消息用于指示所述交换机从所述第二端口转发所述第一数据包;
    所述控制器向所述交换机发送所述packet-out消息。
  11. 根据权利要求9或10所述的方法,其特征在于,在所述控制器接收所述交换机发送的第一入包packet-in消息或第二packet-in消息之后,所述方法还包括:
    所述控制器根据所述包头信息计算转发路径,所述转发路径上包括M个交换机,所述M个交换机包括接收第i数据包的所述交换机;
    所述控制器生成所述转发路径上除了接收第i数据包的所述交换机之外的M-1个其他交换机中每个交换机所需要使用的流表项;
    所述控制器向所述转发路径上除了接收第i数据包的所述交换机之外的M-1个其他交换机中每个交换机发送所需要使用的流表项。
  12. 根据权利要求8-11任一项所述的方法,其特征在于,在所述控制器向所述交换机发送所述open-cache-port消息之后,所述方法还包括:
    所述控制器生成第三流表项,所述第三流表项包括第一匹配特征和所述第二处理方式,所述第一匹配特征包括第一入口号和所述第一匹配字段,所述第一入口号为所述第一端口的端口号;
    所述控制器生成第三flow-mod消息,所述第三flow-mod消息包括所述第三流表项;
    所述控制器向所述交换机发送所述第三flow-mod消息。
  13. 根据权利要求12所述的方法,其特征在于,在所述控制器向所述交换机发送所述第三flow-mod消息之后,所述方法还包括:
    所述控制器删除第一流表项和所述第二流表项。
  14. 一种交换机,其特征在于,包括:
    接收单元,用于从第一端口接收第i数据包,所述第i数据包包括包头信息,其中,i为正整数,i的取值为2到N,N个数据包的包头信息相同,所述N个数据包属于同一个数据流的数据包;
    处理单元,用于根据所述第一端口的端口号或/和所述包头信息查询流表,得到第一流表项,所述第一流表项包括第一匹配特征和第一处理方式,所述第一匹配特征包括第一入口号和第一匹配字段,所述第一入口号为所述第一端口的端口号,所述第一匹配字段包括全部或部分所述包头信息,所述第一处理方式为与所述第一匹配特征匹配的数据包存储到缓存队列标识j对应的缓存队列;
    所述处理单元,还用于根据所述第一流表项将所述第i数据包存储到所述缓存队列标识j对应的缓存队列;
    所述接收单元,还用于接收所述控制器发送的第一流配置flow-mod消息,所述第一flow-mod消息包括第二流表项,所述第二流表项包括第二匹配特征和第二处理方式,所述第二匹配特征包括第二入口号和所述第一匹配字段,所述第二入口号为所述缓存队列标识j,所述第二处理方式为从所述交换机的第二端口转发所述缓存队列标识j对应的缓存队列中存储的数据包;
    所述接收单元,还用于接收所述控制器发送的开缓存口open-cache-port消息,所述open-cache-port消息用于指示所述交换机转发所述缓存队列标识j对应的缓存队列中存储的数据包,所述缓存队列标识j对应的缓存队列的优先级高于所述交换机的入口队列的优先级;
    所述处理单元,还用于根据所述第二流表项转发所述缓存队列标识j对应的缓存队列中存储的数据包。
  15. 根据权利要求14所述的交换机,其特征在于,
    所述接收单元,还用于从所述第一端口接收第一数据包,所述第一数据包包括所述包头信息,所述第一数据包与所述N个数据包属于同一个数据流的数据包;
    所述处理单元,还用于确定根据所述第一端口的端口号或/和所述包头信息在所述流表中未查询到转发所述第一数据包的流表项;
    所述处理单元,还用于确定所述缓存队列标识j对应的缓存队列;
    所述处理单元,还用于将所述第一数据包存储到所述缓存队列标识j对应的缓存队列;
    所述处理单元,还用于生成第一入包packet-in消息,所述第一packet-in消息包括所述包头信息和所述缓存队列标识j;
    所述交换机还包括:
    发送单元,用于向所述控制器发送所述第一packet-in消息。
  16. 根据权利要求15所述的交换机,其特征在于,
    所述处理单元,还用于生成所述第一流表项。
  17. 根据权利要求14所述的交换机,其特征在于,
    所述接收单元,还用于从所述第一端口接收第一数据包,所述第一数据包包括所述包头信息,所述第一数据包与所述N个数据包属于同一个数据流的数据包;
    所述处理单元,还用于确定根据所述第一端口的端口号或/和所述包头信息在所述流表中未查询到转发所述第一数据包的流表项;
    所述处理单元,还用于生成第二packet-in消息,所述第二packet-in消息包括所述包头信息和所述第一端口的端口号;
    所述交换机还包括:
    发送单元,用于向所述控制器发送所述第二packet-in消息;
    所述接收单元,还用于接收所述控制器发送的第二flow-mod消息,所述第二flow-mod消息包括所述第一流表项;
    所述接收单元,还用于接收所述控制器发送的出包packet-out消息,所述packet-out消息用于指示所述交换机从所述第二端口转发所述第一数据包;
    所述发送单元,还用于通过所述第二端口转发所述第一数据包。
  18. 根据权利要求14-17任一项所述的交换机,其特征在于,
    所述接收单元,还用于接收所述控制器发送的第三flow-mod消息,所述第三flow-mod消息包括第三流表项,所述第三流表项包括所述第一匹配特征和所述第二处理方式。
  19. 根据权利要求14-18任一项所述的交换机,其特征在于,
    所述处理单元,还用于删除所述第一流表项。
  20. 根据权利要求14-19任一项所述的交换机,其特征在于,
    所述处理单元,还用于删除所述第二流表项。
  21. 一种控制器,其特征在于,包括:
    处理单元,用于生成第二流表项,所述第二流表项包括第二匹配特征和第二处理方式,所述第二匹配特征包括第二入口号和第一匹配字段,所述第二入口号为缓存队列标识j,所述第一匹配字段包括全部或部分包头信息,所述包头信息为交换机从第一端口接收到的第i数据包包括的包头信息,其中,i为正整数,i的取值为2到N,N个数据包的包头信息相同,所述N个数据包属于同一个数据流的数据包,所述第二处理方式为从所述交换机的第二端口转发所述缓存队列标识j对应的缓存队列中存储的数据包;
    所述处理单元,还用于生成第一流配置flow-mod消息,所述第一flow-mod消息包括所述第二流表项;
    发送单元,用于向所述交换机发送所述第一flow-mod消息;
    所述处理单元,还用于生成开缓存口open-cache-port消息,所述open-cache-port消息用于指示所述交换机转发所述缓存队列标识j对应的缓存队列中存储的数据包,所述缓存队列标识j对应的缓存队列的优先级高于所述交换机的入口队列的优先级;
    所述发送单元,还用于向所述交换机发送所述open-cache-port消息。
  22. 根据权利要求21所述的控制器,其特征在于,所述控制器还包括:
    接收单元,用于接收所述交换机发送的第一入包packet-in消息,所述第一packet-in消息包括所述包头信息和所述缓存队列标识j。
  23. 根据权利要求21所述的控制器,其特征在于,所述控制器还包括:
    接收单元,用于接收所述交换机发送的第二packet-in消息,所述第二packet-in消息包括所述包头信息和所述第一端口的端口号;
    所述处理单元,还用于确定所述缓存队列标识j对应的缓存队列和所述第一端口的端口号,所述第一端口的端口号为所述交换机接收第一数据包和所述第i数据包的端口对应的端口号,所述第一数据包与所述N个数据包属于同一个数据流的数据包;
    所述处理单元,还用于生成第一流表项,所述第一流表项包括第一匹配特征和第一处理方式,所述第一匹配特征包括第一入口号和所述第一匹配字段,所述第一入口号为所述第一端口的端口号,所述第一处理方式为与所述第一匹配特征匹配的数据包存储到所述缓存队列标识j对应的缓存队列;
    所述处理单元,还用于生成第二flow-mod消息,所述第二flow-mod消息包括所述第一流表项;
    所述发送单元,还用于向所述交换机发送所述第二flow-mod消息;
    所述处理单元,还用于生成出包packet-out消息,所述packet-out消息用于指示所述交换机从所述第二端口转发所述第一数据包;
    所述发送单元,还用于向所述交换机发送所述packet-out消息。
  24. 根据权利要求22或23所述的控制器,其特征在于,
    所述处理单元,还用于根据所述包头信息计算转发路径,所述转发路径上包括M个交换机,所述M个交换机包括接收第i数据包的所述交换机;
    所述处理单元,还用于生成所述转发路径上除了接收第i数据包的所述交换机之外的M-1个其他交换机中每个交换机所需要使用的流表项;
    所述发送单元,还用于向所述转发路径上除了接收第i数据包的所述交换机之外的M-1个其他交换机中每个交换机发送所需要使用的流表项。
  25. 根据权利要求21-24任一项所述的控制器,其特征在于,在所述发送单元向所述交换机发送所述open-cache-port消息之后:
    所述处理单元,还用于生成第三流表项,所述第三流表项包括第一匹配特征和所述第二处理方式,所述第一匹配特征包括第一入口号和所述第一匹配字段,所述第一入口号为所述第一端口的端口号;
    所述处理单元,还用于生成第三flow-mod消息,所述第三flow-mod消息包括所述第三流表项;
    所述发送单元,还用于向所述交换机发送所述第三flow-mod消息。
  26. 根据权利要求21-25任一项所述的控制器,其特征在于,
    所述处理单元,还用于删除第一流表项和所述第二流表项。
  27. 一种软件定义网络系统,其特征在于,包括:
    上述权利要求14-20中任一项权利要求所述的交换机和上述权利要求21-26中任一项权利要求所述的控制器。
  28. 一种通信装置,其特征在于,包括:至少一个处理器、存储器、总线和收发器,其中,所述存储器用于存储计算机程序,使得所述计算机程序被所述至少一个处理器执行时实现如权利要求1-7中任一项所述的基于软件定义网络SDN的数据发送方法或权利要求8-13中任一项所述的基于SDN的数据发送方法。
  29. 一种计算机可读存储介质,其特征在于,包括:计算机软件指令;
    当所述计算机软件指令在交换机或内置在交换机的芯片中运行时,使得所述交换机执行如权利要求1-7中任一项所述的基于软件定义网络SDN的数据发送方法。
  30. 一种计算机可读存储介质,其特征在于,包括:计算机软件指令;
    当所述计算机软件指令在控制器或内置在控制器的芯片中运行时,使得所述控制器执行如权利要求8-13中任一项所述的基于软件定义网络SDN的数据发送方法。
  31. 一种包含指令的计算机程序产品,其特征在于,当所述计算机程序产品在交换机或内置在交换机的芯片中运行时,使得所述交换机执行如权利要求1-7中任一项所述的基于软件定义网络SDN的数据发送方法。
  32. 一种包含指令的计算机程序产品,其特征在于,当所述计算机程序产品在控制器或内置在控制器的芯片中运行时,使得所述控制器执行如权利要求8-13中任一项所述的基于软件定义网络SDN的数据发送方法。
PCT/CN2018/112756 2017-10-30 2018-10-30 一种基于软件定义网络的数据发送方法、装置及系统 WO2019085907A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711035868.5 2017-10-30
CN201711035868.5A CN109729022B (zh) 2017-10-30 2017-10-30 一种基于软件定义网络的数据发送方法、装置及系统

Publications (1)

Publication Number Publication Date
WO2019085907A1 true WO2019085907A1 (zh) 2019-05-09

Family

ID=66291410

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/112756 WO2019085907A1 (zh) 2017-10-30 2018-10-30 一种基于软件定义网络的数据发送方法、装置及系统

Country Status (2)

Country Link
CN (1) CN109729022B (zh)
WO (1) WO2019085907A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110198270A (zh) * 2019-05-10 2019-09-03 华中科技大学 一种sdn网络中基于路径与ip地址跳变的主动防御方法
CN110177060B (zh) * 2019-05-15 2020-12-08 华中科技大学 一种面向sdn网络的时序侧信道攻击的主动防御方法
CN112242914B (zh) * 2019-07-18 2023-10-03 华为技术有限公司 网络异常根因定位方法、装置及系统、计算机存储介质
CN115208841B (zh) * 2021-07-09 2024-01-26 江苏省未来网络创新研究院 一种基于sdn的工业互联网标识流量缓存处理方法
CN115037708B (zh) * 2022-08-10 2022-11-18 深圳星云智联科技有限公司 一种报文处理方法、系统、装置及计算机可读存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140269320A1 (en) * 2013-03-15 2014-09-18 International Business Machines Corporation Scalable Flow and Cogestion Control with OpenFlow
CN104301249A (zh) * 2014-10-14 2015-01-21 杭州华三通信技术有限公司 一种sdn流表下发方法和装置
CN105099920A (zh) * 2014-04-30 2015-11-25 杭州华三通信技术有限公司 一种设置sdn流表项的方法和装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2692096A1 (en) * 2011-03-29 2014-02-05 NEC Europe Ltd. User traffic accountability under congestion in flow-based multi-layer switches
CN103023728B (zh) * 2013-01-15 2016-03-02 中国人民解放军信息工程大学 流监控方法
CN105791169A (zh) * 2014-12-16 2016-07-20 电信科学技术研究院 软件定义网络中交换机转发控制、转发方法及相关设备
CN106453138B (zh) * 2016-11-25 2020-03-06 新华三技术有限公司 一种报文处理方法和装置
CN107181663A (zh) * 2017-06-28 2017-09-19 联想(北京)有限公司 一种报文处理方法、相关设备及计算机可读存储介质

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140269320A1 (en) * 2013-03-15 2014-09-18 International Business Machines Corporation Scalable Flow and Cogestion Control with OpenFlow
CN105099920A (zh) * 2014-04-30 2015-11-25 杭州华三通信技术有限公司 一种设置sdn流表项的方法和装置
CN104301249A (zh) * 2014-10-14 2015-01-21 杭州华三通信技术有限公司 一种sdn流表下发方法和装置

Also Published As

Publication number Publication date
CN109729022A (zh) 2019-05-07
CN109729022B (zh) 2020-07-28

Similar Documents

Publication Publication Date Title
WO2019085907A1 (zh) 一种基于软件定义网络的数据发送方法、装置及系统
US20230239368A1 (en) Accelerated network packet processing
JP6938766B2 (ja) パケット制御方法およびネットワーク装置
US10057387B2 (en) Communication traffic processing architectures and methods
US9654406B2 (en) Communication traffic processing architectures and methods
CN110022264B (zh) 控制网络拥塞的方法、接入设备和计算机可读存储介质
WO2019174536A1 (zh) 拥塞控制方法及网络设备
US10193831B2 (en) Device and method for packet processing with memories having different latencies
WO2021047515A1 (zh) 一种服务路由方法及装置
US20220303217A1 (en) Data Forwarding Method, Data Buffering Method, Apparatus, and Related Device
WO2020063298A1 (zh) 处理tcp报文的方法、toe组件以及网络设备
WO2019129167A1 (zh) 一种处理数据报文的方法和网卡
KR20190112804A (ko) 패킷 처리 방법 및 장치
WO2023186046A1 (zh) 一种发送报文的方法和装置
US11165705B2 (en) Data transmission method, device, and computer storage medium
CN109802894B (zh) 流量控制方法及装置
CN112953967A (zh) 网络协议卸载装置和数据传输系统
CN106603409B (zh) 一种数据处理系统、方法及设备
CN106850440B (zh) 一种面向多地址共享数据路由包的路由器、路由方法及其芯片
US6728778B1 (en) LAN switch with compressed packet storage
Lu et al. Impact of hpc cloud networking technologies on accelerating hadoop rpc and hbase
CN113297117B (zh) 数据传输方法、设备、网络系统及存储介质
US11271897B2 (en) Electronic apparatus for providing fast packet forwarding with reference to additional network address translation table
WO2024098757A1 (zh) 网络集群系统、报文传输方法及网络设备
Tzanakaki et al. Optical networking in support of user plane functions in 5G systems and beyond

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18874881

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18874881

Country of ref document: EP

Kind code of ref document: A1