WO2023010730A1 - Data packet parsing method and server - Google Patents

Data packet parsing method and server

Info

Publication number
WO2023010730A1
Authority
WO
WIPO (PCT)
Prior art keywords
thread
data
tunnel
tunnel type
vxlan
Prior art date
Application number
PCT/CN2021/135683
Other languages
English (en)
French (fr)
Inventor
吴情彪
曾伟
Original Assignee
武汉绿色网络信息服务有限责任公司
Priority date
Filing date
Publication date
Application filed by 武汉绿色网络信息服务有限责任公司
Publication of WO2023010730A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22: Parsing or analysis of headers
    • H04L12/00: Data switching networks
    • H04L12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46: Interconnection of networks
    • H04L12/4633: Interconnection of networks using encapsulation techniques, e.g. tunneling
    • H04L12/4641: Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Definitions

  • the invention relates to the technical field of communication, in particular to a data packet analysis method and server.
  • VXLAN (Virtual eXtensible LAN): scalable virtual local area network
  • in the related art, the same service process includes multiple threads, and different threads can share the data and rules of the service process. The threads need to perform different operations on the data packets received by the network card to obtain corresponding target data, but the target data obtained by one thread cannot be shared with the other threads, so at least two threads have to repeat the same operation to obtain the same target data before performing their subsequent operations, which lowers the overall work efficiency of the service process.
  • Embodiments of the present invention provide a data packet parsing method and a server.
  • By opening a second buffer area in the buffer area of the network card, the first thread obtains the inner layer IP and judges according to the tunnel rules whether the original data accesses the business system; if so, it determines the corresponding tunnel ID according to the inner layer IP and the tunnel rules and saves it to the second buffer area for the second thread to obtain and use. This solves the problem that, in the same service process, the second thread would otherwise need to repeat the same operation as the first thread to obtain the corresponding information, which makes the overall work efficiency of the service process low.
  • the embodiment of the present invention provides a data packet parsing method, which is applied to a server, the server includes a network card and a service process, the network card includes a cache area, the cache area includes a first cache area and a second cache area, and the service process Including the first thread and the second thread, the method for analyzing the data packet includes:
  • the network card receives a first vxlan data packet, and saves the first vxlan data packet to the first buffer area, the first vxlan data packet includes first encapsulation data and original data, and the first encapsulation data Including vni and outer layer IP, the original data includes inner layer IP;
  • the first thread parses the first encapsulated data from the first vxlan data packet and obtains the outer layer IP and the vni, and judges according to the outer layer IP whether the corresponding tunnel type is a preset tunnel type;
  • the first thread saves the tunnel type and the vni to the second cache area;
  • the first thread judges whether the tunnel type is a first preset tunnel type
  • the first thread obtains the inner layer IP from the first vxlan packet, and judges whether to access the service system according to the inner layer IP;
  • the first thread saves the corresponding tunnel ID to the second cache area according to the inner layer IP;
  • the second thread extracts the vni and the tunnel ID from the second cache area, determines second package data according to the vni and the tunnel ID, and saves the second package data to the A second buffer area, such that the original data located in the first buffer area and the second encapsulated data located in the second buffer area together form a second vxlan data packet.
  • before the second thread extracts the vni and the tunnel ID from the second cache area, determines the second encapsulation data according to the vni and the tunnel ID, and saves the second encapsulation data to the second cache area so that the original data located in the first cache area and the second encapsulation data located in the second cache area together constitute the second vxlan data packet, the method includes:
  • the second thread judges whether the tunnel type is a second preset tunnel type
  • the second thread extracts the vni from the second cache area, and determines a corresponding tunnel ID according to the vni.
  • the step in which the first thread parses the first encapsulated data from the first vxlan data packet and obtains the outer layer IP and the vni, and judges according to the outer layer IP whether the corresponding tunnel type is a preset tunnel type, includes:
  • the first thread obtains the outer layer IP, and determines the corresponding tunnel type according to the configuration table;
  • the first thread judges whether the corresponding tunnel type is a preset tunnel type according to the tunnel type.
  • the first thread obtains the inner layer IP from the first vxlan data packet, and the step of judging whether to access the service system according to the inner layer IP includes:
  • the first thread obtains the inner layer IP from the first vxlan packet, and searches for a plurality of service IP segments in the tunnel rules;
  • the first thread judges whether to access the service system according to whether the inner layer IP is included in one of the multiple service IP segments.
  • before the step in which the first thread parses the first encapsulated data from the first vxlan data packet and obtains the outer layer IP and the vni, and judges according to the outer layer IP whether the corresponding tunnel type is a preset tunnel type, the method includes:
  • the network card sends the pointer of the first vxlan packet to the first thread
  • the first thread accesses the first vxlan data packet according to the pointer of the first vxlan data packet.
  • before the step in which the second thread extracts the vni, the tunnel type and the tunnel ID from the second cache area, determines the second encapsulation data according to the vni, the tunnel type and the tunnel ID, and saves the second encapsulation data to the second cache area so that the original data located in the first cache area and the second encapsulation data located in the second cache area together constitute the second vxlan data packet, the method includes:
  • the network card sends the pointer of the original data to the second thread
  • the second thread accesses the original data according to the pointer of the original data.
  • after the step in which the second thread extracts the vni, the tunnel type and the tunnel ID from the second cache area, determines the second encapsulation data according to the vni, the tunnel type and the tunnel ID, and saves the second encapsulation data to the second cache area so that the original data located in the first cache area and the second encapsulation data located in the second cache area together constitute the second vxlan data packet, the method includes:
  • the network card determines a corresponding sending tunnel according to the second vxlan data packet
  • the network card sends the second vxlan data packet according to the sending tunnel.
  • An embodiment of the present invention provides a server, the server includes a network card and a service process, the network card includes a cache area, the cache area includes a first cache area and a second cache area, and the service process includes a first thread and a second thread ;
  • the network card is used to receive a first vxlan data packet and save the first vxlan data packet to the first buffer area, the first vxlan data packet includes first encapsulation data and original data, the first encapsulation data includes the vni and the outer layer IP, and the original data includes the inner layer IP;
  • the first thread is used to parse the first encapsulation data from the first vxlan data packet and obtain the outer layer IP and the vni, and to judge according to the outer layer IP whether the corresponding tunnel type is a preset tunnel type;
  • the first thread is also used to save the tunnel type and the vni to the second cache area;
  • the first thread is also used to determine whether the tunnel type is a first preset tunnel type
  • the first thread is also used to obtain the inner layer IP from the first vxlan data packet, and to judge according to the inner layer IP whether to access the business system;
  • the first thread is also used to save the corresponding tunnel ID to the second cache area according to the inner layer IP;
  • the second thread is used to extract the vni, the tunnel type and the tunnel ID from the second cache area, determine the second encapsulation data according to the vni, the tunnel type and the tunnel ID, and save the second encapsulation data to the second buffer area, so that the original data located in the first buffer area and the second encapsulation data located in the second buffer area together constitute a second vxlan data packet.
  • the first thread is also used to obtain the outer IP, and determine the corresponding tunnel type according to the configuration table;
  • the first thread is further configured to determine whether the corresponding tunnel type is a preset tunnel type according to the tunnel type.
  • the first thread is also used to obtain the inner layer IP from the first vxlan data packet, and search for multiple service IP segments in the tunnel rules;
  • the first thread is also used to judge whether to access the service system according to whether the inner layer IP is included in one of the multiple service IP segments.
  • the present invention provides a data packet parsing method and a server. The server includes a network card and a service process; the network card includes a buffer area, the buffer area includes a first buffer area and a second buffer area, and the service process includes a first thread and a second thread.
  • the first thread obtains the outer layer IP and the vni from the first encapsulation data; if the corresponding tunnel type is a preset tunnel type, it saves the tunnel type and the vni to the second buffer area; further, if the corresponding tunnel type is the first preset tunnel type, it obtains the inner layer IP.
  • the first thread then saves the corresponding tunnel ID to the second buffer area according to the inner layer IP; the second thread extracts the vni and the tunnel ID from the second buffer area.
  • the second thread uses the vni and the tunnel ID to determine the second encapsulation data and saves the second encapsulation data to the second buffer area, so that the original data and the second encapsulation data together constitute the second vxlan data packet.
  • the corresponding tunnel ID obtained by the first thread through data packet parsing, and the vni obtained by processing the first vxlan data packet, are stored in the second buffer area for the second thread to obtain and use.
  • this avoids the second thread repeating the data packet parsing step, or other steps, already performed by the first thread; moreover, while the first thread and the second thread obtain the corresponding information from the first vxlan data packet and perform their respective operations, the integrity of the first vxlan data packet is preserved, so that other threads can still obtain the information of the first vxlan data packet normally, and the server does not have to re-acquire the first vxlan data packet from the outside for other threads to use.
  • this solution improves the overall work efficiency of the service process.
  • FIG. 1 is a schematic diagram of a scene of a system for parsing data packets provided by an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a section of a cache area in a network card provided by an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of a first method for parsing data packets provided by an embodiment of the present invention
  • FIG. 4 is a schematic diagram of an interval of a cache area in another network card provided by an embodiment of the present invention.
  • Fig. 5 is a schematic structural diagram of the first vxlan data packet provided by the embodiment of the present invention.
  • FIG. 6 is a schematic flowchart of a second data packet parsing method provided by an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of a third data packet parsing method provided by an embodiment of the present invention.
  • FIG. 8 is a schematic flowchart of a fourth data packet parsing method provided by an embodiment of the present invention.
  • FIG. 9 is a schematic flowchart of a fifth data packet parsing method provided by an embodiment of the present invention.
  • FIG. 10 is a schematic flowchart of a sixth data packet parsing method provided by an embodiment of the present invention.
  • FIG. 11 is a schematic flowchart of a seventh data packet parsing method provided by an embodiment of the present invention.
  • FIG. 12 is a schematic diagram of signaling interaction of a data packet parsing method provided by an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of a server provided by an embodiment of the present invention.
  • FIG. 14 is a schematic structural diagram of another server provided by an embodiment of the present invention.
  • the terms "first", "second", etc. in the present invention are used to distinguish different objects, not to describe a specific order.
  • the terms “include” and “have”, as well as any variations thereof, are intended to cover a non-exclusive inclusion.
  • a process, method, system, product, or device that includes a series of steps or modules is not limited to the listed steps or modules, but optionally also includes steps or modules that are not listed, or optionally includes other steps or modules inherent to the process, method, product, or device.
  • the execution subject of the data packet parsing method provided in the embodiments of the present invention may be the server that performs the method, or an electronic device integrated with such a server; the server that performs the data packet parsing method may be implemented in hardware or in software.
  • Network card: a piece of computer hardware designed to allow computers to communicate on a computer network, allowing users to connect to each other via cables or wirelessly. Each network card has a unique 48-bit serial number called a MAC address, which is written in a piece of ROM on the network card.
  • a network card is not a self-contained autonomous unit because it does not carry its own power source but must use the power of the computer it is plugged into and be controlled by that computer. When the network card receives an erroneous frame, it discards the frame without notifying the computer it's plugged into. When the network card receives a correct frame, it notifies the computer using an interrupt and delivers it to the network layer in the protocol stack. When the computer wants to send an IP data packet, it is handed down from the protocol stack to the network card to be assembled into a frame and sent to the LAN.
  • a thread is the smallest unit of program execution, and a process is the smallest unit of resource allocation by the operating system; a process is composed of one or more threads, and threads are different execution paths of the code within a process. Processes are independent of each other, but threads under the same process share the program's memory space (including code segments, data sets, heaps, etc.) and some process-level resources (such as open files and signals), while the threads of one process are invisible to other processes; thread context switches are much faster than process context switches.
  • Cache: a data storage area shared by multiple hardware components or program processes running at different speeds or priorities. It acts as a speed smoother between high-speed and low-speed devices and temporarily stores data; frequently accessed data can be put into the buffer, reducing accesses to slow devices and improving system efficiency.
  • Packet: in a packet-switched network, a single message is divided into blocks of data called packets, which contain address information for the sender and receiver. These packets are then transported along different paths in one or more networks and reassembled at the destination.
  • Tunneling: an encapsulation technology that uses a network transmission protocol to encapsulate data packets generated by other protocols in its own data packets, and then transmits them in the network.
  • the tunnel can be regarded as a virtual point-to-point connection.
  • the original data is encapsulated at A, and after arriving at B, the encapsulation is removed and restored to the original data, thus forming a communication tunnel from A to B.
  • Tunnel technology refers to the whole process including encapsulation, transmission and decapsulation.
  • the tunnel is realized through the tunnel protocol, which stipulates the establishment, maintenance and deletion rules of the tunnel, and how to encapsulate the original data in the tunnel for transmission.
  • the embodiment of the invention provides a method and a server for parsing data packets. The details will be described respectively below.
  • FIG. 1 is a schematic diagram of a scene of a system for parsing data packets provided by an embodiment of the present invention.
  • the system for parsing data packets may include a network card 100 and a service process 10, and the service process 10 includes a first thread 200 and a second thread 300.
  • the cache area is located in the network card 100.
  • the network card 100 configures a space of 2048 bytes for the cache area, and each label in FIG. 2 indicates the serial number of the corresponding byte; for example, "0" means the 0th byte and "2047" means the 2047th byte.
  • the interval of the first 1600 bytes is the first buffer area for storing data packets, that is, the 0th byte to the 1599th byte are used to store data packets; further, up to 256 bytes can be chosen from the interval between the 1600th byte and the 2047th byte as the second buffer area for saving part of the information in the data packet. It should be noted that, once the first buffer area and the second buffer area are determined, if the first buffer area is known, the second buffer area can be located according to the preset relative position of the first byte of the first buffer area and the first byte of the second buffer area.
  • a preset interval may be reserved between the second buffer area and the first buffer area, so as to properly separate the data packet from the part of its information that is stored separately; for example, as shown in FIG. 2, the second buffer area can be the interval from the 1663th byte to the 1918th byte of the buffer area. Alternatively, the second buffer area and the first buffer area can be set adjacent to each other, with the second buffer area determined only by the set relative position of the first byte of the first buffer area and the first byte of the second buffer area.
  • the space of the second buffer area can be reasonably selected according to the length of the partial information of the data packet that it stores.
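  • For illustration, the following C sketch shows one way to express the interval division described above: a 2048-byte cache area, a 1600-byte first buffer area, and a 256-byte second buffer area located by a preset offset from the first byte of the first buffer area. The constant values follow the example of FIG. 2, while the names and helper function are hypothetical rather than taken from the patent.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical layout constants matching the example of FIG. 2. */
#define CACHE_AREA_SIZE   2048u   /* bytes 0..2047                        */
#define FIRST_BUF_OFFSET  0u      /* first buffer area: bytes 0..1599     */
#define FIRST_BUF_SIZE    1600u
#define SECOND_BUF_OFFSET 1663u   /* second buffer area: bytes 1663..1918 */
#define SECOND_BUF_SIZE   256u

/* One per-packet cache area inside the network card. */
struct nic_cache_area {
    uint8_t bytes[CACHE_AREA_SIZE];
};

/* The second buffer area is found from the first byte of the first buffer
 * area plus a preset relative offset, so a thread that only holds a pointer
 * into the first buffer area can still locate the shared information. */
static inline uint8_t *second_buffer(uint8_t *first_buffer_start)
{
    return first_buffer_start + (SECOND_BUF_OFFSET - FIRST_BUF_OFFSET);
}
```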
  • the network card 100 is mainly used to receive the first vxlan data packet and save it to the first buffer area; the first vxlan data packet includes the first encapsulation data and the original data, the first encapsulation data includes the vni and the outer layer IP, and the original data includes the inner layer IP. The first thread is mainly used to parse the first encapsulation data from the first vxlan data packet, obtain the outer layer IP and the vni, and judge according to the outer layer IP whether the corresponding tunnel type is a preset tunnel type; if the tunnel type is the preset tunnel type, the first thread saves the tunnel type and the vni to the second buffer area. The first thread then judges whether the tunnel type is the first preset tunnel type; if so, the first thread obtains the inner layer IP from the first vxlan data packet and judges according to the inner layer IP whether to access the business system; if the business system is accessed, the first thread saves the corresponding tunnel ID to the second buffer area according to the inner layer IP.
  • the system for parsing data packets may be included in a server, that is, the network card 100, the first thread 200, and the second thread 300 may all be included in a server.
  • the server may be an independent server, or a server network or server cluster composed of servers.
  • the server includes but is not limited to a computer, a network host, a single network server, multiple network server sets, or a server cluster composed of multiple servers.
  • Cloud Server is composed of a large number of computers or network servers based on cloud computing.
  • the server may include a physical port and a virtual port.
  • the physical port may be included in the network card 100, and the physical port is used to receive a data packet sent by a terminal or a service system, or to send a data packet to a service system or a terminal.
  • the network card 100, the first thread 200 and the second thread 300 may communicate through the virtual port.
  • the network card driver can send a "data packet pointer" to the first thread 200 or the second thread 300 and notify that thread to process the data packet; the first thread 200 or the second thread 300 can send a "processing data packet task completion instruction" to the physical port through the network card driver to indicate that the corresponding task has been completed, and the physical port may also be notified to send a data packet to the outside of the server.
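  • The exchange over the virtual ports can be pictured with the small C sketch below, which models the "data packet pointer" and "processing data packet task completion instruction" notifications as a message struct; the type names and the virtual_port_send stub are assumptions introduced for illustration only.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical message types exchanged between the network card driver and
 * the worker threads over the virtual ports described above. */
enum nic_msg_type {
    MSG_PACKET_POINTER,   /* driver -> thread: pointer to a packet to process */
    MSG_TASK_COMPLETE     /* thread -> driver: task completion instruction    */
};

struct nic_msg {
    enum nic_msg_type type;
    void             *pkt;        /* pointer into the first buffer area */
    uint16_t          thread_id;  /* 1 = first thread, 2 = second thread */
};

/* Stand-in for the real per-thread queue behind the virtual port; here it
 * only logs the notification. */
static int virtual_port_send(uint16_t thread_id, const struct nic_msg *msg)
{
    printf("to thread %u: msg type %d, pkt %p\n",
           thread_id, (int)msg->type, msg->pkt);
    return 0;
}

/* Driver side: hand the packet pointer to the first thread without copying
 * the packet itself out of the network card's buffer area. */
static int notify_first_thread(void *first_vxlan_pkt)
{
    struct nic_msg m = { MSG_PACKET_POINTER, first_vxlan_pkt, 1 };
    return virtual_port_send(1, &m);
}
```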
  • the terminal may be a general-purpose computer device or a special-purpose computer device.
  • the terminal can be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, a communication device, an embedded device, etc., and the present embodiment does not Define the type of the terminal.
  • PDA: personal digital assistant
  • a service process is created in the server; the service process includes the first thread 200 and the second thread 300, and the service process is an independent unit of resource allocation and scheduling by the system.
  • the first thread 200 and the second thread 300 are entities of the service process and are the basic units of independent operation and independent scheduling; the first thread 200 and the second thread 300 can share all the resources owned by the service process.
  • different service processes can communicate through pipes, sockets, signals, shared memory, message queues, and so on; the first thread 200 and the second thread 300, by contrast, share the memory of the same service process and use the same address space, and cooperate according to agreed rules. The first thread 200 and the second thread 300 can communicate through mechanisms such as wait/notify, volatile shared memory, the CountDownLatch concurrency utility, or the CyclicBarrier concurrency utility.
  • the application environment shown in Figure 1 is only an application scenario of the solution of this application, and does not constitute a limitation on the application scenario of the solution of this application.
  • other application environments are also possible: the service process may include more threads than those shown in FIG. 1. Only two threads are shown in FIG. 1, but it can be understood that the system for parsing data packets may also include one or more other threads that can access the network card 100, which is not limited here.
  • An embodiment of the present invention provides a data packet parsing method
  • the execution body of the data packet parsing method is the server; the server includes a network card, a first thread and a second thread, and the network card includes a cache area, the cache area including a first cache area and a second cache area.
  • the data packet parsing method includes: the network card receives the first vxlan data packet and saves it to the first buffer area, the first vxlan data packet including the first encapsulation data and the original data, the first encapsulation data including the vni and the outer layer IP, and the original data including the inner layer IP; the first thread parses the first encapsulation data from the first vxlan data packet, obtains the outer layer IP and the vni, and judges according to the outer layer IP whether the corresponding tunnel type is a preset tunnel type; if the tunnel type is the preset tunnel type, the first thread saves the tunnel type and the vni to the second buffer area.
  • the method for data packet analysis includes:
  • the network card receives a first vxlan data packet, and saves the first vxlan data packet in the first buffer area, the first vxlan data packet includes first encapsulated data and original data, and the first The encapsulated data includes vni and outer IP, and the original data includes inner IP.
  • the network card may be a network card 100 as shown in FIG. 1 , wherein the first vxlan data packet may be a data packet received by a physical port of the network card 100 from a terminal or a business system.
  • the cache area includes the first cache area and the second cache area, and the first cache area may be located before the second cache area, that is, the first cache area may be located in the The previous part in the cache.
  • the first vxlan data packet may include the first encapsulation data and the original data; the first encapsulation data is located before the original data, the vni and the outer layer IP are included in the first encapsulation data, and the inner layer IP is included in the original data. It should be noted that the division of the interval lengths of the vni, the outer layer IP, the inner layer IP, the first encapsulation data and the original data in FIG. 4 is only for the convenience of drawing and does not restrict the proportional relationship of their actual lengths.
  • the first vxlan packet may include the first encapsulation data and the original data, specifically described as follows:
  • the first encapsulation data may include, in order of increasing distance from the original data, a VXLAN header 901, an Outer UDP header 902, an Outer IP header 903 and an Outer Ethernet header 904; further, the VXLAN header 901, which is closest to the original data, includes VXLAN Flags 905 and a VNI 906.
  • the VNI 906 is the vni mentioned above.
  • the vni (VNI) is the VXLAN network identifier, which is used to identify the tenant to which the first vxlan data packet belongs.
  • a tenant can have one or more VNIs, and Layer 2 intercommunication cannot be carried out directly between tenants with different VNIs; the VXLAN Flags field is a flag field of 8 bits in the format "RRRRIRRR".
  • the Outer IP header 903 includes the outer layer IP mentioned above; the outer layer IP specifically includes IP SA 802 and IP DA 803, where IP SA is the source IP address, that is, the IP address of the source-end VTEP of the tunnel, and IP DA is the destination IP address, that is, the IP address of the destination-end VTEP of the tunnel.
  • the original data may include Inner Ethernet header 907, Inner IP header 908 and Payload 909 in sequence.
  • the Inner Ethernet header includes the MAC address of the sending end and the MAC address of the next-hop device
  • the Inner IP header includes the inner IP mentioned above
  • the inner IP specifically includes the IP address of the sending end and the IP address of the receiving end.
  • the sending end and the receiving end respectively correspond to the terminal and the service system mentioned above according to the actual sending and receiving of the first vxlan data packet; wherein, the Payload may include instruction information or data information.
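  • A rough C rendering of this packet layout is given below, following the header order of FIG. 5 and the usual Ethernet/IPv4/UDP/VXLAN field widths; the struct and field names are illustrative and not taken from the patent.

```c
#include <stdint.h>

/* On-the-wire layout of the first vxlan data packet, outermost header first:
 * Outer Ethernet + Outer IP + Outer UDP + VXLAN header form the first
 * encapsulation data; Inner Ethernet + Inner IP + payload form the original
 * data.  Widths follow the usual IPv4/UDP/VXLAN definitions. */
#pragma pack(push, 1)
struct eth_hdr   { uint8_t dst_mac[6]; uint8_t src_mac[6]; uint16_t ether_type; };
struct ipv4_hdr  { uint8_t ver_ihl; uint8_t tos; uint16_t total_len;
                   uint16_t id; uint16_t frag_off; uint8_t ttl; uint8_t proto;
                   uint16_t checksum;
                   uint32_t src_ip;   /* IP SA */
                   uint32_t dst_ip;   /* IP DA */ };
struct udp_hdr   { uint16_t src_port; uint16_t dst_port; uint16_t len; uint16_t checksum; };
struct vxlan_hdr { uint8_t flags;          /* "RRRRIRRR": the I bit must be 1 */
                   uint8_t reserved1[3];
                   uint8_t vni[3];         /* 24-bit VXLAN network identifier */
                   uint8_t reserved2; };
#pragma pack(pop)

/* Assemble the 24-bit vni from the three VNI bytes of the VXLAN header. */
static inline uint32_t vxlan_vni(const struct vxlan_hdr *h)
{
    return ((uint32_t)h->vni[0] << 16) | ((uint32_t)h->vni[1] << 8) | h->vni[2];
}
```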
  • the first thread parses the first encapsulated data from the first vxlan data packet, obtains the outer layer IP and the vni, and judges according to the outer layer IP whether the corresponding tunnel type is a preset tunnel type.
  • the first thread can access the first vxlan data packet, parse the first encapsulation data in the first vxlan data packet, and obtain the outer layer IP and the vni in the first encapsulation data according to the parsing result. Specifically, the first thread can obtain the VXLAN Flags information in the first encapsulated data; for VXLAN Flags in the format "RRRRIRRR", if the "I" bit is 1, step S102 is performed, and if the "I" bit is 0, step S102 is not performed. Further, the first thread may determine the corresponding tunnel type according to the outer layer IP, and then compare this tunnel type with the preset tunnel type to judge whether the corresponding tunnel type is a preset tunnel type.
  • in addition, a preset space can be reserved between the outer layer IP, the vni and the original data, in a manner similar to the preset relative position between the first byte of the first buffer area and the first byte of the second buffer area.
  • on this premise, this embodiment can further verify the outer layer IP and the vni: for example, by judging according to the above steps whether the preset interval exists between the outer layer IP, the vni and the first cache area respectively, it can be determined whether the "determined outer layer IP" and the "determined vni" are the real outer layer IP and the real vni.
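  • As a minimal sketch of this parsing step, the following C function checks the "I" bit of the VXLAN flags and extracts the outer layer IP addresses and the vni, assuming an untagged IPv4 VXLAN packet at fixed header offsets; the macro and function names are hypothetical.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

/* Byte offsets of the interesting fields for an untagged IPv4 VXLAN packet:
 * 14-byte outer Ethernet header, 20-byte outer IP header, 8-byte outer UDP
 * header, 8-byte VXLAN header.  Real code would walk the headers instead of
 * relying on fixed offsets. */
#define OUTER_IP_SRC_OFF  (14 + 12)           /* IP SA */
#define OUTER_IP_DST_OFF  (14 + 16)           /* IP DA */
#define VXLAN_FLAGS_OFF   (14 + 20 + 8)
#define VXLAN_VNI_OFF     (VXLAN_FLAGS_OFF + 4)
#define VXLAN_FLAG_I      0x08                /* the "I" bit of "RRRRIRRR" */

struct parsed_encap {
    uint32_t outer_src_ip;   /* network byte order */
    uint32_t outer_dst_ip;   /* network byte order */
    uint32_t vni;
};

/* First-thread step S102: parse the first encapsulation data and pick out the
 * outer layer IP and the vni; return false when the I flag is 0 and the rest
 * of the step should not be executed. */
static bool parse_first_encap(const uint8_t *pkt, struct parsed_encap *out)
{
    if ((pkt[VXLAN_FLAGS_OFF] & VXLAN_FLAG_I) == 0)
        return false;

    memcpy(&out->outer_src_ip, pkt + OUTER_IP_SRC_OFF, sizeof(uint32_t));
    memcpy(&out->outer_dst_ip, pkt + OUTER_IP_DST_OFF, sizeof(uint32_t));
    out->vni = ((uint32_t)pkt[VXLAN_VNI_OFF] << 16)
             | ((uint32_t)pkt[VXLAN_VNI_OFF + 1] << 8)
             |  (uint32_t)pkt[VXLAN_VNI_OFF + 2];
    return true;
}
```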
  • the first thread saves the tunnel type and the vni in the second cache area.
  • if the tunnel type is the preset tunnel type, the first thread saves the tunnel type and the vni to the second cache area; otherwise, the first thread discards the first vxlan data packet, that is, from this step onward the first thread does not perform any further processing related to the first vxlan data packet.
  • the first thread determines whether the tunnel type is a first preset tunnel type.
  • the preset tunnel type may include multiple tunnel types, and the first preset tunnel type is one of the tunnel types included in the preset tunnel types.
  • the first thread obtains the inner layer IP from the first vxlan data packet, and judges according to the inner layer IP whether to access the service system.
  • the first preset tunnel type may be a lan-type tunnel; when the tunnel type corresponding to the first vxlan data packet is a lan-type tunnel, the first thread saves the tunnel type and the vni to the second cache area. Furthermore, the first thread can access the first vxlan data packet, parse the original data in the first vxlan data packet, and obtain the inner layer IP according to the parsing result, that is, obtain the Inner IP header, and specifically the IP address of the receiving end in the inner layer IP.
  • the service process may include a first tunnel forwarding rule, and the first thread may query the first tunnel forwarding rule according to the inner layer IP; that is, the first tunnel forwarding rule is the rule applicable to the first preset tunnel type.
  • the first thread saves the corresponding tunnel ID in the second cache area.
  • the first tunnel forwarding rule includes a plurality of IP segments and a plurality of tunnel IDs, and the plurality of IP segments correspond one-to-one to the plurality of tunnel IDs; that is, each tunnel ID corresponds to the IP segment in which the IP address of the receiving end of the first vxlan data packet is located.
  • since the premise of this step is that the tunnel type of the first vxlan data packet is the first preset tunnel type, in combination with step S104, when the IP address of the receiving end in the inner layer IP meets the requirements for accessing the service system, the first thread obtains the corresponding tunnel ID according to the IP segment in which the IP address of the receiving end is located and saves the tunnel ID to the second buffer area; otherwise, the first thread discards the first vxlan data packet, that is, from this step onward the first thread does not perform any further processing related to the first vxlan data packet.
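  • A possible shape of this lookup is sketched below in C: the first tunnel forwarding rule is modelled as a list of service IP segments (simplified here to network/mask pairs), each mapped to a tunnel ID that is written into the shared information kept in the second buffer area. The struct layouts and names are assumptions for illustration.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical entry of the first tunnel forwarding rule: one service IP
 * segment (network/mask) mapped to one tunnel ID. */
struct fwd_rule_entry {
    uint32_t net;        /* network address, host byte order */
    uint32_t mask;
    uint32_t tunnel_id;
};

/* Assumed layout of the shared information kept in the second buffer area. */
struct shared_info {
    uint32_t vni;
    uint32_t tunnel_type;
    uint32_t tunnel_id;
};

/* First-thread steps S104/S105: if the receiving-end IP of the inner layer IP
 * falls into one of the service IP segments, save the corresponding tunnel ID
 * into the second buffer area; otherwise report "no access" so the caller can
 * discard the first vxlan data packet. */
static bool save_tunnel_id(const struct fwd_rule_entry *rules, size_t n,
                           uint32_t inner_dst_ip, struct shared_info *second_buf)
{
    for (size_t i = 0; i < n; i++) {
        if ((inner_dst_ip & rules[i].mask) == rules[i].net) {
            second_buf->tunnel_id = rules[i].tunnel_id;
            return true;   /* the original data may access the service system */
        }
    }
    return false;          /* not a service IP: discard the packet */
}
```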
  • the first thread may send a related instruction, such as a "processing data packet task completion instruction", to the network card to inform the network card that the first thread has completed the related operation of processing the data packet, so that the network card can perform the next operation.
  • the first thread can obtain the pointer of the original data at this time, and the virtual port of the first thread can also send the pointer of the original data to the virtual port of the network card.
  • the following steps may be included: the first thread sends the pointer of the original data to the virtual port in the server through the network card driver, and the network card driver notifies the second thread to process the corresponding part of the data packet according to the pointer of the original data.
  • the second thread extracts the vni and the tunnel ID from the second cache area, and determines second encapsulation data according to the vni and the tunnel ID, and saves the second encapsulation data in The second buffer area is such that the original data located in the first buffer area and the second encapsulated data located in the second buffer area together form a second vxlan data packet.
  • the second thread can determine the new corresponding Outer UDP header, Outer IP header and Outer Ethernet header; specifically, the second thread can configure a VXLAN header for the original data according to the vni and the tunnel ID, and the new Outer UDP header, Outer IP header, Outer Ethernet header and the VXLAN header together constitute the second encapsulation data.
  • the tunnel type indicates where the first vxlan data packet comes from, such as from a terminal or a service system
  • the tunnel ID indicates where the second vxlan data packet is sent to, such as a terminal or a service system.
  • for example, when the tunnel type is a lan-type tunnel, the MAC address of the source end of the new tunnel, the MAC address of the destination end, the IP address of the source end, the IP address of the destination end, the UDP port number of the source end and the UDP port number of the destination end can be determined according to the tunnel ID and the vni.
  • the MAC address of the source end, the MAC address of the destination end, the IP address of the source end, the IP address of the destination end, the UDP port number of the source end and the UDP port number of the destination end form a new corresponding Outer Ethernet header, Outer IP header, and Outer UDP header.
  • at this time, the second cache area contains the second encapsulation data, the vni, the tunnel type and the tunnel ID; the second encapsulation data located in the second cache area and the original data located in the first cache area together constitute the second vxlan data packet.
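  • The second thread's step can be sketched as follows, assuming a per-tunnel-ID table that holds the MAC addresses, VTEP IP addresses and UDP ports of the new tunnel; the structures and the direct indexing by tunnel ID are illustrative assumptions.

```c
#include <stdint.h>

/* Hypothetical per-tunnel-ID table entry used to build the new outer headers. */
struct tunnel_endpoint {
    uint8_t  src_mac[6], dst_mac[6];     /* source / destination MAC     */
    uint32_t src_ip, dst_ip;             /* source / destination VTEP IP */
    uint16_t src_udp_port, dst_udp_port; /* outer UDP ports              */
};

/* Second encapsulation data as kept in the second buffer area; a real
 * implementation would serialise these fields into wire format. */
struct second_encap {
    struct tunnel_endpoint outer;
    uint32_t vni;                        /* reused from the first vxlan packet */
};

/* Second-thread step S106: look up the tunnel endpoint by tunnel ID (assumed
 * here to index the table directly), keep the vni, and place the result in
 * the second buffer area so that it and the original data in the first buffer
 * area together form the second vxlan data packet. */
static void build_second_encap(const struct tunnel_endpoint *by_tunnel_id,
                               uint32_t tunnel_id, uint32_t vni,
                               struct second_encap *second_buf)
{
    second_buf->outer = by_tunnel_id[tunnel_id];
    second_buf->vni   = vni;
}
```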
  • before step S106, the following steps may be included:
  • the second thread determines whether the tunnel type is a second preset tunnel type.
  • the second preset tunnel type is also one of the tunnel types included in the preset tunnel types.
  • the second thread may obtain the tunnel type from the second cache area, and determine whether the tunnel type is a second preset tunnel type.
  • the second thread extracts the vni from the second cache area, and determines a corresponding tunnel ID according to the vni.
  • the second preset tunnel type may be a mec-type tunnel; when the tunnel type corresponding to the first vxlan data packet is a mec-type tunnel, the second thread extracts the vni from the second cache area and determines the corresponding tunnel ID according to the vni.
  • the service process may also include a second tunnel forwarding rule, and the second thread may query the second tunnel forwarding rule according to the vni; that is, the second tunnel forwarding rule is the rule applicable to the second preset tunnel type.
  • the second tunnel forwarding rule contains multiple tunnel IDs, and it can be seen from the above analysis that each tunnel ID corresponds to a vni; therefore, the second thread can find the corresponding tunnel ID in the second tunnel forwarding rule according to the vni.
  • step S106 can be executed.
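  • A minimal sketch of this branch is shown below: the second tunnel forwarding rule is modelled as a list of vni-to-tunnel-ID entries that the second thread searches with the vni taken from the second buffer area. Names and structure are assumptions.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical entry of the second tunnel forwarding rule: one vni mapped to
 * one tunnel ID (each tunnel ID corresponds to a vni, as noted above). */
struct vni_rule_entry { uint32_t vni; uint32_t tunnel_id; };

/* Second-thread branch for the second preset tunnel type (mec-type tunnel):
 * extract the vni from the second buffer area and resolve the tunnel ID from
 * the second tunnel forwarding rule. */
static bool tunnel_id_from_vni(const struct vni_rule_entry *rules, size_t n,
                               uint32_t vni, uint32_t *tunnel_id_out)
{
    for (size_t i = 0; i < n; i++) {
        if (rules[i].vni == vni) {
            *tunnel_id_out = rules[i].tunnel_id;
            return true;
        }
    }
    return false;   /* no rule configured for this vni */
}
```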
  • before step S102, the following steps may be included:
  • the network card sends a pointer to the first vxlan data packet to the first thread.
  • the first vxlan data packet is located in the first buffer area in the network card; therefore, after the physical port of the network card receives the first vxlan data packet and the packet is stored in the first cache area, the network card driver can send the pointer of the first vxlan data packet to the first thread, to inform the first thread of the first address of the first vxlan data packet and to notify the first thread to process the first vxlan data packet.
  • the first thread accesses the first vxlan data packet according to the pointer of the first vxlan data packet.
  • after the first thread obtains the pointer of the first vxlan data packet, it can quickly locate the first vxlan data packet, so as to perform the related operations of step S102 on the first vxlan data packet.
  • the step S102 may include the following steps:
  • the first thread obtains the outer IP, and determines the corresponding tunnel type according to the configuration table.
  • the configuration table is stored in the service process, and the configuration table includes multiple Outer UDP headers, multiple Outer IP headers, multiple Outer Ethernet headers and multiple tunnel types, where the multiple Outer UDP headers, the multiple Outer IP headers, the multiple Outer Ethernet headers and the multiple tunnel types correspond one-to-one.
  • the outer layer IP is the Outer IP header in the first vxlan data packet; at this time, the corresponding tunnel type can be found in the configuration table according to the outer layer IP. The tunnel types may include lan-type tunnels, wan-type tunnels and mec-type tunnels.
  • the lan-type tunnel may indicate that the tunnel is connected to the terminal, that is, the first vxlan data packet is from the terminal;
  • the wan-type tunnel may indicate that the tunnel is connected to the BRAS (broadband remote access server), that is, the first vxlan data packet is from the BRAS;
  • the mec-type tunnel may indicate that the tunnel is connected to the service system, that is, the first vxlan data packet is from the service system.
  • the first thread determines whether the corresponding tunnel type is a preset tunnel type according to the tunnel type.
  • the preset tunnel type is stored in the service process, and the preset tunnel type includes at least one of the lan-type tunnel, the wan-type tunnel and the mec-type tunnel;
  • in this embodiment, the preset tunnel type is taken to include the lan-type tunnel and the mec-type tunnel as an example for the subsequent description; that is, the first vxlan data packet belongs to the preset tunnel type when its tunnel type is the lan-type tunnel or the mec-type tunnel.
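  • The configuration-table lookup and the preset check can be sketched in C as follows, using the lan/wan/mec tunnel types named above and the example preset (lan-type or mec-type); the entry layout and the linear search are illustrative assumptions.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Tunnel types used by the configuration table; enum values are illustrative. */
enum tunnel_type { TUNNEL_LAN, TUNNEL_WAN, TUNNEL_MEC, TUNNEL_UNKNOWN };

/* Hypothetical configuration-table entry: outer layer IP -> tunnel type. */
struct config_entry { uint32_t outer_ip; enum tunnel_type type; };

/* Find the tunnel type for the outer layer IP of the first vxlan packet. */
static enum tunnel_type lookup_tunnel_type(const struct config_entry *tbl,
                                           size_t n, uint32_t outer_ip)
{
    for (size_t i = 0; i < n; i++)
        if (tbl[i].outer_ip == outer_ip)
            return tbl[i].type;
    return TUNNEL_UNKNOWN;
}

/* Step S102 decision with the example preset used in the text: a tunnel type
 * is "preset" when it is a lan-type or mec-type tunnel. */
static bool is_preset_tunnel_type(enum tunnel_type t)
{
    return t == TUNNEL_LAN || t == TUNNEL_MEC;
}
```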
  • the step S104 may include the following steps:
  • the first thread obtains the inner layer IP from the first vxlan data packet, and searches for multiple service IP segments in tunnel rules.
  • the first tunnel forwarding rule in the service process specifies that when the IP address of the receiving end in the inner layer IP meets the requirements, the service system can be accessed.
  • the first tunnel forwarding rule includes the multiple service IP segments, each of the multiple service IP segments includes multiple IP addresses, and each of the service IP segments includes the Multiple IP addresses can be continuous or discontinuous multiple IP addresses, that is, each service IP segment can be understood as a set of corresponding multiple IP addresses.
  • the first thread determines whether to access the service system according to whether the inner layer IP is included in one of the multiple service IP segments.
  • accessing the service system can be understood as transmitting the original data to the service system.
  • when the IP address of the receiving end in the inner layer IP is included in one of the plurality of service IP segments, it means that the first vxlan data packet can access the service system corresponding to that IP segment. Combined with step S105, it can be seen that the service IP segment corresponds to a tunnel ID, which means that the first vxlan data packet can access the corresponding service system through the tunnel ID corresponding to that service IP segment.
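  • The membership test can be pictured with the sketch below, where each service IP segment is modelled as an explicit set of addresses (possibly discontinuous, as noted above) tied to one tunnel ID; a real implementation would likely use address ranges or a prefix trie instead.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* A service IP segment modelled as a set of (possibly discontinuous) IP
 * addresses, each segment corresponding to one tunnel ID. */
struct service_ip_segment {
    const uint32_t *addrs;
    size_t          count;
    uint32_t        tunnel_id;
};

/* Step S104 detail: access to the service system is allowed only when the
 * receiving-end IP of the inner layer IP belongs to one of the segments. */
static bool may_access_service(const struct service_ip_segment *segs, size_t n,
                               uint32_t inner_dst_ip, uint32_t *tunnel_id_out)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < segs[i].count; j++)
            if (segs[i].addrs[j] == inner_dst_ip) {
                *tunnel_id_out = segs[i].tunnel_id;
                return true;
            }
    return false;
}
```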
  • before step S106, the following steps may also be included:
  • the network card sends the pointer of the original data to the second thread.
  • the original data is located after the first encapsulated data in the first cache area; therefore, after the first thread finishes processing the first vxlan data packet, the network card driver can send the pointer of the original data to the second thread, to inform the second thread of the first address of the original data and to notify the second thread to process the content starting from that first address.
  • the second thread accesses the original data according to the pointer of the original data.
  • after the second thread obtains the pointer of the original data, it can quickly locate the original data, so as to perform the related operations of step S106 on the original data.
  • as described above, if the first buffer area is known, the second buffer area can be determined according to the preset relative position of the first byte of the first buffer area and the first byte of the second buffer area; therefore, after obtaining the pointer of the original data, the second thread can determine the address of the second cache area and extract the vni, the tunnel type and the tunnel ID from it.
  • the original data located in the first buffer area and the second encapsulated data located in the second buffer area together constitute the second vxlan data packet; that is, the second vxlan data packet does not contain the first encapsulation data. Because the second thread accesses the original data according to the pointer of the original data, the second thread is automatically shielded from the information located before the address of the original data, such as the first encapsulation data, which improves the work efficiency of the second thread.
  • after step S106, the following steps may be included:
  • the network card determines a corresponding sending tunnel according to the second vxlan data packet.
  • the second encapsulation data in the second vxlan data packet can determine the IP addresses at both ends of a pair of tunnels and the source MAC address of the tunnel, however, one end of different tunnels may correspond to the same source MAC address and IP address; further, the vni in the second vxlan data packet can determine the sending tunnel through the VXLAN network identifier.
  • the network card sends the second vxlan data packet according to the sending tunnel.
  • the sending tunnel is the transmission path of the second vxlan data packet.
  • for example, when the terminal sends the first vxlan data packet to the network card, the second vxlan data packet obtained after the first vxlan data packet undergoes the above-mentioned transformations determines a sending tunnel whose one end is the physical port of the network card and whose other end is the physical port of the service system; that is, the second vxlan data packet is transmitted from the network card to the service system. Conversely, when the business system sends the first vxlan data packet to the network card, the second vxlan data packet obtained after the above-mentioned transformations determines a sending tunnel whose one end is the physical port of the network card and whose other end is the physical port of the terminal; that is, the second vxlan data packet is transmitted from the network card to the terminal.
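  • One way to express this disambiguation is sketched below: the sending tunnel is looked up by the endpoint addresses taken from the second encapsulation data together with the vni, since the endpoints alone may be shared by several tunnels. The table layout is an assumption.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sending-tunnel table.  Because several tunnels may share the
 * same endpoint addresses, the vni is part of the key as described above. */
struct send_tunnel {
    uint32_t src_ip, dst_ip;   /* taken from the second encapsulation data */
    uint32_t vni;              /* VXLAN network identifier                  */
    int      port_id;          /* physical port used to transmit            */
};

/* Select the sending tunnel for the second vxlan data packet. */
static const struct send_tunnel *
find_sending_tunnel(const struct send_tunnel *tunnels, size_t n,
                    uint32_t src_ip, uint32_t dst_ip, uint32_t vni)
{
    for (size_t i = 0; i < n; i++)
        if (tunnels[i].src_ip == src_ip && tunnels[i].dst_ip == dst_ip &&
            tunnels[i].vni == vni)
            return &tunnels[i];
    return NULL;   /* no matching tunnel configured */
}
```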
  • FIG. 12 it is a schematic diagram of the signaling interaction of the data packet parsing method in the embodiment of the present invention.
  • the schematic diagram of the signaling interaction of the data packet parsing method includes the following steps:
  • the network card receives the first vxlan data packet, and stores the first vxlan data packet in the first buffer area;
  • the network card sends the pointer of the first vxlan data packet to the first thread
  • the first thread parses the first encapsulated data from the first vxlan data packet, and obtains the outer IP and vni therein, and judges whether the corresponding tunnel type is a preset tunnel type according to the outer IP;
  • the first thread saves the tunnel type and the vni to the second cache area;
  • the first thread obtains the inner layer IP from the first vxlan data packet, and judges whether to access the service system according to the inner layer IP;
  • the first thread determines the corresponding tunnel ID according to the inner layer IP, and saves the tunnel ID to the second cache area;
  • the first thread sends a "processing data packet task completion instruction" to the network card
  • the network card sends a pointer to the original data in the first vxlan data packet to the second thread;
  • the second thread extracts the vni and the tunnel ID from the second cache area
  • the second thread determines second encapsulation data according to the vni and the tunnel ID, and saves the second encapsulation data in the second cache area;
  • the second thread sends a "processing data packet task completion instruction" to the network card.
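  • To make the division of work explicit, the signalling sequence above is condensed into a small C table below; the step wording is descriptive only, and the program simply prints the sequence.

```c
#include <stdio.h>

/* The signalling sequence of FIG. 12, as listed above, expressed as data. */
struct flow_step { const char *actor; const char *action; };

static const struct flow_step kFlow[] = {
    { "network card",  "receive first vxlan packet, store it in the first buffer area" },
    { "network card",  "send the packet pointer to the first thread" },
    { "first thread",  "parse first encapsulation data, get outer IP and vni, check preset tunnel type" },
    { "first thread",  "save tunnel type and vni to the second buffer area" },
    { "first thread",  "get inner IP, judge access to the service system" },
    { "first thread",  "determine tunnel ID from inner IP, save it to the second buffer area" },
    { "first thread",  "send 'processing data packet task completion instruction'" },
    { "network card",  "send the pointer of the original data to the second thread" },
    { "second thread", "extract vni and tunnel ID from the second buffer area" },
    { "second thread", "build second encapsulation data, save it to the second buffer area" },
    { "second thread", "send 'processing data packet task completion instruction'" },
};

int main(void)
{
    for (size_t i = 0; i < sizeof kFlow / sizeof kFlow[0]; i++)
        printf("%2zu. %-13s %s\n", i + 1, kFlow[i].actor, kFlow[i].action);
    return 0;
}
```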
  • the server 400 includes a network card 401 and a service process 402; the network card 401 includes a cache area, the cache area includes a first cache area and a second cache area, and the service process 402 includes a first thread 4021 and a second thread 4022;
  • the network card 401 is configured to receive a first vxlan data packet and save the first vxlan data packet to the first buffer area, the first vxlan data packet includes first encapsulation data and original data, the first encapsulation data includes the vni and the outer layer IP, and the original data includes the inner layer IP;
  • the first thread 4021 is used to parse the first encapsulation data from the first vxlan data packet and obtain the outer layer IP and the vni, and to judge according to the outer layer IP whether the corresponding tunnel type is a preset tunnel type;
  • the first thread 4021 is further configured to save the tunnel type and the vni to the second cache area;
  • the first thread 4021 is also used to determine whether the tunnel type is the first preset tunnel type
  • the first thread 4021 is also used to obtain the inner layer IP from the first vxlan data packet, and to judge according to the inner layer IP whether to access the business system;
  • the first thread 4021 is also used to save the corresponding tunnel ID to the second cache area;
  • the second thread 4022 is used to extract the vni and the tunnel ID from the second cache area, determine the second encapsulation data according to the vni and the tunnel ID, and save the second encapsulation data to the second buffer area, so that the original data located in the first buffer area and the second encapsulation data located in the second buffer area together constitute a second vxlan data packet.
  • the second thread 4022 is also used to determine whether the tunnel type is the second preset tunnel type; and if the tunnel type is the second preset tunnel type, the second thread 4022 is further configured to extract the vni from the second cache area and determine the corresponding tunnel ID according to the vni.
  • the first thread 4021 is also used to obtain the outer layer IP and determine the corresponding tunnel type according to the configuration table; and the first thread 4021 is also used to judge, according to the tunnel type, whether the corresponding tunnel type is a preset tunnel type.
  • the first thread 4021 is also used to obtain the inner layer IP from the first vxlan data packet and search the multiple service IP segments in the tunnel rules; and the first thread 4021 is also used to judge whether to access the service system according to whether the inner layer IP is included in one of the multiple service IP segments.
  • the network card 401 is further configured to send the pointer of the first vxlan data packet to the first thread 4021; the first thread 4021 is further configured to access the first vxlan data packet according to the pointer of the first vxlan data packet.
  • the network card 401 sends the pointer of the original data to the second thread 4022; the second thread 4022 accesses the original data according to the pointer of the original data.
  • the network card 401 determines a corresponding sending tunnel according to the second vxlan data packet; the network card 401 sends the second vxlan data packet according to the sending tunnel.
  • the present invention provides a data packet parsing method and a server. The server includes a network card and a service process; the network card includes a buffer area, the buffer area includes a first buffer area and a second buffer area, and the service process includes a first thread and a second thread.
  • the first thread obtains the outer layer IP and the vni from the first encapsulation data; if the corresponding tunnel type is a preset tunnel type, it saves the tunnel type and the vni to the second buffer area; further, if the corresponding tunnel type is the first preset tunnel type, it obtains the inner layer IP.
  • the first thread then saves the corresponding tunnel ID to the second buffer area according to the inner layer IP; the second thread extracts the vni and the tunnel ID from the second buffer area.
  • the second thread uses the vni and the tunnel ID to determine the second encapsulation data and saves the second encapsulation data to the second buffer area, so that the original data and the second encapsulation data together constitute the second vxlan data packet.
  • the corresponding tunnel ID obtained by the first thread through data packet parsing, and the vni obtained by processing the first vxlan data packet, are stored in the second buffer area for the second thread to obtain and use.
  • this avoids the second thread repeating the data packet parsing step, or other steps, already performed by the first thread; moreover, while the first thread and the second thread obtain the corresponding information from the first vxlan data packet and perform their respective operations, the integrity of the first vxlan data packet is preserved, so that other threads can still obtain the information of the first vxlan data packet normally, and the server does not have to re-acquire the first vxlan data packet from the outside for other threads to use.
  • this solution improves the overall work efficiency of the service process.
  • the embodiment of the present invention also provides a server, as shown in FIG. 14 , which shows a schematic structural diagram of the server involved in the embodiment of the present invention, specifically:
  • the server may include a processor 801 of one or more processing cores, a memory 802 of one or more computer-readable storage media, a power supply 803, an input unit 804 and other components.
  • the structure shown in FIG. 14 does not constitute a limitation on the server; the server may include more or fewer components than shown in the figure, or combine some components, or use a different arrangement of components, wherein:
  • the processor 801 is the control center of the server, and uses various interfaces and lines to connect various parts of the entire server, by running or executing software programs and/or modules stored in the memory 802, and calling data stored in the memory 802, Execute various functions of the server and process data to monitor the server as a whole.
  • the processor 801 can include one or more processing cores; the processor 801 can be a central processing unit (Central Processing Unit, CPU), and can also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • CPU: Central Processing Unit
  • DSP: Digital Signal Processor
  • ASIC: Application Specific Integrated Circuit
  • FPGA: Field-Programmable Gate Array
  • the general-purpose processor can be a microprocessor, or the processor can be any conventional processor, etc. Preferably, the processor 801 can integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may also not be integrated into the processor 801.
  • the memory 802 can be used to store software programs and modules, and the processor 801 executes various functional applications and data processing by running the software programs and modules stored in the memory 802 .
  • the memory 802 can mainly include a program storage area and a data storage area, wherein the program storage area can store an operating system, at least one application program required by a function (such as a sound playback function, an image playback function, etc.); The data created by the use of the server, etc.
  • the memory 802 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • correspondingly, the memory 802 may further include a memory controller to provide the processor 801 with access to the memory 802.
  • the server also includes a power supply 803 for supplying power to various components.
  • preferably, the power supply 803 may be logically connected to the processor 801 through a power management system, so that functions such as charging, discharging and power consumption management are implemented through the power management system.
  • the power supply 803 may also include one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator and any other such components.
  • the server may further include an input unit 804, which may be used to receive input digit or character information and to generate keyboard, mouse, joystick, optical or trackball signal input related to user settings and function control.
  • although not shown, the server may also include a display unit and the like, which will not be repeated here.
  • specifically, in this embodiment, the processor 801 in the server loads the executable file corresponding to the process of one or more application programs into the memory 802 according to the following instructions, and runs the application programs stored in the memory 802 to implement various functions; the processor 801 may issue instructions to the network card in the server and to the first thread and the second thread belonging to the same service process, so that the network card, the first thread and the second thread execute the following steps in sequence:
  • the network card receives a first vxlan data packet and saves the first vxlan data packet to the first buffer area, where the first vxlan data packet includes first encapsulation data and original data, the first encapsulation data includes a vni and an outer-layer IP, and the original data includes an inner-layer IP;
  • the first thread parses the first encapsulation data from the first vxlan data packet to obtain the outer-layer IP and the vni, and judges, according to the outer-layer IP, whether the corresponding tunnel type is a preset tunnel type;
  • if the tunnel type is the preset tunnel type, the first thread saves the tunnel type and the vni to the second buffer area;
  • the first thread judges whether the tunnel type is a first preset tunnel type;
  • if the tunnel type is the first preset tunnel type, the first thread obtains the inner-layer IP from the first vxlan data packet and judges, according to the inner-layer IP, whether the service system is to be accessed;
  • if the service system is to be accessed, the first thread saves the corresponding tunnel ID, determined according to the inner-layer IP, to the second buffer area;
  • the second thread extracts the vni and the tunnel ID from the second buffer area, determines second encapsulation data according to the vni and the tunnel ID, and saves the second encapsulation data to the second buffer area, such that the original data located in the first buffer area and the second encapsulation data located in the second buffer area together form a second vxlan data packet.
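  • the following is a minimal, single-threaded sketch in C of the ordering of these steps, in which ordinary function calls stand in for the network card, the first thread and the second thread; all identifiers (pkt_buf, ext_info, nic_receive and so on) and the field sizes are assumptions made for illustration only and are not taken from the patent.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct ext_info {             /* contents of the second buffer area             */
        uint8_t  tunnel_type;     /* e.g. lan / wan / mec                           */
        uint32_t vni;             /* VXLAN network identifier from the first packet */
        uint32_t tunnel_id;       /* chosen from the forwarding rules               */
    };

    struct pkt_buf {              /* one network-card buffer                        */
        uint8_t         first_area[1600];  /* first buffer area: the vxlan packet   */
        struct ext_info second_area;       /* second buffer area: shared fields     */
    };

    /* step 1: the network card stores the received first vxlan data packet         */
    static void nic_receive(struct pkt_buf *b, const uint8_t *frame, size_t len) {
        if (len > sizeof b->first_area)
            len = sizeof b->first_area;
        memcpy(b->first_area, frame, len);
    }

    /* steps 2-6: the first thread parses the outer headers once and publishes the
       results in the second buffer area so the second thread need not parse again  */
    static void first_thread(struct pkt_buf *b, uint32_t vni, uint32_t tunnel_id) {
        b->second_area.tunnel_type = 1;    /* assume a preset (lan) tunnel type     */
        b->second_area.vni         = vni;
        b->second_area.tunnel_id   = tunnel_id;
    }

    /* step 7: the second thread reads the shared fields and derives the second
       encapsulation data for the second vxlan data packet                          */
    static void second_thread(const struct pkt_buf *b) {
        printf("build second encapsulation for vni=%u via tunnel %u\n",
               (unsigned)b->second_area.vni, (unsigned)b->second_area.tunnel_id);
    }

    int main(void) {
        static struct pkt_buf buf;
        uint8_t frame[64] = {0};
        nic_receive(&buf, frame, sizeof frame);
        first_thread(&buf, 100, 7);
        second_thread(&buf);
        return 0;
    }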
  • to this end, an embodiment of the present invention provides a computer-readable storage medium, and the storage medium may include: a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk, etc.
  • a computer program is stored thereon, and the computer program is loaded by the processor to issue instructions to the network card in the server and to the first thread and the second thread belonging to the same service process, so that the network card, the first thread and the second thread execute the following steps in sequence:
  • the network card receives a first vxlan data packet and saves the first vxlan data packet to the first buffer area, where the first vxlan data packet includes first encapsulation data and original data, the first encapsulation data includes a vni and an outer-layer IP, and the original data includes an inner-layer IP;
  • the first thread parses the first encapsulation data from the first vxlan data packet to obtain the outer-layer IP and the vni, and judges, according to the outer-layer IP, whether the corresponding tunnel type is a preset tunnel type;
  • if the tunnel type is the preset tunnel type, the first thread saves the tunnel type and the vni to the second buffer area;
  • the first thread judges whether the tunnel type is a first preset tunnel type;
  • if the tunnel type is the first preset tunnel type, the first thread obtains the inner-layer IP from the first vxlan data packet and judges, according to the inner-layer IP, whether the service system is to be accessed;
  • if the service system is to be accessed, the first thread saves the corresponding tunnel ID to the second buffer area according to the inner-layer IP;
  • the second thread extracts the vni and the tunnel ID from the second buffer area, determines second encapsulation data according to the vni and the tunnel ID, and saves the second encapsulation data to the second buffer area, such that the original data located in the first buffer area and the second encapsulation data located in the second buffer area together form a second vxlan data packet.
  • in specific implementations, each of the above units or structures may be implemented as an independent entity, or may be combined arbitrarily and implemented as the same entity or as several entities.
  • for the specific implementation of each of the above units or structures, please refer to the foregoing method embodiments, which will not be repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention provides a data packet parsing method and a server. The method includes: a network card receives a first vxlan data packet that includes first encapsulation data and original data, the first encapsulation data including a vni and an outer-layer IP, and the original data including an inner-layer IP; a first thread obtains the outer-layer IP and the vni from the first encapsulation data and, if the corresponding tunnel type is a preset tunnel type, saves the tunnel type and the vni to a second buffer area; if the tunnel type is a first preset tunnel type, the first thread obtains the inner-layer IP and, if the service system is to be accessed, saves the corresponding tunnel ID to the second buffer area according to the inner-layer IP; a second thread extracts the vni and the tunnel ID from the second buffer area and saves the second encapsulation data determined from them to the second buffer area, so that the original data and the second encapsulation data together form a second vxlan data packet. By opening up a second buffer area to store part of the information in the data packet and thereby reducing the number of information lookups, the solution improves the overall working efficiency of the service process.

Description

数据包解析的方法和服务器 技术领域
本发明涉及通信技术领域,具体涉及数据包解析的方法和服务器。
背景技术
VXLAN(Virtual eXtensible LAN,可扩展虚拟局域网络)可以将二层数据包封装到三层网络,很好地解决了现有VLAN(Virtual Local AreaNetwork,虚拟局域网)技术无法满足大二层网络需求的问题。
同一个服务进程包括多个线程,不同的线程可以共享服务进程中的数据和规则,其中,多个线程需要分别对网卡的数据包执行不同的操作以得到对应的多个目标数据,并且每一线程得到的目标无法和其它线程共享,并且,至少两个线程需要执行同一的操作以分别获取对应的目标数据才能进行后续操作,导致服务进程整体的工作效率较低。
因此,有必要提供数据包解析的方法和服务器,以提高服务进程整体的工作效率的。
发明内容
本发明实施例提供数据包解析的方法和服务器,通过在网卡的缓存区中开辟第二缓存区,第一线程获取内层IP,通过隧道规则判断出原始数据是否访问业务系统,若是,则将根据内层IP和隧道规则确定对应的隧道ID保存至第二缓存区以供第二线程获取和使用;以解决目前的同一个服务进程中第二线程需要执行与第一线程所执行的操作相同的操作获取相应的信息,导致服务进程整体的工作效率较低的问题。
本发明实施例提供数据包解析的方法,应用于服务器,所述服务器包括网卡和服务进程,所述网卡包括缓存区,所述缓存区包括第一缓存区和第二缓存区,所述服务进程包括第一线程和第二线程,所述数据包解析的方法包括:
所述网卡接收第一vxlan数据包,并将所述第一vxlan数据包保存至所述第一缓存区,所述第一vxlan数据包包括第一封装数据和原始数据,所述第一封装数据包括vni和外层IP,所述原始数据包括内层IP;
所述第一线程从所述第一vxlan数据包中解析所述第一封装数据并获取所述外层IP和所述vni,并根据所述外层IP判断对应的隧道类型是否为预设隧道类型;
若所述隧道类型为所述预设隧道类型,则所述第一线程将所述隧道类型和所述vni保存至所述第二缓存区;
所述第一线程判断所述隧道类型是否为第一预设隧道类型;
若所述隧道类型为所述第一预设隧道类型,则所述第一线程从所述第一 vxlan数据包中获取所述内层IP,并根据所述内层IP判断是否访问业务系统;
若访问业务系统,则所述第一线程根据所述内层IP将对应的隧道ID保存至所述第二缓存区;
所述第二线程从所述第二缓存区中提取所述vni和所述隧道ID,并根据所述vni和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包。
在一实施例中,所述第二线程从所述第二缓存区中提取所述vni和所述隧道ID,并根据所述vni和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包的步骤之前,包括:
若所述隧道类型不为所述第一预设隧道类型,则所述第二线程判断所述隧道类型是否为第二预设隧道类型;
若所述隧道类型为第二预设隧道类型,则所述第二线程从所述第二缓存区中提取所述vni,并根据所述vni确定对应的隧道ID。
在一实施例中,所述第一线程从所述第一vxlan数据包中解析所述第一封装数据并获取所述外层IP和所述vni,并根据所述外层IP判断对应的隧道类型是否为预设隧道类型的步骤,包括:
所述第一线程获取所述外层IP,并根据配置表中确定对应的隧道类型;
所述第一线程根据隧道类型,判断对应的隧道类型是否为预设隧道类型。
在一实施例中,所述第一线程从所述第一vxlan数据包中获取所述内层IP,并根据所述内层IP判断是否访问业务系统的步骤,包括:
所述第一线程从所述第一vxlan数据包中获取所述内层IP,并在隧道规则中查找多个业务IP段;
所述第一线程根据所述内层IP是否包含于所述多个业务IP段中的其中一个业务IP段判断是否访问业务系统。
在一实施例中,所述第一线程从所述第一vxlan数据包中解析所述第一封装数据并获取所述外层IP和所述vni,并根据所述外层IP判断对应的隧道类型是否为预设隧道类型的步骤之前,包括:
所述网卡向所述第一线程发送所述第一vxlan数据包的指针;
所述第一线程根据所述第一vxlan数据包的指针访问所述第一vxlan数据包。
在一实施例中,所述第二线程从所述第二缓存区中提取所述vni、所述隧道类型和所述隧道ID,并根据所述vni、所述隧道类型和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包的步骤之前,包括:
所述网卡向所述第二线程发送所述原始数据的指针;
所述第二线程根据所述原始数据的指针访问所述原始数据。
在一实施例中,所述第二线程从所述第二缓存区中提取所述vni、所述隧道类型和所述隧道ID,并根据所述vni、所述隧道类型和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包的步骤之后,包括:
所述网卡根据所述第二vxlan数据包确定对应的发送隧道;
所述网卡根据所述发送隧道发送所述第二vxlan数据包。
本发明实施例提供服务器,所述服务器包括网卡和服务进程,所述网卡包括缓存区,所述缓存区包括第一缓存区和第二缓存区,所述服务进程包括第一线程和第二线程;
所述网卡用于接收第一vxlan数据包,并将所述第一vxlan数据包保存至所述第一缓存区,所述第一vxlan数据包包括第一封装数据和原始数据,所述第一封装数据包括vni和外层IP,所述原始数据包括内层IP;
所述第一线程用于从所述第一vxlan数据包中解析所述第一封装数据并获取所述外层IP和所述vni,并根据所述外层IP判断对应的隧道类型是否为预设隧道类型;
若所述隧道类型为所述预设隧道类型,则所述第一线程还用于将所述隧道类型和所述vni保存至所述第二缓存区;
所述第一线程还用于判断所述隧道类型是否为第一预设隧道类型;
若所述隧道类型为所述第一预设隧道类型,则所述第一线程还用于从所述第一vxlan数据包中获取所述内层IP,并根据所述内层IP判断是否访问业务系统;
若访问业务系统,则所述第一线程还用于根据所述内层IP将对应的隧道ID保存至所述第二缓存区;
所述第二线程用于从所述第二缓存区中提取所述vni、所述隧道类型和所述隧道ID,并根据所述vni、所述隧道类型和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包。
在一实施例中,所述第一线程还用于获取所述外层IP,并根据配置表中确定对应的隧道类型;以及
所述第一线程还用于根据所述隧道类型,判断对应的隧道类型是否为预设隧道类型。
在一实施例中,所述第一线程还用于从所述第一vxlan数据包中获取所述内层IP,并在隧道规则中查找多个业务IP段;以及
所述第一线程还用于根据所述内层IP是否包含于所述多个业务IP段中的其中一个业务IP段判断是否访问业务系统。
本发明提供了数据包解析的方法和服务器,服务器包括网卡和服务进程, 所述网卡包括缓存区,所述缓存区包括第一缓存区和第二缓存区,所述服务进程包括第一线程和第二线程,第一线程从第一封装数据获取外层IP和vni,若对应的隧道类型为预设隧道类型,则将隧道类型和vni保存至第二缓存区;进一步的,若对应的隧道类型为第一预设隧道类型,则获取内层IP,若判断出为访问业务系统,则根据内层IP将对应的隧道ID保存至第二缓存区;第二线程从第二缓存区中提取vni和隧道ID以确定第二封装数据,将第二封装数据保存至第二缓存区,使得位于原始数据和第二封装数据共同构成第二vxlan数据包。该方案通过在网卡中的缓存区中开辟扩展信息缓存,将第一线程通过数据包解析得到的对应的隧道ID、以及处理第一vxlan数据包得到的vni均保存至第二缓存区以供第二线程获取和使用,避免第二线程执行数据包解析或者其它与第一线程重复的步骤;并且在所述第一线程和所述第二线程获取所述第一vxlan数据包中的相应信息以及执行相应操作的同时,仍然保证所述第一vxlan数据包的完整性,使得其他的线程可以正常获取所述第一vxlan数据包的信息,避免服务器重新从外界获取所述第一vxlan数据包以供给其它线程使用。综上,本方案提高了服务进程整体的工作效率。
附图说明
下面通过附图来对本发明进行进一步说明。需要说明的是,下面描述中的附图仅仅是用于解释说明本发明的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本发明实施例提供的数据包解析的系统的场景示意图;
图2为本发明实施例提供的一种网卡中缓存区的区间示意图;
图3为本发明实施例提供的第一种数据包解析的方法的流程示意图;
图4为本发明实施例提供的另一种网卡中缓存区的区间示意图;
图5为本发明实施例提供的第一vxlan数据包的结构示意图;
图6为本发明实施例提供的第二种数据包解析的方法的流程示意图;
图7为本发明实施例提供的第三种数据包解析的方法的流程示意图;
图8为本发明实施例提供的第四种数据包解析的方法的流程示意图;
图9为本发明实施例提供的第五种数据包解析的方法的流程示意图;
图10为本发明实施例提供的第六种数据包解析的方法的流程示意图;
图11为本发明实施例提供的第七种数据包解析的方法的流程示意图;
图12为本发明实施例提供的数据包解析的方法的信令交互示意图;
图13为本发明实施例提供的一种服务器的结构示意图;
图14为本发明实施例提供的另一种服务器的结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整的描述。显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域技术人员在没有作出创造性劳 动前提下所获得的所有其他实施例,都属于本发明保护的范围。
本发明中的术语“第一”、“第二”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或模块的过程、方法、系统、产品或设备没有限定于已列出的步骤或模块,而是可选地还包括没有列出的步骤或模块,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或模块。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本发明的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
本发明实施例提供的数据包解析的方法的执行主体,可以为本发明实施例提供的用于执行数据包解析的方法的服务器,或者集成了用于执行数据包解析的方法的服务器的电子设备,所述用于执行数据包解析的方法的服务器可以采用硬件或者软件的方式实现。
下面首先对本发明实施例中涉及到的一些基本概念进行介绍。
网卡:一块被设计用来允许计算机在计算机网络上进行通讯的计算机硬件,使得用户可以通过电缆或无线相互连接。每一个网卡都有一个被称为MAC地址的独一无二的48位串行号,它被写在网卡上的一块ROM中。网卡并不是独立的自治单元,因为网卡本身不带电源而是必须使用所插入的计算机的电源,并受该计算机的控制。当网卡收到一个有差错的帧时,它就将这个帧丢弃而不必通知它所插入的计算机。当网卡收到一个正确的帧时,它就使用中断来通知该计算机并交付给协议栈中的网络层。当计算机要发送一个IP数据包时,它就由协议栈向下交给网卡组装成帧后发送到局域网。
进程与线程的区别:线程是程序执行的最小单位,而进程是操作系统分配资源的最小单位;一个进程由一个或多个线程组成,线程是一个进程中代码的不同执行路线;进程之间相互独立,但同一进程下的各个线程之间共享程序的内存空间(包括代码段,数据集,堆等)及一些进程级的资源(如打开文件和信号等),某进程内的线程在其他进程不可见;线程上下文切换比进程上下文切换要快得多。
缓存区:多个以不同速度或优先级运行的硬件或程序进程共享的数据存储区。在高速和低速设备之间起一个速度平滑作用,暂时存储数据,经常访问的数据可以放进缓冲区,减少对慢速设备的访问以提高系统的效率。
数据包:在包交换网络里,单个消息被划分为多个数据块,这些数据块称为包,它包含发送者和接收者的地址信息。这些包然后沿着不同的路径在一个或多个网络中传输,并且在目的地重新组合。
隧道:一种封装技术,利用一种网络传输协议,将其他协议产生的数据数据包封装在它自己的数据包中,然后在网络中传输。实际上隧道可以看作一个 虚拟的点到点连接。简单地说就是,原始数据在A地进行封装,到达B地后把封装去掉,还原成原始数据,这样就形成了一条由A到B的通信隧道。隧道技术就是指包括封装、传输和解封装在内的全过程。隧道是通过隧道协议实现的,隧道协议规定了隧道的建立,维护和删除规则,以及怎样将原始数据封装在隧道中进行传输。
本发明实施例提供了数据包解析的方法和服务器。以下将分别进行详细说明。
请参阅图1,图1为本发明实施例所提供数据包解析的系统的场景示意图,该数据包解析的系统可以包括网卡100和服务进程10,所述服务进程10包括第一线程200和第二线程300,所述网卡100包括缓存区,所述缓存区包括第一缓存区和第二缓存区。
本申请实施例中,所述缓存区位于所述网卡100中,如图2所示,所述网卡100为所述缓存区配置2048字节的空间,其中每一个标号表示所述缓存区中对应的字节的序号,例如“0”表示第0个字节,“2047”表示第2047个字节。其中,前1600个字节所在区间为保存数据包的第一缓存区,即第0个字节至第1599个字节用于保存数据包;进一步的,可以从第1600个字节至第2047个字节所在区间中选择长达256个字节所在区间为保存数据包中部分信息的第二缓存区。需要注意的是,当所述第一缓存区和所述第二缓存区确定后,若已知所述第一缓存区,可以根据预先设置的所述第一缓存区的首个字节和所述第二缓存区的首个字节的相对位置,以确定所述第二缓存区。
其中,所述第二缓存区和所述第一缓存区之间可以预留预设区间,以适当区分所述数据包和所述数据包中被保存的部分信息,例如图2所示,所述第二缓存区可以为所述缓存区中的第1663个字节至第1918个字节所在区间;或者所述第二缓存区和所述第一缓存区也可以相邻设置,仅根据预先设置的所述第一缓存区的首个字节和所述第二缓存区的首个字节的相对位置,以确定所述第二缓存区。当然,可以根据所述数据包中部分信息的长度合理地选择所述第二缓存区相应的空间。
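The two paragraphs above describe, by way of example, a 2048-byte network-card buffer whose first 1600 bytes form the first buffer area holding the packet and whose bytes 1663 to 1918 form the second buffer area, located from the start of the first buffer area by a pre-agreed relative offset. The following C sketch only restates that example layout; the macro and function names are assumptions made for illustration and do not come from the patent.

    #include <stdint.h>
    #include <stddef.h>

    #define BUF_TOTAL_BYTES   2048   /* whole buffer configured by the network card */
    #define FIRST_AREA_BYTES  1600   /* bytes 0..1599: first buffer area (packet)   */
    #define SECOND_AREA_OFF   1663   /* first byte of the second buffer area        */
    #define SECOND_AREA_BYTES  256   /* bytes 1663..1918: second buffer area        */

    /* Given the first byte of the first buffer area, the second buffer area is
       located through the pre-set relative position of the two first bytes.        */
    static inline uint8_t *second_area(uint8_t *first_area_start) {
        return first_area_start + SECOND_AREA_OFF;
    }

    int main(void) {
        static uint8_t buf[BUF_TOTAL_BYTES];
        return (second_area(buf) - buf) == SECOND_AREA_OFF ? 0 : 1;
    }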
本申请实施例中,所述网卡100主要用于接收第一vxlan数据包,并将所述第一vxlan数据包保存至所述第一缓存区,所述第一vxlan数据包包括第一封装数据和原始数据,所述第一封装数据包括vni和外层IP,所述原始数据包括内层IP;所述第一线程主要用于从所述第一vxlan数据包中解析所述第一封装数据并获取所述外层IP和所述vni,并根据所述外层IP判断对应的隧道类型是否为预设隧道类型;若所述隧道类型为所述预设隧道类型,则所述第一线程将所述隧道类型和所述vni保存至所述第二缓存区;所述第一线程主要还用于判断所述隧道类型为所述第一预设隧道类型;若所述隧道类型为所述第一预设隧道类型,则所述第一线程从所述第一vxlan数据包中获取所述内层IP,并根据所述内层IP判断是否访问业务系统;若访问业务系统,则所述第一线程还用于根据所述内层IP将对应的隧道ID保存至所述第二缓存区;所述第二线程主要用于从所述第二缓存区中提取所述vni和所述隧道ID,并根据所述vni和所述 隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包。
本申请实施例中,所述数据包解析的系统可以包含于服务器中,即所述网卡100、所述第一线程200和所述第二线程300均可以包含于服务器中。所述服务器可以是独立的服务器,也可以是服务器组成的服务器网络或服务器集群,例如,所述服务器包括但不限于计算机、网络主机、单个网络服务器、多个网络服务器集或多个服务器构成的云服务器。其中,云服务器由基于云计算的大量计算机或网络服务器构成。
进一步的,所述服务器可以包括物理口和虚拟口。其中,所述物理口可以包含于所述网卡100,所述物理口用于接收终端或者业务系统发送的数据包,或者用于向业务系统或者终端发送数据包。其中,所述网卡100、所述第一线程200和所述第二线程300之间可以通过所述虚拟口进行通信。如图1所示,例如,所述网卡100的物理口收到数据包后或者所述第一线程200处理完数据包后,网卡驱动程序可以向所述第一线程200或者所述第二线程300发送“数据包指针”并通知所述第一线程200或者所述第二线程300处理数据包,所述第二线程300可以通过网卡驱动程序向所述物理口发送“处理数据包任务完成指令”,以表示已完成相应的任务,也可以通知所述物理口发送数据包至所述服务器外界。
本申请实施例中,所述终端可以是一个通用计算机设备或者是一个专用计算机设备。在具体实现中所述终端可以是台式机、便携式电脑、网络服务器、掌上电脑(Personal Digital Assistant,PDA)、移动手机、平板电脑、无线终端设备、通信设备、嵌入式设备等,本实施例不限定所述终端的类型。
本申请的实施例中,在所述服务器中创建一个服务进程,所述服务进程中包括所述第一线程200和所述第二线程300,所述服务进程是系统进行资源分配和调度的一个独立单位,所述第一线程200和所述第二线程300是所述服务进程的一个实体,是独立运行和独立调度的基本单位,所述第一线程200和所述第二线程300可共享所述服务进程所拥有的全部资源。其中,不同的服务进程之间可以通过管道、套接字、信号交互、共享内存、消息队列等等进行通信;而所述第一线程200和所述第二线程300共享同一所述服务进程中的内存,使用相同的地址空间,所述第一线程200和所述第二线程300之间遵循约定的规则以相互合作,所述第一线程200和所述第二线程300之间可以通过wait/notify等待、Volatile内存共享、CountDownLatch并发工具、CyclicBarrier并发工具进行通信。
本领域技术人员可以理解,图1中示出的应用环境,仅仅是与本申请方案一种应用场景,并不构成对本申请方案应用场景的限定,其他的应用环境还可以为:所述服务进程中包括比图1中所示更多的线程,例如图1中仅示出2个线程,可以理解的,该数据包解析的系统还可以包括一个或多个可访问所述网卡100的其它线程,具体此处不作限定。
需要说明的是,图1所示的数据包解析的场景示意图仅仅是一个示例,本发明实施例描述的数据包解析的系统以及场景是为了更加清楚的说明本发明实施例的技术方案,并不构成对于本发明实施例提供的技术方案的限定,本领域普通技术人员可知,随着数据包解析的系统的演变和新业务场景的出现,本发明实施例提供的技术方案对于类似的技术问题,同样适用。
本发明实施例中提供一种数据包解析的方法,该数据包解析的方法的执行主体为所述服务器,所述服务器包括网卡、第一线程和第二线程,所述网卡包括缓存区,所述缓存区包括第一缓存区和第二缓存区,所述数据包解析的方法包括:所述网卡接收第一vxlan数据包,并将所述第一vxlan数据包保存至所述第一缓存区,所述第一vxlan数据包包括第一封装数据和原始数据,所述第一封装数据包括vni和外层IP,所述原始数据包括内层IP;所述第一线程从所述第一vxlan数据包中解析所述第一封装数据并获取所述外层IP和所述vni,并根据所述外层IP判断对应的隧道类型是否为预设隧道类型;若所述隧道类型为所述预设隧道类型,则所述第一线程将所述隧道类型和所述vni保存至所述第二缓存区;所述第一线程判断所述隧道类型是否为第一预设隧道类型;若所述隧道类型为所述第一预设隧道类型,则所述第一线程从所述第一vxlan数据包中获取所述内层IP,并根据所述内层IP判断是否访问业务系统;若访问业务系统,则所述第一线程根据所述内层IP将对应的隧道ID保存至所述第二缓存区;所述第二线程从所述第二缓存区中提取所述vni和所述隧道ID,并根据所述vni和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包。
如图3所示,为本发明实施例中数据包解析的方法的一个实施例流程示意图,该数据包解析的方法包括:
S101、所述网卡接收第一vxlan数据包,并将所述第一vxlan数据包保存至所述第一缓存区,所述第一vxlan数据包包括第一封装数据和原始数据,所述第一封装数据包括vni和外层IP,所述原始数据包括内层IP。
本实施例中,所述网卡可以为如图1中所示的网卡100,其中,所述第一vxlan数据包可以是所述网卡100的物理端口接收到的来自终端或者业务系统的数据包。
其中,如图4所示,所述缓存区包括所述第一缓存区和第二缓存区,所述第一缓存区可以位于第二缓存区之前,即所述第一缓存区可以位于所述缓存区中的前一部分。进一步的,所述第一vxlan数据包可以包括所述第一封装数据和所述原始数据,所述第一封装数据位于所述原始数据之前,所述vni和所述外层IP包含于所述第一封装数据,所述内层IP包含于所述原始数据。需要注意的是,图4中对于所述vni、所述外层IP、所述内层IP、所述第一封装数据和所述原始数据三者的区间长度划分只是为了便于绘图,并不对三者的区间长度的比例关系做出限制。
具体的,如图5所示,所述第一vxlan数据包可以包括所述第一封装数据和 所述原始数据,具体描述如下:
根据与所述原始数据的距离由近到远,所述第一封装数据可以依次包括VXLAN header901、Outer UDP header902、Outer IP header903和Outer Ethernet header904,进一步的,根据与所述原始数据的距离由远到近,VXLAN header901包括VXLAN Flags905和VNI906。其中,VNI为上文中的vni,所述vni(VNI)为VXLAN网络标识,用于标识所述第一vxlan数据包所属的租户,一个租户可以有一个或多个VNI,不同VNI的租户之间不能直接进行二层相互通信;其中,VXLAN Flags为标记位,包括8位,格式为“RRRRIRRR”,“I”位为1时,表示所述vni(VNI)有效,为0,表示所述vni(VNI)无效,“R”位保留未用,设置为0;其中,在VXLAN Flags905和VNI906之间、VNI906和所述原始数据之间也包括Reserved801,用于保留未用,设置为0。其中,Outer IP header903包括上文中的外层IP,所述外层IP具体包括IP SA802和IP DA803,IP SA为源IP地址,即隧道的源端VTEP的IP地址,IP DA为目的IP地址,即隧道的目的端VTEP的IP地址。
如图5所示,根据与所述第一封装数据的距离由近到远,所述原始数据可以依次包括Inner Ethernet header907、Inner IP header908和Payload909。其中,Inner Ethernet header包括发送端的MAC地址和下一跳设备的的MAC地址,Inner IP header包括上文中的内层IP,所述内层IP具体包括发送端的IP地址和接收端的IP地址,所述发送端和所述接收端根据实际收发所述第一vxlan数据包的情况,分别与上文中提到的终端和业务系统相对应;其中,Payload可以包括指令信息或者数据信息。
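As a supplement to the layout just described, the following C sketch shows one possible in-memory form of the VXLAN header, with the "RRRRIRRR" flags byte (the I bit set to 1 when the VNI is valid) followed by the 24-bit VNI; the struct and function names are illustrative assumptions rather than definitions taken from the patent, and the outer Ethernet/IP/UDP headers that precede this header in the frame are omitted.

    #include <stdint.h>
    #include <stdio.h>

    struct vxlan_hdr {
        uint8_t flags;          /* "RRRRIRRR": bit 0x08 is the I bit                */
        uint8_t reserved1[3];   /* reserved, set to 0                               */
        uint8_t vni[3];         /* 24-bit VXLAN network identifier (VNI)            */
        uint8_t reserved2;      /* reserved, set to 0                               */
    };

    /* Returns the VNI, or -1 when the I bit says the VNI is invalid.                */
    static long parse_vni(const struct vxlan_hdr *h) {
        if ((h->flags & 0x08) == 0)
            return -1;
        return ((long)h->vni[0] << 16) | ((long)h->vni[1] << 8) | (long)h->vni[2];
    }

    int main(void) {
        struct vxlan_hdr h = { .flags = 0x08, .vni = { 0x00, 0x00, 0x64 } };
        printf("vni = %ld\n", parse_vni(&h));   /* prints: vni = 100                */
        return 0;
    }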
S102、所述第一线程从所述第一vxlan数据包中解析所述第一封装数据并获取所述外层IP和所述vni,并根据所述外层IP判断对应的隧道类型是否为预设隧道类型。
其中,所述第一线程可以访问所述第一vxlan数据包,并且解析所述第一vxlan数据包中的所述第一封装数据,根据解析结果获取所述第一封装数据中的所述外层IP和所述vni。具体的,所述第一线程可以获取所述第一封装数据中的VXLAN Flags信息,对于格式为“RRRRIRRR”的VXLAN Flags,若“I”位为1,则执行所述步骤S102,若“I”位为0,则不执行所述步骤S102。进一步的,所述第一线程可以根据所述外层IP确定对应的隧道类型,再将所述隧道类型和所述预设隧道类型做对比,判断对应的隧道类型是否为预设隧道类型。
可以理解的,此时保存于所述第一缓存区中的为所述原始数据,保存于所述第二缓存区中的为所述外层IP和所述vni。根据上文分析可知,所述第二缓存区和所述第一缓存区之间可以预留预设区间,以适当区分所述数据包和所述数据包中被保存的部分信息,因此,此处所述外层IP、所述vni和所述原始数据之间可以预留预设空间,在根据预先设置的所述第一缓存区的首个字节和所述第二缓存区的首个字节的相对位置,以确定所述第二缓存区的前提下,该实施例可以进一步确定所述外层IP、所述vni,例如在上述前提下可以根据经上述步骤确定的所述外层IP、所述vni分别和所述第一缓存区之间是否有预设区间,以判 断所述“确定的外层IP”、所述“确定的vni”是否为真正的所述外层IP、所述vni。
S103、若所述隧道类型为所述预设隧道类型,则所述第一线程将所述隧道类型和所述vni保存至所述第二缓存区。
其中,若所述第一vxlan数据包对应的隧道类型为所述预设隧道类型,则所述第一线程才将所述隧道类型和所述vni保存至所述第二缓存区;否则,所述第一线程丢弃所述第一vxlan数据包,即所述第一线程从此步骤开始对所述第一vxlan数据包不做任何相关的处理。
S01、所述第一线程判断所述隧道类型是否为第一预设隧道类型。
其中,所述预设隧道类型可以包括多种隧道类型,所述第一预设隧道类型为包含于所述预设隧道类型中的其中一种隧道类型。
S104、若所述隧道类型为所述第一预设隧道类型,则所述第一线程从所述第一vxlan数据包中获取所述内层IP,并根据所述内层IP判断是否访问业务系统。
其中,所述第一预设隧道类型可以为lan型隧道,则当所述第一vxlan数据包对应的隧道类型为lan型隧道时,所述第一线程才将所述隧道类型和所述vni保存至所述第二缓存区;进而,所述第一线程可以访问所述第一vxlan数据包,并且解析所述第一vxlan数据包中的所述原始数据,根据解析结果获取所述内层IP,也即获取Inner IP header,具体为获取所述内层IP中的接收端的IP地址。进一步的,所述服务进程中可以包含第一隧道转发规则,所述第一线程可以根据所述内层IP获取所述第一隧道转发规则,即所述第一隧道转发规则适用于所述第一预设隧道类型的数据包;其中,所述第一隧道转发规则中规定了当所述第一预设隧道类型的数据包中的所述接收端的IP地址符合什么要求时,可以访问业务系统。因此,根据所述第一vxlan数据包中的所述内层IP中的所述接收端的IP地址,结合所述第一隧道转发规则,就可以判断所述第一vxlan数据包是否访问业务系统。
S105、若访问业务系统,则所述第一线程将对应的隧道ID保存至所述第二缓存区。
其中,所述第一隧道转发规则中包含了多个IP段和多个隧道ID,所述多个IP段和多个隧道ID一一对应,即每一个隧道ID与所述第一vxlan数据包中的所述接收端的IP地址所处于的IP段相对应,由于此步骤前提是所述第一vxlan数据包的所述隧道类型为所述第一预设隧道类型,结合所述步骤S104,当所述内层IP中的接收端的IP地址符合访问业务系统的要求时,所述第一线程才会根据所述接收端的IP地址所处于的IP段获取对应的隧道ID,并将所述隧道ID保存至所述第二缓存区;否则,所述第一线程丢弃所述第一vxlan数据包,即所述第一线程从此步骤开始对所述第一vxlan数据包不做任何相关的处理。
可以理解的,当所述第一线程将所述隧道ID保存至所述第二缓存区后,所述第一线程可以向所述网卡发送“处理数据包任务完成指令”等相关的指令,以告知所述网卡所述第一线程已完成“处理数据包任务”等相关的操作,以便于所述网卡进行下一步操作。同时,所述第一线程此时可以获取所述原始数据 的指针,并且所述第一线程的虚拟口也可以向所述网卡的虚拟口发送所述原始数据的指针。具体可以包括如下步骤:所述第一线程通过网卡驱动程序向所述服务器中的虚拟口发送所述原始数据的指针,网卡驱动程序通知所述第二线程根据所述原始数据的指针处理对应部分的数据包。
S106、所述第二线程从所述第二缓存区中提取所述vni和所述隧道ID,并根据所述vni和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包。
其中,所述第二线程可以确定出新的对应的Outer UDP header、Outer IP header和Outer Ethernet header,具体的,所述第二线程可以根据所述vni和所述隧道ID为所述原始数据配置新的对应的Outer UDP header、Outer IP header和Outer Ethernet header,由于所述vni没有发生变化,对应的VXLAN heade也没有发生变化,以上新的对应的Outer UDP header、Outer IP header和Outer Ethernet header、VXLAN header共同构成所述第二封装数据。
其中,所述隧道类型表示所述第一vxlan数据包来自哪里,例如来自终端或者业务系统,所述隧道ID表示所述第二vxlan数据包发往哪里,例如发往终端或者业务系统。例如,若所述隧道类型为lan型隧道,则表示此步骤中的所述第一vxlan数据包从终端到达服务器,根据所述隧道ID和所述vni可以确定出新的隧道的源端的MAC地址、目的端的MAC地址、源端的IP地址、目的端的IP地址、源端的UDP端口号和目的端的UDP端口号。其中,所述源端的MAC地址、所述目的端的MAC地址、所述源端的IP地址、所述目的端的IP地址、所述源端的UDP端口号和所述目的端的UDP端口号形成新的对应的Outer Ethernet header、Outer IP header和Outer UDP header。
根据上文分析可知,此时所述第二缓存区中包含所述第二封装数据、所述vni、所述隧道类型和所述隧道ID,可以选取其中的第二封装数据和位于所述第一缓存区中的所述原始数据共同构成所述第二vxlan数据包。
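The following C sketch illustrates, under assumed data structures, how the second encapsulation data could be assembled from a per-tunnel-ID entry (outer source/destination MAC addresses, outer source/destination IP addresses and outer UDP port numbers) while the vni, and hence the VXLAN header, stays unchanged; the tunnel_entry table and all field names are invented for this example.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    struct tunnel_entry {               /* assumed per-tunnel configuration          */
        uint32_t tunnel_id;
        uint8_t  src_mac[6], dst_mac[6];
        uint32_t src_ip, dst_ip;        /* outer source / destination IP             */
        uint16_t src_port, dst_port;    /* outer UDP port numbers                     */
    };

    struct outer_encap {                /* the "second encapsulation data"            */
        uint8_t  src_mac[6], dst_mac[6];
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint32_t vni;                   /* unchanged, so the VXLAN header is reused   */
    };

    static void build_second_encap(const struct tunnel_entry *t, uint32_t vni,
                                   struct outer_encap *out) {
        memcpy(out->src_mac, t->src_mac, 6);
        memcpy(out->dst_mac, t->dst_mac, 6);
        out->src_ip   = t->src_ip;
        out->dst_ip   = t->dst_ip;
        out->src_port = t->src_port;
        out->dst_port = t->dst_port;
        out->vni      = vni;
    }

    int main(void) {
        struct tunnel_entry t = { .tunnel_id = 7, .src_ip = 0x0a000001u,
                                  .dst_ip = 0x0a000002u, .src_port = 4789, .dst_port = 4789 };
        struct outer_encap e;
        build_second_encap(&t, 100, &e);
        printf("tunnel %u carries vni %u\n", (unsigned)t.tunnel_id, (unsigned)e.vni);
        return 0;
    }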
本实施例中,如图6所示,所述步骤S106之前可以包括如下步骤:
S02、若所述隧道类型不为所述第一预设隧道类型,所述第二线程判断所述隧道类型是否为第二预设隧道类型。
同理,所述第二预设隧道类型也为包含于所述预设隧道类型中的其中一种隧道类型。具体的,所述第二线程可以从所述第二缓存区获取所述隧道类型,并且判断所述隧道类型是否为第二预设隧道类型。
S03、若所述隧道类型为第二预设隧道类型,则所述第二线程从所述第二缓存区中提取所述vni,并根据所述vni确定对应的隧道ID。
其中,所述第二预设隧道类型可以为mec型隧道,则当所述第一vxlan数据包对应的隧道类型为mec型隧道时,所述第二线程从所述第二缓存区中提取所述vni,并根据所述vni确定对应的隧道ID。进一步的,所述服务进程中还可以包含第二隧道转发规则,所述第二线程可以根据所述vni获取所述第二隧道转发规则,即所述第二隧道转发规则适用于所述第二预设隧道类型的数据包;其中, 所述第二隧道转发规则中包含了多个vni和多个隧道ID,所述多个vni和所述多个隧道ID一一对应。由于此时所述第一vxlan数据包的隧道类型为所述第二预设隧道类型,所述第二隧道转发规则中包含多个隧道ID,且由上述分析可知,每一个隧道ID对应一个vni,因此,所述第二线程根据所述vni即可从所述第二隧道转发规则中找到对应的隧道ID。
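A minimal C sketch of the one-to-one vni-to-tunnel-ID lookup used here for the second preset (mec) tunnel type is given below; the table contents and names are invented examples, not values from the patent.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    struct vni_rule { uint32_t vni; uint32_t tunnel_id; };   /* one-to-one mapping    */

    static const struct vni_rule second_rule[] = {           /* assumed example data  */
        { 100, 7 }, { 200, 8 }, { 300, 9 },
    };

    static int lookup_tunnel_by_vni(uint32_t vni, uint32_t *tunnel_id) {
        for (size_t i = 0; i < sizeof second_rule / sizeof second_rule[0]; i++) {
            if (second_rule[i].vni == vni) {
                *tunnel_id = second_rule[i].tunnel_id;
                return 1;
            }
        }
        return 0;                       /* no matching rule for this vni              */
    }

    int main(void) {
        uint32_t id;
        if (lookup_tunnel_by_vni(200, &id))
            printf("tunnel id %u\n", (unsigned)id);
        return 0;
    }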
需要注意的是,当执行完所述步骤S105或者所述步骤S03之后,均可以执行所述步骤S106。
本实施例中,如图7所示,所述步骤S102之前可以包括如下步骤:
S201、所述网卡向所述第一线程发送所述第一vxlan数据包的指针。
可以理解的,所述第一vxlan数据包位于所述网卡中的所述第一缓存区,因此,当所述网卡的物理口接收到所述第一vxlan数据包并将所述第一vxlan数据包保存于所述第一缓存区后,网卡驱动程序可以向所述第一线程发送所述第一vxlan数据包的指针以将所述第一vxlan数据包的首地址告知所述第一线程,并通知所述第一线程处理所述第一vxlan数据包。
S202、所述第一线程根据所述第一vxlan数据包的指针访问所述第一vxlan数据包。
可以理解的,当所述第一线程获取所述第一vxlan数据包的指针后,即可以快速定位至所述第一vxlan数据包,以对所述第一vxlan数据包执行所述步骤S102的相关操作。
本实施例中,如图8所示,所述步骤S102可以包括如下步骤:
S1021、所述第一线程获取所述外层IP,并根据配置表中确定对应的隧道类型。
具体的,所述服务进程中保存有所述配置表,所述配置表包括了多个Outer UDP header、多个Outer IP header、多个Outer Ethernet header和多个隧道类型,所述多个Outer UDP header、所述多个Outer IP header、所述多个Outer Ethernet header和所述多个隧道类型一一对应。由上文分析可知,所述外层IP为所述第一vxlan数据包中的Outer IP header,此时根据所述外层IP可以在所述配置表中查找到对应的隧道类型,其中,所述隧道类型可以包括lan型隧道、wan型隧道和mec型隧道。具体的,所述lan型隧道可以表示该隧道和终端相连,即所述第一vxlan数据包来自终端;所述wan型隧道可以表示该隧道和bras相连,即所述第一vxlan数据包来自bras;所述mec型隧道可以表示该隧道和业务系统相连,即所述第一vxlan数据包来自业务系统。
S1022、所述第一线程根据隧道类型,判断对应的隧道类型是否为预设隧道类型。
其中,所述服务进程中保存有所述预设隧道类型,所述预设隧道类型包括所述lan型隧道、所述wan型隧道和所述mec型隧道三者中的至少一种;此处以所述预设隧道类型包括所述lan型隧道和所述mec型隧道为例进行后续说明,即所述第一vxlan数据包的隧道类型为所述lan型隧道或者所述mec型隧道均属于所述预设隧道类型。
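The configuration-table lookup and the preset-type check described in the last two paragraphs could look like the following C sketch, in which the preset types are taken to be the lan and mec tunnels as in the example above; all table entries and names are invented for illustration.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    enum tunnel_type { TUNNEL_LAN, TUNNEL_WAN, TUNNEL_MEC };

    struct cfg_entry { uint32_t outer_ip; enum tunnel_type type; };

    static const struct cfg_entry cfg_table[] = {             /* assumed example data  */
        { 0xc0a80101u, TUNNEL_LAN },   /* 192.168.1.1: tunnel towards terminals        */
        { 0xc0a80201u, TUNNEL_WAN },   /* 192.168.2.1: tunnel towards the bras         */
        { 0xc0a80301u, TUNNEL_MEC },   /* 192.168.3.1: tunnel towards service systems  */
    };

    static int lookup_type(uint32_t outer_ip, enum tunnel_type *out) {
        for (size_t i = 0; i < sizeof cfg_table / sizeof cfg_table[0]; i++)
            if (cfg_table[i].outer_ip == outer_ip) { *out = cfg_table[i].type; return 1; }
        return 0;
    }

    static int is_preset(enum tunnel_type t) {   /* preset = lan or mec in this example */
        return t == TUNNEL_LAN || t == TUNNEL_MEC;
    }

    int main(void) {
        enum tunnel_type t;
        if (lookup_type(0xc0a80101u, &t) && is_preset(t))
            printf("preset tunnel type %d\n", (int)t);
        return 0;
    }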
本实施例中,如图9所示,所述步骤S104可以包括如下步骤:
S1041、所述第一线程从所述第一vxlan数据包中获取所述内层IP,并在隧道规则中查找多个业务IP段。
由上文分析可知,所述服务进程中的所述第一隧道转发规则规定了当所述内层IP中的所述接收端的IP地址符合什么要求时,可以访问业务系统。具体的,所述第一隧道转发规则包括了所述多个业务IP段,所述多个业务IP段中每一个业务IP段中包括了多个IP地址,每一个业务IP段中的所述多个IP地址可以为连续或者是不连续的多个IP地址,即每一个业务IP段可以理解为对应的多个IP地址的集合。
S1042、所述第一线程根据所述内层IP是否包含于所述多个业务IP段中的其中一个业务IP段判断是否访问业务系统。
其中,访问业务系统可以理解为向所述业务系统传输所述原始数据。具体的,当所述内层IP中的所述接收端的IP地址包含于所述多个业务IP段中的其中一个业务IP段时,即表示所述第一vxlan数据包可以访问与所述业务IP段对应的业务系统,结合所述步骤S105可知,所述业务IP段可以对应一个隧道ID,即表示所述第一vxlan数据包可以通过所述业务IP段对应的隧道ID访问对应的业务系统。
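A C sketch of the first tunnel forwarding rule is given below: each service IP segment is modelled as a contiguous range of addresses bound to one tunnel ID, and the inner-layer destination IP may access a service system only if it falls inside one of the segments. The ranges and names are invented for this example; as the text notes, a segment may equally be a non-contiguous set of addresses.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    struct ip_segment { uint32_t first, last; uint32_t tunnel_id; };

    static const struct ip_segment first_rule[] = {            /* assumed example data  */
        { 0x0a010000u, 0x0a0100ffu, 11 },   /* 10.1.0.0 - 10.1.0.255   -> tunnel 11      */
        { 0x0a020000u, 0x0a02ffffu, 12 },   /* 10.2.0.0 - 10.2.255.255 -> tunnel 12      */
    };

    /* returns 1 (and the tunnel ID) when the inner IP may access a service system       */
    static int may_access_service(uint32_t inner_ip, uint32_t *tunnel_id) {
        for (size_t i = 0; i < sizeof first_rule / sizeof first_rule[0]; i++) {
            if (inner_ip >= first_rule[i].first && inner_ip <= first_rule[i].last) {
                *tunnel_id = first_rule[i].tunnel_id;
                return 1;
            }
        }
        return 0;                           /* otherwise the first thread drops the packet */
    }

    int main(void) {
        uint32_t id;
        if (may_access_service(0x0a010005u, &id))
            printf("tunnel id %u\n", (unsigned)id);
        return 0;
    }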
本实施例中,如图10所示,所述步骤S106之前可以包括如下步骤:
S301、所述网卡向所述第二线程发送所述原始数据的指针。
可以理解的,所述原始数据位于所述第一缓存区中的所述第一封装数据之后,因此,当所述第一线程处理完所述第一vxlan数据包后,网卡驱动程序可以向所述第二线程发送所述原始数据的指针以将所述原始数据的首地址告知所述第二线程,并通知所述第二线程处理原始的首地址之后的内容。
S302、所述第二线程根据所述原始数据的指针访问所述原始数据。
可以理解的,当所述第二线程获取所述原始数据的指针后,即可以快速定位至所述原始数据,以对所述原始数据执行所述步骤S106的相关操作。
根据上文分析可知,若已知所述第一缓存区,可以根据预先设置的所述第一缓存区的首个字节和所述第二缓存区的首个字节的相对位置,以确定所述第二缓存区,因此所述第二线程获取所述原始数据的指针后可以确定所述第二缓存区的地址,从所述第二缓存区中提取所述vni、所述隧道类型和所述隧道ID。需要注意的是,由于位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包,即所述第二vxlan数据包中不包含所述第二封装数据,即所述第二线程根据所述原始数据的指针访问所述原始数据可以便于所述第二线程可以自动屏蔽所述原始数据的地址之前的例如所述第一封装数据等信息,提高所述第二线程的工作效率。
本实施例中,如图11所示,所述步骤S106之后可以包括如下步骤:
S401、所述网卡根据所述第二vxlan数据包确定对应的发送隧道。
其中,所述第二vxlan数据包中的所述第二封装数据可以确定出一对隧道两端的IP地址和隧道的源MAC地址,但是,不同的隧道其中一端可能会对应 相同的源MAC地址和IP地址;进一步的,所述第二vxlan数据包中的vni可以通过VXLAN网络标识确定出所述发送隧道。
S402、所述网卡根据所述发送隧道发送所述第二vxlan数据包。
可以理解的,所述发送隧道即为所述第二vxlan数据包传输的路径。例如,当所述终端向所述网卡发送所述第一vxlan数据包时,所述第一vxlan数据包经过上述多种转化后得到的所述第二vxlan,所述第二vxlan数据包确定的所述发送隧道的一端即为所述网卡的物理口,所述发送隧道的另一端即为所述业务系统的物理口,即所述第二vxlan数据包可以从所述网卡传输至所述业务系统;又例如当所述业务系统向所述网卡发送所述第一vxlan数据包时,所述第一vxlan数据包经过上述多种转化后得到的所述第二vxlan,所述第二vxlan数据包确定的所述发送隧道的一端即为所述网卡的物理口,所述发送隧道的另一端即为所述终端的物理口,即所述第二vxlan数据包可以从所述网卡传输至所述终端。
本实施例中,如图12所示,为本发明实施例中数据包解析的方法的信令交互的示意图,该数据包解析的方法的信令交互的示意图包括如下步骤:
S1、网卡接收第一vxlan数据包,并将所述第一vxlan数据包保存至第一缓存区;
S2、网卡向第一线程发送所述第一vxlan数据包的指针;
S3、第一线程从所述第一vxlan数据包中解析第一封装数据,并获取其中的外层IP和vni,并根据所述外层IP判断对应的隧道类型是否为预设隧道类型;
S4、若所述隧道类型为所述预设隧道类型,则第一线程将所述隧道类型和所述vni保存至所述第二缓存区;
S5、第一线程从所述第一vxlan数据包中获取内层IP,并根据所述内层IP判断是否访问业务系统;
S6、若访问业务系统,则第一线程根据所述内层IP确定对应的隧道ID,并将所述隧道ID保存至所述第二缓存区;
S7、第一线程向网卡发送“处理数据包任务完成指令”;
S8、网卡向第二线程发送所述第一vxlan数据包中原始数据的指针;
S9、第二线程从所述第二缓存区中提取所述vni和所述隧道ID;
S10、第二线程根据所述vni和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区;
S11、第二线程向网卡发送“处理数据包任务完成指令”。
为了更好实施本发明实施例中数据包解析的方法,在数据包解析的方法基础之上,本发明实施例中还提供一种服务器,如图13所示,所述服务器400包括网卡401和服务进程402,所述网卡401包括缓存区,所述缓存区包括第一缓存区和第二缓存区,所述服务进程402包括第一线程4021和第二线程4022;
所述网卡401用于接收第一vxlan数据包,并将所述第一vxlan数据包保存至所述第一缓存区,所述第一vxlan数据包包括第一封装数据和原始数据,所述第一封装数据包括vni和外层IP,所述原始数据包括内层IP;
所述第一线程4021用于从所述第一vxlan数据包中解析所述第一封装数据并获取所述外层IP和所述vni,并根据所述外层IP判断对应的隧道类型是否为预设隧道类型;
若所述隧道类型为所述预设隧道类型,则所述第一线程4021还用于将所述隧道类型和所述vni保存至所述第二缓存区;
所述第一线程4021还用于判断所述隧道类型是否为第一预设隧道类型;
若所述隧道类型为所述第一预设隧道类型,则所述第一线程4021还用于从所述第一vxlan数据包中获取所述内层IP,并根据所述内层IP判断是否访问业务系统;
若访问业务系统,则所述第一线程4021还用于将对应的隧道ID保存至所述第二缓存区;
所述第二线程4022还用于从所述第二缓存区中提取所述vni和所述隧道ID,并根据所述vni和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包。
在本申请一些实施例中,所述第二线程4022还用于判断所述隧道类型是否为第二预设隧道类型;以及若所述隧道类型为第二预设隧道类型,则所述第二线程4022还用于从所述第二缓存区中提取所述vni,并根据所述vni确定对应的隧道ID。
在本申请一些实施例中,所述第一线程4021还用于获取所述外层IP,并根据配置表中确定对应的隧道类型;以及所述第一线程4021还用于根据所述隧道类型,判断对应的隧道类型是否为预设隧道类型。
在本申请一些实施例中,所述第一线程4021还用于从所述第一vxlan数据包中获取所述内层IP,并在隧道规则中查找多个业务IP段;以及所述第一线程4021还用于根据所述内层IP是否包含于所述多个业务IP段中的其中一个业务IP段判断是否访问业务系统。
在本申请一些实施例中,所述网卡401还用于向所述第一线程4021发送所述第一vxlan数据包的指针;所述第一线程4021还用于根据所述第一vxlan数据包的指针访问所述第一vxlan数据包。
在本申请一些实施例中,所述网卡401向所述第二线程4022发送所述原始数据的指针;所述第二线程4022根据所述原始数据的指针访问所述原始数据。
在本申请一些实施例中,所述网卡401根据所述第二vxlan数据包确定对应的发送隧道;所述网卡401根据所述发送隧道发送所述第二vxlan数据包。
本发明提供了数据包解析的方法和服务器,服务器包括网卡和服务进程,所述网卡包括缓存区,所述缓存区包括第一缓存区和第二缓存区,所述服务进程包括第一线程和第二线程,第一线程从第一封装数据获取外层IP和vni,若对应的隧道类型为预设隧道类型,则将隧道类型和vni保存至第二缓存区;进一步的,若对应的隧道类型为第一预设隧道类型,则获取内层IP,若判断出为访问业务系统,则根据内层IP将对应的隧道ID保存至第二缓存区;第二线程 从第二缓存区中提取vni和隧道ID以确定第二封装数据,将第二封装数据保存至第二缓存区,使得位于原始数据和第二封装数据共同构成第二vxlan数据包。该方案通过在网卡中的缓存区中开辟扩展信息缓存,将第一线程通过数据包解析得到的对应的隧道ID、以及处理第一vxlan数据包得到的vni均保存至第二缓存区以供第二线程获取和使用,避免第二线程执行数据包解析或者其它与第一线程重复的步骤;并且在所述第一线程和所述第二线程获取所述第一vxlan数据包中的相应信息以及执行相应操作的同时,仍然保证所述第一vxlan数据包的完整性,使得其他的线程可以正常获取所述第一vxlan数据包的信息,避免服务器重新从外界获取所述第一vxlan数据包以供给其它线程使用。综上,本方案提高了服务进程整体的工作效率。
本发明实施例还提供一种服务器,如图14所示,其示出了本发明实施例所涉及的服务器的结构示意图,具体来讲:
该服务器可以包括一个或者一个以上处理核心的处理器801、一个或一个以上计算机可读存储介质的存储器802、电源803和输入单元804等部件。本领域技术人员可以理解,图14中示出的服务器结构并不构成对服务器的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。其中:
处理器801是该服务器的控制中心,利用各种接口和线路连接整个服务器的各个部分,通过运行或执行存储在存储器802内的软件程序和/或模块,以及调用存储在存储器802内的数据,执行服务器的各种功能和处理数据,从而对服务器进行整体监控。可选的,处理器801可包括一个或多个处理核心;处理器801可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等,优选的,处理器801可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器801中。
存储器802可用于存储软件程序以及模块,处理器801通过运行存储在存储器802的软件程序以及模块,从而执行各种功能应用以及数据处理。存储器802可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据服务器的使用所创建的数据等。此外,存储器802可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。相应地,存储器802还可以包括存储器服务器,以提供处理器801对存储器802的访问。
服务器还包括给各个部件供电的电源803,优选的,电源803可以通过电 源管理系统与处理器801逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。电源803还可以包括一个或一个以上的直流或交流电源、再充电系统、电源故障检测电路、电源转换器或者逆变器、电源状态指示器等任意组件。
该服务器还可包括输入单元804,该输入单元804可用于接收输入的数字或字符信息,以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。
尽管未示出,服务器还可以包括显示单元等,在此不再赘述。具体在本实施例中,服务器中的处理器801会按照如下的指令,将一个或一个以上的应用程序的进程对应的可执行文件加载到存储器802中,并由处理器801来运行存储在存储器802中的应用程序,从而实现各种功能,所述处理器801可以向服务器中的网卡以及属于同一个服务进程中的第一线程、第二线程发出指令,以使得所述网卡、所述第一线程和所述第二线程依次执行以下步骤:
所述网卡接收第一vxlan数据包,并将所述第一vxlan数据包保存至所述第一缓存区,所述第一vxlan数据包包括第一封装数据和原始数据,所述第一封装数据包括vni和外层IP,所述原始数据包括内层IP;
所述第一线程从所述第一vxlan数据包中解析所述第一封装数据并获取所述外层IP和所述vni,并根据所述外层IP判断对应的隧道类型是否为预设隧道类型;
若所述隧道类型为所述预设隧道类型,则所述第一线程将所述隧道类型和所述vni保存至所述第二缓存区;
所述第一线程判断所述隧道类型是否为第一预设隧道类型;
若所述隧道类型为所述第一预设隧道类型,则所述第一线程从所述第一vxlan数据包中获取所述内层IP,并根据所述内层IP判断是否访问业务系统;
若访问业务系统,则所述第一线程根据所述内层IP确定将对应的隧道ID保存至所述第二缓存区;
所述第二线程从所述第二缓存区中提取所述vni和所述隧道ID,并根据所述vni和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包。
本领域普通技术人员可以理解,上述实施例的各种方法中的全部或部分步骤可以通过指令来完成,或通过指令控制相关的硬件来完成,该指令可以存储于一计算机可读存储介质中,并由处理器进行加载和执行。
为此,本发明实施例提供一种计算机可读存储介质,该存储介质可以包括:只读存储器(ROM,Read Only Memory)、随机存取记忆体(RAM,Random Access Memory)、磁盘或光盘等。其上存储有计算机程序,所述计算机程序被处理器进行加载,以向向服务器中的网卡以及属于同一个服务进程中的第一线程、第二线程发出指令,以使得所述网卡、所述第一线程和所述第二线程依次执行以下步骤:
所述网卡接收第一vxlan数据包,并将所述第一vxlan数据包保存至所述第一缓存区,所述第一vxlan数据包包括第一封装数据和原始数据,所述第一封装数据包括vni和外层IP,所述原始数据包括内层IP;
所述第一线程从所述第一vxlan数据包中解析所述第一封装数据并获取所述外层IP和所述vni,并根据所述外层IP判断对应的隧道类型是否为预设隧道类型;
若所述隧道类型为所述预设隧道类型,则所述第一线程将所述隧道类型和所述vni保存至所述第二缓存区;
所述第一线程判断所述隧道类型是否为第一预设隧道类型;
若所述隧道类型为所述第一预设隧道类型,则所述第一线程从所述第一vxlan数据包中获取所述内层IP,并根据所述内层IP判断是否访问业务系统;
若访问业务系统,则所述第一线程根据所述内层IP将对应的隧道ID保存至所述第二缓存区;
所述第二线程从所述第二缓存区中提取所述vni和所述隧道ID,并根据所述vni和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见上文针对其他实施例的详细描述,此处不再赘述。
具体实施时,以上各个单元或结构可以作为独立的实体来实现,也可以进行任意组合,作为同一或若干个实体来实现,以上各个单元或结构的具体实施可参见前面的方法实施例,在此不再赘述。
以上各个操作的具体实施可参见前面的实施例,在此不再赘述。
以上对本发明实施例所提供的数据包解析的方法和服务器进行了详细介绍,本文中应用了具体个例对本发明的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本发明的方法及其核心思想;同时,对于本领域的技术人员,依据本发明的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本发明的限制。

Claims (10)

  1. 一种数据包解析的方法,其特征在于,应用于服务器,所述服务器包括网卡和服务进程,所述网卡包括缓存区,所述缓存区包括第一缓存区和第二缓存区,所述服务进程包括第一线程和第二线程,所述数据包解析的方法包括:
    所述网卡接收第一vxlan数据包,并将所述第一vxlan数据包保存至所述第一缓存区,所述第一vxlan数据包包括第一封装数据和原始数据,所述第一封装数据包括vni和外层IP,所述原始数据包括内层IP;
    所述第一线程从所述第一vxlan数据包中解析所述第一封装数据并获取所述外层IP和所述vni,并根据所述外层IP判断对应的隧道类型是否为预设隧道类型;
    若所述隧道类型为所述预设隧道类型,则所述第一线程将所述隧道类型和所述vni保存至所述第二缓存区;
    所述第一线程判断所述隧道类型是否为第一预设隧道类型;
    若所述隧道类型为所述第一预设隧道类型,则所述第一线程从所述第一vxlan数据包中获取所述内层IP,并根据所述内层IP判断是否访问业务系统;
    若访问业务系统,则所述第一线程根据所述内层IP将对应的隧道ID保存至所述第二缓存区;
    所述第二线程从所述第二缓存区中提取所述vni和所述隧道ID,并根据所述vni和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包。
  2. 根据权利要求1所述的数据包解析的方法,其特征在于,所述第二线程从所述第二缓存区中提取所述vni和所述隧道ID,并根据所述vni和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包的步骤之前,包括:
    若所述隧道类型不为所述第一预设隧道类型,则所述第二线程判断所述隧道类型是否为第二预设隧道类型;
    若所述隧道类型为第二预设隧道类型,则所述第二线程从所述第二缓存区中提取所述vni,并根据所述vni确定对应的隧道ID。
  3. 根据权利要求1所述的数据包解析的方法,其特征在于,所述第一线程从所述第一vxlan数据包中解析所述第一封装数据并获取所述外层IP和所述vni,并根据所述外层IP判断对应的隧道类型是否为预设隧道类型的步骤,包括:
    所述第一线程获取所述外层IP,并根据配置表中确定对应的隧道类型;
    所述第一线程根据隧道类型,判断对应的隧道类型是否为预设隧道类型。
  4. 根据权利要求1所述的数据包解析的方法,其特征在于,所述第一线程 从所述第一vxlan数据包中获取所述内层IP,并根据所述内层IP判断是否访问业务系统的步骤,包括:
    所述第一线程从所述第一vxlan数据包中获取所述内层IP,并在隧道规则中查找多个业务IP段;
    所述第一线程根据所述内层IP是否包含于所述多个业务IP段中的其中一个业务IP段判断是否访问业务系统。
  5. 根据权利要求1所述的数据包解析的方法,其特征在于,所述第一线程从所述第一vxlan数据包中解析所述第一封装数据并获取所述外层IP和所述vni,并根据所述外层IP判断对应的隧道类型是否为预设隧道类型的步骤之前,包括:
    所述网卡向所述第一线程发送所述第一vxlan数据包的指针;
    所述第一线程根据所述第一vxlan数据包的指针访问所述第一vxlan数据包。
  6. 根据权利要求1所述的数据包解析的方法,其特征在于,所述第二线程从所述第二缓存区中提取所述vni、所述隧道类型和所述隧道ID,并根据所述vni、所述隧道类型和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包的步骤之前,包括:
    所述网卡向所述第二线程发送所述原始数据的指针;
    所述第二线程根据所述原始数据的指针访问所述原始数据。
  7. 根据权利要求1所述的数据包解析的方法,其特征在于,所述第二线程从所述第二缓存区中提取所述vni、所述隧道类型和所述隧道ID,并根据所述vni、所述隧道类型和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包的步骤之后,包括:
    所述网卡根据所述第二vxlan数据包确定对应的发送隧道;
    所述网卡根据所述发送隧道发送所述第二vxlan数据包。
  8. 一种服务器,其特征在于,所述服务器包括网卡和服务进程,所述网卡包括缓存区,所述缓存区包括第一缓存区和第二缓存区,所述服务进程包括第一线程和第二线程;
    所述网卡用于接收第一vxlan数据包,并将所述第一vxlan数据包保存至所述第一缓存区,所述第一vxlan数据包包括第一封装数据和原始数据,所述第一封装数据包括vni和外层IP,所述原始数据包括内层IP;
    所述第一线程用于从所述第一vxlan数据包中解析所述第一封装数据并获取所述外层IP和所述vni,并根据所述外层IP判断对应的隧道类型是否为预设隧道类型;
    若所述隧道类型为所述预设隧道类型,则所述第一线程还用于将所述隧道 类型和所述vni保存至所述第二缓存区;
    所述第一线程还用于判断所述隧道类型是否为第一预设隧道类型;
    若所述隧道类型为所述第一预设隧道类型,则所述第一线程还用于从所述第一vxlan数据包中获取所述内层IP,并根据所述内层IP判断是否访问业务系统;
    若访问业务系统,则所述第一线程还用于根据所述内层IP将对应的隧道ID保存至所述第二缓存区;
    所述第二线程用于从所述第二缓存区中提取所述vni、所述隧道类型和所述隧道ID,并根据所述vni、所述隧道类型和所述隧道ID确定第二封装数据,将所述第二封装数据保存至所述第二缓存区,使得位于所述第一缓存区中的所述原始数据和位于所述第二缓存区中的所述第二封装数据共同构成第二vxlan数据包。
  9. 根据权利要求8所述的服务器,其特征在于,所述第一线程还用于获取所述外层IP,并根据配置表中确定对应的隧道类型;以及
    所述第一线程还用于根据所述隧道类型,判断对应的隧道类型是否为预设隧道类型。
  10. 根据权利要求8所述的服务器,其特征在于,所述第一线程还用于从所述第一vxlan数据包中获取所述内层IP,并在隧道规则中查找多个业务IP段;以及
    所述第一线程还用于根据所述内层IP是否包含于所述多个业务IP段中的其中一个业务IP段判断是否访问业务系统。
PCT/CN2021/135683 2021-08-02 2021-12-06 数据包解析的方法和服务器 WO2023010730A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110878903.X 2021-08-02
CN202110878903.XA CN113596038B (zh) 2021-08-02 2021-08-02 数据包解析的方法和服务器

Publications (1)

Publication Number Publication Date
WO2023010730A1 true WO2023010730A1 (zh) 2023-02-09

Family

ID=78253457

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/135683 WO2023010730A1 (zh) 2021-08-02 2021-12-06 数据包解析的方法和服务器

Country Status (2)

Country Link
CN (1) CN113596038B (zh)
WO (1) WO2023010730A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113596038B (zh) * 2021-08-02 2023-04-07 武汉绿色网络信息服务有限责任公司 数据包解析的方法和服务器

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103841023A (zh) * 2012-11-22 2014-06-04 华为技术有限公司 数据转发的方法和设备
WO2016035306A1 (ja) * 2014-09-01 2016-03-10 日本電気株式会社 制御システム、通信システム、通信方法および記録媒体
WO2020012491A1 (en) * 2018-07-10 2020-01-16 Telefonaktiebolaget L M Ericsson (Publ) Mechanism for hitless resynchronization during sdn controller upgrades between incompatible versions
CN110943901A (zh) * 2020-01-10 2020-03-31 锐捷网络股份有限公司 一种报文转发方法、装置、设备和存储介质
CN113596038A (zh) * 2021-08-02 2021-11-02 武汉绿色网络信息服务有限责任公司 数据包解析的方法和服务器

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101702688B (zh) * 2009-11-24 2012-01-04 武汉绿色网络信息服务有限责任公司 一种数据包收发方法
US10142164B2 (en) * 2014-09-16 2018-11-27 CloudGenix, Inc. Methods and systems for dynamic path selection and data flow forwarding
US10361972B2 (en) * 2015-09-23 2019-07-23 Citrix Systems, Inc. Systems and methods to support VXLAN in partition environment where a single system acts as multiple logical systems to support multitenancy
CN109196473B (zh) * 2017-02-28 2021-10-01 华为技术有限公司 缓存管理方法、缓存管理器、共享缓存和终端
CN109587065B (zh) * 2017-09-28 2021-02-23 北京金山云网络技术有限公司 转发报文的方法、装置、交换机、设备及存储介质
CN109672615B (zh) * 2017-10-17 2022-06-14 华为技术有限公司 数据报文缓存方法及装置
CN112965824B (zh) * 2021-03-31 2024-04-09 北京金山云网络技术有限公司 报文的转发方法及装置、存储介质、电子设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103841023A (zh) * 2012-11-22 2014-06-04 华为技术有限公司 数据转发的方法和设备
WO2016035306A1 (ja) * 2014-09-01 2016-03-10 日本電気株式会社 制御システム、通信システム、通信方法および記録媒体
WO2020012491A1 (en) * 2018-07-10 2020-01-16 Telefonaktiebolaget L M Ericsson (Publ) Mechanism for hitless resynchronization during sdn controller upgrades between incompatible versions
CN110943901A (zh) * 2020-01-10 2020-03-31 锐捷网络股份有限公司 一种报文转发方法、装置、设备和存储介质
CN113596038A (zh) * 2021-08-02 2021-11-02 武汉绿色网络信息服务有限责任公司 数据包解析的方法和服务器

Also Published As

Publication number Publication date
CN113596038B (zh) 2023-04-07
CN113596038A (zh) 2021-11-02

Similar Documents

Publication Publication Date Title
US20220191064A1 (en) Method for sending virtual extensible local area network packet, computer device, and computer readable medium
CN110313163B (zh) 分布式计算系统中的负载平衡
US20190222431A1 (en) Vxlan packet processing method, device, and system
US8913613B2 (en) Method and system for classification and management of inter-blade network traffic in a blade server
US9774532B2 (en) Information processing system, information processing apparatus and control method of information processing system
WO2021013046A1 (zh) 通信方法和网卡
WO2020083016A1 (zh) 数据传输方法及装置
CN110048963B (zh) 虚拟网络中的报文传输方法、介质、装置和计算设备
CN113326228B (zh) 基于远程直接数据存储的报文转发方法、装置及设备
JP2014527768A (ja) 制御方法及び仮想ゲートウェイ
US20220255772A1 (en) Packet sending method, apparatus, and system
CN109412922B (zh) 一种传输报文的方法、转发设备、控制器及系统
WO2020019958A1 (zh) Vxlan报文封装及策略执行方法、设备、系统
CN110311860B (zh) Vxlan下多链路负载均衡方法及装置
CN113132202B (zh) 一种报文传输方法及相关设备
WO2023010730A1 (zh) 数据包解析的方法和服务器
CN106992918B (zh) 报文转发方法和装置
WO2023010731A1 (zh) 数据信息处理的方法和服务器
WO2023179457A1 (zh) 业务连接的标识方法、装置、系统及存储介质
US11962673B2 (en) Packet tunneling and decapsulation with split-horizon attributes
CN113259220B (zh) 共享报文中私有信息的方法和服务器
WO2023005620A1 (zh) 报文处理方法、装置及通信系统
CN113765794B (zh) 数据发送的方法、装置、电子设备及介质
WO2023279990A1 (zh) 报文传输方法、装置和系统、网络设备及存储介质
WO2022135321A1 (zh) 报文传输方法、设备及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21952593

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE