CN114430394A - Message processing method and device, electronic equipment and readable storage medium

Info

Publication number: CN114430394A
Application number: CN202111647337.8A
Authority: CN (China)
Other versions: CN114430394B (granted publication)
Other languages: Chinese (zh)
Prior art keywords: message, processed, network element, forwarding table, port
Inventors: 吴寒, 刘晓忠, 周晓杰
Assignee: China Telecom Corp Ltd
Application filed by China Telecom Corp Ltd
Priority to CN202111647337.8A
Legal status: Granted, Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/74 Address processing for routing
    • H04L45/745 Address table lookup; Address filtering
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/50 Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Abstract

The embodiment of the invention provides a message processing method, a message processing device, electronic equipment and a readable storage medium. In the method, a virtual switch deployed on a host responds to a message to be processed received from a first port of the virtual switch, and transmits the message to be processed to a corresponding target network element in the host based on a memory area corresponding to a second port of the virtual switch under the condition that the flow type of the message to be processed is core flow, so as to perform specified processing on the message to be processed. And acquiring the processed message to be processed based on the memory area corresponding to the second port, and forwarding the processed message to be processed to a corresponding external network element in the core network. And under the condition that the flow type is edge flow, forwarding the message to be processed to a corresponding network element in the edge network through the first port. Thus, hardware cost can be saved to a certain extent.

Description

Message processing method and device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of network communication technologies, and in particular, to a message processing method and apparatus, an electronic device, and a readable storage medium.
Background
At present, with the continuous development of communication technology, users' network access requirements are increasingly diverse. For example, a user may need to access a core network in some cases and an edge network in other cases. How to accurately process messages so that user access operations can be performed normally has become a problem of great concern.
In the prior art, a hardware switch is often configured to ensure that messages can be forwarded successfully and accurately. However, this approach incurs a high hardware cost.
Disclosure of Invention
The invention provides a message processing method, a message processing device, electronic equipment and a readable storage medium, and aims to solve the technical problem of high hardware cost.
In a first aspect, the present invention provides a packet processing method, where the method is applied to a virtual switch deployed on a host, and the method includes:
responding to a message to be processed received from a first port of the virtual switch, and transmitting the message to be processed to a corresponding target network element in the host machine based on a memory area corresponding to a second port of the virtual switch under the condition that the flow type of the message to be processed is core flow, so as to perform designated processing on the message to be processed;
acquiring the processed message to be processed based on the memory area corresponding to the second port, and forwarding the processed message to be processed to a corresponding external network element in a core network;
and forwarding the message to be processed to a corresponding network element in an edge network through the first port under the condition that the flow type is edge flow.
Optionally, the method further includes:
inquiring a first forwarding table according to a destination address carried by the message to be processed so as to determine whether the destination address hits the first forwarding table;
if the destination address hits the first forwarding table, determining that the flow type of the message to be processed is edge flow;
and if the destination address does not hit the first forwarding table, determining that the flow type of the message to be processed is core flow.
Optionally, the method further includes:
performing message check on the message to be processed based on a message check node in the virtual switch;
and executing the operation of inquiring the first forwarding table according to the destination address carried by the message to be processed under the condition of passing the message verification.
Optionally, before the to-be-processed packet is transmitted to the corresponding target network element in the host based on the memory area corresponding to the second port of the virtual switch, the method further includes:
acquiring a message related identifier currently carried in the message to be processed;
under the condition that the destination address does not hit the first forwarding table, querying a second forwarding table based on the message correlation identification to determine whether the message correlation identification hits the second forwarding table;
and determining the virtual network element hit by the message correlation identifier in the second forwarding table as the target network element under the condition that the message correlation identifier hits the second forwarding table.
Optionally, before forwarding the packet to be processed to a corresponding network element in an edge network through the first port, the method further includes:
under the condition that the destination address hits the first forwarding table, inquiring a third forwarding table based on the message correlation identification so as to determine whether the message correlation identification hits the third forwarding table;
and determining the network element hit by the message correlation identifier in the third forwarding table as the corresponding network element in the edge network under the condition that the message correlation identifier hits the third forwarding table.
Optionally, the method further includes:
under the condition that the message correlation identification does not hit the second forwarding table or the third forwarding table, matching the message correlation identification with a fourth forwarding table to determine whether the message correlation identification hits the fourth forwarding table;
if the message to be processed hits the fourth forwarding table, forwarding the message to be processed to other network elements based on the first port;
and if the message to be processed does not hit the fourth forwarding table, discarding the message to be processed.
Optionally, the first port is configured with at least one communication channel; the forwarding the processed packet to be processed to a corresponding external network element in a core network includes:
acquiring a message related identifier currently carried in the processed message to be processed;
determining an external network element hit by the message correlation identifier in the second forwarding table as a corresponding external network element in the core network under the condition that the message correlation identifier hits the second forwarding table, and acquiring a communication channel index corresponding to the external network element hit by the message correlation identifier in the second forwarding table;
and forwarding the message to be processed to a corresponding external network element in the core network based on the communication channel indicated by the communication channel index.
Optionally, the transmitting the to-be-processed packet to a corresponding target network element in the host based on the memory area corresponding to the second port in the virtual switch includes:
performing hash processing according to the message correlation identification of the message to be processed to obtain a hash value corresponding to the message correlation identification;
writing the message to be processed into a memory area of a second port corresponding to the target network element according to the hash value; and the target network element is used for reading the message to be processed from the memory area.
Optionally, the target network element is further configured to write the processed message to be processed into a receive queue of the memory area; the obtaining the processed message to be processed based on the memory area corresponding to the second port includes: and directly reading the processed message to be processed from the receiving queue.
Optionally, before transmitting the to-be-processed packet to the corresponding target network element in the host, the method further includes:
and writing the message correlation identification in the message to be processed into a specified message area, and performing specified processing on the message to be processed according to the message correlation identification in the message area based on a target network element.
In a second aspect, the present invention provides a packet processing apparatus, where the apparatus is applied to a virtual switch deployed on a host, and the apparatus includes:
a transmission module, configured to respond to a to-be-processed packet received from a first port of the virtual switch, and transmit the to-be-processed packet to a corresponding target network element in the host based on a memory area corresponding to a second port of the virtual switch when a flow type of the to-be-processed packet is a core flow, so as to perform specified processing on the to-be-processed packet;
a first forwarding module, configured to obtain the processed packet to be processed based on the memory area corresponding to the second port, and forward the processed packet to be processed to a corresponding external network element in a core network;
and a second forwarding module, configured to forward the packet to be processed to a corresponding network element in an edge network through the first port when the flow type is edge traffic.
Optionally, the apparatus further comprises:
the first query module is used for querying a first forwarding table according to a destination address carried by the message to be processed so as to determine whether the destination address hits the first forwarding table;
a first determining module, configured to determine that a flow type of the to-be-processed packet is edge traffic if the destination address hits the first forwarding table;
a second determining module, configured to determine that the flow type of the to-be-processed packet is core traffic if the destination address misses the first forwarding table.
Optionally, the apparatus further comprises:
the checking module is used for carrying out message checking on the message to be processed based on the message checking node in the virtual switch;
and the execution module is used for executing the operation of inquiring the first forwarding table according to the destination address carried by the message to be processed under the condition of passing the message verification.
Optionally, the apparatus further comprises:
an obtaining module, configured to obtain the message correlation identifier currently carried in the message to be processed before the message to be processed is transmitted to the corresponding target network element in the host based on the memory area corresponding to the second port of the virtual switch;
a second query module, configured to query a second forwarding table based on the packet-related identifier when the destination address misses the first forwarding table, so as to determine whether the packet-related identifier hits the second forwarding table;
a third determining module, configured to determine, when the packet correlation identifier hits the second forwarding table, a virtual network element that the packet correlation identifier hits in the second forwarding table as the target network element.
Optionally, the apparatus further comprises:
a third query module, configured to, before forwarding the to-be-processed packet to a corresponding network element in an edge network through the first port, query a third forwarding table based on the packet-related identifier when the destination address hits the first forwarding table, so as to determine whether the packet-related identifier hits the third forwarding table;
a fourth determining module, configured to determine, when the packet correlation identifier hits the third forwarding table, a network element hit by the packet correlation identifier in the third forwarding table as a corresponding network element in the edge network.
Optionally, the apparatus further comprises:
a matching module, configured to match the packet correlation identifier with a fourth forwarding table to determine whether the packet correlation identifier hits the fourth forwarding table when the packet correlation identifier misses the second forwarding table or the third forwarding table;
a third forwarding module, configured to forward the to-be-processed packet to another network element based on the first port if the fourth forwarding table is hit;
and the discarding module is used for discarding the message to be processed if the fourth forwarding table is not hit.
Optionally, the first port is configured with at least one communication channel; the first forwarding module is specifically configured to:
acquiring a message related identifier currently carried in the processed message to be processed;
determining an external network element hit by the message correlation identifier in the second forwarding table as a corresponding external network element in the core network under the condition that the message correlation identifier hits the second forwarding table, and acquiring a communication channel index corresponding to the external network element hit by the message correlation identifier in the second forwarding table;
and forwarding the message to be processed to a corresponding external network element in the core network based on the communication channel indicated by the communication channel index.
Optionally, the transmission module is specifically configured to:
performing hash processing according to the message correlation identification of the message to be processed to obtain a hash value corresponding to the message correlation identification;
writing the message to be processed into a memory area of a second port corresponding to the target network element according to the hash value; and the target network element is used for reading the message to be processed from the memory area.
Optionally, the target network element is further configured to write the processed message to be processed into a receive queue of the memory area; the first forwarding module is further specifically configured to: and directly reading the processed message to be processed from the receiving queue.
Optionally, the apparatus further comprises:
and the writing module is used for writing the message correlation identifier in the message to be processed into an appointed message area so as to carry out appointed processing on the message to be processed based on a target network element according to the message correlation identifier in the message area.
In a third aspect, the present invention provides an electronic device comprising: a processor, a memory and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the message processing method when executing the program.
In a fourth aspect, the present invention provides a readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to execute the message processing method.
In the message processing method provided in the embodiment of the present invention, the virtual switch deployed on the host responds to the to-be-processed message received from the first port of the virtual switch, and transmits the to-be-processed message to the corresponding target network element in the host based on the memory area corresponding to the second port of the virtual switch when the flow type of the to-be-processed message is the core flow, so as to perform the specified processing on the to-be-processed message. And acquiring the processed message to be processed based on the memory area corresponding to the second port, and forwarding the processed message to be processed to a corresponding external network element in the core network. And under the condition that the flow type is edge flow, forwarding the message to be processed to a corresponding network element in the edge network through the first port. Therefore, by adopting the virtual switch and based on the two ports configured for the virtual switch, the communication with the internal network element of the host can be directly carried out based on the memory area corresponding to the second port, the specified processing is completed, a network card is not required to be specially configured for the communication of the internal network element, and the hardware cost can be saved to a certain extent.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a flowchart illustrating steps of a message processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a node scheduling logic according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of another node scheduling logic according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an exemplary packet forwarding framework according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a node according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of another node according to an embodiment of the present invention;
fig. 7 is a structural diagram of a message processing apparatus according to an embodiment of the present invention;
fig. 8 is a structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of steps of a message processing method provided in an embodiment of the present invention, where the method may be applied to a virtual switch deployed on a host, and as shown in fig. 1, the method may include:
step 101, in response to a to-be-processed packet received from a first port of the virtual switch, when a flow type of the to-be-processed packet is a core flow, transmitting the to-be-processed packet to a corresponding target network element in the host based on a memory area corresponding to a second port of the virtual switch, so as to perform designated processing on the to-be-processed packet.
In this embodiment of the present invention, a virtual Switch (vSwitch) may also be referred to as an access vSwitch. Virtual switches may be developed based on Vector Packet Processing (VPP) technology. Specifically, the virtual switch may be developed based on a Data Plane Development Kit (DPDK) and a VPP, so as to ensure that the virtual switch can flexibly load plug-ins and adjust functions as needed, thereby avoiding the problems of high maintenance and update costs at a later stage.
Further, the host may be a hardware device such as a server or a terminal, and the virtual switch may be preconfigured with a first port and a second port, where the first port may also be referred to as a DPDK port, and the second port may also be referred to as a shared memory packet interface (memif). The actual numbers of first ports and second ports may be set according to actual requirements, which is not limited in the embodiment of the present invention. Specifically, the first port may be configured to receive messages from outside the virtual switch and send messages to the outside of the virtual switch, that is, to the outside of the host. The second port may be used to enable data transmission with different network elements within the same host, that is, with the co-hosted network elements. Each second port may have its own corresponding memory area, the memory area corresponding to each second port may be preset according to the actual situation, and the memory areas corresponding to different second ports may be different. A network element may be understood as an element or a device in a network, and is the smallest unit that can be monitored and managed in network management. The network element in the host may be a virtual network element or a physical network element, which is not limited in this embodiment of the present invention. For example, in an implementation manner, the second port may be specifically configured to implement data transmission with different virtual network elements within the same host, where the virtual network element may be a virtual customer premises equipment (vCPE); the virtual network element may be implemented based on container (docker) technology, that is, implemented by being deployed in a docker. The specified processing implements the relevant service processing, and the specific content of the specified processing executed by the network element in the host may be specified in advance according to the actual requirement, which is not limited in the embodiment of the present invention.
The message to be processed may be a message to be processed received by the virtual switch, and the message to be processed may be communication data sent by the user equipment. One message to be processed may correspond to one user request, and the message to be processed and the user request may correspond one to one, or a plurality of messages to be processed may correspond to the same user request. The user request may represent a user traffic, and the flow type of the message to be processed may be used to represent a traffic type of the user traffic, that is, to represent whether the user request needs to access the core network or the edge network. The core network may be regarded as a core part in an internet network, the core network may be regarded as an "internet side", the edge network may be understood as an edge cloud network to implement network element communication based on a cloud scene, and the edge network may be a network formed in a certain area. The core network plus the edge network may form an overall internet network. The traffic type may include core traffic and edge traffic, the core traffic may be traffic accessing a core network, the core traffic may also be referred to as "internet-side traffic", and the edge traffic may be edge cloud traffic accessing an edge network.
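For orientation only, the entities described above can be sketched with the following hypothetical C data model; all type and field names are illustrative assumptions rather than part of the patent text.

```c
/* Hypothetical data model for the entities described above (illustration only). */
#include <stdint.h>

typedef enum {            /* flow type of a message to be processed              */
    FLOW_CORE,            /* core traffic: destined for the core network          */
    FLOW_EDGE             /* edge traffic: destined for the edge (cloud) network  */
} flow_type_t;

typedef enum {            /* the two kinds of ports configured on the vSwitch     */
    PORT_DPDK,            /* "first port": exchanges messages with the outside    */
    PORT_MEMIF            /* "second port": shared-memory link to co-hosted NEs   */
} port_kind_t;

typedef struct {          /* fields the classification nodes extract per message  */
    uint32_t dip;         /* destination IPv4 address                             */
    uint32_t vni;         /* VXLAN network identifier                             */
    uint16_t cvlan_id;    /* QinQ inner (customer) VLAN id                        */
    uint16_t pvlan_id;    /* QinQ outer (service) VLAN id                         */
} pkt_meta_t;
```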
Step 102, obtaining the processed message to be processed based on the memory area corresponding to the second port, and forwarding the processed message to be processed to a corresponding external network element in a core network.
In the embodiment of the present invention, when a user requests to access the internet side, that is, the traffic type is core traffic, the processing flow link of the entire message is often long, and therefore, the message needs to be subjected to specified processing. When a user requests to access the edge cloud network, that is, the traffic type is edge traffic, the processing flow link of the whole packet is short, and the packet only needs to be directly forwarded to a corresponding network element in the edge network.
Furthermore, in the embodiment of the present invention, the second port is connected to the network element in the host machine in a memory exchange manner, so that a network card for transmitting the packet to the corresponding target network element in the host machine is not required to be configured, thereby implementing internal data forwarding with less hardware resources and lower hardware implementation cost.
Step 103, forwarding the packet to be processed to a corresponding network element in an edge network through the first port under the condition that the flow type is edge flow.
For example, the packet to be processed may be forwarded to a corresponding network element in the edge network through the first port based on the first packet forwarding node in the virtual switch. The first message forwarding node may be an interface-output node in the message forwarding module.
In summary, in the message processing method provided in the embodiment of the present invention, the virtual switch deployed on the host responds to the to-be-processed message received from the first port of the virtual switch, and transmits the to-be-processed message to the corresponding target network element in the host based on the memory area corresponding to the second port of the virtual switch when the flow type of the to-be-processed message is the core flow, so as to perform the specified processing on the to-be-processed message. And acquiring the processed message to be processed based on the memory area corresponding to the second port, and forwarding the processed message to be processed to a corresponding external network element in the core network. And under the condition that the flow type is edge flow, forwarding the message to be processed to a corresponding network element in the edge network through the first port. Therefore, by adopting the virtual switch and based on the two ports configured for the virtual switch, the communication with the internal network element of the host can be directly carried out based on the memory area corresponding to the second port, the specified processing is completed, a network card is not required to be additionally configured specially for the communication of the internal network element, and the hardware cost can be saved to a certain extent.
Meanwhile, the diversity of the network configuration can be enriched through the two ports configured for the virtual switch. When a message is forwarded, the configured ports can be selected flexibly, so that message forwarding is realized more accurately.
Optionally, the message to be processed in the embodiment of the present invention may be received based on the following operations: polling to acquire the message input from the first port as the message to be processed based on a first message receiving node in the virtual switch. For example, the first packet receiving node may be a "dpdk-input" node in the packet receiving module in the virtual switch. The message input via the first port may be written into a dpdk receive queue, where the message input via the first port may be sent by an external network element. Correspondingly, the dpdk receive queue can be polled based on the first message receiving node, so as to receive the message from the external network element and perform subsequent processing on it as the message to be processed. In the embodiment of the invention, the message to be processed is obtained by polling based on the first message receiving node, so that the messages to be processed can be obtained in an orderly manner to a certain extent, and the message processing efficiency is ensured.
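The polling receive step can be illustrated with a minimal DPDK-style loop. This is a generic sketch, not the actual VPP "dpdk-input" node; process_pending_packet is a hypothetical callback standing in for the classification nodes described below.

```c
/* Minimal DPDK-style polling loop (sketch only; the real dpdk-input node does
 * considerably more). process_pending_packet() is hypothetical. */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

void process_pending_packet(struct rte_mbuf *m);     /* hypothetical, defined elsewhere */

void poll_first_port(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *burst[BURST_SIZE];

    for (;;) {
        /* Poll the receive queue of the first (DPDK) port for new messages. */
        uint16_t n = rte_eth_rx_burst(port_id, queue_id, burst, BURST_SIZE);
        for (uint16_t i = 0; i < n; i++)
            process_pending_packet(burst[i]);         /* hand over as message to be processed */
    }
}
```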
Optionally, in the embodiment of the present invention, detecting the stream type of the to-be-processed packet may be implemented by the following steps:
step S21, querying a first forwarding table according to a destination address carried by the to-be-processed packet, so as to determine whether the destination address hits the first forwarding table.
In the embodiment of the invention, the detection of the flow type of the message to be processed can be realized based on the first message classification node in the message classification module in the virtual switch. By way of example, the first packet classification node may be an "acc-sw-dispatcher-in" node. The flow type of the message to be processed is detected by querying the first forwarding table. Specifically, the first forwarding table may be preset and used to record the addresses that serve as destination addresses when accessing edge traffic, so that the content recorded in the first forwarding table can be kept limited to a certain extent. The first forwarding table may be one of the forwarding tables maintained internally in the first packet classification node. Correspondingly, the destination address carried in the message to be processed can be obtained and compared with each address in the first forwarding table; if the first forwarding table contains an address matching the destination address carried in the message to be processed, it can be determined that the destination address hits the first forwarding table. Otherwise, it may be determined that the first forwarding table is missed. The destination address carried by the message to be processed may be denoted as "dip", and the first forwarding table may be denoted as the dip_forward table.
Step S22, if the destination address hits the first forwarding table, determining that the flow type of the message to be processed is edge flow; and if the destination address does not hit the first forwarding table, determining that the flow type of the message to be processed is core flow.
In the embodiment of the invention, the first forwarding table is queried through the destination address carried by the message to be processed, and the flow type of the message to be processed can be detected based on whether the first forwarding table is hit, so that the efficiency of flow classification and message flow type identification can be ensured to a certain extent. By detecting the flow type, a basis can be provided for the subsequent selection of the data exchange mode, so that the data exchange mode can be selected flexibly. The data exchange mode may refer to performing message interaction with other virtual network elements inside the host, or interacting with network elements external to the virtual switch (for example, network elements in an edge network). Meanwhile, compared with the traditional five-tuple classification method, the embodiment of the invention is based on the message correlation identifiers: the VXLAN Network Identifier (vni) of the Virtual eXtensible Local Area Network (VXLAN) and the identifier of the double-layer Virtual Local Area Network (QinQ), which makes it convenient to realize flow classification and can ensure the classification efficiency to a certain extent. The QinQ identifier may include a customer Virtual Local Area Network identifier (cvlan_id) and a service Virtual Local Area Network identifier (pvlan_id).
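A minimal sketch of the classification in steps S21 and S22, assuming the first forwarding table is held as a flat array of edge destination addresses; a production implementation would more likely use a hash or longest-prefix-match structure.

```c
/* Sketch of steps S21/S22: the destination address is looked up in the first
 * forwarding table (dip_forward); a hit means edge traffic, a miss means core
 * traffic. The flat-array table layout is an assumption for illustration. */
#include <stdint.h>
#include <stddef.h>

typedef enum { FLOW_CORE, FLOW_EDGE } flow_type_t;

typedef struct {
    const uint32_t *edge_dips;   /* destination addresses recorded for edge traffic */
    size_t          count;
} dip_forward_table_t;

flow_type_t classify_flow(const dip_forward_table_t *t, uint32_t dip)
{
    for (size_t i = 0; i < t->count; i++)
        if (t->edge_dips[i] == dip)
            return FLOW_EDGE;    /* destination address hits the first forwarding table */
    return FLOW_CORE;            /* miss: treat as core ("internet-side") traffic        */
}
```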
It should be noted that, in this embodiment of the present invention, the first forwarding table may also record, in a first part, the addresses that serve as destination addresses when accessing edge traffic and, in a second part, the addresses that serve as destination addresses when accessing core traffic. Accordingly, the flow type of the packet to be processed may be determined to be edge traffic when the destination address hits the first part, and to be core traffic when the destination address hits the second part. Otherwise, the message to be processed may be discarded, which is not limited in this embodiment of the present invention.
Optionally, in the embodiment of the present invention, before querying the first forwarding table according to the destination address carried in the to-be-processed packet, a packet check may be performed on the to-be-processed packet based on a packet check node in the virtual switch; and under the condition of passing the packet check, the operation of querying the first forwarding table according to the destination address carried by the message to be processed is executed. Therefore, the problem that processing resources are wasted because subsequent processing is performed on a message that does not pass the packet check and does not meet the requirements can be avoided. For example, the packet involved in the embodiment of the present invention may be a packet encapsulated by a multi-layer protocol.
The message check node may be configured to check the physical address (MAC address), the Internet Protocol (IP) address, and the integrity of the message. For example, the packet check nodes may be the "ethernet-input" node, the "ip4-input" node, and the "ip4-vxlan-bypass" node in the packet check module. The "ethernet-input" node can perform the Layer-2 header check and select the next node according to the Layer-3 protocol actually used. Specifically, in the case where the Layer-3 protocol is the IP protocol, the "ip4-input" node may be selected as the next node. The "ip4-input" node can check the destination address and the integrity of the message, and select the next node according to the Layer-4 protocol. Specifically, in the case where the Layer-4 protocol is the User Datagram Protocol (UDP) and the destination port is a specific port, the "ip4-vxlan-bypass" node may be selected as the next node. The specific port may be port 4789; if the destination port is port 4789, it indicates that the next layer carries a vxlan header. The "ip4-vxlan-bypass" node may be specifically configured to perform the vni check, that is, to check whether the vni in the message to be processed is a legal vni.
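The chain of checks can be illustrated roughly as follows. The sketch assumes an untagged Ethernet frame carrying IPv4 without options and only verifies that the UDP destination port is 4789, whereas the real check nodes also validate addresses, integrity and the vni.

```c
/* Rough sketch of the layered checks: Ethernet -> IPv4 (no options) -> UDP with
 * destination port 4789 (VXLAN). Only the dispatch idea is shown. */
#include <stdint.h>
#include <stdbool.h>

static uint16_t rd16(const uint8_t *p) { return (uint16_t)((p[0] << 8) | p[1]); } /* big-endian read */

bool packet_check(const uint8_t *pkt, uint32_t len)
{
    if (len < 14 + 20 + 8)                 /* Ethernet + IPv4 + UDP headers           */
        return false;
    if (rd16(pkt + 12) != 0x0800)          /* EtherType must be IPv4                  */
        return false;
    const uint8_t *ip = pkt + 14;
    if (ip[0] != 0x45 || ip[9] != 17)      /* IPv4, IHL 5 (no options), protocol UDP  */
        return false;
    const uint8_t *udp = ip + 20;
    return rd16(udp + 2) == 4789;          /* UDP destination port: VXLAN             */
}
```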
Optionally, before transmitting the message to be processed to the corresponding target network element in the host based on the memory area corresponding to the second port of the virtual switch, the following steps may be further performed in the embodiment of the present invention:
and step S31, acquiring the message related identification currently carried in the message to be processed.
In the embodiment of the invention, the message correlation identifier can be preset according to the actual situation, the message correlation identifiers of some messages can be the same, and the message correlation identifier carried by a message is related to the local area network that sends the message. For example, the message correlation identifiers may include the vni and the QinQ identifier. Specifically, the vni may be extracted based on a first processing node in the message processing module, and then the QinQ identifier carried therein may be extracted. For example, the first processing node may be a "vxlan4-input" node; the "vxlan4-input" node may first parse and strip the vxlan header to obtain the vni, and then parse the inner-layer encapsulated protocol header to obtain the QinQ identifier. It should be noted that the message correlation identifier may also be written into the message extension area. The "vxlan4-input" node may also be referred to as the "vxlan-input" node.
Step S32, when the destination address misses the first forwarding table, querying a second forwarding table based on the packet-related identifier to determine whether the packet-related identifier hits the second forwarding table.
Step S33, when the packet correlation identifier hits the second forwarding table, determining the virtual network element hit by the packet correlation identifier in the second forwarding table as the target network element.
In the embodiment of the present invention, the second forwarding table may be further queried when the flow type of the message to be processed is the core flow, that is, when the first forwarding table is not hit. The second forwarding table may be preset and configured to record the packet correlation identifiers corresponding to the virtual network elements in the host, where each virtual network element is responsible for interfacing with packets whose carried packet correlation identifiers are consistent with the packet correlation identifier corresponding to that virtual network element. The second forwarding table may be one of the forwarding tables maintained internally to the first packet classification node. Correspondingly, the message correlation identifier carried in the message to be processed can be obtained and compared with each message correlation identifier in the second forwarding table; if a message correlation identifier matching the one carried in the message to be processed exists in the second forwarding table, it can be determined that the message correlation identifier hits the second forwarding table. Otherwise, a miss to the second forwarding table may be determined. The second forwarding table may be represented as the vni_forward table. The message correlation identifier may be obtained simultaneously with the aforementioned destination address. Further, in the case of hitting the second forwarding table, the virtual network element corresponding to the packet correlation identifier in the second forwarding table that matches the packet correlation identifier carried in the to-be-processed packet may be determined as the target network element. In the embodiment of the invention, the target network element can be determined by querying the second forwarding table through the message correlation identifier carried by the message to be processed, so that the processing efficiency can be ensured to a certain extent.
Further, the second forwarding table may also record indexes of second ports corresponding to the virtual network elements. The second forwarding table may be queried to obtain an index of the second port corresponding to the target network element when the second forwarding table is hit. Correspondingly, the second port corresponding to the target network element may be located subsequently according to the index of the second port, and then the message to be processed is written into the memory area of the second port corresponding to the target network element. The index of the second port may be a memif interface index.
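A sketch of the second-forwarding-table lookup in steps S32 and S33, with a hypothetical flat-table layout mapping (vni, cvlan_id, pvlan_id) to the memif interface index of the target network element; the names and layout are assumptions for illustration.

```c
/* Illustrative vni_forward lookup: the packet correlation identifiers (vni plus
 * QinQ ids) select a co-hosted virtual network element and the index of the
 * memif ("second") port bound to it. */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

typedef struct {
    uint32_t vni;
    uint16_t cvlan_id, pvlan_id;   /* QinQ identifiers                         */
    uint32_t memif_index;          /* index of the second port for this vCPE   */
} vni_forward_entry_t;

bool lookup_vni_forward(const vni_forward_entry_t *tbl, size_t n,
                        uint32_t vni, uint16_t cvlan, uint16_t pvlan,
                        uint32_t *memif_index_out)
{
    for (size_t i = 0; i < n; i++) {
        if (tbl[i].vni == vni && tbl[i].cvlan_id == cvlan && tbl[i].pvlan_id == pvlan) {
            *memif_index_out = tbl[i].memif_index;   /* hit: target network element found  */
            return true;
        }
    }
    return false;                                    /* miss: fall through to other tables */
}
```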
Optionally, in this embodiment of the present invention, before forwarding the packet to be processed to a corresponding network element in an edge network through the first port, the following operations may be performed:
step S41, when the destination address hits the first forwarding table, querying a third forwarding table based on the packet-related identifier to determine whether the packet-related identifier hits the third forwarding table.
Step S42, when the packet correlation identifier hits the third forwarding table, determining the network element hit by the packet correlation identifier in the third forwarding table as the corresponding network element in the edge network.
In the embodiment of the present invention, the third forwarding table may be continuously queried when the flow type of the message to be processed is edge flow, that is, the first forwarding table is hit. The third forwarding table may be preset and used to record a packet correlation identifier corresponding to a network element in the edge network, where the network element in the edge network is responsible for interfacing with a packet whose carried packet correlation identifier is consistent with the packet correlation identifier corresponding to the network element. The third forwarding table may be one of the forwarding tables maintained inside the first packet classification node. The message correlation identifier may be compared with each message correlation identifier in the third forwarding table, and if a message correlation identifier matching the message correlation identifier carried in the message to be processed exists in the third forwarding table, it may be determined that the message correlation identifier hits the third forwarding table. Otherwise, a miss to the third forwarding table may be determined. The third forwarding table may be represented as the vni_business table.
Further, when the third forwarding table is hit, the network element corresponding to the packet correlation identifier in the third forwarding table that matches the packet correlation identifier carried in the packet to be processed may be determined as the corresponding network element in the edge network. Specifically, the third forwarding table may further record a communication channel index corresponding to each network element, and the communication channel indicated by the communication channel index corresponding to each network element is used to send information to that network element. The communication channel may be a vxlan tunnel, each first port may be bound with one or more vxlan tunnels, and a vxlan tunnel may be understood as a data channel implemented in software. When the packet to be processed is forwarded to the corresponding network element in the edge network through the first port, the packet may specifically be forwarded based on the communication channel that is bound to the first port and corresponds to the communication channel index. The communication address of each network element in the edge network can be recorded in advance, and the message to be processed is forwarded based on the communication address of the corresponding network element in the edge network. It should be noted that, if the user accesses edge cloud traffic, after the corresponding network element in the edge network is determined, the packet may be handed to the interface-output node for processing, and the interface-output node forwards the packet to be processed to the corresponding network element in the edge network based on the communication channel that is bound to the first port and corresponds to the communication channel index.
In the embodiment of the invention, under the condition that the destination address hits the first forwarding table, the third forwarding table is inquired based on the message correlation identifier so as to determine whether the message correlation identifier hits the third forwarding table. And under the condition that the message correlation identification hits the third forwarding table, determining the network element hit by the message correlation identification in the third forwarding table as the corresponding network element in the edge network. Therefore, the third forwarding table is queried through the message related identifier carried by the message to be processed, so that the corresponding network element in the edge network can be determined, and the overall efficiency of data forwarding can be ensured to a certain extent.
Optionally, the following operations may also be performed in the embodiment of the present invention:
step S51, matching the packet correlation identifier with a fourth forwarding table when the packet correlation identifier misses the second forwarding table or the third forwarding table, so as to determine whether the packet correlation identifier hits the fourth forwarding table.
The fourth forwarding table may be preset and used to record the packet correlation identifier corresponding to each other network element in the bridge mode, and the fourth forwarding table may be one of forwarding tables maintained inside the first packet classification node. The message correlation identifier may be compared with each message correlation identifier in the fourth forwarding table, and if a message correlation identifier matching the message correlation identifier carried in the to-be-processed message exists in the fourth forwarding table, it may be determined that the message correlation identifier hits the fourth forwarding table. Otherwise, a miss to the fourth forwarding table may be determined. The fourth forwarding table may be represented as the vni_bridge table.
Step S52, if the fourth forwarding table is hit, forwarding the to-be-processed packet to other network elements based on the first port.
Step S53, if the fourth forwarding table is not hit, discarding the to-be-processed packet.
Specifically, when the fourth forwarding table is hit, the other network element hit in the fourth forwarding table may be taken as the target other network element, and the communication channel index corresponding to the target other network element is obtained from the pre-recorded communication channel indexes corresponding to the other network elements. The message to be processed is then forwarded to the target other network element based on the communication channel that is bound to the first port and corresponds to that communication channel index. Specifically, the communication addresses of the other network elements may be recorded in advance, and the forwarding may be performed based on the communication address of the target other network element. It should be noted that the other network elements may or may not be network elements in the edge network. The communication channel index corresponding to each other network element may be recorded in the vni_business table in advance. Further, in case of a miss in the fourth forwarding table, the pending packet may be discarded.
In the embodiment of the invention, under the condition that the message correlation identifier does not hit the second forwarding table or the third forwarding table, the message correlation identifier is matched with the fourth forwarding table so as to determine whether the message correlation identifier hits the fourth forwarding table. If the message correlation identifier hits the fourth forwarding table, the message to be processed is forwarded to other network elements based on the first port. If the fourth forwarding table is not hit, it may be determined that the message correlation identifier is an invalid identifier and the content of the message to be processed is problematic (a bad packet), and the message to be processed may be discarded. Therefore, by further querying the fourth forwarding table and discarding the message to be processed only when the fourth forwarding table is also missed, the probability of discarding messages can be reduced to a certain extent, and the processing rate is improved.
For example, fig. 2 is a schematic diagram of a node scheduling logic provided in an embodiment of the present invention. As shown in fig. 2, four forwarding tables may be maintained in the first packet classification node: a dip_forward table, a vni_forward table, a vni_business table, and a vni_bridge table. The operations of obtaining dip, vni, pvlan_id, and cvlan_id from the message, that is, obtaining the destination address and the message correlation identifier, may be performed first. Then the dip_forward table can be queried; if the dip_forward table is hit, the vni_business table can be queried next, and in the case that the vni_business table is hit, the vxlan outgoing interface index is obtained based on the vni_business table. The vxlan outgoing interface index may be the communication channel index. After the vxlan outgoing interface index is acquired, the second packet processing node may be used as the next-hop node to continue to execute the subsequent steps. The second packet processing node may be a "vxlan-encap" node. In the case of the IPv4 protocol, the "vxlan-encap" node may also be referred to as the "vxlan4-encap" node, and the second packet processing node may be configured to encapsulate the vxlan header and forward the encapsulated packet to the next node for processing.
If the dip_forward table is not hit, the vni_forward table may be queried, and in case of a hit, the memif interface index is obtained based on the vni_forward table, that is, the index of the second port corresponding to the hit target network element is obtained. If the vni_forward table is not hit, or if the vni_business table is not hit, the vni_bridge table can be queried. In the case that the vni_bridge table is not hit, the next-hop drop operation is executed, that is, the message to be processed is discarded. If the vni_bridge table is hit, the vxlan outgoing interface index may be retrieved. Where "hit Y" indicates a hit and "hit N" indicates a miss.
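The dispatch order of fig. 2 can be consolidated into the following sketch; the four lookup helpers are hypothetical stand-ins for the dip_forward, vni_business, vni_forward and vni_bridge tables and are assumed to be defined elsewhere.

```c
/* Consolidated sketch of the fig. 2 dispatch order; all helpers are hypothetical. */
#include <stdint.h>
#include <stdbool.h>

typedef enum { ACT_TO_MEMIF, ACT_TO_VXLAN, ACT_DROP } action_t;

/* Each helper returns true on a hit and fills the output interface index. */
bool dip_forward_hit (uint32_t dip);
bool vni_business_hit(uint32_t vni, uint16_t cvlan, uint16_t pvlan, uint32_t *vxlan_if);
bool vni_forward_hit (uint32_t vni, uint16_t cvlan, uint16_t pvlan, uint32_t *memif_if);
bool vni_bridge_hit  (uint32_t vni, uint16_t cvlan, uint16_t pvlan, uint32_t *vxlan_if);

action_t dispatch(uint32_t dip, uint32_t vni, uint16_t cvlan, uint16_t pvlan,
                  uint32_t *out_if)
{
    if (dip_forward_hit(dip)) {                       /* edge traffic              */
        if (vni_business_hit(vni, cvlan, pvlan, out_if))
            return ACT_TO_VXLAN;                      /* to edge-network element   */
    } else {                                          /* core traffic              */
        if (vni_forward_hit(vni, cvlan, pvlan, out_if))
            return ACT_TO_MEMIF;                      /* to co-hosted target NE    */
    }
    if (vni_bridge_hit(vni, cvlan, pvlan, out_if))    /* bridge-mode fallback      */
        return ACT_TO_VXLAN;
    return ACT_DROP;                                  /* invalid identifier: drop  */
}
```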
Optionally, the first port may be configured with at least one communication channel, i.e. with a vxlan tunnel. Correspondingly, the step of forwarding the processed packet to be processed to a corresponding external network element in a core network may specifically include:
and step S61, acquiring the message related identification currently carried in the processed message to be processed.
Step S62, when the packet correlation identifier hits the second forwarding table, determining an external network element hit by the packet correlation identifier in the second forwarding table as a corresponding external network element in the core network, and acquiring a communication channel index corresponding to the external network element hit by the packet correlation identifier in the second forwarding table.
Since the message to be processed is subjected to the designated processing, the message correlation identifier may be changed, for example, vni may be updated. Therefore, in the embodiment of the present invention, the message related identifier currently carried in the processed message to be processed may be obtained first, and the subsequent operation may be executed according to the obtained message related identifier, so as to ensure the accuracy of the processing. Further, the implementation manner of detecting whether the second forwarding table is hit may refer to the foregoing description, and details are not described here again. If the relevant identifier of the packet still hits the second forwarding table, the packet to be processed may be forwarded to a corresponding external network element in the core network based on the first port. Specifically, the second forwarding table may further record a communication channel index corresponding to each network element in the core network, where a communication channel indicated by the communication channel index corresponding to each network element is used to send information to the network element. The communication channel may be a vxlan tunnel, and each first port may be bound with 1 or more vxlan tunnels. By querying the second forwarding table, the vxlan outgoing interface index (i.e., the communication channel index) corresponding to the external network element, where the packet-related identifier is hit in the second forwarding table, may be obtained.
Step S63, based on the communication channel indicated by the communication channel index, forwarding the packet to be processed to a corresponding external network element in the core network.
Illustratively, the external network element may include a broadband remote access server (BRAS) device or the like in the internet. In this step, forwarding the to-be-processed packet to the corresponding external network element in the core network based on the first port may specifically mean forwarding it over the vxlan tunnel, among the vxlan tunnels configured on the first port, that corresponds to the communication channel index. The communication address of each network element in the core network may be recorded in advance, and the message to be processed is forwarded based on the communication address of the corresponding network element in the core network. Steps S61 to S63 may be implemented based on a second packet classification node in the packet classification module, and the second packet classification node may be an "acc-sw-dispatcher-out" node. The "acc-sw-dispatcher-out" node can acquire, and strip, the currently carried message correlation identifier from the tail of the processed message to be processed.
For the message traffic received from the first port, it can be processed by the "acc-sw-dispatcher-in" node first, and for the message traffic received from the second port, it can be processed by the "acc-sw-dispatcher-out" node. The acc-sw-dispatcher-in node and the acc-sw-dispatcher-out node can determine the forwarding destination of the message to be processed by looking up a table.
In the embodiment of the invention, the first port is configured with at least one communication channel, and the message correlation identifier currently carried in the processed message to be processed is obtained. Under the condition that the message correlation identifier hits the second forwarding table, the communication channel index corresponding to the external network element hit by the message correlation identifier in the second forwarding table is acquired. Based on the communication channel indicated by the communication channel index, the message to be processed can be conveniently forwarded to the corresponding external network element, so that the message forwarding efficiency can be ensured to a certain extent.
For example, fig. 3 is a schematic diagram of another node scheduling logic provided in the embodiment of the present invention. As shown in fig. 3, the message extension data together with the pvlan_id and cvlan_id in the message may be obtained, and the vni is taken from the message extension data; that is, the message correlation identifier currently carried in the processed message to be processed is obtained. The vni_forward table is then consulted. In case of a hit, the vxlan outgoing interface index is retrieved based on the vni_forward table. In case of a miss, a next-hop drop operation is executed to discard the message to be processed.
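For ease of understanding, a minimal C sketch of the above lookup logic is given below. The packet-related identifier (vni plus the QinQ pair) is looked up in a small static table standing in for the second forwarding table (the vni_forward table); a hit returns the vxlan outgoing interface index (communication channel index), and a miss signals a next-hop drop. All structure names, fields, and sample values here are illustrative assumptions and not the actual implementation.

/* Illustrative sketch only: second-forwarding-table (vni_forward) lookup. */
#include <stdint.h>
#include <stdio.h>

struct fwd_entry {
    uint32_t vni;        /* VXLAN network identifier carried by the packet */
    uint16_t pvlan_id;   /* outer (provider) VLAN of the QinQ pair         */
    uint16_t cvlan_id;   /* inner (customer) VLAN of the QinQ pair         */
    int      vxlan_if;   /* communication channel (vxlan tunnel) index     */
};

/* tiny static table standing in for the second forwarding table */
static const struct fwd_entry vni_forward_table[] = {
    { 5010, 100, 200, 3 },
    { 5011, 100, 201, 4 },
};

/* returns the tunnel index on a hit, or -1 to signal a next-hop drop */
static int lookup_vxlan_out_if(uint32_t vni, uint16_t pvlan, uint16_t cvlan)
{
    size_t n = sizeof(vni_forward_table) / sizeof(vni_forward_table[0]);
    for (size_t i = 0; i < n; i++) {
        const struct fwd_entry *e = &vni_forward_table[i];
        if (e->vni == vni && e->pvlan_id == pvlan && e->cvlan_id == cvlan)
            return e->vxlan_if;
    }
    return -1; /* miss: the caller discards the packet */
}

int main(void)
{
    int out_if = lookup_vxlan_out_if(5010, 100, 200);
    if (out_if < 0)
        puts("miss: drop the packet");
    else
        printf("hit: forward via vxlan tunnel index %d\n", out_if);
    return 0;
}

In practice the table would typically be keyed by a hash rather than scanned linearly; the linear scan is used here only to keep the sketch short.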
Optionally, the transmitting the to-be-processed packet to the corresponding target network element in the host based on the memory area corresponding to the second port in the virtual switch includes:
step S71, performing hash processing according to the packet-related identifier of the packet to be processed, to obtain a hash value corresponding to the packet-related identifier.
Step S72, writing the to-be-processed packet into the memory area of the second port corresponding to the target network element according to the hash value; and the target network element is used for reading the message to be processed from the memory area.
In the embodiment of the invention, hash calculation may be performed, according to a preset hash algorithm, with the message correlation identifier as input to generate the hash value. For example, the hash value may be obtained by hashing vni + QinQ. After the message is written into the memory area of the second port corresponding to the target network element, the target network element may read the message to be processed from that memory area; the message is thus transmitted to the corresponding target network element in the host, which then performs the designated processing. That is, the target network element is configured to read the message to be processed from the memory area and perform the designated processing.
Further, a second port in the embodiment of the present invention may correspond to one virtual network element; accordingly, after the target network element is determined, the second port corresponding to it can be determined, and data transmission within the host is performed based on the memory area. Specifically, steps S71 and S72 may be implemented based on a second packet forwarding node in the packet forwarding module, which may be a "memif-output" node; if the flow type of the message to be processed is core traffic, i.e., the user is accessing the internet, the message may be processed by the "memif-output" node. A "memif-output" node and a virtual network element may be bound to each second port and exchange data through that port; the node and virtual network element bound to different second ports may differ. Data is therefore exchanged with the peer via the second port in a memory-exchange manner, without a network card as intermediary: the message can be delivered directly, in user mode, to the peer user-mode application (i.e., the target network element) for processing without passing through kernel mode, which saves network card resources while ensuring communication efficiency.
The messages to be processed may then be written into the memory area of the second port corresponding to the target network element in order of increasing (or decreasing) hash value; specifically, they may be written into the transmit queue of that memory area.
In the embodiment of the invention, hash processing is first performed according to the message correlation identifier of the message to be processed to obtain the corresponding hash value. The message is then written, according to the hash value, into the memory area of the second port corresponding to the target network element, and the target network element reads the message from that memory area and performs the designated processing. Performing hash processing before writing allows the messages to be distributed evenly, to a certain extent, across the transmit queues of the memory area, thereby ensuring a balanced write distribution.
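A minimal C sketch of this hash-and-write step is given below, assuming an FNV-1a hash over vni + QinQ and a fixed-size ring standing in for a transmit queue of the second port's memory area; the queue layout, hash choice, and names are illustrative assumptions only.

/* Illustrative sketch only: hash the packet-related identifier and write the
   packet into one of the transmit queues of the second port's memory area. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TX_QUEUE_DEPTH 8

struct pkt { uint32_t vni; uint16_t pvlan_id, cvlan_id; uint8_t data[64]; };

/* one simplified transmit ring inside the shared memory area */
struct tx_queue { struct pkt slots[TX_QUEUE_DEPTH]; unsigned head, tail; };

/* FNV-1a over vni + QinQ (an illustrative choice of hash algorithm) */
static uint32_t id_hash(uint32_t vni, uint16_t pvlan, uint16_t cvlan)
{
    uint8_t key[8] = { vni >> 24, vni >> 16, vni >> 8, vni,
                       pvlan >> 8, pvlan, cvlan >> 8, cvlan };
    uint32_t h = 2166136261u;
    for (int i = 0; i < 8; i++) { h ^= key[i]; h *= 16777619u; }
    return h;
}

/* write the packet into the transmit queue selected by its hash value */
static int enqueue_by_hash(struct tx_queue *queues, unsigned n_queues,
                           const struct pkt *p)
{
    uint32_t h = id_hash(p->vni, p->pvlan_id, p->cvlan_id);
    struct tx_queue *q = &queues[h % n_queues];
    unsigned next = (q->tail + 1) % TX_QUEUE_DEPTH;
    if (next == q->head)
        return -1;              /* queue full: the caller may drop or retry       */
    q->slots[q->tail] = *p;     /* copy into the memory area of the second port   */
    q->tail = next;             /* publish the slot to the target network element */
    return 0;
}

int main(void)
{
    struct tx_queue queues[4];
    memset(queues, 0, sizeof queues);
    struct pkt p = { .vni = 5010, .pvlan_id = 100, .cvlan_id = 200 };
    if (enqueue_by_hash(queues, 4, &p) == 0)
        puts("packet written to the selected transmit queue");
    return 0;
}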
Optionally, the target network element may be further configured to write the processed message into a receive queue of the memory area. Correspondingly, obtaining the processed message based on the memory area corresponding to the second port may specifically include: step S81, reading the processed message directly from the receive queue. This step may be implemented, for example, based on a second packet receiving node, which may be a "memif-input" node. The second packet receiving node may actively poll the receive queue to receive the processed message from the target network element. Because the second ports and the virtual network elements are bound in one-to-one correspondence, the message processed by the target network element is stored in the receive queue of the memory area of the second port corresponding to that network element, and can be read directly from the receive queue without checking whether the destination MAC address of the message is the MAC address of the port. Reading efficiency is thereby ensured, the data forwarding speed between network elements on the same host is improved, and latency is reduced. The processed message may be a layer-2 message; further, a vxlan header may be encapsulated onto it based on a "vxlan-encap" node, after which the message is handed to the next node, the "interface-output" node, which sends the encapsulated message out through the first port. The encapsulated vxlan header may be set according to actual requirements, which is not limited in this embodiment of the present invention.
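The following minimal C sketch illustrates this receive path: the receive queue of the memory area is polled directly and each processed layer-2 frame is encapsulated with an 8-byte VXLAN header (RFC 7348) before being handed to the first port. The ring layout and function names are illustrative assumptions, not the actual node implementation.

/* Illustrative sketch only: poll the receive queue and vxlan-encapsulate frames. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RX_DEPTH  8
#define MAX_FRAME 256

struct frame { uint8_t data[MAX_FRAME]; size_t len; };

/* receive ring of the second port's memory area, filled by the target network element */
struct rx_queue { struct frame slots[RX_DEPTH]; unsigned head, tail; };

/* 8-byte VXLAN header (RFC 7348): flags with the valid-VNI bit, 24-bit VNI */
struct vxlan_hdr { uint8_t flags, rsvd0[3], vni[3], rsvd1; };

static size_t vxlan_encap(uint8_t *out, const uint8_t *inner, size_t len, uint32_t vni)
{
    struct vxlan_hdr h = { .flags = 0x08 };
    h.vni[0] = vni >> 16; h.vni[1] = vni >> 8; h.vni[2] = vni;
    memcpy(out, &h, sizeof h);
    memcpy(out + sizeof h, inner, len);
    return sizeof h + len;
}

/* one polling pass: read every processed frame, encapsulate, hand to the first port */
static void poll_rx(struct rx_queue *q, uint32_t vni)
{
    uint8_t out[sizeof(struct vxlan_hdr) + MAX_FRAME];
    while (q->head != q->tail) {              /* no destination MAC check is needed: */
        struct frame *f = &q->slots[q->head]; /* the queue is bound 1:1 to the port  */
        size_t n = vxlan_encap(out, f->data, f->len, vni);
        printf("send %zu bytes through the first port\n", n);
        q->head = (q->head + 1) % RX_DEPTH;
    }
}

int main(void)
{
    struct rx_queue q = { 0 };
    const char payload[] = "processed layer-2 frame";
    memcpy(q.slots[0].data, payload, sizeof payload);
    q.slots[0].len = sizeof payload;
    q.tail = 1;                    /* one processed frame waiting in the queue */
    poll_rx(&q, 5010);
    return 0;
}

Only the VXLAN header itself is shown; the outer Ethernet, IP, and UDP headers of the tunnel would be added by the subsequent output processing.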
Optionally, before the message to be processed is transmitted to the corresponding target network element in the host, the message correlation identifier in the message may also be written into a designated message area, so that the target network element performs the designated processing on the message according to the identifier in that area. The designated message area may be a private field of a message extension area or of a message buffer, and may, for example, be the message tail, which is not limited in this embodiment of the present invention. Packaging the message correlation identifier uniformly into the designated area makes it convenient for the target network element to retrieve it and perform the designated processing based on it, thereby ensuring that the designated processing proceeds smoothly. For example, the designated processing may be: determining the public network address corresponding to the message according to the message correlation identifier and adding it to the message; and/or determining the message header corresponding to the message according to the identifier and adding that header to the message; and/or updating the vni in the message correlation identifier, and so on, which is not limited in this embodiment of the present invention.
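As a simple illustration of carrying the identifier in the message tail, the following C sketch appends the vni and QinQ pair after the payload on the sending side and reads and strips them on the receiving side; the field layout and helper names are assumptions made only for this example.

/* Illustrative sketch only: carry the packet-related identifier at the packet tail. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* trailing metadata appended after the payload (an illustrative layout) */
struct pkt_meta { uint32_t vni; uint16_t pvlan_id; uint16_t cvlan_id; };

/* sender side: write the identifier into the designated area (here, the tail) */
static size_t meta_append(uint8_t *buf, size_t len, const struct pkt_meta *m)
{
    memcpy(buf + len, m, sizeof *m);
    return len + sizeof *m;
}

/* receiver side: read the identifier back and strip it, returning the payload length */
static size_t meta_strip(const uint8_t *buf, size_t len, struct pkt_meta *m)
{
    size_t payload = len - sizeof *m;
    memcpy(m, buf + payload, sizeof *m);
    return payload;
}

int main(void)
{
    uint8_t buf[128] = "user payload";
    struct pkt_meta m = { .vni = 5010, .pvlan_id = 100, .cvlan_id = 200 };
    size_t len = meta_append(buf, 12, &m);          /* 12 = payload length */
    struct pkt_meta got;
    size_t payload = meta_strip(buf, len, &got);
    printf("payload %zu bytes, vni %u\n", payload, (unsigned)got.vni);
    return 0;
}

The same helpers could equally target a private field of the message extension or buffer area instead of the tail.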
It should be noted that the first port in the embodiment of the present invention may be configured in Bond mode to provide load balancing and network card hot backup. Thus, when exchanging data with external network elements, the switch can operate stably and achieve a load-balancing effect based on the first port without additional load-balancing equipment, which reduces design complexity, saves hardware resources, lowers hardware cost, and improves deployment flexibility.
The first port may be bound to a plurality of network cards on the host, for example managing a plurality of high-speed network cards used for communicating with the outside of the host. Load balancing may be performed across these network cards based on the actual load of each card, so that the traffic load is balanced among them. The virtual switch may further include a "bond-input" node, which changes the incoming interface to the aggregated interface for load balancing; this operation may follow the operation performed by the "dpdk-input" node, and the "bond-input" node may belong to the message check module. Some of the network cards bound to the first port may serve as standby cards, whose information is kept synchronized with that of the active cards, and a backup switchover is performed when an active card fails.
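A minimal C sketch of the load-balancing and hot-backup idea is given below: the active network card with the lowest queued load is selected for transmission, and a standby card is used only when no active card is up. The selection policy and data layout are illustrative assumptions; a Bond mode may equally use other policies (e.g., per-flow hashing).

/* Illustrative sketch only: pick a transmit NIC under the bonded first port. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct nic { const char *name; bool up; bool standby; uint64_t queued_bytes; };

/* pick the least-loaded active card; fall back to a standby card if none is up */
static struct nic *select_tx_nic(struct nic *nics, size_t n)
{
    struct nic *best = NULL, *backup = NULL;
    for (size_t i = 0; i < n; i++) {
        if (!nics[i].up)
            continue;
        if (nics[i].standby) { if (!backup) backup = &nics[i]; continue; }
        if (!best || nics[i].queued_bytes < best->queued_bytes)
            best = &nics[i];
    }
    return best ? best : backup;   /* NULL means no usable card at all */
}

int main(void)
{
    struct nic nics[] = {
        { "nic0", true,  false, 12000 },
        { "nic1", true,  false,  3000 },
        { "nic2", true,  true,       0 },   /* standby card kept in sync */
    };
    struct nic *n = select_tx_nic(nics, 3);
    printf("transmit via %s\n", n ? n->name : "none");
    return 0;
}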
Fig. 4 is a schematic diagram of an exemplary packet forwarding framework according to an embodiment of the present invention. As shown in fig. 4, the access switch may include multiple modules, such as a packet receiving module, a packet checking module, a packet classifying module, a packet processing module, and a packet forwarding module. Each module may be implemented based on the vpp and correspond to different processing nodes in the vpp; the modules can be understood as the software processing part of the access switch. The access switch may exchange messages with a data center switch (DCSW) external to the host based on a DPDK port configured in Bond mode. For example, both forwarding to the corresponding network element in the edge network and forwarding to the corresponding external network element in the core network may pass through the DCSW. The access switch may be further configured with a memif port, which may be bound to a vCPE; for example, when other network elements on the same host forward data through the access switch, a memif interface may be configured. Here the access switch may be the virtual switch described above, which may be implemented through network function virtualization based on the VPP. The complete message structure between the DCSW and the virtual switch involved in the embodiment of the present invention may be represented as: "Outer Ethernet header + IP header + UDP header + VXLAN header + Inner Ethernet header + Inner IP header + ...". QinQ is carried in the Inner Ethernet header. The original message of the user first passes through the DCSW, where the "Outer Ethernet header + IP header + UDP header + VXLAN header" section is encapsulated; this section can be regarded as the outer-layer Ethernet message. The original user data packet is everything after the vxlan header, i.e., "Inner Ethernet header + Inner IP header + ...", and the user packet can be considered the inner Ethernet packet.
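The following C struct sketch lays out, in simplified form, the frame format quoted above as seen between the DCSW and the virtual switch (IPv4 without options, two QinQ tags in the inner Ethernet header, payload omitted); the field widths and names are illustrative and not taken from the actual implementation.

/* Illustrative, simplified layout of the outer + inner encapsulation described above. */
#include <stdint.h>
#include <stdio.h>

#pragma pack(push, 1)
struct eth_addrs { uint8_t dst[6], src[6]; };
struct vlan_tag  { uint16_t tpid, tci; };
struct ipv4_hdr  { uint8_t ver_ihl, tos; uint16_t tot_len, id, frag_off;
                   uint8_t ttl, proto; uint16_t csum; uint32_t saddr, daddr; };
struct udp_hdr   { uint16_t sport, dport, len, csum; };  /* dport 4789 for VXLAN */
struct vxlan_hdr { uint8_t flags, rsvd0[3], vni[3], rsvd1; };

struct outer_frame {
    struct eth_addrs outer_mac;        /* Outer Ethernet header                  */
    uint16_t         outer_ethertype;  /* 0x0800: outer IP follows               */
    struct ipv4_hdr  outer_ip;         /* IP header                              */
    struct udp_hdr   udp;              /* UDP header                             */
    struct vxlan_hdr vxlan;            /* VXLAN header carrying the vni          */
    struct eth_addrs inner_mac;        /* Inner Ethernet header                  */
    struct vlan_tag  pvlan;            /* outer QinQ tag (pvlan_id)              */
    struct vlan_tag  cvlan;            /* inner QinQ tag (cvlan_id)              */
    uint16_t         inner_ethertype;  /* 0x0800: inner IP follows               */
    struct ipv4_hdr  inner_ip;         /* Inner IP header (user payload follows) */
};
#pragma pack(pop)

int main(void)
{
    printf("encapsulation overhead before the inner Ethernet header: %zu bytes\n",
           sizeof(struct eth_addrs) + sizeof(uint16_t) + sizeof(struct ipv4_hdr)
           + sizeof(struct udp_hdr) + sizeof(struct vxlan_hdr));
    return 0;
}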
Fig. 5 is a schematic node diagram provided in the embodiment of the present invention; taking a user accessing the internet side, i.e., the core network, as an example, the nodes traversed may be as shown in fig. 5. Further, fig. 6 is another schematic node diagram provided in the embodiment of the present invention; taking a user accessing the edge network as an example, the nodes traversed may be as shown in fig. 6. It should be noted that other links may be extended in practical application; for example, in the processing flow of fig. 6, a "vxlan4-encap" node may be inserted after the "acc-sw-dispatcher-in" node to additionally encapsulate the vxlan header, which is not limited in this embodiment of the present invention.
It should be noted that, taking application to a specified operator network as an example, the message processing method provided in the embodiment of the present invention may be designed for the access system of a specified operator, and may specifically be applied to the access network element and service network element of the operator's cloud gateway for access to an edge cloud system, where the specified operator may be set based on implementation requirements. In an actual application scenario, there may be multiple virtual switches; their hardware part may be realized on servers, the virtual switches may be managed based on Kubernetes (k8s), and they can be adapted to different types of servers for flexible configuration and maintenance.
Fig. 7 is a structural diagram of a message processing apparatus according to an embodiment of the present invention, where the apparatus may be applied to a virtual switch deployed on a host, and the apparatus 20 may include:
a transmission module 201, configured to respond to a to-be-processed packet received from a first port of the virtual switch, and transmit the to-be-processed packet to a corresponding target network element in the host based on a memory area corresponding to a second port of the virtual switch when a flow type of the to-be-processed packet is a core flow, so as to perform specified processing on the to-be-processed packet;
a first forwarding module 202, configured to obtain the processed packet to be processed based on the memory area corresponding to the second port, and forward the processed packet to be processed to a corresponding external network element in a core network;
a second forwarding module 203, configured to forward the packet to be processed to a corresponding network element in an edge network through the first port when the flow type is edge traffic.
In summary, the message processing apparatus provided in the embodiment of the present invention, in response to a to-be-processed message received from a first port of a virtual switch, transmits the to-be-processed message to a corresponding target network element in a host based on a memory area corresponding to a second port of the virtual switch when the flow type of the to-be-processed message is core traffic, so as to perform specified processing on the to-be-processed message. The processed message is then obtained based on the memory area corresponding to the second port and forwarded to the corresponding external network element in the core network. When the flow type is edge traffic, the message to be processed is forwarded to the corresponding network element in the edge network through the first port. Therefore, by adopting the virtual switch and the two ports configured for it, communication with network elements inside the host can be carried out directly through the memory area corresponding to the second port to complete the designated processing; no network card needs to be specially configured for this internal communication, so hardware cost can be saved to a certain extent.
Meanwhile, the two ports configured for the virtual switch enrich the diversity of network configurations. When a message is forwarded, the appropriate configured port can be flexibly selected, so that message forwarding is realized more accurately.
Optionally, the apparatus 20 further includes:
the first query module is used for querying a first forwarding table according to a destination address carried by the message to be processed so as to determine whether the destination address hits the first forwarding table;
a first determining module, configured to determine that the stream type of the to-be-processed packet is edge traffic if the destination address hits the first forwarding table;
a second determining module, configured to determine that the flow type of the to-be-processed packet is a core flow if the destination address misses the first forwarding table.
Optionally, the apparatus 20 further includes:
the checking module is used for carrying out message checking on the message to be processed based on the message checking node in the virtual switch;
and the execution module is used for executing the operation of inquiring the first forwarding table according to the destination address carried by the message to be processed under the condition of passing the message verification.
Optionally, the apparatus 20 further includes:
an obtaining module, configured to obtain the message correlation identifier currently carried in the message to be processed before the message to be processed is transmitted to the corresponding target network element in the host based on the memory area corresponding to the second port of the virtual switch;
a second query module, configured to query a second forwarding table based on the packet-related identifier when the destination address misses the first forwarding table, so as to determine whether the packet-related identifier hits the second forwarding table;
a third determining module, configured to determine, when the packet correlation identifier hits the second forwarding table, a virtual network element that the packet correlation identifier hits in the second forwarding table as the target network element.
Optionally, the apparatus 20 further includes:
a third query module, configured to, before forwarding the to-be-processed packet to a corresponding network element in an edge network through the first port, query a third forwarding table based on the packet-related identifier when the destination address hits the first forwarding table, so as to determine whether the packet-related identifier hits the third forwarding table;
a fourth determining module, configured to determine, when the packet correlation identifier hits the third forwarding table, a network element hit by the packet correlation identifier in the third forwarding table as a corresponding network element in the edge network.
Optionally, the apparatus 20 further comprises:
a matching module, configured to match the packet correlation identifier with a fourth forwarding table to determine whether the packet correlation identifier hits the fourth forwarding table when the packet correlation identifier misses the second forwarding table or the third forwarding table;
a third forwarding module, configured to forward the to-be-processed packet to another network element based on the first port if the fourth forwarding table is hit;
and the discarding module is used for discarding the message to be processed if the fourth forwarding table is not hit.
Optionally, the first port is configured with at least one communication channel; the first forwarding module 202 is specifically configured to:
acquiring a message related identifier currently carried in the processed message to be processed;
determining an external network element hit by the message correlation identifier in the second forwarding table as a corresponding external network element in the core network under the condition that the message correlation identifier hits the second forwarding table, and acquiring a communication channel index corresponding to the external network element hit by the message correlation identifier in the second forwarding table;
and forwarding the message to be processed to a corresponding external network element in the core network based on the communication channel indicated by the communication channel index.
Optionally, the transmission module 201 is specifically configured to:
performing hash processing according to the message correlation identification of the message to be processed to obtain a hash value corresponding to the message correlation identification;
writing the message to be processed into a memory area of a second port corresponding to the target network element according to the hash value; and the target network element is used for reading the message to be processed from the memory area.
Optionally, the target network element is further configured to write the processed message to be processed into a receive queue of the memory area; the first forwarding module 202 is further specifically configured to: and directly reading the processed message to be processed from the receiving queue.
Optionally, the apparatus 20 further includes:
and the writing module is used for writing the message related identification in the message to be processed into an appointed message area so as to carry out appointed processing on the message to be processed based on a target network element according to the message related identification in the message area.
The present invention also provides an electronic device, see fig. 8, including: a processor 901, a memory 902 and a computer program 9021 stored on the memory and executable on the processor, the processor implementing the message processing method of the foregoing embodiment when executing the program.
The invention also provides a readable storage medium, and when instructions in the storage medium are executed by a processor of the electronic device, the electronic device is enabled to execute the message processing method of the foregoing embodiment.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
It should be noted that various information and data acquired in the embodiment of the present invention are acquired under the authorization of the information/data holder.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in a message processing apparatus according to the present invention. The present invention may also be embodied as an apparatus or device program for carrying out a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, etcetera does not indicate any ordering; these words may be interpreted as names.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (13)

1. A message processing method is applied to a virtual switch deployed on a host machine, and comprises the following steps:
responding to a message to be processed received from a first port of the virtual switch, and transmitting the message to be processed to a corresponding target network element in the host machine based on a memory area corresponding to a second port of the virtual switch under the condition that the flow type of the message to be processed is core flow, so as to perform designated processing on the message to be processed;
acquiring the processed message to be processed based on the memory area corresponding to the second port, and forwarding the processed message to be processed to a corresponding external network element in a core network;
and forwarding the message to be processed to a corresponding network element in an edge network through the first port under the condition that the flow type is edge flow.
2. The method of claim 1, further comprising:
inquiring a first forwarding table according to a destination address carried by the message to be processed so as to determine whether the destination address hits the first forwarding table;
if the destination address hits the first forwarding table, determining that the stream type of the message to be processed is edge flow;
and if the destination address does not hit the first forwarding table, determining that the stream type of the message to be processed is core flow.
3. The method of claim 2, further comprising:
performing message check on the message to be processed based on a message check node in the virtual switch;
and under the condition of passing the message verification, executing the operation of inquiring the first forwarding table according to the destination address carried by the message to be processed.
4. The method according to claim 2, wherein before the to-be-processed packet is transmitted to the corresponding target network element in the host based on the memory area corresponding to the second port of the virtual switch, the method further comprises:
acquiring a message related identifier currently carried in the message to be processed;
under the condition that the destination address does not hit the first forwarding table, querying a second forwarding table based on the message correlation identification to determine whether the message correlation identification hits the second forwarding table;
and determining the virtual network element hit by the message correlation identifier in the second forwarding table as the target network element under the condition that the message correlation identifier hits the second forwarding table.
5. The method according to claim 4, wherein before forwarding the packet to be processed to a corresponding network element in an edge network through the first port, the method further comprises:
under the condition that the destination address hits the first forwarding table, querying a third forwarding table based on the message correlation identification to determine whether the message correlation identification hits the third forwarding table;
and determining the network element hit by the message correlation identifier in the third forwarding table as the corresponding network element in the edge network under the condition that the message correlation identifier hits the third forwarding table.
6. The method of claim 5, further comprising:
under the condition that the message correlation identification does not hit the second forwarding table or the third forwarding table, matching the message correlation identification with a fourth forwarding table to determine whether the message correlation identification hits the fourth forwarding table;
if the message to be processed hits the fourth forwarding table, forwarding the message to be processed to other network elements based on the first port;
and if the message to be processed does not hit the fourth forwarding table, discarding the message to be processed.
7. The method of claim 4, wherein the first port is configured with at least one communication channel; the forwarding the processed packet to be processed to a corresponding external network element in a core network includes:
acquiring a message related identifier currently carried in the processed message to be processed;
determining an external network element hit by the message correlation identifier in the second forwarding table as a corresponding external network element in the core network under the condition that the message correlation identifier hits the second forwarding table, and acquiring a communication channel index corresponding to the external network element hit by the message correlation identifier in the second forwarding table;
and forwarding the message to be processed to a corresponding external network element in the core network based on the communication channel indicated by the communication channel index.
8. The method of claim 1, wherein the transmitting the to-be-processed packet to a corresponding target network element in the host based on a memory area corresponding to a second port in the virtual switch comprises:
performing hash processing according to the message correlation identification of the message to be processed to obtain a hash value corresponding to the message correlation identification;
writing the message to be processed into a memory area of a second port corresponding to the target network element according to the hash value; and the target network element is used for reading the message to be processed from the memory area.
9. The method of claim 8, wherein the target network element is further configured to write the processed message to be processed into a receive queue of the memory area; the obtaining the processed message to be processed based on the memory area corresponding to the second port includes: and directly reading the processed message to be processed from the receiving queue.
10. The method according to claim 1, wherein before transmitting the message to be processed to the corresponding target network element in the host, the method further comprises:
and writing the message correlation identification in the message to be processed into a specified message area, and performing specified processing on the message to be processed based on a target network element according to the message correlation identification in the message area.
11. A message processing apparatus, wherein the apparatus is applied to a virtual switch deployed on a host, and the apparatus comprises:
a transmission module, configured to respond to a to-be-processed packet received from a first port of the virtual switch, and transmit the to-be-processed packet to a corresponding target network element in the host based on a memory area corresponding to a second port of the virtual switch when a flow type of the to-be-processed packet is a core flow, so as to perform specified processing on the to-be-processed packet;
a first forwarding module, configured to obtain the processed packet to be processed based on the memory area corresponding to the second port, and forward the processed packet to be processed to a corresponding external network element in a core network;
and a second forwarding module, configured to forward the packet to be processed to a corresponding network element in an edge network through the first port when the flow type is edge traffic.
12. An electronic device, comprising:
processor, memory and computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to one or more of claims 1-10 when executing the program.
13. A readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of one or more of claims 1-10.
CN202111647337.8A 2021-12-29 2021-12-29 Message processing method and device, electronic equipment and readable storage medium Active CN114430394B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111647337.8A CN114430394B (en) 2021-12-29 2021-12-29 Message processing method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN114430394A true CN114430394A (en) 2022-05-03
CN114430394B CN114430394B (en) 2023-06-23

Family

ID=81311108

Country Status (1)

Country Link
CN (1) CN114430394B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101142791A (en) * 2005-02-14 2008-03-12 特利亚索内拉股份公司 Method for providing virtual private network services between autonomous systems
CN105991438A (en) * 2015-01-30 2016-10-05 华为技术有限公司 Method and device for processing data packet in virtual two-layer network
US20180139073A1 (en) * 2016-11-11 2018-05-17 Futurewei Technologies, Inc. Method to Support Multi-Protocol for Virtualization
CN108075956A (en) * 2016-11-16 2018-05-25 新华三技术有限公司 A kind of data processing method and device
WO2018205982A1 (en) * 2017-05-11 2018-11-15 中兴通讯股份有限公司 Method and device for implementing broadcast and multicast in software-defined network and storage medium
US20190132241A1 (en) * 2017-10-30 2019-05-02 Dell Products Lp Optimizing traffic paths to orphaned hosts in vxlan networks using virtual link trunking-based multi-homing
US20190356599A1 (en) * 2018-05-15 2019-11-21 Cisco Technology, Inc. Method and system for core network support of access network protocols in multi-homed redundancy groups
CN111565142A (en) * 2020-07-15 2020-08-21 鹏城实验室 Message processing method and device and computer readable storage medium
CN111953553A (en) * 2019-05-16 2020-11-17 华为技术有限公司 Message detection method, device and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI Tong; HAN Chunjing; LI Jun: "Research on Rule Mapping in SDN Network Virtualization", Computer Systems & Applications, no. 09 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115002016A (en) * 2022-05-17 2022-09-02 阿里云计算有限公司 Message processing system, method, device and storage medium
CN115002016B (en) * 2022-05-17 2023-08-22 阿里云计算有限公司 Message processing system, method, device and storage medium
CN115334035A (en) * 2022-07-15 2022-11-11 天翼云科技有限公司 Message forwarding method and device, electronic equipment and storage medium
CN115334035B (en) * 2022-07-15 2023-10-10 天翼云科技有限公司 Message forwarding method and device, electronic equipment and storage medium
CN115277558A (en) * 2022-07-29 2022-11-01 中国电信股份有限公司 Message sending method and device, computer storage medium and electronic equipment
CN115277558B (en) * 2022-07-29 2024-06-07 中国电信股份有限公司 Message sending method and device, computer storage medium and electronic equipment
CN115996203A (en) * 2023-03-22 2023-04-21 北京华耀科技有限公司 Network traffic domain division method, device, equipment and storage medium
CN115996203B (en) * 2023-03-22 2023-06-06 北京华耀科技有限公司 Network traffic domain division method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN114430394B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN114430394B (en) Message processing method and device, electronic equipment and readable storage medium
US10237230B2 (en) Method and system for inspecting network traffic between end points of a zone
CN108449282B (en) Load balancing method and device
KR101863024B1 (en) Distributed load balancer
US11979322B2 (en) Method and apparatus for providing service for traffic flow
US9729578B2 (en) Method and system for implementing a network policy using a VXLAN network identifier
US9871720B1 (en) Using packet duplication with encapsulation in a packet-switched network to increase reliability
US10439931B2 (en) Data packet processing method, service node, and delivery node
CN101656765B (en) Address mapping system and data transmission method of identifier/locator separation network
CN106254256B (en) Data message forwarding method and equipment based on three layers of VXLAN gateway
US10461958B2 (en) Packet transmission method and apparatus
US11165693B2 (en) Packet forwarding
US20200252366A1 (en) Packet Sending Method and Device
CN109495320B (en) Data message transmission method and device
WO2022001835A1 (en) Method and apparatus for sending message, and network device, system and storage medium
US9124529B1 (en) Methods and apparatus for assessing the quality of a data path including both layer-2 and layer-3 devices
CN106341338B (en) A kind of retransmission method and device of message
CN106059946B (en) Message forwarding method and device
US11012412B2 (en) Method and system for network traffic steering towards a service device
CN109347670A (en) Route tracing method and device, electronic equipment, storage medium
CN109547350A (en) A kind of route learning method and gateway
CN107070790A (en) A kind of route learning method and routing device
CN109246016B (en) Cross-VXLAN message processing method and device
CN114172853A (en) Flow forwarding and bare computer server configuration method and device
US10229459B1 (en) Method and apparatus for routing in transaction management systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant