CN111510513B - MAP-E link acceleration method, device, storage medium and network equipment - Google Patents


Info

Publication number
CN111510513B
CN111510513B
Authority
CN
China
Prior art keywords
acceleration
message
rule table
hardware
ipv6
Prior art date
Legal status
Active
Application number
CN202010007941.3A
Other languages
Chinese (zh)
Other versions
CN111510513A (en)
Inventor
朱海明
Current Assignee
Pulian International Co ltd
Original Assignee
Pulian International Co ltd
Priority date
Filing date
Publication date
Application filed by Pulian International Co ltd filed Critical Pulian International Co ltd
Priority to CN202010007941.3A
Publication of CN111510513A
Application granted
Publication of CN111510513B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00 Network arrangements, protocols or services for addressing or naming
    • H04L61/09 Mapping addresses
    • H04L61/25 Mapping addresses of the same type
    • H04L61/2503 Translation of Internet protocol [IP] addresses
    • H04L61/2542 Translation of Internet protocol [IP] addresses involving dual-stack hosts
    • H04L61/2557 Translation policies or rules
    • H04L61/2592 Translation of Internet protocol [IP] addresses using tunnelling or encapsulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a MAP-E link acceleration method, an apparatus, a computer-readable storage medium and a network device, wherein the method comprises the following steps: when a first message in the uplink direction is received, performing hardware NAT (Network Address Translation) conversion on the first message according to a hardware acceleration module and an acceleration rule table, and performing IPv6 encapsulation on the converted first message according to the acceleration rule table, so that the encapsulated first message is sent to a Border Relay server, which decapsulates the encapsulated first message and forwards the decapsulated first message to a first destination device; when a second message in the downlink direction is received, performing IPv6 decapsulation on the second message according to the acceleration rule table, and performing hardware NAT conversion on the decapsulated second message according to the hardware acceleration module and the acceleration rule table, so that the converted second message is forwarded to a second destination device. By adopting the technical scheme of the invention, the NAT conversion efficiency can be improved, thereby increasing the message forwarding speed.

Description

MAP-E link acceleration method, device, storage medium and network equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a MAP-E link acceleration method, apparatus, computer-readable storage medium, and network device.
Background
To keep existing IPv4 services running while continuously developing IPv6 services, the 4over6 scenario, which takes the characteristics of both IPv4 and IPv6 services into account, has become the focus of research on long-term evolution schemes. A variety of transition technologies have emerged for the 4over6 scenario. Among them, MAP (Mapping of Address and Port) combines stateless operation with dual translation/encapsulation technologies: it multiplexes addresses and ports statelessly and defines a mechanism for carrying IPv4 services across an IPv6-only network through stateless address encapsulation or translation, and it has gradually become one of the more closely watched solutions.
According to the message format, MAP is divided into encapsulation-based MAP-E and translation-based MAP-T. MAP-E, short for Mapping of Address and Port with Encapsulation, is a stateless mapping and double-encapsulation technology and a 4over6 IPv6 transition technology. MAP-E encapsulates an IPv4 message inside an IPv6 header, so that the outer layer is the IPv6 header and the inner layer is the IPv4 header; the message length is correspondingly increased, which places higher requirements on the message forwarding speed.
Disclosure of Invention
The technical problem to be solved in the embodiments of the present invention is to provide a MAP-E link acceleration method, apparatus, computer-readable storage medium, and network device, which can improve NAT conversion efficiency, thereby improving the forwarding speed of a packet.
In order to solve the above technical problem, an embodiment of the present invention provides a MAP-E link acceleration method, where the method is applied to a network device, and the network device includes a hardware acceleration module and a controller for controlling the hardware acceleration module; an acceleration rule table is preset in the controller; the method comprises the following steps:
when a first message in an uplink direction is received, performing hardware NAT conversion on the first message according to the hardware acceleration module and the acceleration rule table, performing IPv6 encapsulation on the converted first message according to the acceleration rule table, sending the encapsulated first message to a Border Relay server, enabling the Border Relay server to decapsulate the encapsulated first message, and forwarding the decapsulated first message to a first destination device;
when a second message in a downlink direction is received, performing IPv6 decapsulation on the second message according to the acceleration rule table, and performing hardware NAT (Network Address Translation) conversion on the decapsulated second message according to the hardware acceleration module and the acceleration rule table so as to forward the converted second message to a second destination device;
wherein, the acceleration rule table at least comprises an IPv6 header, a plurality of upstream acceleration entries and a plurality of downstream acceleration entries; the controller is used for managing acceleration entries in the acceleration rule table; the second message is a data message which is subjected to IPv6 encapsulation by the Border Relay server;
the performing hardware NAT translation on the first packet according to the hardware acceleration module and the acceleration rule table specifically includes:
and performing hardware NAT (Network Address Translation) conversion on the first message according to the hardware acceleration module and the uplink acceleration entry obtained from the acceleration rule table, so as to correspondingly convert a source IPv4 address and a source port in the first message into a tunnel IPv4 address and a tunnel port.
Further, before performing hardware NAT translation on the first packet according to the hardware acceleration module and the acceleration rule table, the method further includes:
judging whether an uplink acceleration entry corresponding to the first message exists in the acceleration rule table or not;
then, the performing hardware NAT translation on the first packet according to the hardware acceleration module and the acceleration rule table specifically includes:
and when the upstream acceleration entry corresponding to the first message exists in the acceleration rule table, performing hardware NAT (network address translation) conversion on the first message according to the upstream acceleration entry corresponding to the first message in the acceleration rule table through the hardware acceleration module.
Further, the method further comprises:
and when the uplink acceleration item corresponding to the first message does not exist in the acceleration rule table, performing corresponding uplink acceleration rule learning processing according to the first message, and sending the processed first message to the Border Relay server for corresponding processing.
Further, the method performs corresponding uplink acceleration rule learning processing according to the first message by the following steps:
acquiring quintuple information carried in the first message, and sending the quintuple information to the controller; the quintuple information comprises a source IP address, a source port, a destination IP address, a destination port and a direction;
performing NAT conversion on the first message to obtain a converted first message;
carrying out IPv6 packaging on the converted first message to obtain a packaged first message;
acquiring the converted five-tuple information and the encapsulated IPv6 header carried in the encapsulated first message, and sending the converted five-tuple information and the IPv6 header to the controller, so that the controller stores the five-tuple information, the converted five-tuple information and the IPv6 header in the acceleration rule table to form a corresponding uplink acceleration entry.
Further, before the IPv6 decapsulating the second packet according to the acceleration rule table, the method further includes:
judging whether a downlink acceleration item corresponding to the second message exists in the acceleration rule table;
then, the performing IPv6 decapsulation on the second message according to the acceleration rule table specifically includes:
and when the downlink acceleration entry corresponding to the second message exists in the acceleration rule table, performing IPv6 decapsulation on the second message.
Further, the method further comprises:
and when the downlink acceleration item corresponding to the second message does not exist in the acceleration rule table, performing corresponding downlink acceleration rule learning processing according to the second message, and forwarding the processed second message to the second destination device.
Further, the method performs corresponding downlink acceleration rule learning processing according to the second message by the following steps:
acquiring quintuple information carried in the second message, and sending the quintuple information to the controller; the quintuple information comprises a source IP address, a source port, a destination IP address, a destination port and a direction;
carrying out IPv6 decapsulation on the second message to obtain a decapsulated second message;
performing NAT conversion on the decapsulated second message to obtain a converted second message;
and acquiring converted quintuple information carried in the converted second message, and sending the converted quintuple information to the controller, so that the controller stores the quintuple information and the converted quintuple information in the acceleration rule table to form a corresponding downlink acceleration entry.
Further, before sending the encapsulated first message to the Border Relay server, the method further includes:
judging whether the encapsulated first message is larger than the maximum load of a preset Ethernet frame or not;
then, the sending the encapsulated first packet to the Border Relay server specifically includes:
and when the encapsulated first message is not greater than the maximum load of a preset Ethernet frame, sending the encapsulated first message to the Border Relay server.
Further, the method further comprises:
and when the encapsulated first message is larger than the maximum load of a preset Ethernet frame, carrying out IPv6 layer fragmentation processing or packet loss processing on the encapsulated first message, and deleting the corresponding uplink acceleration entry of the first message in the acceleration rule table through the controller.
In order to solve the above technical problem, an embodiment of the present invention further provides a MAP-E link acceleration apparatus, where the apparatus is disposed in a network device, and the network device further includes a hardware acceleration module and a controller that controls the hardware acceleration module; an acceleration rule table is preset in the controller; the device comprises:
the uplink message acceleration module is used for performing hardware NAT conversion on a first message according to the hardware acceleration module and the acceleration rule table when the first message in the uplink direction is received, performing IPv6 encapsulation on the converted first message according to the acceleration rule table, sending the encapsulated first message to a Border Relay server, enabling the Border Relay server to decapsulate the encapsulated first message, and forwarding the decapsulated first message to a first destination device;
the downlink message acceleration module is used for performing IPv6 decapsulation on a second message according to the acceleration rule table when the second message in the downlink direction is received, and performing hardware NAT (network address translation) conversion on the decapsulated second message according to the hardware acceleration module and the acceleration rule table so as to forward the converted second message to a second destination device;
wherein, the acceleration rule table at least comprises an IPv6 header, a plurality of upstream acceleration entries and a plurality of downstream acceleration entries; the controller is used for managing acceleration entries in the acceleration rule table; the second message is a data message which is subjected to IPv6 encapsulation by the Border Relay server;
the uplink message acceleration module performs hardware NAT conversion on the first message according to the hardware acceleration module and the acceleration rule table, and specifically includes:
and performing hardware NAT (network Address translation) conversion on the first message according to the hardware acceleration module and the uplink acceleration entry obtained from the acceleration rule table so as to correspondingly convert a source IPv4 address and a source port in the first message into a tunnel IPv4 address and a tunnel port.
An embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program; wherein the computer program, when executed, controls an apparatus in which the computer readable storage medium is located to perform any of the above MAP-E link acceleration methods.
An embodiment of the present invention further provides a network device, which includes a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, where the processor implements the MAP-E link acceleration method described in any one of the above when executing the computer program.
Compared with the prior art, the embodiments of the present invention provide a MAP-E link acceleration method, an apparatus, a computer-readable storage medium and a network device. The method is applied to a network device that includes a hardware acceleration module and a controller for controlling the hardware acceleration module, with an acceleration rule table preset in the controller, and comprises the following steps: when a first message in the uplink direction is received, performing hardware NAT conversion on the first message according to the hardware acceleration module and the acceleration rule table, and performing IPv6 encapsulation on the converted first message according to the acceleration rule table, so as to send the encapsulated first message to a Border Relay server, which decapsulates the encapsulated first message and forwards the decapsulated first message to a first destination device; when a second message in the downlink direction is received, performing IPv6 decapsulation on the second message according to the acceleration rule table, and performing hardware NAT conversion on the decapsulated second message according to the hardware acceleration module and the acceleration rule table, so as to forward the converted second message to a second destination device. By processing messages through the hardware acceleration module and the preset acceleration rule table, the embodiments of the present invention improve the NAT conversion efficiency of messages and thus increase the message forwarding speed.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of a MAP-E link acceleration method provided by the present invention;
FIGS. 2A-2B are diagrams illustrating an uplink application scenario of a MAP-E link acceleration method according to the present invention;
FIG. 3 is a block diagram of a preferred embodiment of a MAP-E link acceleration apparatus according to the present invention;
fig. 4 is a block diagram of a preferred embodiment of a network device provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without any inventive step, are within the scope of the present invention.
An embodiment of the present invention provides a MAP-E link acceleration method. Referring to fig. 1, which is a flowchart of a preferred embodiment of the MAP-E link acceleration method provided by the present invention, the method is applied to a network device, and the network device includes a hardware acceleration module and a controller that controls the hardware acceleration module; an acceleration rule table is preset in the controller; the method includes steps S11 to S12:
step S11, when a first message in an uplink direction is received, performing hardware NAT (network address translation) conversion on the first message according to the hardware acceleration module and the acceleration rule table, performing IPv6 encapsulation on the converted first message according to the acceleration rule table, and sending the encapsulated first message to a Border Relay server, so that the Border Relay server decapsulates the encapsulated first message and forwards the decapsulated first message to a first destination device;
step S12, when a second message in the downlink direction is received, the second message is de-encapsulated by IPv6 according to the acceleration rule table, and the de-encapsulated second message is subjected to hardware NAT conversion according to the hardware acceleration module and the acceleration rule table, so that the converted second message is forwarded to a second destination device;
wherein, the acceleration rule table at least comprises an IPv6 header, a plurality of upstream acceleration entries and a plurality of downstream acceleration entries; the controller is used for managing acceleration entries in the acceleration rule table; the second message is a data message encapsulated by the Border Relay server through IPv 6.
Specifically, the network device includes a hardware acceleration module (for example, the IPv4 NAT/NAPT module provided by the router's chipset) and a controller for controlling the hardware acceleration module, and an acceleration rule table is preset in the controller. The acceleration rule table is obtained in advance through acceleration rule learning processing and at least includes an IPv6 header used to encapsulate IPv4 messages in IPv6, a plurality of uplink acceleration entries for processing data messages in the uplink direction, and a plurality of downlink acceleration entries for processing data messages in the downlink direction. The controller is configured to manage the acceleration entries in the acceleration rule table (for example, adding and/or deleting entries).
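As a rough illustration of what such an acceleration rule table could look like in memory, the C sketch below models the entries and the table described above. All type names, field layouts and sizes here are assumptions made for illustration; the patent does not specify an in-memory format, and a real hardware acceleration module would impose its own.

```c
#include <stdint.h>

/* Direction of a flow, as used in the quintuple information described above. */
typedef enum { DIR_UP, DIR_DOWN } accel_dir_t;

/* IPv4 five-tuple: source IP/port, destination IP/port, direction. */
typedef struct {
    uint32_t src_ip;
    uint16_t src_port;
    uint32_t dst_ip;
    uint16_t dst_port;
    accel_dir_t dir;
} five_tuple_t;

/* One acceleration entry: the five-tuple as received and after NAT conversion. */
typedef struct {
    five_tuple_t before;
    five_tuple_t after;
    int in_use;
} accel_entry_t;

#define MAX_ENTRIES 1024   /* arbitrary table size for this sketch */

/* The acceleration rule table held by the controller (CTRL). */
typedef struct {
    uint8_t ipv6_hdr[40];            /* saved IPv6 tunnel header ("IPv6HdrW")         */
    uint8_t pppoe_hdr[8];            /* saved PPPoE header, only used on PPPoEv6 WANs */
    int     pppoe_valid;
    accel_entry_t up[MAX_ENTRIES];   /* uplink acceleration entries                   */
    accel_entry_t down[MAX_ENTRIES]; /* downlink acceleration entries                 */
} accel_rule_table_t;
```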
When the network device receives a first message in the uplink direction (for example, an uplink data message sent by a first source device to a first destination device via the network device and the Border Relay server), hardware NAT conversion is performed on the first message according to the hardware acceleration module and the uplink acceleration entry obtained from the acceleration rule table, yielding the converted first message; IPv6 encapsulation is then performed on the converted first message according to the IPv6 header obtained from the acceleration rule table, yielding the encapsulated first message, which completes the acceleration processing of the first message. Finally, the encapsulated first message is sent through the GMAC (network interface) to the Border Relay server, which performs IPv6 decapsulation on it to remove the IPv6 header and recover the inner IPv4 message, and then forwards the decapsulated first message (i.e., the IPv4 message) to the first destination device.
When the network device receives a second message in the downlink direction (for example, a downlink data message replied by the first destination device to the first source device via the Border Relay server and the network device), the second message is a data message that has already been IPv6-encapsulated by the Border Relay server, so IPv6 decapsulation is first performed on it according to the acceleration rule table to remove the IPv6 header and recover the inner IPv4 message; hardware NAT conversion is then performed on the decapsulated second message according to the hardware acceleration module and the downlink acceleration entry obtained from the acceleration rule table, yielding the converted second message, which completes the acceleration processing of the second message. Finally, the converted second message is forwarded to the second destination device.
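A hedged C sketch of these two fast paths is given below, reusing the structures from the previous sketch. The helper functions (entry lookup, hardware NAT, encapsulation/decapsulation, forwarding) are hypothetical names standing in for driver and hardware-acceleration-module calls that the patent does not name.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helpers; in a real device these map onto driver and
 * hardware acceleration module calls. */
extern accel_entry_t *lookup_entry(const five_tuple_t *t, accel_dir_t dir);
extern void hw_nat_translate(uint8_t *pkt, size_t *len, const accel_entry_t *e);
extern void ipv6_encap_with_saved_header(uint8_t *pkt, size_t *len); /* uses the saved IPv6 header  */
extern void ipv6_decap(uint8_t *pkt, size_t *len);                   /* strips the IPv6 tunnel header */
extern void forward(uint8_t *pkt, size_t len);

/* Uplink fast path: hardware NAT conversion, then IPv6 encapsulation, then send towards the BR. */
void accel_uplink(uint8_t *pkt, size_t *len, const five_tuple_t *t)
{
    accel_entry_t *e = lookup_entry(t, DIR_UP);
    if (!e)
        return;                              /* no entry: fall back to uplink rule learning */
    hw_nat_translate(pkt, len, e);           /* source IPv4/port -> tunnel IPv4/port */
    ipv6_encap_with_saved_header(pkt, len);  /* add the saved IPv6 tunnel header */
    forward(pkt, *len);                      /* towards the Border Relay server */
}

/* Downlink fast path: IPv6 decapsulation, then hardware NAT conversion, then forward. */
void accel_downlink(uint8_t *pkt, size_t *len, const five_tuple_t *t)
{
    accel_entry_t *e = lookup_entry(t, DIR_DOWN);
    if (!e)
        return;                              /* no entry: fall back to downlink rule learning */
    ipv6_decap(pkt, len);                    /* remove the IPv6 tunnel header */
    hw_nat_translate(pkt, len, e);           /* tunnel IPv4/port -> LAN IPv4/port */
    forward(pkt, *len);                      /* towards the second destination device */
}
```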
It should be noted that the embodiment of the present invention is applicable to networks whose addresses are configured via PPPoEv6, SLAAC, DHCPv6 and the like. When applied to a PPPoEv6 network, for a first message in the uplink direction, a PPPoE header needs to be added after the IPv6-encapsulated first message is obtained; this PPPoE header is also obtained in advance through the acceleration rule learning processing and stored in the acceleration rule table. Correspondingly, for a second message in the downlink direction, PPPoE decapsulation is performed on the second message to remove its PPPoE header before the IPv6 decapsulation. When applied to a SLAAC or DHCPv6 network, no PPPoE encapsulation or decapsulation of the messages is required.
It can be understood that the network device further includes an ethernet interface or/and a wireless network module, and the forwarding of the data packet among the network device, the source device, the destination device, and the Border Relay server can be realized through the ethernet interface or/and the wireless network module.
According to the MAP-E link acceleration method provided by the embodiment of the present invention, the preset acceleration rule table adds support for fast MAP-E link forwarding to network devices that do not natively support MAP-E acceleration, and performing NAT conversion at the hardware level through the hardware acceleration module of the network device greatly improves the NAT conversion efficiency of messages, thereby increasing the message forwarding speed; at the same time, the overall throughput of the network device is greatly improved and its load is reduced.
In another preferred embodiment, before performing hardware NAT translation on the first packet according to the hardware acceleration module and the acceleration rule table, the method further includes:
judging whether an uplink acceleration entry corresponding to the first message exists in the acceleration rule table;
then, the performing hardware NAT translation on the first packet according to the hardware acceleration module and the acceleration rule table specifically includes:
and when the upstream acceleration entry corresponding to the first message exists in the acceleration rule table, performing hardware NAT (network address translation) conversion on the first message according to the upstream acceleration entry corresponding to the first message in the acceleration rule table through the hardware acceleration module.
Specifically, with reference to the foregoing embodiment, when the network device receives a first message in the uplink direction, it first determines whether an uplink acceleration entry corresponding to the first message exists in the preset acceleration rule table. When the entry exists, the hardware acceleration module performs hardware NAT conversion on the first message according to the uplink acceleration entry corresponding to the first message obtained from the acceleration rule table. After the converted first message is obtained, the subsequent processing is the same as in the foregoing embodiment and is not repeated here.
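The entry lookup referenced in the fast-path sketch above could be as simple as a scan of the table keyed on the original five-tuple; a real implementation would more likely hash the tuple. A minimal sketch under that assumption, reusing the structures defined earlier:

```c
#include <stddef.h>

extern accel_rule_table_t g_rule_table;   /* the preset acceleration rule table held by CTRL */

/* Find an acceleration entry whose original five-tuple matches the received message. */
accel_entry_t *lookup_entry(const five_tuple_t *t, accel_dir_t dir)
{
    accel_entry_t *tbl = (dir == DIR_UP) ? g_rule_table.up : g_rule_table.down;

    for (int i = 0; i < MAX_ENTRIES; i++) {
        if (tbl[i].in_use &&
            tbl[i].before.src_ip   == t->src_ip   &&
            tbl[i].before.src_port == t->src_port &&
            tbl[i].before.dst_ip   == t->dst_ip   &&
            tbl[i].before.dst_port == t->dst_port)
            return &tbl[i];
    }
    return NULL;   /* no entry: the caller falls back to acceleration rule learning */
}
```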
In yet another preferred embodiment, the method further comprises:
and when the uplink acceleration item corresponding to the first message does not exist in the acceleration rule table, performing corresponding uplink acceleration rule learning processing according to the first message, and sending the processed first message to the Border Relay server for corresponding processing.
Specifically, with reference to the foregoing embodiment, when it is determined that no uplink acceleration entry corresponding to the first message exists in the acceleration rule table, corresponding uplink acceleration rule learning processing is performed according to the first message to obtain an uplink acceleration entry corresponding to the first message. The processed first message obtained after this learning processing is the IPv6-encapsulated first message; it is sent to the Border Relay server, which performs IPv6 decapsulation on it to remove the IPv6 header and recover the inner IPv4 message, and then forwards the decapsulated first message to the first destination device.
It can be understood that, because the preset acceleration rule table does not have the upstream acceleration entry corresponding to the first packet, the acceleration rule learning processing may be performed according to the first packet, and the upstream acceleration entry corresponding to the first packet is correspondingly obtained, so that the upstream acceleration entry is used subsequently.
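On the controller side, adding a learned entry amounts to writing the reported information into the acceleration rule table. A hedged sketch follows, reusing the structures defined earlier; the function name, the linear free-slot search and the single shared IPv6 header are assumptions of this sketch rather than details given by the patent.

```c
#include <stdint.h>
#include <string.h>

extern accel_rule_table_t g_rule_table;

/* Store a learned acceleration entry; ipv6_hdr may be NULL for downlink entries,
 * which do not carry a tunnel header of their own. */
int ctrl_add_entry(const five_tuple_t *before, const five_tuple_t *after,
                   const uint8_t ipv6_hdr[40], accel_dir_t dir)
{
    accel_entry_t *tbl = (dir == DIR_UP) ? g_rule_table.up : g_rule_table.down;

    for (int i = 0; i < MAX_ENTRIES; i++) {
        if (!tbl[i].in_use) {
            tbl[i].before = *before;
            tbl[i].after  = *after;
            tbl[i].in_use = 1;
            if (dir == DIR_UP && ipv6_hdr)   /* the table keeps one shared IPv6 tunnel header */
                memcpy(g_rule_table.ipv6_hdr, ipv6_hdr, sizeof g_rule_table.ipv6_hdr);
            return 0;
        }
    }
    return -1;   /* table full: the flow simply stays unaccelerated */
}
```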
As an improvement of the above scheme, the method performs corresponding uplink acceleration rule learning processing according to the first packet by the following steps:
acquiring quintuple information carried in the first message, and sending the quintuple information to the controller; the quintuple information comprises a source IP address, a source port, a destination IP address, a destination port and a direction;
performing NAT conversion on the first message to obtain a converted first message;
carrying out IPv6 packaging on the converted first message to obtain a packaged first message;
acquiring converted five-tuple information and an encapsulated IPv6 header carried in an encapsulated first message, and sending the converted five-tuple information and the IPv6 header to the controller, so that the controller stores the five-tuple information, the converted five-tuple information and the IPv6 header in the acceleration rule table to form a corresponding uplink acceleration entry.
Specifically, with reference to the above embodiments, this embodiment describes a concrete method of uplink acceleration rule learning processing, which is applicable to all data messages in the uplink direction; the following takes the uplink acceleration rule learning performed according to the first message as an example. The network device first acquires the quintuple information carried in the first message (comprising the source IP address, source port, destination IP address, destination port and direction of the first message, where the direction is the uplink direction) and sends the acquired quintuple information to the controller. It then performs software-level NAT conversion on the first message to obtain the converted first message, and performs IPv6 encapsulation on the converted first message to obtain the encapsulated first message. Finally, it acquires the converted quintuple information carried in the encapsulated first message (comprising the converted source IP address, converted source port, destination IP address, destination port and direction) together with the IPv6 header used in the IPv6 encapsulation, and sends them to the controller, so that the controller stores the received quintuple information, converted quintuple information and IPv6 header in the preset acceleration rule table to form the uplink acceleration entry corresponding to the first message; this is equivalent to the controller adding a new uplink acceleration entry to the acceleration rule table.
It should be noted that the acceleration rule table is obtained in advance through acceleration rule learning processing, and the method for obtaining the IPv6 header and the plurality of uplink acceleration entries in the acceleration rule table is the same as the method used in the embodiment of the present invention, and is not described here again.
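A minimal C sketch of this uplink learning sequence follows, again reusing the earlier structures and the ctrl_add_entry sketch; the extraction, software NAT and encapsulation helpers are hypothetical names for work that is actually done by the device's software forwarding path. For brevity the sketch reports the original and converted quintuples to the controller in one call at the end, whereas the text above reports the original quintuple first.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helpers standing in for the software forwarding path. */
extern five_tuple_t extract_five_tuple(const uint8_t *pkt, size_t len, accel_dir_t dir);
extern void sw_nat_translate(uint8_t *pkt, size_t *len);                   /* software-level NAT */
extern void ipv6_encapsulate(uint8_t *pkt, size_t *len, uint8_t hdr[40]);  /* adds the tunnel header
                                                                              and returns a copy of it */
extern int  ctrl_add_entry(const five_tuple_t *before, const five_tuple_t *after,
                           const uint8_t ipv6_hdr[40], accel_dir_t dir);

/* Learn an uplink acceleration rule from a first message that has no entry yet. */
void learn_uplink_rule(uint8_t *pkt, size_t *len)
{
    five_tuple_t before = extract_five_tuple(pkt, *len, DIR_UP); /* original quintuple      */

    sw_nat_translate(pkt, len);                                  /* NAT conversion          */

    uint8_t ipv6_hdr[40];
    ipv6_encapsulate(pkt, len, ipv6_hdr);                        /* IPv6 encapsulation      */

    five_tuple_t after = extract_five_tuple(pkt, *len, DIR_UP);  /* converted quintuple     */

    ctrl_add_entry(&before, &after, ipv6_hdr, DIR_UP);           /* controller stores entry */
    /* the encapsulated first message is then sent on to the Border Relay server */
}
```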
In another preferred embodiment, before the IPv6 decapsulating the second packet according to the acceleration rule table, the method further includes:
judging whether a downlink acceleration item corresponding to the second message exists in the acceleration rule table;
then, the performing IPv6 decapsulation on the second message according to the acceleration rule table specifically includes:
and when the downlink acceleration entry corresponding to the second message exists in the acceleration rule table, performing IPv6 decapsulation on the second message.
Specifically, with reference to the foregoing embodiment, when the network device receives the second packet in the downlink direction, it is first determined whether a downlink acceleration entry corresponding to the second packet exists in a preset acceleration rule table, and when it is determined that the downlink acceleration entry exists, the IPv6 decapsulation is performed on the second packet, and after the decapsulated second packet is obtained, the subsequent processing procedure is the same as that in the foregoing embodiment, and is not described here again.
In yet another preferred embodiment, the method further comprises:
and when the downlink acceleration entry corresponding to the second message does not exist in the acceleration rule table, performing corresponding downlink acceleration rule learning processing according to the second message, and forwarding the processed second message to the second destination device.
Specifically, with reference to the foregoing embodiment, when it is determined that there is no downlink acceleration entry corresponding to the second packet in the acceleration rule table, corresponding downlink acceleration rule learning processing is performed according to the second packet to obtain a downlink acceleration entry corresponding to the second packet, and after the downlink acceleration rule learning processing is performed, the correspondingly obtained processed second packet is the second packet after hardware NAT conversion, and the processed second packet is forwarded to the second destination device.
It can be understood that, since the preset acceleration rule table does not have the downlink acceleration entry corresponding to the second packet, the acceleration rule learning processing may be performed according to the second packet, and the downlink acceleration entry corresponding to the second packet is correspondingly obtained, so as to facilitate subsequent use of the downlink acceleration entry.
As an improvement of the above solution, the method performs corresponding downlink acceleration rule learning processing according to the second packet by the following steps:
acquiring quintuple information carried in the second message, and sending the quintuple information to the controller; the quintuple information comprises a source IP address, a source port, a destination IP address, a destination port and a direction;
carrying out IPv6 decapsulation on the second message to obtain a decapsulated second message;
performing NAT conversion on the decapsulated second message to obtain a converted second message;
and acquiring converted quintuple information carried in the converted second message, and sending the converted quintuple information to the controller, so that the controller stores the quintuple information and the converted quintuple information in the acceleration rule table to form a corresponding downlink acceleration entry.
Specifically, with reference to the foregoing embodiment, the embodiment of the present invention is a specific method for learning a downlink acceleration rule, and is applicable to data packets in all downlink directions, and the following description will take an example of performing corresponding learning processing on a downlink acceleration rule according to a second packet as an example: the network equipment firstly acquires quintuple information carried in the second message (the quintuple information comprises a source IP address, a source port, a destination IP address, a destination port and a direction of the second message, wherein the direction is a downlink direction), and sends the acquired quintuple information to the controller; then, IPv6 decapsulating is carried out on the second message, the decapsulated second message is correspondingly obtained, NAT conversion of a software layer is carried out on the decapsulated second message, and the converted second message is correspondingly obtained; and finally, acquiring converted quintuple information (the converted quintuple information comprises a source IP address, a source port, a converted destination IP address, a converted destination port and a direction) carried in the converted second message, and sending the converted quintuple information to the controller, so that the controller stores the received quintuple information and the converted quintuple information in a preset acceleration rule table to form a downlink acceleration entry corresponding to the second message, which is equivalent to adding a new downlink acceleration entry for the acceleration rule table by the controller.
It should be noted that the acceleration rule table is obtained in advance through acceleration rule learning processing, and the method for obtaining a plurality of downlink acceleration entries in the acceleration rule table is the same as the method used in the embodiment of the present invention, and is not described here again.
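The downlink learning just described mirrors the uplink case, with the decapsulation performed before the NAT conversion and no tunnel header stored. A hedged sketch along the same lines as the uplink one, with the same caveat that all helper names are assumptions:

```c
#include <stddef.h>
#include <stdint.h>

extern five_tuple_t extract_five_tuple(const uint8_t *pkt, size_t len, accel_dir_t dir);
extern void ipv6_decapsulate(uint8_t *pkt, size_t *len);   /* removes the IPv6 tunnel header */
extern void sw_nat_translate(uint8_t *pkt, size_t *len);   /* software-level NAT             */
extern int  ctrl_add_entry(const five_tuple_t *before, const five_tuple_t *after,
                           const uint8_t ipv6_hdr[40], accel_dir_t dir);

/* Learn a downlink acceleration rule from a second message that has no entry yet. */
void learn_downlink_rule(uint8_t *pkt, size_t *len)
{
    five_tuple_t before = extract_five_tuple(pkt, *len, DIR_DOWN); /* quintuple carried in the message */

    ipv6_decapsulate(pkt, len);                                    /* IPv6 decapsulation               */
    sw_nat_translate(pkt, len);                                    /* NAT conversion                   */

    five_tuple_t after = extract_five_tuple(pkt, *len, DIR_DOWN);  /* converted quintuple              */

    ctrl_add_entry(&before, &after, NULL, DIR_DOWN);               /* controller stores downlink entry */
    /* the converted second message is then forwarded to the second destination device */
}
```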
In another preferred embodiment, before sending the encapsulated first message to the Border Relay server, the method further includes:
judging whether the encapsulated first message is larger than the maximum load of a preset Ethernet frame or not;
then, the sending the encapsulated first packet to the Border Relay server specifically includes:
and when the encapsulated first message is not greater than the maximum load of a preset Ethernet frame, sending the encapsulated first message to the Border Relay server.
Specifically, with reference to the foregoing embodiment, after obtaining the IPv6 encapsulated first packet, because the IPv6 header is added to the IPv4 packet, and the packet length is correspondingly increased, before sending the IPv6 encapsulated first packet to the Border Relay server for corresponding processing, it is necessary to determine whether the packet length of the encapsulated first packet is greater than the preset maximum load of the ethernet frame, and when it is determined that the packet length of the encapsulated first packet is not greater than the maximum load of the ethernet frame, the encapsulated first packet is sent to the Border Relay server for subsequent processing.
It should be noted that, when the embodiment of the present invention is applied to a PPPoEv6 network, since the IPv6 header and the PPPoE header are added to the first packet at the same time, it is also necessary to determine whether the packet length exceeds the maximum load of the preset ethernet frame, and only if the packet length does not exceed the maximum load of the preset ethernet frame, the subsequent processing is performed.
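The length check described above reduces to comparing the encapsulated message against the maximum Ethernet frame load. A hedged sketch, using the 1500-byte figure from the example later in this description and reusing the earlier structures; the fragmentation/drop and entry-deletion helpers are assumed names:

```c
#include <stddef.h>
#include <stdint.h>

#define ETH_MAX_PAYLOAD 1500   /* maximum Ethernet frame load assumed in this sketch */

extern void send_to_border_relay(const uint8_t *pkt, size_t len);
extern void ipv6_fragment_or_drop(uint8_t *pkt, size_t len);
extern void ctrl_delete_uplink_entry(const five_tuple_t *t);

/* After IPv6 (and, on PPPoEv6, PPPoE) encapsulation, decide whether the message can be sent. */
void send_encapsulated_uplink(uint8_t *pkt, size_t len, const five_tuple_t *t)
{
    if (len <= ETH_MAX_PAYLOAD) {
        send_to_border_relay(pkt, len);   /* not greater than the maximum load: send normally */
    } else {
        ipv6_fragment_or_drop(pkt, len);  /* IPv6-layer fragmentation or packet loss          */
        ctrl_delete_uplink_entry(t);      /* stop accelerating this flow for now              */
    }
}
```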
In yet another preferred embodiment, the method further comprises:
and when the encapsulated first message is larger than the maximum load of a preset Ethernet frame, carrying out IPv6 layer fragmentation processing or packet loss processing on the encapsulated first message, and deleting the corresponding uplink acceleration entry of the first message in the acceleration rule table through the controller.
Specifically, with reference to the foregoing embodiment, when it is determined that the message length of the encapsulated first message is greater than the maximum load of the Ethernet frame, overrun processing is performed on the encapsulated first message, for example IPv6 layer fragmentation processing or direct packet loss processing, and the controller deletes the uplink acceleration entry corresponding to the first message in the acceleration rule table; the flow is then no longer accelerated until the acceleration condition is met again, that is, until a corresponding uplink acceleration entry is obtained again through the uplink acceleration rule learning processing.
For example, to avoid the fragmentation caused by the added IPv6 tunnel header, the TCP MSS of the IPv6-encapsulated data message can be set to no more than 1412 bytes, calculated as MSS = maximum Ethernet frame load (MTU) - IPv6 header length - IPv4 header length - TCP header length - PPPoE header length = 1500 - 40 - 20 - 20 - 8 = 1412, where the MTU is 1500 bytes, the IPv6 header is 40 bytes, the IPv4 header is 20 bytes, the TCP header is 20 bytes, and the PPPoE header is 8 bytes. Alternatively, the problem can be avoided by setting the MTU of the IPv6-encapsulated data message to no more than 1452 bytes, but this causes UDP data messages to be fragmented at the IPv4 layer, which creates compatibility problems with some applications.
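The same arithmetic as a tiny self-contained program (the header sizes are those listed above; the PPPoE term is dropped on non-PPPoE WANs):

```c
#include <stdio.h>

int main(void)
{
    const int mtu       = 1500;  /* maximum Ethernet frame load  */
    const int ipv6_hdr  = 40;    /* IPv6 tunnel header           */
    const int ipv4_hdr  = 20;
    const int tcp_hdr   = 20;
    const int pppoe_hdr = 8;     /* only present on PPPoEv6 WANs */

    int mss        = mtu - ipv6_hdr - ipv4_hdr - tcp_hdr - pppoe_hdr; /* 1412 */
    int tunnel_mtu = mtu - ipv6_hdr - pppoe_hdr;                      /* 1452 */

    printf("clamped TCP MSS : %d bytes\n", mss);
    printf("tunnel MTU limit: %d bytes\n", tunnel_mtu);
    return 0;
}
```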
Referring to fig. 2A to 2B, which are schematic diagrams of an uplink application scenario of the MAP-E link acceleration method provided by the present invention: in the figures, CPU denotes the processing interval handled by the CPU of the network device; Hardware Accelerator denotes the hardware acceleration module of the network device; CTRL denotes the controller of the hardware acceleration module; WIFI denotes the wireless network module of the network device; ETH denotes the Ethernet interface of the network device; MAP-E Accelerator RX denotes the acceleration processing interval in the receiving direction of uplink data messages; MAP-E Accelerator TX denotes the acceleration processing interval in the transmitting direction of uplink data messages; Br-LAN denotes the software bridge on the LAN side; NAT denotes the interval in which software-level NAT conversion is performed on uplink data messages; Tunnel denotes the interval in which IPv6 tunnel encapsulation is performed on uplink data messages; PPP denotes the interval in which PPPoE encapsulation is performed on uplink data messages; LAN Phone denotes a wireless client on the LAN side; LAN PC denotes a wired client on the LAN side; BR denotes the Border Relay server used for IPv6 point-to-point tunnel conversion; and WAN PC denotes any access point PC on the Internet. The working process of the embodiment of the present invention is described below with reference to figs. 2A and 2B (the working process in the downlink direction is the reverse of that in the uplink direction and follows a similar principle, so it is not repeated here):
(1) upstream acceleration rule learning process (as shown in fig. 2A):
an uplink message is sent from a wired client or a wireless client on the LAN side (process in fig. 2A), for example, from 192.168.0.4:100 to 10.0.0.23:80, and is submitted to MAP-E accumulator RX after being subjected to WIFI wireless driving or ETH driving, and the MAP-E accumulator RX will learn the initial quintuple information (source IP, source Port, destination IP, destination Port, direction) of the uplink message to obtain (SrcIP, SrcPort, DstIP, DstPort, Dir) (192.168.0.4, 100, 10.0.0.23, 80, up), and set it into CTRL of Hardware accumulator (process in fig. 2A), the uplink message is submitted to Br-LAN from MAP-E accumulator RX, enters NAT after being routed to NAT for conversion (process in fig. 2A), and the source IP address and NAT address are converted, assuming that the uplink message is changed to the quintuple information after NAT address conversion (366335), 101, 10.0.0.23, 80, up), entering Tunnel for IPv6 Tunnel encapsulation (process r in fig. 2A) by the uplink message after NAT conversion, assuming that IPv6 address obtained by PPP is 2404::10/64, after IPv6 Tunnel encapsulation, the uplink message of IPv4 carries Tunnel header of IPv6, the encapsulated uplink message is delivered to PPP (process fifth in fig. 2A), after PPPoE header is encapsulated, it is sent to ETH ethernet driver after routing, and then delivered to MAP-E Accelerator TX by ETH ethernet driver, and if PPPoE quintuple information (10.0.0.8, 101, 10.0.0.23, 80, up) after NAT conversion is learned from the uplink after encapsulation by MAP-E Accelerator module, the Session ID of PPPoE header is changed to be saved for the whole PPPoE 8 byte 4, for example, the Session ID is saved to hash CTRL header, and if the header does not support PPPoE variable setting up, it should be noted that the upstream packet after accelerated learning still needs to be sent to the CPU for lucky processing, and in addition, 40 bytes of the IPv6 tunnel header of the upstream packet (for example, stored in the static variable IPv6 HdrW) need to be stored (process sixty in fig. 2A), at this time, all information required for hardware acceleration is closed-loop; during subsequent processing, the uplink message is normally sent to the terminal BR of the tunnel (process of FIG. 2A): 1/64, the BR removes the tunnel header of IPv6, routes the uplink message of IPv4 and sends the uplink message to the final WAN PC (process of FIG. 2A): its IP is 10.0.0.23, assuming that the address of BR is 2404::::.
(2) Uplink packet forwarding process (as shown in fig. 2B):
an uplink message is sent from a wired client or a wireless client on a LAN side, after the router receives the uplink message through WIFI (left in the process in fig. 2B), the uplink message is sent to a MAP-E Accelerator RX, after determining that an acceleration entry exists in the uplink message (assuming that the uplink acceleration entry corresponding to the uplink message has been set in CTRL through an uplink acceleration rule learning process, a corresponding acceleration rule table is also stored in a memory for lookup, thereby determining whether the uplink message exists in the uplink acceleration entry), the uplink message is sent to a Hardware Accelerator through ETH ethernet driving for Hardware NAT conversion (process in fig. 2B), if the uplink message is received through a two-layer switch of the router (below the Hardware Accelerator, all ethernet packets need to be forwarded through the switch), the uplink message directly enters the Hardware NAT module for Hardware NAT conversion (first in the process in fig. 2B), the quintuple information before and after conversion is converted from (192.168.0.4, 100, 10.0.0.23, 80, up) to (10.0.0.8, 101, 10.0.0.23, 80, up), the converted upstream message is delivered to MAP-E accumulator RX (the upstream message is the message sent by Hardware accumulator to CPU and should be received by MAP-E accumulator RX) to continue processing (process in fig. 2B), the tunnel header to which IPv6 is added, this header is the IPv6HdrW saved in the learning phase (for MAP-E, there is one and only one IPv6HdrW, because the source IPv6 address and the destination IPv6 address are fixed unless network interruption occurs and the address is redialed to be allocated), if the Hardware accumulator module IPv4PPPoE acceleration occurs, the PPPoE header needs to be added continuously, this header is the ppdrpw saved in the learning phase, then the ethernet header is sent to ethernet header 854 PPPoE acceleration, and if the ethernet header is found to be added with the maximum ethernet load after ethernet packet processing, 6 is found to exceed the ethernet header is added with the ethernet payload, then, the IPv6 layer fragmentation processing is performed or the uplink packet is discarded, and the uplink acceleration rule corresponding to the uplink packet is cleared, and the subsequent packet is not accelerated again until it meets the acceleration condition again, if it is found that the packet length of the uplink packet after adding the PPPoE header and the IPv6 tunnel header does not exceed the maximum load of the ethernet frame at this time, the uplink packet is normally sent to the terminal BR of the tunnel (process iv in fig. 2B), the BR removes the tunnel header of IPv6, routes the uplink packet of IPv4, and sends the uplink packet to the final WAN PC (process iv in fig. 2B), where the IP of the uplink packet is 10.0.0.23.
The embodiment of the present invention further provides a MAP-E link acceleration apparatus, which can implement all the processes of the MAP-E link acceleration method described in any of the above embodiments, and the functions and technical effects of each module and unit in the apparatus are respectively the same as those of the MAP-E link acceleration method described in the above embodiment, and are not described herein again.
Referring to fig. 3, it is a block diagram of a preferred embodiment of a MAP-E link acceleration apparatus provided in the present invention, where the apparatus is disposed in a network device, and the network device further includes a hardware acceleration module and a controller for controlling the hardware acceleration module; an acceleration rule table is preset in the controller; the device comprises:
the uplink packet acceleration module 11 is configured to, when receiving a first packet in an uplink direction, perform hardware NAT conversion on the first packet according to the hardware acceleration module and the acceleration rule table, perform IPv6 encapsulation on the converted first packet according to the acceleration rule table, so as to send the encapsulated first packet to the Border Relay server, so that the Border Relay server decapsulates the encapsulated first packet, and forward the decapsulated first packet to a first destination device;
the downlink packet acceleration module 12 is configured to, when a second packet in a downlink direction is received, perform IPv6 decapsulation on the second packet according to the acceleration rule table, and perform hardware NAT translation on the decapsulated second packet according to the hardware acceleration module and the acceleration rule table, so as to forward the translated second packet to a second destination device;
wherein, the acceleration rule table at least comprises an IPv6 header, a plurality of upstream acceleration entries and a plurality of downstream acceleration entries; the controller is used for managing acceleration entries in the acceleration rule table; the second message is a data message encapsulated by the Border Relay server through IPv 6.
Preferably, the apparatus further comprises:
an upstream acceleration entry determining module, configured to determine whether an upstream acceleration entry corresponding to the first packet exists in the acceleration rule table;
then, the uplink packet acceleration module 11 specifically includes:
and the NAT conversion unit is used for performing hardware NAT conversion on the first message according to the uplink acceleration item corresponding to the first message in the acceleration rule table through the hardware acceleration module when the uplink acceleration item corresponding to the first message exists in the acceleration rule table.
Preferably, the apparatus further comprises:
and the uplink acceleration rule learning module is used for performing corresponding uplink acceleration rule learning processing according to the first message when no uplink acceleration item corresponding to the first message exists in the acceleration rule table, and sending the processed first message to the Border Relay server for corresponding processing.
Preferably, the upstream acceleration rule learning module is specifically configured to:
acquiring quintuple information carried in the first message, and sending the quintuple information to the controller; the quintuple information comprises a source IP address, a source port, a destination IP address, a destination port and a direction;
performing NAT conversion on the first message to obtain a converted first message;
carrying out IPv6 encapsulation on the converted first message to obtain an encapsulated first message;
acquiring the converted five-tuple information and the encapsulated IPv6 header carried in the encapsulated first message, and sending the converted five-tuple information and the IPv6 header to the controller, so that the controller stores the five-tuple information, the converted five-tuple information and the IPv6 header in the acceleration rule table to form a corresponding uplink acceleration entry.
Preferably, the apparatus further comprises:
a downlink acceleration entry judging module, configured to judge whether a downlink acceleration entry corresponding to the second packet exists in the acceleration rule table;
then, the downlink packet acceleration module 12 specifically includes:
an IPv6 decapsulating unit, configured to decapsulate, when a downlink acceleration entry corresponding to the second packet exists in the acceleration rule table, the second packet by IPv 6.
Preferably, the apparatus further comprises:
and the downlink acceleration rule learning module is configured to, when no downlink acceleration entry corresponding to the second packet exists in the acceleration rule table, perform corresponding downlink acceleration rule learning processing according to the second packet, and forward the processed second packet to the second destination device.
Preferably, the downlink acceleration rule learning module is specifically configured to:
acquiring quintuple information carried in the second message, and sending the quintuple information to the controller; the quintuple information comprises a source IP address, a source port, a destination IP address, a destination port and a direction;
carrying out IPv6 decapsulation on the second message to obtain a decapsulated second message;
performing NAT conversion on the decapsulated second message to obtain a converted second message;
and acquiring converted quintuple information carried in the converted second message, and sending the converted quintuple information to the controller, so that the controller stores the quintuple information and the converted quintuple information in the acceleration rule table to form a corresponding downlink acceleration entry.
Preferably, the apparatus further comprises:
the overrun judging module is used for judging whether the encapsulated first message is larger than the maximum load of a preset Ethernet frame;
then, the uplink packet acceleration module 11 specifically includes:
and the message sending unit is used for sending the first message after encapsulation to the Border Relay server when the first message after encapsulation is not larger than the maximum load of a preset Ethernet frame.
Preferably, the apparatus further comprises:
and the overrun processing module is used for performing IPv6 layer fragmentation processing or packet loss processing on the encapsulated first message when the encapsulated first message is greater than the maximum load of a preset Ethernet frame, and deleting the corresponding uplink acceleration entry of the first message in the acceleration rule table through the controller.
An embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program; wherein the computer program, when running, controls the device on which the computer readable storage medium is located to execute the MAP-E link acceleration method according to any of the above embodiments.
An embodiment of the present invention further provides a network device, which is shown in fig. 4 and is a block diagram of a preferred embodiment of the network device provided in the present invention, the network device includes a processor 10, a memory 20, and a computer program stored in the memory 20 and configured to be executed by the processor 10, and the processor 10, when executing the computer program, implements the MAP-E link acceleration method according to any of the above embodiments.
Preferably, the computer program can be divided into one or more modules/units (e.g., computer program 1, computer program 2, ...), which are stored in the memory 20 and executed by the processor 10 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution of the computer program in the network device.
The processor 10 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor 10 may be any conventional processor. The processor 10 is the control center of the network device and connects the various parts of the network device through various interfaces and lines.
The memory 20 mainly includes a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function, and the like, and the data storage area may store related data and the like. In addition, the memory 20 may be a high-speed random access memory, or a non-volatile memory such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card, or another non-volatile solid-state storage device.
It should be noted that the network device may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that the structural block diagram of fig. 4 is only an example of the network device and does not constitute a limitation of the network device, which may include more or fewer components than those shown, combine some components, or have different components.
To sum up, the MAP-E link acceleration method, apparatus, computer-readable storage medium, and network device provided in the embodiments of the present invention use a preset acceleration rule table to add MAP-E link fast-forwarding support to a network device that does not otherwise support MAP-E acceleration, and perform hardware-level NAT conversion on messages through the hardware acceleration module of the network device. The NAT conversion efficiency is thereby greatly improved, which increases the message forwarding speed, raises the overall throughput of the network device, and reduces its load.
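To make the uplink half of this summary concrete, here is a hedged software model of the fast path: the matching uplink acceleration entry supplies the tunnel IPv4 address, tunnel port and pre-built IPv6 header. The field names are assumptions of the sketch, and on real hardware the address rewrite is done by the hardware acceleration module rather than by Python code.

```python
# Sketch of the uplink fast path (assumed data layout): hardware NAT translation
# followed by IPv6 encapsulation using the header kept in the acceleration rule table.
def accelerate_uplink(packet: dict, uplink_entries: dict):
    key = (packet["src_ip"], packet["src_port"],
           packet["dst_ip"], packet["dst_port"], "up")
    entry = uplink_entries.get(key)
    if entry is None:
        return None                      # no uplink entry: trigger uplink rule learning instead
    # Source IPv4 address and source port become the tunnel IPv4 address and tunnel port.
    packet["src_ip"], packet["src_port"] = entry["tunnel_ip"], entry["tunnel_port"]
    # IPv6 encapsulation: prepend the pre-built IPv6 header stored with the entry.
    return {"ipv6_header": entry["ipv6_header"], "payload": packet}
```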
The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as falling within the protection scope of the present invention.

Claims (12)

1. A MAP-E link acceleration method is characterized in that the method is applied to a network device, and the network device comprises a hardware acceleration module and a controller for controlling the hardware acceleration module; an acceleration rule table is preset in the controller; the method comprises the following steps:
when a first message in an uplink direction is received, performing hardware NAT (network address translation) conversion on the first message according to the hardware acceleration module and the acceleration rule table, and performing IPv6 encapsulation on the converted first message according to the acceleration rule table, so as to send the encapsulated first message to a Border Relay server, so that the Border Relay server decapsulates the encapsulated first message and forwards the decapsulated first message to a first destination device;
when a second message in a downlink direction is received, performing IPv6 decapsulation on the second message according to the acceleration rule table, and performing hardware NAT (network address translation) conversion on the decapsulated second message according to the hardware acceleration module and the acceleration rule table so as to forward the converted second message to a second destination device;
wherein the acceleration rule table comprises at least an IPv6 header, a plurality of uplink acceleration entries and a plurality of downlink acceleration entries; the controller is configured to manage the acceleration entries in the acceleration rule table; and the second message is a data message that has been IPv6-encapsulated by the Border Relay server;
the performing hardware NAT translation on the first packet according to the hardware acceleration module and the acceleration rule table specifically includes:
and performing hardware NAT (network address translation) conversion on the first message according to the hardware acceleration module and the uplink acceleration entry obtained from the acceleration rule table, so as to correspondingly convert the source IPv4 address and the source port in the first message into a tunnel IPv4 address and a tunnel port.
2. The MAP-E link acceleration method of claim 1, wherein before performing hardware NAT conversion on the first message according to the hardware acceleration module and the acceleration rule table, the method further comprises:
judging whether an uplink acceleration entry corresponding to the first message exists in the acceleration rule table or not;
then, the performing hardware NAT translation on the first packet according to the hardware acceleration module and the acceleration rule table specifically includes:
and when the uplink acceleration entry corresponding to the first message exists in the acceleration rule table, performing hardware NAT (network address translation) conversion on the first message according to the uplink acceleration entry corresponding to the first message in the acceleration rule table through the hardware acceleration module.
3. The MAP-E link acceleration method of claim 2, wherein the method further comprises:
and when the uplink acceleration entry corresponding to the first message does not exist in the acceleration rule table, performing corresponding uplink acceleration rule learning processing according to the first message, and sending the processed first message to the Border Relay server for corresponding processing.
4. The MAP-E link acceleration method of claim 3, characterized in that the method performs the corresponding uplink acceleration rule learning processing according to the first message by:
acquiring quintuple information carried in the first message, and sending the quintuple information to the controller; the quintuple information comprises a source IP address, a source port, a destination IP address, a destination port and a direction;
performing NAT conversion on the first message to obtain a converted first message;
carrying out IPv6 encapsulation on the converted first message to obtain an encapsulated first message;
and acquiring converted quintuple information and an encapsulated IPv6 header carried in the encapsulated first message, and sending the converted quintuple information and the IPv6 header to the controller, so that the controller stores the quintuple information, the converted quintuple information and the IPv6 header in the acceleration rule table to form a corresponding uplink acceleration entry.
5. The MAP-E link acceleration method of claim 1, wherein before performing IPv6 decapsulation on the second message according to the acceleration rule table, the method further comprises:
judging whether a downlink acceleration entry corresponding to the second message exists in the acceleration rule table;
then, the decapsulating the second packet according to the acceleration rule table by IPv6 specifically includes:
and when the downlink acceleration entry corresponding to the second message exists in the acceleration rule table, performing IPv6 decapsulation on the second message.
6. The MAP-E link acceleration method of claim 5, wherein the method further comprises:
and when the downlink acceleration entry corresponding to the second message does not exist in the acceleration rule table, performing corresponding downlink acceleration rule learning processing according to the second message, and forwarding the processed second message to the second destination device.
7. The MAP-E link acceleration method according to claim 6, wherein the method performs the corresponding downlink acceleration rule learning processing according to the second message by:
acquiring quintuple information carried in the second message, and sending the quintuple information to the controller; the quintuple information comprises a source IP address, a source port, a destination IP address, a destination port and a direction;
carrying out IPv6 decapsulation on the second message to obtain a decapsulated second message;
performing NAT conversion on the decapsulated second message to obtain a converted second message;
and acquiring converted quintuple information carried in the converted second message, and sending the converted quintuple information to the controller, so that the controller stores the quintuple information and the converted quintuple information in the acceleration rule table to form a corresponding downlink acceleration entry.
8. The MAP-E link acceleration method of claim 1, wherein before sending the encapsulated first message to the Border Relay server, the method further comprises:
judging whether the encapsulated first message exceeds the maximum payload of a preset Ethernet frame;
then, the sending the encapsulated first packet to the Border Relay server specifically includes:
and when the encapsulated first message does not exceed the maximum payload of the preset Ethernet frame, sending the encapsulated first message to the Border Relay server.
9. The MAP-E link acceleration method of claim 8, wherein the method further comprises:
and when the encapsulated first message exceeds the maximum payload of the preset Ethernet frame, performing IPv6-layer fragmentation processing or packet loss processing on the encapsulated first message, and deleting, through the controller, the uplink acceleration entry corresponding to the first message from the acceleration rule table.
10. A MAP-E link acceleration device is characterized in that the device is arranged in a network device, and the network device further comprises a hardware acceleration module and a controller for controlling the hardware acceleration module; an acceleration rule table is preset in the controller; the device comprises:
the uplink message acceleration module is used for performing hardware NAT (network address translation) conversion on a first message according to the hardware acceleration module and the acceleration rule table when the first message in the uplink direction is received, performing IPv6 encapsulation on the converted first message according to the acceleration rule table, and sending the encapsulated first message to a Border Relay server, so that the Border Relay server decapsulates the encapsulated first message and forwards the decapsulated first message to a first destination device;
the downlink message acceleration module is used for performing IPv6 decapsulation on a second message according to the acceleration rule table when the second message in the downlink direction is received, and performing hardware NAT (network address translation) conversion on the decapsulated second message according to the hardware acceleration module and the acceleration rule table so as to forward the converted second message to a second destination device;
wherein the acceleration rule table comprises at least an IPv6 header, a plurality of uplink acceleration entries and a plurality of downlink acceleration entries; the controller is configured to manage the acceleration entries in the acceleration rule table; and the second message is a data message that has been IPv6-encapsulated by the Border Relay server;
the uplink message acceleration module performs hardware NAT conversion on the first message according to the hardware acceleration module and the acceleration rule table, and specifically includes:
and performing hardware NAT (network address translation) conversion on the first message according to the hardware acceleration module and the uplink acceleration entry obtained from the acceleration rule table, so as to correspondingly convert the source IPv4 address and the source port in the first message into a tunnel IPv4 address and a tunnel port.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored computer program; wherein the computer program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the MAP-E link acceleration method according to any one of claims 1 to 9.
12. A network device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the MAP-E link acceleration method of any of claims 1-9 when executing the computer program.
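As a closing illustration of the claimed structure, the sketch below gives one possible in-memory shape for the acceleration rule table recited in claims 1 and 10: uplink and downlink acceleration entries keyed by quintuple, with each uplink entry carrying the tunnel address, tunnel port and pre-built IPv6 header, and with the controller adding and deleting entries. Every name here is an assumption made for illustration, not a structure fixed by the claims.

```python
# Hypothetical in-memory layout for the acceleration rule table (all names assumed).
from dataclasses import dataclass, field
from typing import Dict, Tuple

Quintuple = Tuple[str, int, str, int, str]   # source IP, source port, destination IP, destination port, direction

@dataclass
class UplinkEntry:
    tunnel_ip: str        # tunnel IPv4 address substituted for the source IPv4 address
    tunnel_port: int      # tunnel port substituted for the source port
    ipv6_header: bytes    # pre-built IPv6 header used for encapsulation

@dataclass
class DownlinkEntry:
    lan_ip: str           # translated destination on the LAN side
    lan_port: int

@dataclass
class AccelerationRuleTable:
    uplink: Dict[Quintuple, UplinkEntry] = field(default_factory=dict)
    downlink: Dict[Quintuple, DownlinkEntry] = field(default_factory=dict)

    # The controller installs entries after rule learning and removes them,
    # for example when an encapsulated message exceeds the Ethernet maximum payload.
    def add_uplink(self, key: Quintuple, entry: UplinkEntry) -> None:
        self.uplink[key] = entry

    def delete_uplink(self, key: Quintuple) -> None:
        self.uplink.pop(key, None)
```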
CN202010007941.3A 2020-01-03 2020-01-03 MAP-E link acceleration method, device, storage medium and network equipment Active CN111510513B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010007941.3A CN111510513B (en) 2020-01-03 2020-01-03 MAP-E link acceleration method, device, storage medium and network equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010007941.3A CN111510513B (en) 2020-01-03 2020-01-03 MAP-E link acceleration method, device, storage medium and network equipment

Publications (2)

Publication Number Publication Date
CN111510513A CN111510513A (en) 2020-08-07
CN111510513B true CN111510513B (en) 2022-08-30

Family

ID=71875678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010007941.3A Active CN111510513B (en) 2020-01-03 2020-01-03 MAP-E link acceleration method, device, storage medium and network equipment

Country Status (1)

Country Link
CN (1) CN111510513B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113438108B (en) * 2021-06-22 2022-11-29 京信网络系统股份有限公司 Communication acceleration method, device, base station and computer readable storage medium
CN114422365B (en) * 2022-01-21 2024-03-19 成都飞鱼星科技股份有限公司 Internet surfing behavior management method and system based on hardware flow acceleration
CN114978806A (en) * 2022-05-05 2022-08-30 上海联虹技术有限公司 Data transmission method based on hardware acceleration, device and processor thereof
CN116095197B (en) * 2022-07-04 2023-12-12 荣耀终端有限公司 Data transmission method and related device
CN116800672B (en) * 2023-08-24 2024-01-12 北京城建智控科技股份有限公司 Method, device, electronic equipment and storage medium for accelerating message forwarding

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1809033A (en) * 2006-02-16 2006-07-26 四川南山之桥微电子有限公司 NAT hardware implementation method
CN101640635A (en) * 2009-07-31 2010-02-03 北京师范大学 Method for avoiding message recombination in 6over4 tunnel and system therefor
CN103763194A (en) * 2013-12-31 2014-04-30 杭州华三通信技术有限公司 Message forwarding method and device
WO2015074324A1 (en) * 2013-11-22 2015-05-28 上海斐讯数据通信技术有限公司 Data packet express forwarding method and apparatus
WO2017156908A1 (en) * 2016-03-14 2017-09-21 中兴通讯股份有限公司 Method and device for forwarding packet

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI220344B (en) * 2002-10-23 2004-08-11 Winbond Electronics Corp Manufacture and method for accelerating network address translation
CN102821032B (en) * 2011-06-10 2016-12-28 中兴通讯股份有限公司 A kind of method of fast-forwarding packet and three-layer equipment
CN103516692A (en) * 2012-06-28 2014-01-15 中兴通讯股份有限公司 Method and system for achieving accelerating processing of DS-Lite data message
JP6098192B2 (en) * 2013-01-31 2017-03-22 富士通株式会社 Address generator
CN103856581B (en) * 2014-03-26 2017-03-01 清华大学 A kind of translation encapsulation adaptive approach of user side equipment
CN105681194A (en) * 2016-03-14 2016-06-15 上海市共进通信技术有限公司 Method for realizing fast forwarding of two-layer data packet of gateway equipment


Also Published As

Publication number Publication date
CN111510513A (en) 2020-08-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant