WO2022218341A1 - Data forwarding method and related device - Google Patents

Data forwarding method and related device

Info

Publication number
WO2022218341A1
Authority
WO
WIPO (PCT)
Prior art keywords
acceleration node
acceleration
address
node
terminal
Application number
PCT/CN2022/086603
Other languages
English (en)
Chinese (zh)
Inventor
顾炯炯
苗勇
Original Assignee
华为云计算技术有限公司
Application filed by 华为云计算技术有限公司 (Huawei Cloud Computing Technologies Co., Ltd.)
Publication of WO2022218341A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/302 Route determination based on requested QoS
    • H04L 45/70 Routing based on monitoring results
    • H04L 45/74 Address processing for routing
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 Support for services or applications
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network

Definitions

  • the present application relates to the technical field of computer networks, and in particular, to a data forwarding method and related devices.
  • GA: Global Accelerator.
  • In the GA scheme, a cloud vendor builds points of presence (POPs) around the world, and each POP is interconnected with the cloud vendor's private line network.
  • A terminal connects to the nearest POP; for example, the terminal of an Asia-Pacific user connects to POP1.
  • POP1 steers the data flow from the terminal into the private line network, which is connected to the cloud region, so that the terminal can quickly forward its data to the cloud region over the private line network deployed by the cloud vendor.
  • However, the GA service depends entirely on the construction and distribution of the POP nodes and the physical private line network deployed by the cloud vendor, so the GA service capability is limited.
  • Embodiments of the present application provide a data forwarding method and a related device, which are used to improve the coverage of a network acceleration service.
  • an embodiment of the present application provides a data forwarding method, which is applied to a first acceleration node in a communication system.
  • The communication system includes a central controller and multiple acceleration nodes that belong to an overlay network.
  • The acceleration nodes include a first acceleration node and a second acceleration node. The deployment environment of the central controller belongs to a first cloud service provider, while the deployment environments of the acceleration nodes belong to a second cloud service provider, an application service provider, or a telecom operator. During operation, the acceleration nodes are controlled by the central controller, that is, by the first cloud service provider. During data forwarding, the first acceleration node receives a data request from a first terminal, where the data request is used to access a destination end; the first acceleration node obtains a target path, which comes from a routing table entry generated by the central controller; and the first acceleration node sends the data request to the next-hop acceleration node along the target path until the data request reaches the second acceleration node, which is configured to forward it to the destination end.
  • In this way, multiple forwarding nodes can be flexibly deployed in environments provided by a second cloud service provider, an application service provider, or a telecom operator, so acceleration nodes can be hosted all over the world; any acceleration node can serve as an access node through which a terminal enters the network, and each acceleration node can serve as a transit node on the target path.
  • After the first acceleration node obtains the data request from the first terminal, it sends the request into the overlay network along the target path until the request reaches the destination acceleration node of the path (the second acceleration node), which then delivers it to the destination end. In this way, users around the world can truly enjoy the network acceleration service.
  • The routing table entry includes a source routing table and a location routing table, and the method further includes:
  • The first acceleration node receives the source routing table and the location routing table sent by the central controller. The source routing table includes a path from a source acceleration node to a destination acceleration node, and the location routing table includes the correspondence between a first IP address and the second acceleration node, where the first IP address is the IP address of the destination end for which the user applied for the network acceleration service.
  • Obtaining the target path by the first acceleration node may specifically include: when the destination address of the data request is the first IP address, the first acceleration node queries the location routing table to determine the second acceleration node corresponding to the first IP address, the second acceleration node being the destination acceleration node; when the first acceleration node is the source acceleration node, the first acceleration node queries the source routing table according to the second acceleration node and determines the target path from the first acceleration node to the second acceleration node.
  • That is, the first acceleration node can determine the second acceleration node according to the location routing table, and can then determine, according to the source routing table, the target path from the first acceleration node to the second acceleration node.
  • The first acceleration node forwards the data request to the next-hop acceleration node along the target path until the request reaches the second acceleration node.
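The two-table lookup described above can be sketched as follows. The table layouts and names (`location_table`, `source_routes`, the node identifiers) are illustrative assumptions, since the text does not fix a concrete data structure.

```python
# Hypothetical sketch of the source-acceleration-node lookup: the location
# routing table maps an accelerated destination IP to its destination
# acceleration node, and the source routing table maps (source node,
# destination node) to an explicit hop-by-hop target path.
location_table = {"203.0.113.10": "node-B"}          # first IP address -> second acceleration node
source_routes = {("node-A", "node-B"): ["node-A", "node-C", "node-B"]}

def get_target_path(self_node, dst_ip):
    """Return the target path for a data request, or None if not accelerated."""
    dst_node = location_table.get(dst_ip)            # query the location routing table
    if dst_node is None:
        return None                                  # destination not under acceleration
    return source_routes.get((self_node, dst_node))  # query the source routing table

path = get_target_path("node-A", "203.0.113.10")
next_hop = path[1]  # forward the data request to the next-hop acceleration node
```

A request whose destination is not in the location routing table simply bypasses acceleration, which matches the condition "when the destination address of the data request is the first IP address" above.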
  • Before the first acceleration node receives the source routing table sent by the central controller, the method further includes: the first acceleration node measures the link state between itself and neighboring acceleration nodes to obtain link state information, and sends the link state information to the central controller, which uses it to generate the source routing table.
  • Because the paths in the source routing table are computed from the measured link states, the network acceleration service achieves a higher quality of service.
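One way the central controller could turn the reported link states into a source routing table is a shortest-path computation over the measured link costs. The sketch below assumes a latency metric and Dijkstra's algorithm; neither is mandated by the text, and the graph values are invented for illustration.

```python
import heapq

# Link state information reported by acceleration nodes: measured cost
# (e.g., latency in ms) of each link between neighboring acceleration nodes.
links = {
    "node-A": {"node-B": 80, "node-C": 20},
    "node-B": {"node-A": 80, "node-C": 30},
    "node-C": {"node-A": 20, "node-B": 30},
}

def shortest_path(src, dst):
    """Dijkstra over the reported link states; returns the hop list src..dst."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in links[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return None

# The controller fills the source routing table with one path per (src, dst) pair.
source_routing_table = {
    (s, d): shortest_path(s, d)
    for s in links for d in links if s != d
}
```

With these sample costs the direct A-B link (80 ms) loses to the two-hop relay through node-C (50 ms), which is exactly the kind of overlay detour the source routing table exists to express.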
  • An SDK plug-in is configured in the first terminal, and the address of the central controller is preset in the SDK plug-in. Receiving the data request from the first terminal may include: the first acceleration node receives SDK-encapsulated data from the first terminal through an SDK tunnel.
  • The SDK-encapsulated data is the data obtained after the data request is encapsulated: the destination address in the header of the SDK-encapsulated data is the IP address of the first acceleration node, and the source address in the header is the IP address of the first terminal.
  • Because an SDK plug-in is configured in the terminal, the terminal can access a nearby acceleration node, and the overlay network performs accelerated forwarding for the data request of the first terminal, so the application scenarios are broad.
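The SDK-side encapsulation can be pictured as wrapping the original request in an outer header addressed to the access acceleration node. The field layout below is purely illustrative; the text does not specify a wire format.

```python
def sdk_encapsulate(terminal_ip, accel_node_ip, data_request):
    """Wrap a data request for transport over the SDK tunnel: the outer
    header is addressed terminal -> first acceleration node, while the
    original request (carrying the real destination) rides as the payload."""
    return {
        "src": terminal_ip,       # IP address of the first terminal
        "dst": accel_node_ip,     # IP address of the first acceleration node
        "payload": data_request,  # original request; real destination inside
    }

pkt = sdk_encapsulate("192.0.2.7", "198.51.100.5",
                      {"dst": "203.0.113.10", "body": b"GET /"})
```

The key property is that the real destination (the first IP address) is preserved inside the payload, so the source acceleration node can run the location-routing-table lookup on it after decapsulation.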
  • The deployment environment of the first acceleration node is a first network device. The first network device is used to receive an access control list (ACL) policy instruction, and the ACL policy instruction is used to trigger the first network device to direct data whose destination address is the first IP address to the first acceleration node.
  • Receiving the data request from the first terminal may include: the first acceleration node receives the data request from the first terminal that is directed to it by the first network device according to the ACL policy instruction.
  • This embodiment is applicable to a scenario where a terminal accesses the network through a first network device (e.g., MEC or OLT), such as a home-broadband access scenario.
  • The first network device not only serves as the network access device of the first terminal; the first acceleration node deployed in the first network device also serves as a source acceleration node in the overlay network, so the overlay network offers abundant access points for terminals.
  • The deployment environment of the first acceleration node is a device in a local area network, and the first terminal is a terminal in the local area network. Receiving the data request from the first terminal may include: the first acceleration node receives the data request from the first terminal through the local area network.
  • The acceleration node is embedded in a local area network (such as an enterprise intranet), and the second network device provides a private network IP address for the first acceleration node, so that terminal devices in the local area network can access the overlay network through the acceleration node.
  • The overlay network provides network acceleration services for terminal devices in the local area network, which avoids the expense of public network IP resources allocated by operators.
  • the destination end is a cloud area, or the destination end is a second terminal or a server.
  • The overlay network not only supports a scenario in which a terminal accesses a cloud region, but also supports terminal-to-terminal access scenarios, and is thus universally applicable.
  • the deployment environment includes a cloud area, POP, edge cloud, OLT, or MEC.
  • the acceleration node can be flexibly deployed in various deployment environments, so that the overlay network can truly cover the global area.
  • an embodiment of the present application provides a data forwarding method, which is applied to a central controller in a communication system.
  • the communication system includes a central controller and multiple acceleration nodes deployed in various deployment environments.
  • The multiple acceleration nodes include a first acceleration node and a second acceleration node, and the method includes: the central controller obtains link state information sent by the acceleration nodes; the central controller generates a source routing table according to the link state information, where the source routing table includes a path from a source acceleration node to a destination acceleration node.
  • The source routing table is used by the first acceleration node to obtain the target path according to the destination address of a data request of a terminal; the target path is used to guide the data request to be forwarded to the second acceleration node, and the second acceleration node is used to forward the data to the destination end.
  • The central controller generates a source routing table, which indicates paths among the multiple acceleration nodes, and a location routing table, which indicates the destination acceleration node. The source acceleration node can thus query the destination acceleration node in the location routing table and then query the target path in the source routing table, realizing accelerated forwarding of data requests by the acceleration nodes in the overlay network.
  • the generation of the location routing table by the central controller may include: the central controller determines the second acceleration node according to the first IP address of the destination terminal; the central controller establishes a relationship between the first IP address and the second acceleration node Corresponding relationship; the central controller generates a location routing table according to the corresponding relationship.
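Controller-side generation of the location routing table reduces to recording, for each accelerated destination IP, its chosen destination acceleration node. A minimal sketch, with all names and the selection policy assumed for illustration:

```python
def build_location_table(accelerated_ips, choose_node):
    """For each destination IP for which a user applied for acceleration,
    record the correspondence first IP address -> second (destination)
    acceleration node."""
    table = {}
    for ip in accelerated_ips:
        table[ip] = choose_node(ip)  # e.g., a node in the cloud region, or the nearest node
    return table

# Illustrative policy: every accelerated destination maps to an in-cloud node.
loc_table = build_location_table(["203.0.113.10", "203.0.113.11"],
                                 lambda ip: "node-cloud")
```

The `choose_node` callback stands in for the two selection strategies the text describes next: picking a node deployed in the cloud region, or picking the node physically nearest the destination.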
  • Determining the second acceleration node according to the first IP address of the destination end may include: the central controller determines, according to the first IP address, the second acceleration node deployed in the cloud region.
  • Because acceleration nodes can be flexibly deployed in the cloud region, the central controller can directly select a second acceleration node deployed there, so that the second acceleration node forwards the data request to the destination within the cloud region, reducing the transmission distance from the acceleration node to the destination.
  • Alternatively, determining the second acceleration node according to the first IP address of the destination end may include: the central controller queries an IP address library according to the first IP address to determine the physical location of the destination end, and then determines the second acceleration node closest to that physical location, thereby minimizing the transmission distance from the second acceleration node to the destination.
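Nearest-node selection from an IP geolocation lookup might look like the following sketch; the haversine distance metric and the sample coordinates are assumptions for illustration, not part of the described method.

```python
import math

# Hypothetical geolocation results: coordinates of candidate acceleration
# nodes and of the destination end (looked up in an IP address library).
nodes = {"node-SG": (1.35, 103.82), "node-FR": (48.86, 2.35)}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_node(dest_location):
    """Pick the acceleration node closest to the destination's physical location."""
    return min(nodes, key=lambda n: haversine_km(nodes[n], dest_location))

second_node = nearest_node((22.3, 114.2))  # e.g., a destination near Hong Kong
```

The same routine would serve the access side as well, where the controller picks the acceleration node nearest the first terminal.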
  • An SDK plug-in is configured in the first terminal, and the address information of the central controller is preset in the SDK plug-in. The method further includes: the central controller receives a request sent by the first terminal and, according to the request, feeds back the IP address of the first acceleration node to the first terminal; the first terminal then uses the SDK tunnel to send the data request to the first acceleration node.
  • Specifically, the first terminal accesses the central controller through the SDK plug-in: the request carries the IP address of the first terminal; the central controller queries the IP address library according to this address to determine the physical location of the first terminal, determines the acceleration node closest to that location (that is, the first acceleration node), and feeds back the IP address of the first acceleration node to the first terminal.
  • The overlay network performs accelerated forwarding for the data request of the first terminal, so the application scenarios are broad.
  • The method further includes: the central controller sends traffic diversion information to a network management system, where the traffic diversion information includes IP information of the destination end and is used to trigger the network management system to send the ACL policy instruction to the first network device. The first acceleration node is an acceleration node deployed in the first network device, and the ACL policy instruction is used to trigger the first network device to direct the data request from the first terminal to the first acceleration node.
  • In this way, the central controller and the first network device cooperate to guide the data request of the first terminal to the first acceleration node. This is applicable to scenarios where the terminal accesses the network through the first network device (e.g., MEC or OLT), such as a home-broadband access scenario.
  • The first network device not only serves as the network access device of the first terminal; the first acceleration node deployed in the first network device also serves as a source acceleration node in the overlay network, so the overlay network offers abundant access points for terminals.
  • the method further includes: the central controller obtains a mode parameter, where the mode parameter includes a first mode and a second mode, wherein the first mode is used to indicate that the destination of the network acceleration service is the cloud area, and the second mode is used to indicate that the destination of the network acceleration service is the second terminal or server.
  • The overlay network not only supports a scenario in which a terminal accesses a cloud region, but also supports terminal-to-terminal access scenarios, and is thus universally applicable.
  • an embodiment of the present application provides an acceleration node, which is included in a communication system.
  • The communication system includes a central controller and multiple acceleration nodes, and the multiple acceleration nodes include a first acceleration node and a second acceleration node, wherein the deployment environment of the central controller belongs to the first cloud service provider, and the deployment environments of the multiple acceleration nodes belong to the second cloud service provider, an application service provider, or a telecom operator.
  • the first acceleration node includes:
  • a forwarding module configured to receive a data request from the first terminal, and the data request is used to access the destination terminal;
  • the control module is used to obtain the target path, and the target path comes from the routing table entry generated by the central controller;
  • the forwarding module is configured to send a data request to the next-hop acceleration node according to the target path until the data request is forwarded to the second acceleration node, and the second acceleration node is configured to forward the data request to the destination.
  • The routing table entry includes a source routing table and a location routing table.
  • The control module is also used to receive the source routing table and the location routing table sent by the central controller; the source routing table includes a path from the source acceleration node to the destination acceleration node, and the location routing table includes the correspondence between the first IP address and the second acceleration node, where the first IP address is the IP address of the destination end for which the user applied for the network acceleration service.
  • The first acceleration node queries the location routing table to determine the second acceleration node corresponding to the first IP address, the second acceleration node being the destination acceleration node.
  • When the first acceleration node is the source acceleration node, it queries the source routing table according to the second acceleration node and determines the target path from the first acceleration node to the second acceleration node.
  • the forwarding module is further configured to measure the link state between the first acceleration node and the neighbor acceleration node to obtain link state information
  • the control module is further configured to send the link state information obtained by the forwarding module to the central controller, where the link state information is used by the central controller to generate a source routing table.
  • an SDK plug-in is configured in the first terminal, and the address of the central controller is preset in the SDK plug-in;
  • the forwarding module is further configured to receive the SDK encapsulation data from the first terminal through the SDK tunnel, the SDK encapsulation data is the data after encapsulating the data request, and the destination address in the header of the SDK encapsulation data is the IP address of the first acceleration node, The source address in the header is the IP address of the first terminal.
  • The deployment environment of the first acceleration node is a first network device. The first network device is used to receive an access control list (ACL) policy instruction, and the ACL policy instruction is used to trigger the first network device to direct data whose destination address is the first IP address to the first acceleration node.
  • The forwarding module is further configured to receive the data request from the first terminal that is directed to it by the first network device according to the ACL policy instruction.
  • The deployment environment of the first acceleration node is a device in a local area network, and the first terminal is a terminal in the local area network; the forwarding module is further configured to receive the data request from the first terminal through the local area network.
  • an embodiment of the present application provides a central controller, including:
  • the transceiver module is used to obtain the link status information sent by the acceleration node;
  • a processing module configured to generate a source routing table according to the link state information acquired by the transceiver module, where the source routing table includes a path from the source acceleration node to the destination acceleration node;
  • The transceiver module is used to obtain the first IP address of the destination end for which the user applied for the network acceleration service.
  • The processing module is also used to generate a location routing table, where the location routing table includes the correspondence between the first IP address and the second acceleration node.
  • The transceiver module is further configured to send the location routing table and the source routing table corresponding to the first acceleration node to the first acceleration node. The location routing table is used to guide the first acceleration node to determine the second acceleration node according to the first IP address, where the first IP address is the destination address of the data request from the first terminal.
  • The source routing table is used by the first acceleration node to obtain the target path, which guides the data request to be forwarded to the second acceleration node; the second acceleration node is used to forward the data to the destination end.
  • The processing module is further specifically configured to: determine the second acceleration node according to the first IP address of the destination end; establish the correspondence between the first IP address and the second acceleration node; and generate the location routing table according to the correspondence.
  • the processing module is further configured to determine the second acceleration node deployed in the cloud area according to the first IP address.
  • The processing module is further configured to query the IP address library according to the first IP address to determine the physical location of the destination end, and to determine the second acceleration node closest to that physical location.
  • An SDK plug-in is configured in the first terminal, and the address information of the central controller is preset in the SDK plug-in. The transceiver module is further configured to receive a request sent by the first terminal and to feed back the IP address of the first acceleration node to the first terminal; the IP address of the first acceleration node is used by the first terminal to send the data request to the first acceleration node through the SDK tunnel.
  • The transceiver module is further configured to send traffic diversion information to the network management system, where the traffic diversion information includes IP information of the destination end and is used to trigger the network management system to send an ACL policy instruction to the first network device.
  • The first acceleration node is an acceleration node deployed in the first network device, and the ACL policy instruction is used to trigger the first network device to direct the data request from the first terminal to the first acceleration node.
  • The transceiver module is further configured to obtain a mode parameter, where the mode parameter includes a first mode and a second mode: the first mode indicates that the destination end of the network acceleration service is a cloud region, and the second mode indicates that the destination end of the network acceleration service is a second terminal or a server.
  • An embodiment of the present application provides a communication system, including a plurality of acceleration nodes according to the third aspect and a central controller according to the fourth aspect, wherein the deployment environment of the central controller belongs to the first cloud service provider, and the deployment environments of the multiple acceleration nodes belong to the second cloud service provider, an application service provider, or a telecom operator.
  • An embodiment of the present application provides a central controller, including a processor coupled to at least one memory. The processor is configured to read a computer program stored in the at least one memory, so that the central controller executes the method described in any one of the above second aspects.
  • An embodiment of the present application provides a computer program product. The computer program product includes computer program code which, when executed by a computer, causes the computer to implement the method described in any one of the above first aspects, or causes the computer to implement the method described in any one of the above second aspects.
  • An embodiment of the present application provides a computer-readable storage medium for storing a computer program or instructions. When the computer program or instructions are executed, the computer is caused to execute the method described in any one of the above first aspects, or to implement the method described in any one of the above second aspects.
  • FIGS. 1A and 1B are schematic diagrams of scenarios of a network acceleration system in a traditional method
  • FIG. 2 is a schematic diagram of a scenario of a communication system in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of overlay and underlay in an embodiment of the application.
  • FIG. 4 is a schematic structural diagram of a communication system in an embodiment of the application.
  • FIG. 5 is a schematic flowchart of steps of an embodiment of a data forwarding method in an embodiment of the present application
  • FIG. 6 is a schematic diagram of a scenario in which a first terminal accesses a first acceleration node in an embodiment of the present application
  • FIG. 7 is a schematic diagram of a scenario in which a central controller and a first network device cooperate to guide a data request to a first acceleration node in an embodiment of the present application;
  • FIG. 8 is a schematic diagram of two application modes for a business application to apply for a network acceleration service in an embodiment of the present application
  • FIG. 9 is a schematic diagram of a scenario of an application interface for a network acceleration service in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a scenario in which a terminal accesses a cloud region in an embodiment of the application;
  • FIG. 11 is a schematic diagram of a data format for overlay encapsulation based on UDP in an embodiment of the application;
  • FIG. 12 is a schematic diagram of a scenario of data forwarding between a first terminal and a cloud region in an embodiment of the application
  • FIG. 13 is a schematic diagram of a scenario of data forwarding between a first terminal and a second terminal in an embodiment of the present application
  • FIG. 14 is a schematic diagram of an overlay tunnel encrypting and forwarding data in an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of an embodiment of an acceleration node in an embodiment of the present application.
  • FIG. 16 is a schematic diagram of the architecture of a virtual machine in an embodiment of the present application;
  • FIG. 17 is a schematic structural diagram of an embodiment of a central controller in an embodiment of the present application;
  • FIG. 18 is a schematic structural diagram of another embodiment of the central controller in the embodiment of the present application.
  • the network acceleration system in the conventional method includes a DNS server, a controller, a plurality of POP points, and an IP dedicated line network connected to the POP points.
  • Each POP point is configured with at least one anycast IP (AIP) address.
  • When a business application applies for the acceleration service, the controller generates a mapping relationship between an AIP and an elastic IP address (EIP) (a public network IP address).
  • EIP: elastic IP address.
  • For example, business application A applies for a network acceleration service.
  • the destination end of the network acceleration service is a cloud region (region), and the EIP of the cloud region is EIP1.
  • the controller allocates an access AIP (eg, AIP1) to the service application.
  • The controller maintains the identity of service application A and the mapping relationship between EIP1 and AIP1 (as shown in Table 1 below), and synchronously delivers the mapping relationship to each POP.
  • the controller sends the mapping relationship between EIP1 and AIP1 to the DNS server, and the DNS server is used to synchronously maintain the mapping relationship between the domain name, AIP and EIP.
  • the mapping relationship between each AIP and EIP is shown in Table 1 below.
• The GA scheme in the traditional method includes two stages. The first stage is the stage in which the terminal accesses a POP point.
• The second stage is the stage in which the POP point accesses the cloud region through the IP private-line network.
• Stage 1: First, the terminal sends the domain name of the resource to be accessed to the domain name system (DNS) server. Then, the DNS server feeds back to the terminal the AIP (e.g., AIP2) that has a mapping relationship with the domain name. For example, the number of POP points configured with AIP2 is 3. Afterwards, the terminal accesses the "nearest-route" POP point (such as POP2) through the underlay. The terminal sends a data packet to POP2; the source address in the packet header is the IP address of the terminal (such as IPA), and the destination address is AIP2.
• POP2 modifies the destination address in the original data packet from the terminal to EIP2 according to the mapping relationship shown in Table 1 above (AIP2 and EIP2 have a mapping relationship) to obtain the target data packet.
  • the source address of the target data packet is IPA
  • the destination address is EIP2.
• POP2 sends the target data packet to the IP private-line network. It should be noted that the real destination address (EIP2) is absent from the original data packet, and POP2 restores EIP2 according to the mapping relationship between AIP2 and EIP2.
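The Table 1 lookup and address rewrite at the POP point can be illustrated with a minimal sketch; the table contents and dictionary field names below are illustrative stand-ins for packet headers, not part of the embodiment:

```python
# Hypothetical sketch of the POP-side address rewrite described above.
AIP_TO_EIP = {"AIP1": "EIP1", "AIP2": "EIP2"}  # Table 1: AIP -> EIP mapping


def rewrite_at_pop(packet: dict) -> dict:
    """Restore the real destination (EIP) from the anycast address (AIP)."""
    eip = AIP_TO_EIP[packet["dst"]]   # AIP2 has a mapping relationship with EIP2
    return {**packet, "dst": eip}     # source address (IPA) is preserved


original = {"src": "IPA", "dst": "AIP2", "payload": b"..."}
target = rewrite_at_pop(original)
# target carries src=IPA and the restored destination dst=EIP2
```

The key point the sketch captures is that the terminal only ever addresses the anycast AIP, so the POP must hold the mapping to recover the real destination.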
• In the second stage, the POP point is interconnected with the IP private-line network.
  • the POP point introduces the data flow from the terminal to the IP private line network, and the IP private line network forwards the data flow to the cloud region, so that the terminal can accelerate the access to the cloud region.
  • the GA network acceleration service method in the traditional method has at least the following problems.
• The GA acceleration service depends on investment and construction by cloud vendors. At present, there are only dozens of POP points in the world. Cloud vendors have not invested in building POP points in some regions (or certain countries and regions), so global coverage is insufficient; terminals in those regions cannot access the IP private-line network and cannot enjoy the network acceleration services provided by cloud vendors.
• In addition, each EIP of a service application consumes one AIP address mapping, which consumes a large number of AIP addresses.
• Moreover, anycast IP addresses are advertised to the routing network by using routing protocols such as the Border Gateway Protocol (BGP). An AIP needs to be published to the Internet at multiple POP points (BGP multi-source announcement), and distributing the same AIP across operators is difficult: different regions (such as China and India) are served by different operators, and it is hard in engineering practice for different operators to announce the same AIP.
• To this end, an embodiment of the present application provides a data forwarding method, which builds an overlay network on top of an underlay and implements data forwarding through the overlay network.
  • the overlay network includes a central controller and a large number of acceleration nodes deployed in various deployment environments. For example, a large number of forwarding nodes are flexibly deployed on edge clouds, POP points, cloud regions, OLT and MEC devices around the world.
  • the overlay network can truly cover the global area.
• The overlay network in this application is used to implement the forwarding of service application data, and the overlay network is also called an application delivery network (ADN).
  • FIG. 2 is a schematic diagram of a scenario of a communication system.
  • the communication system includes a central controller 201 and a plurality of acceleration nodes 202 .
  • the central controller 201 is used to manage and control all the acceleration nodes 202. Taking the first acceleration node among the multiple acceleration nodes 202 as an example, the forwarding of the data from the terminal by the first acceleration node will be described.
  • the first acceleration node receives the routing table entry sent by the central controller 201, and the routing table entry serves as the basis for the first acceleration node to forward data.
• The first acceleration node receives a data request from the first terminal, where the data request is used to access the destination end. First, the first acceleration node obtains a target path, and the target path comes from the routing table entries generated by the central controller 201. The first acceleration node then sends the data request to the next-hop acceleration node according to the target path, until the data request is forwarded to the second acceleration node, and the second acceleration node is used to forward the data request to the destination end.
• A large number of acceleration nodes are flexibly deployed in the overlay network, and any acceleration node can serve as a terminal's access acceleration node, so that terminals around the world can access a nearby acceleration node.
• After the first acceleration node obtains the data request of the first terminal, the first acceleration node sends the data request to the overlay network according to the target path, until the data request is transmitted to the destination acceleration node (the second acceleration node) of the target path; the second acceleration node then transmits the data request to the destination end, so that users of business applications worldwide can truly enjoy the network acceleration service.
• Any acceleration node can serve as an access node through which a terminal accesses the overlay network, and each acceleration node can serve as a transmission node in the target path.
• The services provided by each acceleration node are shared by all destination ends. Unlike GA in the traditional method, a business application does not need to consume one AIP address mapping, and project deployment is easy to implement.
• The central controller is used to control all the acceleration nodes, obtain the link states reported by the acceleration nodes, generate the source routing table according to the link states between the acceleration nodes, generate the location routing table, and deliver the source routing table and the location routing table to each acceleration node.
  • the central controller can be a virtual server deployed on the cloud side.
• An acceleration node is used to implement the data forwarding function and the link state measurement function. Acceleration nodes are deployed in virtual machines or containers provided by the deployment environment.
  • the acceleration node includes a local controller and at least one forward node (forward node or compass).
  • the local controller is used to control the compass to perform link state measurement (or also referred to as "QoS measurement") between accelerated nodes.
  • Compass is mainly responsible for the traffic forwarding function of the data plane.
  • the compass may be a forwarding module that implements the forwarding function through software.
• The deployment environment of the acceleration node is used to assign a "host" and a public IP address to the acceleration node.
• Deployment environments include but are not limited to edge clouds, optical line terminals (OLT), multi-access edge computing (MEC), POPs, cloud regions, content delivery networks (CDN), and the like. It can be understood that the deployment environment only needs to provide a virtual machine (or container) and a public network IP address for an acceleration node to be deployed; this requirement is neither demanding nor customized.
• Third-party CDNs, edge clouds, OLT or MEC devices can easily provide virtual machines (or containers) and public IP addresses, so that acceleration nodes can be embedded into the deployment environment.
• Due to the flexibility of acceleration node deployment, acceleration nodes can be embedded everywhere in the world, and the overlay network can cover a wider global area.
  • the flexible deployment of acceleration nodes is also reflected in the fact that the acceleration nodes can be deployed in the cloud region.
• The cloud region only needs to provide virtual machines (or containers) and public network IPs for the acceleration nodes to meet the deployment conditions. Therefore, the deployment of acceleration nodes is compatible with all cloud types, such as partner clouds, third-party clouds, and hybrid clouds, and the scope of application services that can be accelerated is wider.
  • "video service provider A" can directly deploy acceleration nodes to the third-party edge cloud built by "video service provider A" to provide acceleration services for "video service provider A"'s services.
• The provider can be a cloud service provider, an application service provider (such as "instant messaging service" provider A), or a telecom operator (such as China Mobile, China Unicom, or China Telecom).
• The deployment environment of the central controller belongs to a first cloud service provider (such as cloud service provider A), and the deployment environments of the multiple acceleration nodes may belong to a second cloud service provider (such as cloud service provider B or cloud service provider C), an application service provider, or a telecom operator.
  • the second cloud service provider, application service provider or telecom operator provides a deployment environment of the acceleration node on its own hardware facilities.
• The deployment environment here is a virtual environment in which the first cloud service provider applies for computing resources (e.g., virtual machines or containers); alternatively, the first cloud service provider applies for computing resources on the deployment environment provided by a telecom operator and runs acceleration nodes on those computing resources.
• The central controller is controlled by the first cloud service provider.
• A service application is a consumer of the service traffic forwarding service provided by the overlay network.
• Terminals include but are not limited to server terminals, mobile phones, tablet computers (Pad), personal computers (PC), virtual reality (VR) terminals, augmented reality (AR) terminals, terminals in industrial control, in-vehicle terminals, terminals in self-driving, terminals in assisted driving, terminals in remote medical, terminals in smart grid, terminals in transportation safety, terminals in smart cities, terminals in smart homes, and the like.
  • the destination end may be a cloud area, or may be a second terminal or a server.
• The source routing table and the location routing table are described below with examples.
  • the source routing table is used to indicate the optimal path from the source acceleration node to the destination acceleration node.
  • “source acceleration node” and “destination acceleration node” are both acceleration nodes in the above overlay network.
  • an acceleration node that receives data from a terminal is a "source acceleration node”.
  • the acceleration node that sends data to the destination is the "destination acceleration node”.
• Any one of all the acceleration nodes may serve as the source acceleration node, and any one of all the acceleration nodes may serve as the destination acceleration node.
  • the location routing table includes the correspondence between the first IP address and the second acceleration node.
• The first IP address is the IP address of the destination end for which a user (such as a business application) applies to the central controller in advance for the acceleration service.
  • the second acceleration node is a forwarding node determined by the central controller according to the IP address of the destination end.
  • the second acceleration node is the acceleration node closest to the destination among all the acceleration nodes. For example, when the destination is a cloud area, the second acceleration node may be an acceleration node deployed in the cloud area.
• When the destination end is a terminal (or server), the central controller queries the IP address database to determine the physical location of the terminal (or server), and determines the acceleration node closest to that physical location as the second acceleration node.
• The second acceleration node may be any one of the multiple acceleration nodes.
• Link state: an acceleration node measures the quality of service (QoS) of the links to its neighbor acceleration nodes to obtain link state information.
  • the link state information includes the link state of the acceleration node to each neighbor acceleration node. It can be understood that the link state can be described by a QoS value, wherein the performance indicators of the QoS include packet loss rate, delay and jitter, and the like.
• When the acceleration node performs QoS measurement on the link to each neighbor acceleration node, the acceleration node continuously sends q probe packets (q is an integer greater than or equal to 2) to the neighbor acceleration node. The replies to the q probe packets are used to calculate the transmission delay, jitter, and packet loss rate.
  • the acceleration node performs a weighted average of transmission delay, jitter and packet loss rate, and uses the weighted average value to describe the link state between the acceleration node and the neighbor acceleration node.
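The measurement arithmetic above can be sketched minimally as follows. The embodiment only states that delay, jitter, and packet loss rate are combined by a weighted average; the specific weights, the max-minus-min jitter formula, and the loss scaling are illustrative assumptions:

```python
def probe_stats(rtts_ms: list) -> tuple:
    """Derive delay/jitter/loss from the replies of q probe packets (None = lost reply)."""
    replies = [r for r in rtts_ms if r is not None]
    loss = 1 - len(replies) / len(rtts_ms)              # packet loss rate
    delay = sum(replies) / len(replies) if replies else float("inf")
    jitter = (max(replies) - min(replies)) if replies else 0.0  # assumed formula
    return delay, jitter, loss


def link_qos(delay_ms: float, jitter_ms: float, loss_rate: float,
             weights=(0.5, 0.3, 0.2)) -> float:
    """Describe a link state with a single weighted value (weights are illustrative)."""
    w_delay, w_jitter, w_loss = weights
    return w_delay * delay_ms + w_jitter * jitter_ms + w_loss * (loss_rate * 100)
```

A lower combined value would then indicate a better link, which is the ordering the path computation below could consume.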
  • the measurement of the "link status" of the acceleration node of its neighbor acceleration nodes may also be referred to as "QoS measurement”.
  • the underlay network refers to the physical network, which consists of physical devices and physical links.
  • common physical devices include switches, routers, and firewalls. These physical devices are connected through specific links to form a traditional physical network.
  • An overlay network is a computer network that can be built on top of an underlay.
  • Nodes (ie, forwarding nodes) in an overlay network can be considered to be connected by virtual or logical links, where each link corresponds to a path.
  • the four nodes H, I, J, and K in FIG. 3 are logical nodes in the overlay network.
• The direct connection between nodes H and J, that is, a single hop at the application layer, is mapped onto the lower-layer underlay network, which may involve multiple relay routing devices; it is actually multi-hop routing.
  • Nodes in the overlay network implement data forwarding at the overlay layer by encapsulating the source IP and destination IP mapped to the nodes in the underlay network.
• Full-mesh refers to a networking mode in which every two nodes among N nodes are interconnected.
  • FIG. 4 is a schematic structural diagram of a communication system in the present application.
• In response to a first operation by the operation and maintenance personnel, the PC of the operation and maintenance personnel applies to a deployment environment (such as an edge cloud, a POP, or a cloud region) to allocate a virtual machine (or container) and a public network IP.
  • an operation interface for applying for a virtual machine (or container) and a public network IP for the deployment environment is installed in the PC.
• The PC of the operation and maintenance personnel logs in to the account of the deployment center, and uses the deployment center to automatically upload the acceleration node software to the virtual machines (or containers) and install it in batches.
  • the deployment center is a cloud center tool for automated batch deployment of accelerated nodes.
  • the central controller handshakes and communicates with the forwarding node, and the central controller receives the registration request sent by each forwarding node.
  • the registration request includes but is not limited to the ID of the deployment environment of the forwarding node, the public IP address of the forwarding node, the physical location of the deployment environment of the forwarding node, and the like.
  • the process of registering the forwarding node by the central controller can be understood as a process of storing the relevant information of the forwarding node by the central controller.
  • the central controller obtains the relevant information of each forwarding node, and can further manage each forwarding node.
• Steps S30 to S33 are the deployment process of the acceleration nodes. If the forwarding nodes have already been registered with the central controller, and no registered acceleration node has been deleted and no new acceleration node has been added, it is not necessary to perform steps S30 to S33 every time. Steps S30 to S33 are optional, and step 501 may be executed directly.
  • Step 501 The central controller acquires link state information between the acceleration node and the neighboring acceleration nodes.
  • the central controller receives the link state information sent by each acceleration node, and the link state information includes the link state between the acceleration node and each neighbor acceleration node.
  • the central controller sends measurement tasks to the local controllers in each acceleration node.
• All acceleration nodes are fully interconnected (full-mesh), and each acceleration node performs QoS measurement on the links to its neighbor acceleration nodes.
• Each acceleration node collects link state information, which includes the link state (described by a QoS value) from the acceleration node to each neighbor acceleration node and the link identifier corresponding to the link state (for example, acceleration node A → acceleration node B).
• A neighbor acceleration node of an acceleration node refers to a node connected to that acceleration node.
• In a fully interconnected topology, the neighbor acceleration nodes of any acceleration node are all acceleration nodes other than that node itself.
  • the compass in the acceleration node performs QoS measurement periodically (for example, in seconds), collects link state information (represented by a QoS value), and stores the collected link state in the local controller.
  • the local controller reports link status information to the central controller periodically (for example, in minutes).
  • Step 502 The central controller generates a source routing table according to the link state information.
  • the source routing table is used to indicate paths in multiple acceleration nodes.
  • the path is the path from the source acceleration node to the destination acceleration node.
  • the central controller selects a path from the source acceleration node to the destination acceleration node among all the acceleration nodes according to the link state information and the topology structures of all the acceleration nodes.
  • the central controller receives the link status information reported by each acceleration node.
  • the topology of all acceleration nodes is fully interconnected as an example, and the central controller determines multiple paths. For example, taking the acceleration node A as the source forwarding node, the central controller calculates the paths from the acceleration node A to other acceleration nodes (eg, the acceleration node B and the acceleration node C). Taking the acceleration node B as the source forwarding node, the central controller calculates the path from the acceleration node B to other acceleration nodes (eg, the acceleration node A and the acceleration node C).
  • the central controller generates a source routing table corresponding to each acceleration node based on the path.
• The source routing table includes the list of acceleration nodes traversed by the path and the next-hop acceleration node.
  • the next-hop acceleration nodes of each acceleration node may be different. Therefore, each forwarding node needs to correspond to a different source routing table.
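The embodiment does not name the path-selection algorithm, so the sketch below uses a shortest-path search (Dijkstra) over the reported QoS values as one plausible realization; the node names and link values are hypothetical:

```python
import heapq


def best_paths(links: dict, source: str) -> dict:
    """Per-source routing table: destination -> list of acceleration nodes on the path.

    links: {(u, v): qos_value} over the full-mesh overlay; a lower QoS value
    is assumed to mean a better link.
    """
    nodes = {n for edge in links for n in edge}
    dist = {n: float("inf") for n in nodes}
    prev = {}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for (a, b), w in links.items():
            if a == u and d + w < dist[b]:
                dist[b], prev[b] = d + w, u
                heapq.heappush(heap, (dist[b], b))
    table = {}
    for dst in nodes - {source}:
        if dist[dst] == float("inf"):
            continue
        path, n = [dst], dst
        while n != source:          # rebuild the node list back to the source
            n = prev[n]
            path.append(n)
        table[dst] = list(reversed(path))
    return table


# Hypothetical full-mesh QoS values between acceleration nodes A, B, C, D.
links = {("A", "B"): 1, ("B", "A"): 1, ("B", "D"): 1, ("D", "B"): 1,
         ("A", "D"): 5, ("D", "A"): 5, ("A", "C"): 2, ("C", "A"): 2,
         ("C", "D"): 2, ("D", "C"): 2, ("B", "C"): 1, ("C", "B"): 1}
# best_paths(links, "A")["D"] yields ["A", "B", "D"]: the two-hop relay
# through B (total cost 2) beats the direct A -> D link (cost 5).
```

This mirrors the text: the relayed overlay path can outperform the direct link, and each acceleration node receives a table computed with itself as the source.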
  • Step 503 The central controller delivers the source routing table corresponding to the acceleration node to each acceleration node.
• The first acceleration node and the second acceleration node are taken as examples.
  • the central controller sends the source routing table A to the first acceleration node.
  • the central controller sends the source routing table B to the second acceleration node.
  • Step 504 The central controller obtains the first IP address of the destination end of the user applying for the network acceleration service.
  • the central controller receives a request for applying for a network acceleration service, where the request carries a service domain name.
• The central controller sends the service domain name to the DNS server, and the DNS server is used to resolve the service domain name to obtain the IP information (e.g., EIP1) of the cloud region; the first IP address is EIP1.
  • the central controller obtains the IP information (ie EIP1) of the cloud region from the DNS server.
• When the destination end is a terminal (or server), the request carries the IP address (e.g., IP1) of the terminal (or server), and the first IP address is IP1.
  • Step 505 The central controller generates a location routing table, where the location routing table includes the correspondence between the first IP address and the second acceleration node.
  • the central controller determines the second acceleration node according to the first IP address of the destination.
  • the central controller determines the acceleration node (eg, the second acceleration node) deployed in the cloud region.
• When the destination end is a terminal (or server), the central controller searches the IP address library to determine the physical location of the destination end, and determines the second acceleration node closest to that physical location. Then, the central controller establishes the correspondence between the first IP address and the second acceleration node. Finally, the central controller generates the location routing table according to this correspondence.
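A minimal sketch of this table construction, under the assumption that "closest" means smallest planar distance between coordinates; the coordinates, node names, and the distance metric are illustrative:

```python
import math


def nearest_node(dest_location, node_locations):
    """Pick the acceleration node closest to the destination end's physical location."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(node_locations, key=lambda n: dist(node_locations[n], dest_location))


def build_location_routing_table(destinations, node_locations):
    """Location routing table: first IP address -> second acceleration node."""
    return {ip: nearest_node(loc, node_locations)
            for ip, loc in destinations.items()}


# Hypothetical node coordinates and destination IP locations.
nodes = {"node_A": (0.0, 0.0), "node_B": (10.0, 0.0), "node_D": (20.0, 5.0)}
dests = {"EIP1": (19.0, 4.0), "IP1": (1.0, 1.0)}
# EIP1 maps to node_D, IP1 maps to node_A
```

The same table is then delivered to every acceleration node, so any access node can resolve the destination acceleration node locally.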
  • Step 506 The central controller sends the location routing table to all the acceleration nodes, and all the acceleration nodes include the first acceleration node; correspondingly, the first acceleration node receives the source routing table and the location routing table sent by the central controller.
• The central controller calls the southbound data interface to deliver the location routing table to the local controller of each acceleration node, and the local controller delivers the source routing table and the location routing table to the compass. It should be understood that the central controller sends each acceleration node its own source routing table, but sends the same location routing table to all acceleration nodes. For example, the central controller sends source routing table A and the location routing table to the first acceleration node.
  • the process of forwarding data by the first acceleration node among all the acceleration nodes is taken as an example for description, and the first acceleration node is any acceleration node among all the acceleration nodes.
  • Step 507 The first acceleration node receives the data request from the first terminal.
• The first terminal accesses the first acceleration node through a software development kit (SDK) tunnel, that is, the first acceleration node receives SDK encapsulated data through the SDK tunnel, where the SDK encapsulated data is obtained by encapsulating the data request.
  • FIG. 6 is a schematic diagram of a scenario in which the first terminal accesses the first acceleration node.
  • An SDK plug-in is configured in the first terminal, and the address of the central controller is preset in the SDK plug-in.
• The first terminal accesses the central controller through the SDK plug-in, that is, the central controller receives a request sent by the first terminal, where the request carries the IP address of the first terminal.
• The central controller queries the IP address library according to the IP address of the first terminal to determine the physical location of the first terminal, determines the acceleration node closest to that physical location (i.e., the first acceleration node), and feeds back the IP address (e.g., IP2) of the first acceleration node to the first terminal.
  • the first terminal sends the raw data to be sent (also referred to as a "data request") to the first acceleration node.
  • the destination address of the original data is EIP1
  • the source address of the original data is IPA (ie, the IP address of the first terminal).
  • the first terminal performs SDK encapsulation on the data request to obtain SDK encapsulation data.
  • the destination address in the header (or "packet header") of the SDK encapsulated data is the IP address (eg IP2) of the first acceleration node, and the source address in the header is the IP address (eg IPA) of the first terminal.
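The SDK encapsulation above can be sketched as follows; dictionaries stand in for packet headers, and the field names are illustrative, not an actual SDK API:

```python
def sdk_encapsulate(original: dict, first_node_ip: str) -> dict:
    """Wrap the original data request; the outer header targets the first acceleration node.

    The inner packet (src=IPA, dst=EIP1) travels untouched, so the real
    destination address is not lost.
    """
    return {"src": original["src"],   # IP address of the first terminal (IPA)
            "dst": first_node_ip,     # IP address of the first acceleration node (IP2)
            "inner": original}


def sdk_decapsulate(encapsulated: dict) -> dict:
    """At the first acceleration node: recover the original data request."""
    return encapsulated["inner"]


original = {"src": "IPA", "dst": "EIP1", "payload": b"..."}
pkt = sdk_encapsulate(original, "IP2")
# pkt is addressed to IP2, while the inner destination EIP1 is preserved
```

This is the property the embodiment emphasizes: unlike the GA scheme, the real destination address survives inside the encapsulation instead of being overwritten.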
  • an SDK plug-in is configured in the terminal, the terminal can access an acceleration node nearby, and the overlay network performs accelerated forwarding for the data request of the first terminal, and the application scenarios are wide.
  • the central controller and the first network device cooperate to guide the data request of the first terminal to the first acceleration node.
  • FIG. 7 is a schematic diagram of a scenario in which the central controller and the first network device cooperate to guide the data request to the first acceleration node.
  • the first acceleration node is an acceleration node deployed in the first network device.
  • the first network device may be an MEC or an OLT.
  • the central controller sends traffic diversion information to the network management system, and the network management system is used to manage network element equipment (such as MEC or OLT, etc.).
  • the traffic diversion information includes the IP address (ie, the first IP address) of the destination end.
• The network management device generates an access control list (ACL) policy instruction according to the traffic diversion information and sends the ACL policy instruction to the first network device. The ACL policy instruction is used to trigger the first network device to direct data whose destination address is the first IP address to the first acceleration node.
  • the first network device filters the received data according to the ACL policy instruction.
  • the first network device receives the data request whose destination address is the first IP address
  • the first network device directs the data request whose destination address is the first IP address to the first acceleration node.
• The first acceleration node receives the data request directed by the first network device according to the ACL policy.
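The ACL-style diversion decision can be sketched minimally as follows; the label for the default path and the closure-based form are placeholders, not part of the embodiment:

```python
def make_acl_filter(first_ip: str, accel_node: str):
    """Build a per-packet forwarding decision from an ACL policy instruction.

    Data whose destination address equals the first IP address is directed to
    the first acceleration node; everything else follows normal forwarding.
    """
    def forward(packet: dict) -> str:
        if packet["dst"] == first_ip:
            return accel_node          # divert to the first acceleration node
        return "normal_forwarding"     # illustrative label for the default path
    return forward


forward = make_acl_filter("EIP1", "first_acceleration_node")
# packets destined for EIP1 are diverted; all other traffic is untouched
```

The point of the mechanism is that only traffic for accelerated destinations is pulled into the overlay, while ordinary traffic through the MEC/OLT is unaffected.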
  • This embodiment is applicable to a scenario where a terminal accesses a network through a first network device (eg, MEC or OLT) (eg, a scenario where a home bandwidth accesses the network).
• The first network device not only serves as a network access device of the first terminal; the first acceleration node deployed in the first network device also serves as a source acceleration node in the overlay network, so the overlay network access points for terminals are abundant.
  • the first acceleration node is deployed on a second network device, the second network device is a host in a local area network, and the first terminal is a terminal device in the local area network.
• The second network device provides a private network IP address for the first acceleration node.
  • the first acceleration node receives the data request from the terminal device through the local area network.
• In this implementation, the acceleration node is embedded in a local area network (such as an enterprise intranet), and the second network device provides a private network IP address for the first acceleration node, so that terminal devices in the local area network can access the overlay network through the acceleration node.
• The overlay network provides network acceleration services for terminal devices in the local area network, which avoids the cost of the expensive public network IP resources allocated by operators.
• In the above three implementations, the destination address (e.g., EIP1) of the original data is not lost when the first terminal sends a data request to the first acceleration node.
• The terminal can access the overlay network through the public network IP address of just one acceleration node.
  • One public network IP address can provide shared access for multiple service applications, reducing deployment costs.
  • Step 508 The first acceleration node obtains a target path, where the target path comes from a routing table entry generated by the central controller.
  • the first acceleration node queries the location routing table to determine the second acceleration node corresponding to the first IP address, and the second acceleration node is the destination acceleration node.
  • the location routing table includes the correspondence between the first IP address and the second acceleration node, as shown in Table 2 below.
• Specifically, the first acceleration node receives the SDK encapsulated data, decapsulates it, and obtains the real destination IP (e.g., EIP1) of the original data (the data request).
• The first acceleration node then searches the location routing table to determine acceleration node D, which has a corresponding relationship with EIP1.
  • the first acceleration node sends a data request to the next-hop acceleration node according to the source routing table until the data request is forwarded to the second acceleration node.
• The source routing table includes the optimal path from the first acceleration node to the second acceleration node.
• The second acceleration node is configured to forward the data request to the destination end corresponding to the first IP address.
  • the first acceleration node queries the source routing table to determine a target path from the first acceleration node to the second acceleration node.
  • the first acceleration node is acceleration node A
  • the second acceleration node is acceleration node D
  • the target path is represented by a list of acceleration nodes (eg acceleration node A, acceleration node B, acceleration node D).
  • Step 509 The first acceleration node sends the data request to the next-hop acceleration node according to the target path, until the data request is forwarded to the second acceleration node, which then forwards the data request to the destination end.
  • the first acceleration node performs overlay encapsulation on the original data to obtain overlay encapsulated data.
  • the first acceleration node sends the overlay encapsulation data to the next hop through the overlay tunnel.
  • the overlay package data includes the original data (data request), the target path, the destination address of the next-hop acceleration node, and the source address.
  • the overlay encapsulated data is forwarded hop by hop to the acceleration node on the target path until it is forwarded to the second acceleration node. After the second acceleration node decapsulates the overlay encapsulated data, the data request is sent to the destination.
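The hop-by-hop forwarding just described can be sketched as follows; the packet layout and node names are assumptions for illustration only.

```python
# Illustrative simulation: an overlay-encapsulated data request is relayed along
# the target path until the second acceleration node decapsulates it.
def forward_along_path(original_data: bytes, target_path: list):
    """Forward an overlay packet hop by hop; return (hops visited, delivered payload)."""
    overlay_packet = {"payload": original_data, "segment_list": list(target_path)}
    hops = []
    for node in overlay_packet["segment_list"]:
        hops.append(node)  # each acceleration node on the path relays the packet
    # The second acceleration node strips the overlay header before delivery.
    return hops, overlay_packet["payload"]

path = ["acceleration_node_A", "acceleration_node_B", "acceleration_node_D"]
print(forward_along_path(b"data request", path))
```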
  • a large number of acceleration nodes are flexibly deployed in the overlay network, so that terminals around the world can access an acceleration node nearby.
  • the second acceleration node can be determined by querying the location routing table, and the first acceleration node sends the data request from the terminal to the next-hop acceleration node according to the optimal path indicated in the source routing table, until the data request is transmitted to the second acceleration node; the second acceleration node then transmits the data request to the destination end, so that terminals worldwide can truly enjoy the network acceleration service.
  • any acceleration node among all the acceleration nodes can serve as the access node through which a terminal accesses the overlay network, and each acceleration node can serve as a transmission node in the optimal path.
  • the services provided by each acceleration node are shared by all destination ends, and there is no need for the GA of the traditional method, in which each business application needs to consume one EIP address mapping; the project deployment is easy to implement.
  • the business application can customize the network setting parameters.
  • the overlay network in this embodiment can provide network acceleration services according to the actual requirements of business applications.
  • FIG. 8 is a schematic diagram of two application manners for a business application to apply for a network acceleration service.
  • the ADN in this application provides network acceleration services for various business applications, and the business applications (such as video service provider A) apply to the central controller for network acceleration services in the following two ways.
  • the PC of the business application personnel logs in to the console platform.
  • the PC responds to the operations of the business application personnel.
  • the business application personnel click on the console interface to select the network setting parameters.
  • the network setting parameters include but are not limited to at least one of acceleration period, bandwidth, and cost.
  • the ADN provides overlay network acceleration services for service applications (such as video service provider A) according to network setting parameters.
  • the staff of the business application only needs to select according to the network setting parameters provided by the ADN, and the method of applying for the network acceleration service is simple and easy to operate.
  • the business application and the ADN are in a cooperative relationship; the ADN authorizes the business application, and the business application can directly call the northbound application programming interface (API) of the central controller to customize network parameters.
  • the business application can customize the network acceleration service completely according to its own requirements, so as to meet the personalized service requirements of different business applications.
  • FIG. 9 is a schematic diagram of a scenario of an application interface for a network acceleration service.
  • ADN provides an interface of "application for network acceleration service” for business applications, so that various business applications can apply for network acceleration service.
  • the interface of "Applying for Network Acceleration Service" mainly includes the interface for creating a tenant, the interface for creating an acceleration instance (inputting the acceleration instance parameter configuration), the interface for adding an acceleration region, the interface for setting the cloud region domain name, the interface for setting the acceleration public network IP address, and the like.
  • the "setting the cloud region domain name" interface is applicable to the scenario where the destination end is the cloud region, that is, the scenario where the terminal accesses the cloud region.
  • the “Setting the Acceleration Public Network IP Address Interface” is applicable to the scenario where the destination end is a terminal (or server), that is, the scenario where the terminal accesses the terminal.
  • the steps for applying for the network acceleration service for business applications are as follows: Step a to Step e.
  • Step a The PC of the business application personnel displays the console "create user" interface, and, in response to the operation of creating a user by the business application personnel, sends user information such as "username" and "password" to the console platform.
  • the console platform sends user information to the central controller.
  • Step b The PC of the business application personnel displays the interface of "Create an Acceleration Instance".
  • the "Create Accelerated Instance” interface is used to provide settings for network setting parameters.
  • network setting parameters include bandwidth, acceleration period, and service mode parameters (first mode or second mode).
  • the first mode means that the destination end of the network acceleration service is the cloud region
  • the second mode means that the destination end of the network acceleration service is the terminal (or server).
  • Step c When the service application personnel select the first mode, the PC of the service application personnel displays the interface of "select acceleration area".
  • the "Select Acceleration Area” interface is used to provide the bound area of the network acceleration service. For example, regions include “Asia”, “China”, “India”, “Europe”, etc.
  • the “acceleration area” is used to indicate the area where the user served by the business application is located. For example, if the business application is "NetEase Games", the users of “NetEase Games" are all over the world, and the acceleration area selected by “NetEase Games” may select all regions.
  • the acceleration region selected by “Video Service Provider A” may be “China”. After the terminal responds to the operation of the acceleration area selected by the business application personnel, it sends the target acceleration area (such as China) to the console platform.
  • Step d The PC of the business application personnel displays the "cloud region information" interface.
  • the "cloud region information" interface is used to receive the cloud region's identifier, domain name (or EIP).
  • the domain name is the domain name of the service application (eg, video service provider A).
  • Step e When the service application personnel select the second mode, the PC of the service application personnel displays the "input acceleration IP" interface.
  • the interface of "Input Acceleration IP” is used to receive the list of public IP addresses of the destination (terminal or server). After the PC responds to the operation of the service application personnel, it sends the public IP of the destination end to the console platform.
  • After the above steps a to d, or after steps a, b, and e, the console platform receives the above network setting parameters and establishes an association relationship between the user name and the network setting parameters.
  • the console platform sends the network setting parameters to the central controller through the northbound API.
  • after step 507, the following steps are further included:
  • the central controller obtains the bandwidth and target acceleration area corresponding to the business application
  • the central controller generates committed access rate (CAR) rate limit configuration information according to the bandwidth parameters;
  • the central controller assigns access rights to the acceleration nodes in the target area according to the EIP information of the destination end, and delivers the CAR speed limit configuration information to the acceleration nodes.
  • the CAR speed limit configuration information is used to guide all acceleration nodes on the optimal path to perform data forwarding to meet the network acceleration requirements of business applications.
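CAR-style rate limiting is commonly realized with a token bucket; the following is a generic sketch of such a limiter, not the embodiment's actual implementation, and the rate and burst parameters are illustrative.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter, a common way to enforce a CAR-style rate."""
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        """Admit the packet if enough tokens have accumulated, else drop/queue it."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

bucket = TokenBucket(rate_bytes_per_s=1000, burst_bytes=1500)
print(bucket.allow(1400))  # True: within the initial burst
print(bucket.allow(1400))  # False: tokens exhausted until they refill
```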
  • the first application scenario mainly describes a scenario in which the destination end accessed by the first terminal is a cloud region.
  • the central controller issues QoS measurement tasks to all acceleration nodes.
  • Each acceleration node performs QoS measurement on the link status between the acceleration node and neighboring acceleration nodes, and the acceleration node collects the link status information and sends the link status information to the central controller.
  • the central controller calculates the optimal path according to the link state information and the topological structures of all acceleration nodes, and generates a source routing table according to the optimal path.
  • the central controller sends the corresponding source routing table of the acceleration node to each acceleration node.
  • a business application (eg, video service provider A) applies to the console for a network acceleration service
  • the destination of the network acceleration service is the cloud region
  • the IP information of the cloud region (eg, cloud region1) is EIP1.
  • the central controller also acquires information such as the target bandwidth and the target acceleration area corresponding to the "video service provider A".
  • the central controller determines acceleration node D according to EIP1, and acceleration node D is an acceleration node deployed in cloud region1.
  • the central controller generates a location routing table, and the location routing table includes the corresponding relationship between the acceleration node D and the EIP1.
  • the central controller sends the location routing table to all acceleration nodes.
  • the central controller indexes all acceleration nodes in the China region according to the target acceleration region (China region).
  • the central controller generates configuration information according to the target bandwidth.
  • the central controller sends control information and configuration information to the acceleration nodes in the target acceleration area.
  • the control information includes the EIP of the cloud region (eg, EIP1), and the control information is used to instruct the acceleration node to filter on the EIP of the cloud region, allowing the data traffic of EIP1 to be forwarded according to the configuration information.
  • the first terminal is connected to the acceleration node A nearby, and the acceleration node A is an acceleration node in the Chinese region.
  • the destination address of the data request received by the acceleration node A is EIP1.
  • according to the control information, the acceleration node A allows data requests whose destination address is EIP1 to pass at the rate indicated by the configuration information.
  • the acceleration node A determines the acceleration node D (ie, the second acceleration node) corresponding to the EIP1 according to the location routing table.
  • the acceleration node A queries the source routing table and, according to the optimal path between the acceleration node A and the acceleration node D in the source routing table (for example, acceleration node A - acceleration node B - acceleration node C - acceleration node D), forwards the data request from the first terminal to the next-hop acceleration node B.
  • the acceleration nodes on the optimal path (such as acceleration node A, acceleration node B, acceleration node C, and acceleration node D) forward the data request from the first terminal hop by hop according to the configuration information issued by the central controller, until it is forwarded to the acceleration node D.
  • the acceleration node D forwards the data from the first terminal to the cloud region (EIP1).
  • the network between the POP and the cloud region can implement HBN dedicated line network transmission, or common internet transmission.
  • acceleration node A is deployed in edge cloud A
  • acceleration node B is deployed in edge cloud B
  • acceleration node C is deployed at the POP point
  • acceleration node D is deployed in cloud region.
  • the destination address is the acceleration public network IP address of the acceleration node D
  • the source address is the acceleration public network IP address of the acceleration node C
  • the acceleration node C and the acceleration node D forward data through the HBN dedicated line network to improve the network transmission rate from the POP to the cloud region. Data can also be transmitted between the acceleration node C and the acceleration node D through the common internet, thereby saving costs for business applications.
  • when the destination end is a cloud region, each business application only needs to invoke the network acceleration service provided by the ADN to realize the terminal's fast access to the cloud region, avoiding repeated, independent development by each business application system.
  • the second application scenario mainly describes a scenario where the destination is a terminal (or server), that is, a scenario of lateral access between terminals.
  • the central controller issues QoS measurement tasks to all acceleration nodes.
  • Each acceleration node performs QoS measurement on the link status between the acceleration node and neighboring acceleration nodes, and the acceleration node collects the link status information and sends the link status information to the central controller.
  • the central controller calculates the optimal path in all the acceleration nodes according to the link state information and the topology structure of all the acceleration nodes, and generates a source routing table according to the optimal path.
  • the central controller sends the corresponding source routing table to each acceleration node.
  • the console platform obtains the list of public network IP addresses, and sends the list of public network IP addresses to the central controller.
  • After obtaining the list of public network IP addresses, the central controller queries the IP address database to determine the physical location of each destination (such as the second terminal), and determines the acceleration node closest to that physical location. For example, IP1 is located in Beijing, and the central controller searches the IP address database to determine that the acceleration node closest to IP1 is acceleration node D (located in Beijing); IP2 is located in Xi'an, and the acceleration node closest to IP2 is acceleration node F (located in Xi'an). The central controller generates a location routing table, and the location routing table includes the correspondence between the public network IP and the acceleration node (for example, the correspondence between IP1 and acceleration node D, and the correspondence between IP2 and acceleration node F).
  • the central controller sends the location routing table to the acceleration node.
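The controller's construction of the location routing table from an IP geolocation database can be sketched as below; the database contents and node names are hypothetical, and real geolocation would use distance rather than exact city matching.

```python
# Hypothetical sketch: the controller resolves each destination public IP to a
# physical location and picks the acceleration node deployed in that location.
ip_geo_database = {"IP1": "Beijing", "IP2": "Xi'an"}           # illustrative
node_locations = {"acceleration_node_D": "Beijing",
                  "acceleration_node_F": "Xi'an"}              # illustrative

def build_location_routing_table(public_ips):
    """Map each destination public IP to its nearest acceleration node."""
    table = {}
    for ip in public_ips:
        city = ip_geo_database[ip]
        table[ip] = next(node for node, loc in node_locations.items() if loc == city)
    return table

print(build_location_routing_table(["IP1", "IP2"]))
```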
  • the first terminal accesses the acceleration node A nearby, and the acceleration node A obtains a data request from the first terminal, and the destination IP of the data request is IP1.
  • the acceleration node A determines, according to the location routing table, that the acceleration node corresponding to IP1 is the acceleration node D (ie, the second acceleration node).
  • the acceleration node A queries the source routing table and, according to the optimal path between the acceleration node A and the acceleration node D in the source routing table (such as acceleration node A - acceleration node B - acceleration node C - acceleration node D), forwards the data request to the next-hop acceleration node B; the data request is forwarded hop by hop on the optimal path until it is forwarded to the acceleration node D.
  • the acceleration node D forwards the data from the first terminal to the second terminal.
  • the GA in the traditional method only supports the scenario in which the terminal accesses the cloud region, while the ADN in this embodiment supports not only the scenario in which the terminal accesses the cloud region but also the scenario of terminal-to-terminal access, and therefore has better universality.
  • the destination is the cloud region, which is the first application scenario above.
  • the destination end is a terminal (or server), that is, the second application scenario above.
  • FIG. 11 is a schematic diagram of a data format for overlay encapsulation based on a user datagram protocol (UDP).
  • the packets transmitted in the overlay tunnel are obtained by encapsulating the original data, yielding overlay encapsulation data.
  • the format of the overlay encapsulation data includes the following fields.
  • IP header field includes the source address (32 bits in length) and the destination address (32 bits in length).
  • the UDP header field includes the source port number (16 bits in length) and the destination port number (16 bits in length).
  • Segment list (segment list, SL) field: used to indicate the nodes that the data packet needs to pass through during the forwarding process.
  • the list is segment list[0] to segment list[n-1].
  • [*] is used to represent the node number (or also called "subscript")
  • n represents the number of accelerated nodes in the optimal path.
  • the optimal path includes n nodes (such as node A, node B, node C, etc.); the first entry pushed into the destination address is the IP address of the acceleration node corresponding to segment list[n-1] (for example, segment list[2]), and the last entry pushed into the destination address is the IP address corresponding to segment list[0].
  • the segment list can look like this.
  • First segment field: 8 bits in length, used to indicate the first hop through which data is sent from the source acceleration node to the destination acceleration node.
  • the bottom entry (segment list[n-1]) is the node closer to the source acceleration node, and the top entry is the destination acceleration node (segment list[0]), so the value of the first segment field is "n-1".
  • Segment left field: used to indicate the currently active segment, that is, used to indicate the next hop to which data will be transmitted.
  • the acceleration node copies the IP address of the node at segment list[SL] to the destination address field in the packet header, thereby indicating the next-hop node and sending the data toward the destination.
  • A is the source acceleration node
  • D is the destination acceleration node
  • the acceleration nodes that the optimal path passes through are B, C, and D.
  • the acceleration node B is the first acceleration node to pass from the acceleration node A to the acceleration node D, so the value of the first segment field is the subscript "2" corresponding to the acceleration node B.
  • after receiving the data, the acceleration node B checks that the destination address of the header is IPB.
  • the acceleration node B parses the header and obtains the value of the segment left field, for example "2".
  • since the segment left value indicates that the transmitted data has not reached the destination acceleration node, the acceleration node B needs to continue to forward the data, and the acceleration node B keeps the value of the first segment field unchanged (for example, 2).
  • Payload length field: 16 bits in length.
  • the overlay tunnel encapsulates IP layer-3 packets in UDP mode for data forwarding, and the data in the IP packet (that is, the original data) may be of various transmission types such as transmission control protocol (TCP) or UDP. Therefore, overlay encapsulated data packets are not constrained by transmission type or application type, and the network acceleration service has a wider scope.
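The segment-list walk described above resembles SRv6-style source routing; the sketch below illustrates how the active segment selects each next hop. The addresses, exact wire layout, and decrement timing are assumptions for illustration, not the embodiment's precise format.

```python
import socket, struct

def ip_to_u32(ip: str) -> int:
    """Pack a dotted-quad IPv4 address into the 32-bit header address field."""
    return struct.unpack("!I", socket.inet_aton(ip))[0]

def next_hop(segment_list, segments_left: int) -> str:
    """The active segment: copied into the outer destination address field."""
    return segment_list[segments_left]

# Optimal path A -> B -> C -> D; the list stores the remaining nodes with the
# destination acceleration node D at index 0 (addresses are illustrative).
segment_list = ["203.0.113.4", "203.0.113.3", "203.0.113.2"]  # IPD, IPC, IPB
segments_left = 2  # first segment value n-1 selects IPB as the first hop
while segments_left >= 0:
    hop = next_hop(segment_list, segments_left)
    print(f"forward to {hop} (dst field = {ip_to_u32(hop):#010x})")
    segments_left -= 1  # each hop advances toward segment_list[0]
```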
  • the process of data forwarding is divided into the case where the destination is a cloud region, and the case where the destination is a terminal (or server).
  • the destination is the cloud region, that is, the scenario where the first terminal accesses the cloud region.
  • the following description takes as an example that the first terminal accesses the first acceleration node through the SDK tunnel
  • the first acceleration node is the acceleration node A as an example
  • the IP address of the first terminal is IP1
  • the public network IP address of the acceleration node A is IPA.
  • the central controller pre-configures the first NAT IP (also called "head NAT IP") for the source acceleration node, and configures the second NAT IP (also called "tail NAT IP") for the destination acceleration node.
  • the first NAT IP is IP8 and the second NAT IP is IP9.
  • the optimal path from the source acceleration node to the destination acceleration node is: acceleration node A ⁇ acceleration node B ⁇ acceleration node D.
  • the acceleration node A is configured with a public network IPA and a first NAT IP (IP8), and IPA and IP8 can be different IP addresses, or, in order to save public network IP, IPA and IP8 can be the same IP address.
  • the acceleration node D is configured with the public network IPD and the second NAT IP (IP9).
  • the IPD and the IP9 can be different IP addresses, or, in order to save public network IP addresses, the IPD and the IP9 can be the same IP address.
  • FIG. 12 is a schematic diagram of a scenario of data forwarding between the first terminal and the cloud region.
  • the first terminal sends a data packet to the acceleration node A, the destination address of the data packet is EIP, and the source address is IP1.
  • the first terminal performs SDK encapsulation on the data packet to obtain SDK encapsulation data.
  • the destination address of the SDK encapsulation data is IPA
  • the source address of the SDK encapsulation data is IP1.
  • in one case, IP1 is a public network IP
  • the first terminal sends the SDK package data to the acceleration node A. After the acceleration node A receives the SDK package data, it strips off the SDK header, exposing the original data whose destination address is EIP and whose source address is IP1.
  • in another case, IP1 is a private network IP address
  • the SDK encapsulated data reaches the acceleration node A through the network address translation (NAT) device of the operator's network.
  • the public IP address of the NAT device is IPM.
  • the acceleration node A strips off the SDK header to expose the original data; after network address translation, the source address in the inner layer is the public network IPM.
  • the acceleration node A performs source address translation (source NAT, SNAT), and converts the IPM into the first NAT IP (such as IP8). That is, the source address in the inner layer is IP8 and the destination address is EIP. It can be understood that in this step, SNAT is the port mapping, and the IPM is mapped to IP8. In this step, the purpose of the acceleration node A converting the IPM to the first NAT IP is to use IP8 as the IP address of the destination acceleration node when the cloud region returns the data stream.
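The SNAT step can be sketched as a per-flow mapping table; the port-allocation scheme and addresses below are illustrative assumptions, not the embodiment's implementation.

```python
# Illustrative SNAT at the source acceleration node: the translated source (IPM)
# is rewritten to the first NAT IP (IP8) so that return traffic from the cloud
# region targets IP8. The table layout and port allocation are assumptions.
snat_table = {}  # (orig_ip, orig_port) -> (nat_ip, nat_port)

def snat(src_ip: str, src_port: int, nat_ip: str = "IP8"):
    """Map a source (ip, port) to the first NAT IP plus an allocated port."""
    key = (src_ip, src_port)
    if key not in snat_table:
        nat_port = 40000 + len(snat_table)  # naive sequential port allocation
        snat_table[key] = (nat_ip, nat_port)
    return snat_table[key]

def reverse_snat(nat_ip: str, nat_port: int):
    """Destination address translation for the reverse data stream."""
    for orig, nat in snat_table.items():
        if nat == (nat_ip, nat_port):
            return orig
    raise KeyError("no mapping")

mapped = snat("IPM", 51000)
print(mapped)                 # ('IP8', 40000)
print(reverse_snat(*mapped))  # ('IPM', 51000)
```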
  • the acceleration node A performs overlay encapsulation on the inner layer data.
  • the packet format of the overlay encapsulation data is shown in Figure 11.
  • the overlay encapsulation data includes the inner layer data (that is, the original data, whose source address is IP8 and destination address is EIP), the IP address of the next hop in the overlay header (IPB), and the IP addresses of the acceleration nodes in the optimal path (for example, the IP address of acceleration node A is IPA, the IP address of acceleration node B is IPB, and the IP address of acceleration node D is IPD).
  • after the acceleration node B receives the overlay encapsulation data, it determines according to the value of the segment left field that the data packet has not reached the destination acceleration node; the acceleration node B then modifies the IP address of the next hop in the overlay encapsulation data to IPD, and forwards the overlay encapsulated data to the next hop (the acceleration node D).
  • after the acceleration node D receives the overlay package data, it determines according to the value of the segment left field that the overlay package data has reached the destination acceleration node. After the acceleration node D strips the header of the overlay package data, the exposed inner layer data has source address IP8 and destination address EIP. Through SNAT, the acceleration node D maps the source address of the inner data to the tail NAT IP (such as IP9). The acceleration node D is the acceleration node deployed in the cloud region (destination end), and the tail NAT IP is an IP address assigned by the cloud region.
  • the IPD of the acceleration node D and the tail NAT IP can be the same IP.
  • the acceleration node D accesses the data center (whose IP information is EIP) through the cloud region internal network.
  • the purpose of the acceleration node D mapping the source address of the inner layer data to the tail NAT IP is to use IP9 as the IP address of the source acceleration node when the cloud region returns the data stream.
  • steps S41-S46 are the forward data traffic forwarding process, that is, the process in which the first terminal sends data to the cloud region.
  • steps S51-S56 are the reverse data traffic forwarding process, that is, the process in which the cloud region sends data to the first terminal.
  • the data center sends the feedback original data to the acceleration node D.
  • the destination address of the feedback original data is IP9, and the source address is EIP; that is, the data center uses the tail NAT IP (IP9) as the destination address, and the data center connects to the acceleration node D of the cloud region.
  • after the acceleration node D receives the feedback original data, it performs destination address translation on the destination address in the feedback original data, and maps the destination address to IP8.
  • the acceleration node D searches the location routing table, and determines the acceleration node A corresponding to IP8 according to the location routing table.
  • the acceleration node D determines the optimal path (the list of acceleration nodes) from the acceleration node D to the acceleration node A according to the source routing table.
  • the acceleration node D overlay-encapsulates the optimal path (the list of acceleration nodes), the next-hop acceleration node (such as acceleration node B), and the feedback original data, and forwards the overlay-encapsulated data hop by hop until it is forwarded to the acceleration node A.
  • after the acceleration node A receives the overlay package data, it strips off the overlay header to expose the destination address (IP8) and source address (EIP) of the inner layer feedback data.
  • the acceleration node A performs destination address translation, and maps IP8 to the public network IPM of the NAT device.
  • the NAT device maps the IPM to the private network IP address (IP1) of the first terminal, and the NAT device forwards the data to the first terminal over the private network.
  • the destination end is a terminal (or server), that is, a scenario in which the first terminal accesses the second terminal (or server).
  • FIG. 13 is a schematic diagram of a scene diagram of data forwarding between the first terminal and the second terminal.
  • the first terminal has a built-in SDK plug-in, and the first terminal can access the central controller through the SDK plug-in.
  • the second terminal has a built-in SDK plug-in, and the second terminal can access the central controller through the SDK plug-in.
  • the IP address of the first terminal is IP2, and the IP address of the second terminal is IP3.
  • the first terminal accesses the central controller through the SDK, and the central controller feeds back the IP address of the acceleration node A to the first terminal.
  • the first terminal accesses the acceleration node A (source acceleration node).
  • the first terminal obtains original data, the destination address of the original data is IP3, and the source address is IP2.
  • SDK package data includes SDK header and original data.
  • the source address in the SDK header is IP2, and the destination address is IPA (the public IP of the acceleration node A).
  • the acceleration node A decapsulates the SDK package data, strips off the SDK header, and exposes the destination address (IP3) and source address (IP2) of the original data.
  • the acceleration node A searches the location routing table, and determines the acceleration node (eg, the acceleration node D) that has a corresponding relationship with the destination address (IP3).
  • the acceleration node A searches the source routing table to determine the optimal path from the acceleration node A to the acceleration node D, that is, the acceleration node (segment list) that needs to be experienced from the acceleration node A to the acceleration node D.
  • Acceleration node A performs overlay encapsulation on the original data packet to obtain overlay encapsulation data.
  • the overlay encapsulation data includes the original data, the IP of each acceleration node on the optimal path (for example, IPA, IPB, IPC, and IPD), and the IP of the next-hop acceleration node (for example, IPB).
  • the acceleration node A sends the overlay encapsulated data, and the overlay encapsulated data is forwarded hop by hop until forwarded to the acceleration node D (destination forwarding node).
  • the acceleration node D decapsulates the overlay package data to obtain original data.
  • the acceleration node D performs SDK encapsulation on the original data to obtain SDK encapsulated data.
  • the outer destination address of the SDK encapsulated data is IP3, and the source address is IPD.
  • the acceleration node D sends the SDK package data to the second terminal through the SDK tunnel.
  • the second terminal decapsulates the SDK package data to obtain original data.
  • steps S61-S66 are exemplary descriptions of the forward forwarding process of the data flow, that is, the process of sending data from the first terminal to the second terminal.
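The SDK tunnel encapsulation and decapsulation round trip in the forward flow can be sketched as follows; the header representation here is a simplified assumption for illustration.

```python
# Simplified model of the SDK tunnel: an outer SDK header wraps the original data.
def sdk_encapsulate(payload: bytes, src: str, dst: str) -> dict:
    """Wrap the payload with an outer SDK header carrying tunnel endpoints."""
    return {"sdk_header": {"src": src, "dst": dst}, "payload": payload}

def sdk_decapsulate(packet: dict) -> bytes:
    """Strip the SDK header and recover the inner original data."""
    return packet["payload"]

# First terminal (IP2) -> acceleration node A (IPA): forward direction.
pkt = sdk_encapsulate(b"original data, dst IP3", src="IP2", dst="IPA")
# Acceleration node D -> second terminal (IP3): outer source is IPD.
out = sdk_encapsulate(sdk_decapsulate(pkt), src="IPD", dst="IP3")
print(sdk_decapsulate(out))  # b'original data, dst IP3'
```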
  • steps S71-S76 are the reverse data traffic forwarding process, that is, the process in which the second terminal sends data to the first terminal.
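The forward forwarding steps above (S61-S66) can be sketched in code. This is an illustrative sketch only: the table layouts, field names, and helper functions below are assumptions made for demonstration and are not part of the disclosed implementation.

```python
# Location routing table: destination IP -> destination acceleration node.
LOCATION_TABLE = {"IP3": "D"}
# Source routing table: (source node, destination node) -> optimal path (segment list).
SOURCE_ROUTE_TABLE = {("A", "D"): ["IPA", "IPB", "IPC", "IPD"]}

def strip_sdk_header(sdk_packet):
    """Decapsulate the SDK package data, exposing the original packet (step S61/S63)."""
    return sdk_packet["payload"]

def overlay_encapsulate(original, node_id):
    """Overlay-encapsulate at the source acceleration node (steps S64-S65)."""
    dst_node = LOCATION_TABLE[original["dst"]]                # location routing lookup
    segment_list = SOURCE_ROUTE_TABLE[(node_id, dst_node)]    # source routing lookup
    return {
        "segment_list": segment_list,   # IP of every acceleration node on the path
        "next_hop": segment_list[1],    # e.g. IPB
        "payload": original,
    }

# SDK package data arriving at acceleration node A over the SDK tunnel.
sdk_packet = {"src": "IP2", "dst": "IPA",
              "payload": {"src": "IP2", "dst": "IP3", "data": b"hello"}}
overlay = overlay_encapsulate(strip_sdk_header(sdk_packet), "A")
print(overlay["next_hop"])  # IPB
```

The reverse process (S71-S76) is symmetric: the same lookups are performed at acceleration node D with the roles of source and destination exchanged.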
  • the second terminal accesses the central controller through the SDK, and the central controller feeds back the IP address of the acceleration node D to the second terminal.
  • the second terminal accesses the acceleration node D (source acceleration node).
  • the second terminal obtains the original data to be fed back; the destination address of the fed-back data is IP2, and the source address is IP3.
  • SDK package data includes SDK header and feedback data.
  • the source address in the header of the SDK package data is IP3, and the destination address is IPD (the public IP of the acceleration node D).
  • the acceleration node D decapsulates the SDK encapsulated data, strips off the header, and exposes the destination address (IP2) and source address (IP3) of the fed back original data.
  • the acceleration node D searches the location routing table, and determines the acceleration node (eg, the acceleration node A) that has a corresponding relationship with the destination address (IP2).
  • the acceleration node D searches the source routing table to determine the optimal path from the acceleration node D to the acceleration node A, that is, the acceleration nodes (segment list) to be traversed from the acceleration node D to the acceleration node A.
  • the acceleration node D performs overlay encapsulation on the feedback data to obtain the overlay encapsulation data.
  • the overlay encapsulation data includes the feedback data, the IP of each acceleration node on the optimal path (for example, IPD, IPC, IPB, and IPA), and the IP of the next-hop acceleration node (e.g., IPC).
  • the acceleration node D sends the overlay encapsulated data, and the overlay encapsulated data is forwarded hop by hop until forwarded to the acceleration node A (destination forwarding node).
  • the acceleration node A decapsulates the overlay package data, and obtains the original data fed back.
  • the acceleration node A performs SDK encapsulation on the feedback data.
  • the outer destination address of the SDK encapsulated data is IP2 and the source address is IPA.
  • the acceleration node A sends the SDK package data to the first terminal through the SDK tunnel.
  • the first terminal decapsulates the SDK package data, and obtains the original data that is fed back.
  • FIG. 14 is a schematic diagram of encrypted and forwarded data in an overlay tunnel.
  • the encryption process: when the source acceleration node (e.g., acceleration node A) obtains the original data, it adds an encryption key field and encrypts and pads the original data to obtain encrypted data.
  • the source acceleration node sends the encrypted data to the next-hop acceleration node; the encrypted data is forwarded hop by hop and remains encrypted throughout forwarding, until it reaches the destination acceleration node (e.g., acceleration node D).
  • the acceleration node D performs overlay decapsulation and decryption on the encrypted data, restores the original data, and forwards it.
  • the GA in the traditional method relies on the IP private line network of a cloud vendor to provide acceleration services, and can only encrypt the data to be transmitted at the application layer; the transport layer does not support encryption.
  • in contrast, the method of this embodiment forwards data over the overlay tunnel and can naturally encrypt data within the overlay tunnel, so that confidential data can be protected by double-layer encryption at both the application layer and the transport layer, thereby ensuring data security.
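The encrypt-once, forward-encrypted, decrypt-at-destination behavior of FIG. 14 can be sketched as follows. The key-derivation scheme below (a SHA-256 keystream in counter mode) is a toy stand-in chosen only to keep the example dependency-free; it is not the cipher, key exchange, or padding used by any real deployment.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key (toy counter-mode construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    """Symmetric transform: applying it twice with the same key restores the data."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"shared-overlay-key"            # hypothetical key carried in the key field
original = b"confidential payload"

encrypted = xor(original, key)         # encrypted once, at the source node (A)
# ... forwarded hop by hop; intermediate nodes never see the plaintext ...
restored = xor(encrypted, key)         # decrypted only at the destination node (D)
assert restored == original
```

Application-layer encryption (e.g., TLS inside the payload) would sit on top of this, giving the double-layer protection described above.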
  • the communication system includes a central controller and a plurality of acceleration nodes, and the plurality of acceleration nodes include a first acceleration node and a second acceleration node, wherein the deployment environment of the central controller belongs to a first cloud service provider, and the deployment environment of the multiple acceleration nodes belongs to a second cloud service provider, an application service provider, or a telecom operator.
  • An embodiment of the present application provides an acceleration node.
  • the acceleration node 1500 is described by taking a first acceleration node as an example, and the first acceleration node may be any acceleration node among multiple acceleration nodes.
  • the first acceleration node is configured to implement the functions performed by the first acceleration node in the foregoing method embodiments. Referring to FIG. 15:
  • the acceleration node 1500 includes a forwarding module 1501 and a control module 1502, wherein the forwarding module 1501 is used to implement the function of the forwarding node in the above method embodiment, and the control module 1502 is used to implement the local control in the above method embodiment. function of the device.
  • a forwarding module 1501 configured to receive a data request from the first terminal, where the data request is used to access the destination;
  • control module 1502 configured to obtain a target path, the target path is from a routing table entry generated by the central controller;
  • the forwarding module 1501 is configured to send the data request to the next-hop acceleration node according to the target path, until the data request is forwarded to the second acceleration node, which is configured to forward the data request to the destination end.
  • the forwarding module 1501 is configured to perform step 507 and step 509 in the above-mentioned embodiment corresponding to FIG. 5 .
  • the forwarding module 1501 is further configured to perform steps S42 , S43 and S44 in the example corresponding to FIG. 12 .
  • the forwarding module 1501 is further configured to perform step S45 in the example corresponding to FIG. 12 .
  • the forwarding module 1501 is further configured to perform steps S46 , S51 , S52 and S54 in the example corresponding to FIG. 12 .
  • the forwarding module 1501 is further configured to execute steps S63, S74 and S75 in the example corresponding to FIG. 13.
  • the forwarding module 1501 is further configured to execute steps S64, S65 and S73 in the example corresponding to FIG. 13.
  • the control module 1502 is configured to execute step 508 in the embodiment corresponding to FIG. 5 , and steps S53 , S55 and S56 in the example corresponding to FIG. 12 .
  • the routing table entry includes a source routing table and a location routing table; the control module 1502 is further configured to receive the source routing table and the location routing table sent by the central controller, where the source routing table includes the path from the source acceleration node to the destination acceleration node.
  • the location routing table includes the correspondence between the first IP address and the second acceleration node.
  • the first IP address is the IP address of the destination end for which the user applies for the network acceleration service. When the destination address of the data request is the first IP address, the first acceleration node queries the location routing table to determine the second acceleration node corresponding to the first IP address, the second acceleration node being the destination acceleration node. When the first acceleration node is the source acceleration node, the first acceleration node queries the source routing table according to the second acceleration node and determines a target path from the first acceleration node to the second acceleration node.
  • the forwarding module 1501 is further configured to measure the link status between the first acceleration node and neighboring acceleration nodes to obtain link state information; the control module 1502 is further configured to send to the central controller the link state information obtained by the forwarding module 1501, where the link state information is used by the central controller to generate the source routing table.
  • an SDK plug-in is configured in the first terminal, and the address of the central controller is preset in the SDK plug-in; the forwarding module 1501 is further configured to receive SDK package data from the first terminal through the SDK tunnel,
  • the SDK encapsulation data is the data after encapsulating the data request.
  • the destination address in the header of the SDK encapsulation data is the IP address of the first acceleration node, and the source address in the header is the IP address of the first terminal.
  • the deployment environment of the first acceleration node is a first network device.
  • the first network device is used to receive an access control list (ACL) policy instruction.
  • the ACL policy instruction is used to trigger the first network device to direct data whose destination address is the first IP address to the first acceleration node.
  • the forwarding module 1501 is further configured to receive a data request from the first terminal directed by the first network device according to the ACL policy instruction.
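The ACL-based traffic diversion described above can be illustrated with a small sketch. The rule format, field names, and all addresses below are hypothetical and chosen for demonstration only; they do not reflect the ACL syntax of any particular network device.

```python
# Hypothetical ACL installed on the first network device: packets whose
# destination is the accelerated first IP address are diverted to the
# first acceleration node instead of being forwarded normally.
ACL_RULES = [
    # match destination IP            divert to this acceleration node
    {"match_dst": "198.51.100.7", "divert_to": "IPA"},
]

def apply_acl(packet):
    """Return the next hop chosen by the ACL policy for this packet."""
    for rule in ACL_RULES:
        if packet["dst"] == rule["match_dst"]:
            return rule["divert_to"]   # diverted to the acceleration node
    return packet["dst"]               # no rule matched: normal forwarding

print(apply_acl({"src": "IP1", "dst": "198.51.100.7"}))  # IPA
```

With such a rule in place, the forwarding module receives the diverted data request without any SDK plug-in on the terminal, which is the alternative access mode described here.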
  • the deployment environment of the first acceleration node is a device in a local area network
  • the first terminal is a terminal in the local area network
  • the forwarding module 1501 is further configured to receive a data request from the first terminal through the local area network.
  • the acceleration node 1500 runs in a virtual machine or container provided by the deployment environment.
  • FIG. 16 is a schematic diagram of the architecture of a virtual machine.
  • the architecture of the virtual machine includes a hardware layer 1601 , a virtualization layer 1602 and a virtual machine 1603 .
  • the virtualization layer 1602 includes a hypervisor.
  • the hypervisor is used to manage the real hardware resources of the hardware layer 1601 , and provides hardware resource abstraction for the virtual machine 1603 , thereby providing a running environment for the acceleration node 1500 in the virtual machine 1603 .
  • Hardware layer 1601 may include one or more processors, memory, and storage devices. The storage device and the memory are both connected to the processor.
  • the processor can also be referred to as a processing unit, which can implement certain control functions.
  • the processor may be a general-purpose processor or a special-purpose processor, or the like. Instructions may be stored on the memory, and the instructions may be executed on the processor.
  • the storage device is used to store the source routing table and the location routing table.
  • the hypervisor provides hardware resource abstraction for the virtual machine, so that the acceleration node in the virtual machine executes the method executed by the first acceleration node in the above method embodiment.
  • an embodiment of the present application further provides a central controller, where the central controller is configured to execute the method executed by the central controller in the foregoing method embodiments.
  • the central controller 1700 includes a transceiver module 1701 and a processing module 1702 .
  • a transceiver module 1701 configured to acquire link status information sent by the acceleration node
  • the processing module 1702 is configured to generate a source routing table according to the link state information obtained by the transceiver module 1701, where the source routing table includes a path from the source acceleration node to the destination acceleration node;
  • the transceiver module 1701 is used to obtain the first IP address of the destination end of the user applying for the network acceleration service;
  • the processing module 1702 is further configured to generate a location routing table, where the location routing table includes the correspondence between the first IP address and the second acceleration node;
  • the transceiver module 1701 is further configured to send the location routing table and the source routing table corresponding to the first acceleration node to the first acceleration node.
  • the location routing table is used to guide the first acceleration node to determine the second acceleration node according to the first IP address.
  • the first IP address is the destination address of the data request from the first terminal.
  • the source routing table is used by the first acceleration node to obtain the target path
  • the target path is used to guide the data request to be forwarded to the second acceleration node
  • the second acceleration node is used to forward the data to the destination.
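The central controller's generation of the source routing table from the reported link-state information can be sketched as a shortest-path computation. The topology, link costs, and node names below are invented for illustration, and the patent does not specify a particular path-selection algorithm; Dijkstra's algorithm is used here as one plausible choice.

```python
import heapq

# Link-state information reported by the acceleration nodes:
# node -> {neighbor: measured link cost}. Values are illustrative.
LINKS = {"A": {"B": 1, "C": 4},
         "B": {"A": 1, "C": 1, "D": 5},
         "C": {"A": 4, "B": 1, "D": 1},
         "D": {"B": 5, "C": 1}}

def shortest_path(src, dst):
    """Dijkstra's algorithm: return the minimum-cost node sequence src -> dst."""
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in LINKS[node].items():
            if nbr not in seen:
                heapq.heappush(heap, (cost + w, nbr, path + [nbr]))
    return None

# One entry of the source routing table: the optimal path from A to D.
print(shortest_path("A", "D"))  # ['A', 'B', 'C', 'D']
```

Computing this for every (source, destination) pair yields the per-node source routing tables that the controller then pushes down, matching the segment list (IPA, IPB, IPC, IPD) used in the forwarding example above.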
  • the transceiver module 1701 is a transceiver.
  • the transceiver has the function of sending and/or receiving.
  • the transceiver is replaced by a receiver and/or a transmitter.
  • the transceiver module 1701 is a communication interface.
  • the communication interface is an input-output interface or a transceiver circuit.
  • the input and output interface includes an input interface and an output interface.
  • the transceiver circuit includes an input interface circuit and an output interface circuit.
  • the processing module 1702 is a processor, and the processor is a general-purpose processor or a special-purpose processor or the like.
  • the processor includes a transceiver unit for implementing receiving and transmitting functions.
  • the transceiver unit is a transceiver circuit, or an interface, or an interface circuit.
  • Transceiver circuits, interfaces, or interface circuits for implementing receiving and transmitting functions are deployed separately, or optionally, integrated together.
  • the above-mentioned transceiver circuit, interface or interface circuit is used for reading and writing code or data, or is used for signal transmission or transfer.
  • the transceiver module 1701 is configured to execute step 501 , step 503 , step 504 and step 506 in the above-mentioned embodiment corresponding to FIG. 5 .
  • the processing module 1702 is configured to execute step 502 and step 505 in the above-mentioned embodiment corresponding to FIG. 5 .
  • the processing module 1702 is further specifically configured to: determine the second acceleration node according to the first IP address of the destination end; establish a correspondence between the first IP address and the second acceleration node; and generate the location routing table according to the correspondence.
  • the processing module 1702 is further configured to determine the second acceleration node deployed in the cloud area according to the first IP address.
  • the processing module 1702 is further configured to query the IP address database according to the first IP address to determine the physical location of the destination terminal, and to determine the second acceleration node closest to that physical location.
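The physical-location-based selection of the second acceleration node can be sketched as follows. The node coordinates and the use of great-circle distance are illustrative assumptions, and the IP-geolocation database lookup is stubbed out by passing coordinates directly.

```python
import math

# Hypothetical (latitude, longitude) of deployed acceleration nodes.
NODE_COORDS = {"A": (39.9, 116.4), "B": (31.2, 121.5),
               "C": (22.5, 114.1), "D": (23.1, 113.3)}

def haversine(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def nearest_node(dest_coord):
    """Pick the acceleration node geographically closest to the destination."""
    return min(NODE_COORDS, key=lambda n: haversine(NODE_COORDS[n], dest_coord))

# Destination resolved (by the stubbed-out IP database) to these coordinates.
print(nearest_node((23.0, 113.5)))  # D
```

The chosen node is then recorded in the location routing table as the second acceleration node for that first IP address.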
  • an SDK plug-in is configured in the first terminal, and the address information of the central controller is preset in the SDK plug-in; the transceiver module 1701 is further configured to receive a request sent by the first terminal and to feed back the IP address of the first acceleration node to the first terminal, where the IP address of the first acceleration node is used by the first terminal to send a data request to the first acceleration node through the SDK tunnel.
  • the transceiver module 1701 is further configured to send traffic diversion information to the network management system, the traffic diversion information includes IP information of the destination end, and the traffic diversion information is used to trigger the network management system to send an ACL policy instruction to the first network device , the second acceleration node is an acceleration node deployed in the first network device, and the ACL policy instruction is used to trigger the first network device to direct the data request from the first terminal to the first acceleration node.
  • the transceiver module 1701 is further configured to acquire a mode parameter, where the mode parameter includes a first mode and a second mode, wherein the first mode is used to indicate that the destination of the network acceleration service is a cloud area, The second mode is used to indicate that the destination of the network acceleration service is the second terminal or the server.
  • an embodiment of the present application provides a central controller, and the central controller 1800 is used to implement the method executed by the central controller in the above method embodiments.
  • the central controller 1800 may include one or more processors 1801, and the processors 1801 may also be referred to as processing units, which may implement certain control functions.
  • the processor 1801 may be a general-purpose processor or a special-purpose processor, or the like.
  • the central processing unit can be used to control the central controller, execute software programs, and process data of the software programs.
  • the processor 1801 may also store instructions 1803, and the instructions 1803 may be executed by the processor, so that the central controller 1800 executes the methods described in the above method embodiments.
  • the processor 1801 may include a transceiver unit for implementing the functions of receiving and transmitting.
  • the transceiver unit may be a transceiver circuit, or an interface, or an interface circuit.
  • Transceiver circuits, interfaces or interface circuits used to implement receiving and transmitting functions may be separate or integrated.
  • the above-mentioned transceiver circuit, interface or interface circuit can be used for reading and writing code/data, or for signal transmission or transfer.
  • the central controller 1800 may include a circuit, and the circuit may implement the function of sending or receiving in the above method embodiments.
  • the central controller 1800 may include one or more memories 1802 on which instructions 1804 may be stored, and the instructions may be executed on the processor, so that the central controller 1800 executes the methods described in the above method embodiments.
  • data may also be stored in the memory.
  • instructions and/or data may also be stored in the processor.
  • the processor and the memory can be provided separately or integrated together.
  • the central controller 1800 may further include a transceiver 1805 and/or an antenna 1806 .
  • the processor 1801 may be called a processing unit, and controls the central controller 1800 .
  • the transceiver 1805 may be referred to as a transceiver unit, a transceiver, a transceiver circuit, a transceiver device or a transceiver module, etc., and is used to implement a transceiver function.
  • An embodiment of the present application provides a computer program product; the computer program product includes computer program code which, when executed by a computer, enables the computer to implement the method executed by the central controller in the above method embodiments.
  • An embodiment of the present application provides a computer program product; the computer program product includes computer program code which, when executed by a computer, enables the computer to implement the method executed by the first acceleration node in the above method embodiments.
  • An embodiment of the present application provides a computer-readable storage medium for storing a computer program or instructions which, when executed, cause the computer to execute the method executed by the central controller in the above method embodiments.
  • An embodiment of the present application provides a computer-readable storage medium for storing a computer program or instructions which, when executed, cause the computer to execute the method executed by the first acceleration node in the above method embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Provided are a data forwarding method and a related apparatus, which are used to improve the coverage of a network acceleration service. In an embodiment of the present application, a communication system includes a central controller and a plurality of acceleration nodes, the plurality of acceleration nodes including a first acceleration node and a second acceleration node, where a deployment environment of the central controller belongs to a first cloud service provider and a deployment environment of the plurality of acceleration nodes belongs to a second cloud service provider, an application service provider, or a telecom operator. The method includes the following steps: the first acceleration node receives a data request from a first terminal, the data request being used to access a destination end; the first acceleration node acquires a target path, the target path coming from a routing table entry generated by the central controller; and the first acceleration node sends the data request to a next-hop acceleration node according to the target path until the data request is forwarded to the second acceleration node, the second acceleration node being used to forward the data request to the destination end. A plurality of forwarding nodes are flexibly deployed, thereby improving the coverage of the data forwarding network.
PCT/CN2022/086603 2021-04-16 2022-04-13 Procédé de transfert de données et appareil associé WO2022218341A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110411432.1A CN115225631A (zh) 2021-04-16 2021-04-16 一种数据转发方法及相关装置
CN202110411432.1 2021-04-16

Publications (1)

Publication Number Publication Date
WO2022218341A1 true WO2022218341A1 (fr) 2022-10-20

Family

ID=83605570

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/086603 WO2022218341A1 (fr) 2021-04-16 2022-04-13 Procédé de transfert de données et appareil associé

Country Status (2)

Country Link
CN (1) CN115225631A (fr)
WO (1) WO2022218341A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111277630A (zh) * 2020-01-13 2020-06-12 腾讯科技(深圳)有限公司 一种路由控制方法、装置、电子设备和存储介质
CN111683013A (zh) * 2020-06-08 2020-09-18 腾讯科技(深圳)有限公司 一种加速网络的路由方法和加速网络


Also Published As

Publication number Publication date
CN115225631A (zh) 2022-10-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22787563

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22787563

Country of ref document: EP

Kind code of ref document: A1