WO2024098731A1 - Computing power resource notification method, computing power traffic processing method, communication device and medium - Google Patents

Computing power resource notification method, computing power traffic processing method, communication device and medium

Info

Publication number
WO2024098731A1
WO2024098731A1 PCT/CN2023/097507
Authority
WO
WIPO (PCT)
Prior art keywords
computing power
computing
resource
node
type
Prior art date
Application number
PCT/CN2023/097507
Other languages
English (en)
French (fr)
Inventor
Wei Yuehua
Zhang Zheng
Original Assignee
ZTE Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation
Publication of WO2024098731A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/645: Splitting route computation layer and forwarding layer, e.g. routing according to path computational element [PCE] or based on OpenFlow functionality
    • H04L45/655: Interaction between route computation entities and forwarding entities, e.g. for route determination or for flow table update
    • H04L45/74: Address processing for routing
    • H04L45/741: Routing in networks with a plurality of addressing schemes, e.g. with both IPv4 and IPv6
    • H04L45/745: Address table lookup; Address filtering

Definitions

  • the embodiments of the present application relate to the field of communication technology, and in particular to a computing resource notification method, a routing table establishment method, a computing traffic processing method, a communication device, and a computer-readable storage medium.
  • the computing resources in the computing network are distributed in separate computing nodes.
  • the computing power routes are published and announced through the Border Gateway Protocol (BGP) or various interior gateway protocols (IGPs) to achieve distributed computing power resource discovery and invocation.
  • BGP: Border Gateway Protocol
  • IGP: Interior Gateway Protocol
  • IPv4: Internet Protocol version 4
  • IPv6: Internet Protocol version 6
  • the embodiments of the present application provide a computing power resource notification method, a routing table establishment method, a computing power flow processing method, a communication device and a computer-readable storage medium, aiming to reduce the difficulty of management and planning and deployment of computing power resources.
  • an embodiment of the present application provides a computing power resource notification method, which is applied to a first node, and the method includes: setting the address family type in a border gateway protocol message to a computing power routing type, wherein the computing power routing is a routing corresponding to the computing power resource; filling the computing power resource information connected to the first node into the border gateway protocol message; and sending the border gateway protocol message to a second node.
  • an embodiment of the present application provides a method for establishing a routing table, which is applied to a second node.
  • the method includes: generating a computing power routing table whose next hop is the first node based on a computing power resource notification message notified by the first node; wherein the computing power resource notification message is obtained according to the computing power resource notification method described in the first aspect.
  • an embodiment of the present application provides a computing power flow processing method, which is applied to a second node, and the method includes: receiving computing power flow sent to a computing power resource connected to a first node; obtaining computing power resource information and a computing power encapsulation type according to a border gateway protocol message announced by the first node; encapsulating the computing power flow according to the computing power resource information and the computing power encapsulation type; and sending the encapsulated computing power flow to the first node; wherein the border gateway protocol message is obtained according to the computing power resource announcement method described in the first aspect.
  • an embodiment of the present application provides a computing power flow processing method, which is applied to a first node, and the method includes: receiving computing power flow sent by a second node, wherein the computing power flow is obtained according to the computing power flow processing method described in any one of the third aspects; decapsulating the computing power flow to obtain target computing power resource information and the decapsulated computing power flow; and forwarding the decapsulated computing power traffic to the target computing power resource according to the computing power routing table and the target computing power resource information.
  • an embodiment of the present application provides a communication device, comprising: at least one processor; at least one memory for storing at least one program; when at least one of the programs is executed by at least one of the processors, the computing power resource notification method as described in the first aspect, or the routing table establishment method as described in the second aspect, or the computing power traffic processing method as described in the third aspect or the fourth aspect is implemented.
  • an embodiment of the present application provides a computer-readable storage medium storing a processor-executable program; when executed by a processor, the program implements the computing power resource notification method described in the first aspect, the routing table establishment method described in the second aspect, or the computing power traffic processing method described in the third or fourth aspect.
  • in this way, the computing power resources are announced accurately, avoiding confusion with other non-computing power routes.
  • By encapsulating the computing power traffic based on the computing power resource identifier, the traffic is guided to the correct computing power resource for processing, further simplifying the planning and management of computing power resources and making the announcement of computing power resources simpler and more efficient.
  • FIG1 is a schematic diagram of a network communication system provided by an embodiment of the present application.
  • FIG2 is a flow chart of a computing resource notification method provided by an embodiment of the present application.
  • FIG3 is a schematic diagram of an R1 node and a connected computing resource network provided in an example of the present application.
  • FIG4 is a schematic diagram of a BGP message format provided in an example of the present application.
  • FIG5 is a flow chart of a method for establishing a routing table provided in an embodiment of the present application.
  • FIG6 is a flow chart of a method for processing computing power flow provided in an embodiment of the present application.
  • FIG7 is a flow chart of a computing power flow processing method provided by another embodiment of the present application.
  • FIG8 is a schematic diagram of a BGP network system provided by an example of the present application.
  • FIG9 is a schematic diagram of a message structure when an R1 node notifies computing resources according to an example of the present application.
  • FIG10 is a schematic diagram of the structure of a computing power header provided in an example of the present application.
  • FIG11 is a schematic diagram of the structure of a target computing resource identifier and an encapsulation node identifier provided in an example of the present application;
  • FIG12 is a schematic diagram of an R2 node and a computing resource network connected thereto provided in an example of the present application;
  • FIG13 is a computing resource notification message structure of an R2 node provided in an example of the present application.
  • FIG14 is a schematic diagram of the structure of a communication device provided in an embodiment of the present application.
  • computing power network is a new type of information infrastructure that allocates and flexibly schedules computing resources, storage resources, and network resources on demand between cloud, edge, and end according to business needs.
  • the essence of computing power network is a computing power resource service.
  • corporate customers or individual users will not only need networks and clouds, but also need to schedule computing tasks to the optimal computing nodes through the optimal network path.
  • Computing power resources may be distributed anywhere in the network, and an efficient computing power network will use networking technology to integrate computing power resources.
  • the computing power network includes the computing network infrastructure layer, the orchestration management layer, and the operation service layer. It is a new information service system of "connection + computing power + capability". In order to realize computing power scheduling, it is a key link to announce the routes corresponding to computing power resources. For the convenience of description, the abbreviation “computing power routing” is used in the following text to replace "routes corresponding to computing power resources”. The computing power routing can be announced to distributed router nodes or to centralized controller devices. The controller correctly schedules the user's computing power request based on these routing information.
  • Border Gateway Protocol (BGP) is an important routing protocol that provides interconnection between networks. It achieves reachability by maintaining IP routing tables or "prefix" tables, and it is a path-vector routing protocol: BGP makes routing decisions based on paths, network policies, or rule sets.
  • computing power resources are distributed in separate computing power nodes.
  • BGP or various interior gateway protocols (IGP) can connect computing power nodes.
  • computing power routes can be published and announced through BGP and IGP, thereby realizing distributed computing power resource discovery and invocation.
  • in such schemes, computing power resources are all identified by IPv4 or IPv6 routes, so that routing announcements can be made in the BGP/IGP protocols.
  • computing resources are of various types; a resource may be an entire server or just a single CPU. Assigning IPv4 or IPv6 addresses to all of these computing resources in order to announce them correctly would waste a large number of limited IP addresses and increase the difficulty of managing, planning, and deploying computing resources. In addition, assigning existing IP addresses to computing resources and announcing them in that form can easily cause confusion with non-computing routing announcements, which causes difficulties in device processing and may introduce routing security issues.
  • the present application provides a computing power resource notification method, a routing table establishment method, a computing power flow processing method, a communication device and a computer-readable storage medium.
  • the computing power resource notification is performed to achieve accurate notification of the computing power resources and avoid confusion with other non-computing power routes.
  • the computing power resources are distinguished, not limited to IPv4 or IPv6 addresses, reducing the waste of IPv4 or IPv6 addresses, and reducing the difficulty of management and planning and deployment of computing power resources.
  • the present application also encapsulates the computing power flow based on the computing power resource identifier, wherein the computing power flow is the flow data that needs to provide computing power resource services, thereby guiding the computing power flow to the correct computing power resource for processing, further simplifying the planning and management of computing power resources, and making the notification of computing power resources simpler and more efficient.
  • Figure 1 is a schematic diagram of a network communication system provided in an embodiment of the present application.
  • the communication system includes a first network device 110, a second network device 120, a third network device 130, a fourth network device 140, a fifth network device 150, and a sixth network device 160; each network device is communicatively connected to each other.
  • the technical solution of the embodiment of the present application can be applied to various communication systems, such as the computing-aware network, the IP bearer network, the Wideband Code Division Multiple Access (WCDMA) mobile communication system, the Evolved Universal Terrestrial Radio Access Network (E-UTRAN) system, the Next Generation Radio Access Network (NG-RAN) system, the Long Term Evolution (LTE) system, the Worldwide Interoperability for Microwave Access (WiMAX) communication system, fifth-generation (5G) systems such as New Radio (NR), and future communication systems such as 6G systems.
  • WCDMA: Wideband Code Division Multiple Access
  • E-UTRAN: Evolved Universal Terrestrial Radio Access Network
  • NG-RAN: Next Generation Radio Access Network
  • LTE: Long Term Evolution
  • WiMAX: Worldwide Interoperability for Microwave Access
  • 5G: Fifth Generation
  • NR: New Radio Access Technology
  • the technical solution of the embodiment of the present application can be applied to various communication technologies, such as microwave communication, optical wave communication, millimeter wave communication, etc.
  • the embodiment of the present application does not limit the specific technology and specific device form used.
  • the first network device 110, the second network device 120, the third network device 130, the fourth network device 140, the fifth network device 150, and the sixth network device 160 in the embodiment of the present application are collectively referred to as network devices hereinafter, and can be any network device with communication capabilities; a network device can be a router, a server, a vehicle with communication functions, a smart car, a mobile phone, a wearable device, a tablet computer (Pad), a computer with a transceiver function, a virtual reality (VR) device, an augmented reality (AR) device, a network device in industrial control, in self-driving, in remote medical surgery, in a smart grid, in transportation safety, in a smart city, in a smart home, or in a vehicle communication network, etc.
  • the embodiment of the present application does not limit this.
  • FIG2 is a flow chart of a computing resource notification method provided in an embodiment of the present application.
  • the computing resource notification method can be applied to, but not limited to, network devices such as routers, servers, and terminals, or the network communication system provided in the above embodiments.
  • the computing resource notification method is applied to the first node, and may include, but is not limited to, steps S1100, S1200, and S1300.
  • Step S1100: Set the address family type in the Border Gateway Protocol message to the computing power routing type.
  • the computing power routing is the routing corresponding to the computing power resources.
  • a new type for characterizing the computing power routing is added to the type of BGP address family identifier.
  • the border gateway protocol includes an address family identifier and a sub-address family identifier; step S1100 includes: setting the value of the address family identifier to the address family identifier value corresponding to the computing power routing type; setting the value of the sub-address family identifier to the resource type value corresponding to the resource type of the computing power resource.
  • the resource type of the computing power resources includes at least one of the following: untyped computing power resources; hardware type computing power resources; software type computing power resources.
  • the sub-address family identifier value corresponding to the resource type of computing resources is added to the Subsequent Address Family Identifiers (SAFI) in the BGP message to represent the resource type of computing resources corresponding to the computing power route.
  • SAFI Subsequent Address Family Identifiers
  • the types of computing resources can be divided into hardware types, such as graphics processing unit (GPU), central processing unit (CPU), memory, etc., and can also include software types, that is, services that provide specific functions.
  • The computing power routing type value of the AFI confirms that the current BGP message is a computing power resource notification message.
  • The resource type value of the SAFI confirms the resource type of the computing power resource carried by the current BGP message, distinguishing it from other non-computing power routes to avoid confusion. This enables more accurate computing power resource notification and simplifies the planning and management of computing power resources.
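As an illustration only (not part of the patent disclosure), the AFI/SAFI setting in step S1100 can be sketched as follows. The AFI value and the SAFI code points are hypothetical: no IANA assignment exists for a computing power address family, and the patent does not fix concrete values.

```python
import struct

# Hypothetical code points for the computing power address family.
AFI_COMPUTING_POWER = 0x4001  # assumed AFI value for "computing power routing"
SAFI_UNTYPED = 1              # untyped computing power resources (assumed)
SAFI_HARDWARE = 2             # hardware-type resources, e.g. GPU/CPU/memory (assumed)
SAFI_SOFTWARE = 3             # software-type resources, i.e. services (assumed)

def encode_afi_safi(afi: int, safi: int) -> bytes:
    """Encode the AFI (2 octets) + SAFI (1 octet) prefix used by the BGP
    multiprotocol extensions (RFC 4760 MP_REACH_NLRI layout)."""
    return struct.pack("!HB", afi, safi)

# Mark a message as carrying hardware-type computing power routes.
header = encode_afi_safi(AFI_COMPUTING_POWER, SAFI_HARDWARE)
```

A receiver that sees this 3-octet prefix can immediately classify the announcement as a computing power route and select the resource-type handling, which is the separation from ordinary IPv4/IPv6 routes that the text describes.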
  • Step S1200: Fill the computing power resource information connected to the first node into the Border Gateway Protocol message.
  • the computing power resource information connected to the first node can be the information of the computing power resources of the first node itself, such as the CPU, GPU or other modules or devices with computing power of the first node itself, or it can be the information of the computing power resources of other network devices that are communicatively connected to the first node, such as information of the server connected to the first node; it can be understood that the computing power resource information connected to the first node includes the information of computing power resources such as internal or external chips, modules, devices with computing power that are communicatively connected to the first node, and no specific limitation is made here.
  • the computing power resource information includes at least one computing power reachable information
  • the computing power reachable information includes a computing power resource identification type field, a computing power resource identification length field, and a computing power resource identification value field.
  • one computing power reachable information corresponds to one computing power resource.
  • the computing power resource information is filled into the BGP UPDATE message, and the computing power reachability information is expressed in TLV (Type, Length, Value) format, where Type corresponds to the computing power resource identification type field, Length corresponds to the computing power resource identification length field, and Value corresponds to the computing power resource identification value field.
  • TLV Type, Length, Value
  • in the computing power resource information, the computing power resource identification type or the computing power resource identification value in each piece of computing power reachable information is different.
  • in some embodiments, each piece of computing power reachable information has a different computing power resource identification type and a different computing power resource identification value.
  • in some embodiments, the computing power resource identification types of the pieces of computing power reachable information are different, but the computing power resource identification values are the same.
  • in some embodiments, the computing power resource identification type of each piece of computing power reachable information is the same, but the computing power resource identification values are different.
  • the computing power reachable information can be distinguished, making the configuration of the computing power resource identifier value more flexible and simplifying the planning and management of computing power resources.
  • the computing power resource identification type includes at least one of the following: numerical type; IPv4 address type; IPv6 address type; multi-protocol label switching type.
  • the numerical type directly uses a uniquely identifiable numerical value as the computing power resource identification
  • the IPv4 address type uses the IPv4 address as the computing power resource identification
  • the IPv6 address type uses the IPv6 address as the computing power resource identification
  • the multi-protocol label switching type uses a value generated by multi-protocol label switching technology as the computing power resource identifier. Since an AFI value identifying the message as a computing power type is set in the BGP message, the message can be distinguished from non-computing power routing, so IPv4 addresses and IPv6 addresses can be reused as identifiers.
  • the computing power resource identifier length corresponding to the computing power resource identifier type includes at least one of the following: 32 bits, 64 bits, 128 bits, 256 bits, and 512 bits. It can be understood that the computing power resource identifier length can be set according to the number of computing power resources, that is, the number of numerical computing power resource identifiers. When the number of numerical computing power resource identifiers is small, a shorter computing power resource identifier length can be set. When the number of numerical computing power resource identifiers is large, a longer computing power resource identifier length can be set. This application does not specifically limit the computing power resource identifier length.
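A minimal sketch of the TLV-format computing power reachable information described above. The type code points are assumptions (the patent assigns none), and the length field here counts octets, whereas the patent states identifier lengths in bits:

```python
import struct

# Assumed code points for the computing power resource identification type.
ID_TYPE_NUMERIC = 1  # plain uniquely identifiable numeric value
ID_TYPE_IPV4 = 2     # IPv4 address reused as identifier
ID_TYPE_IPV6 = 3     # IPv6 address reused as identifier
ID_TYPE_MPLS = 4     # value generated by MPLS technology

def encode_reachable_tlv(id_type: int, value: bytes) -> bytes:
    """One TLV per computing power resource:
    Type  = computing power resource identification type (1 octet, assumed)
    Length = identifier length in octets (1 octet, assumed)
    Value = computing power resource identifier."""
    return struct.pack("!BB", id_type, len(value)) + value

# Numeric identifier 1 carried as a 32-bit value.
tlv = encode_reachable_tlv(ID_TYPE_NUMERIC, struct.pack("!I", 1))
```

Several such TLVs can be concatenated in one UPDATE message, one per announced resource, matching the "one computing power reachable information corresponds to one computing power resource" rule.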
  • the computing power resource information includes computing power resource attributes, and the computing power resource attributes include at least one of the following: the number of computing power resources; the unit of computing power resources.
  • the computing power resource notification may not notify the computing power resource identifier, but only the type and quantity of the computing power resource.
  • the computing power resource information includes computing power resource attributes, and the computing power resource attributes include the computing power resource identifier, ...
  • the computing power resource attributes include the quantity of computing power resources and the unit of computing power resources; the quantity of computing power resources is the aggregated computing power of the same type, and the unit of computing power resources is the computing power unit of the corresponding type of computing power resource.
  • FIG3 is a schematic diagram of an R1 node and a connected computing power resource network provided in an example of the present application.
  • the computing power network connected to the R1 node includes computing power GPU-1, computing power GPU-2, computing power CPU-1, and computing power CPU-2, and the R1 node is respectively connected to computing power GPU-1, computing power GPU-2, computing power CPU-1, and computing power CPU-2 for communication.
  • the connected computing power GPU-1 and computing power GPU-2 can be aggregated and announced together, and the service capabilities of computing power CPU-1 and computing power CPU-2 can likewise be aggregated and announced; in this case, the R1 node does not need to announce the computing power resource identifier of each GPU and CPU separately, and the computing power resource announcement is made by carrying, in the BGP message, the SAFI corresponding to hardware-type computing power resources and the AFI corresponding to the computing power routing type, and writing in the corresponding computing power resource attributes.
  • the GPU connected to the R1 node can provide a total of 200 Giga Floating-point Operations Per Second (GFLOPS) of computing power resources.
  • GFLOPS Giga Floating-point Operations Per Second
  • Total Length indicates the length of the subsequent content
  • 200 indicates the capacity of the announced computing power resources
  • Type 1 indicates the unit of computing power resources.
  • the BGP message carries computing resource attributes, and the corresponding number of computing resources and units of computing resources are written in them.
  • hardware-type computing resources may also include other hardware with computing capabilities, such as memory, and the corresponding announcements are similar, with different units of computing resources carried in the computing resource attributes to distinguish the types; for example, if the computing resource is memory of 300 TB, then in the corresponding computing resource attributes the quantity of computing resources is written as 300 and the unit of computing resources is written as TB.
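The computing power resource attribute (a Total Length field followed by quantity and unit, as in the 200 GFLOPS example above) might be laid out as in the following sketch; the field widths and the unit code points are assumptions for illustration:

```python
import struct

# Assumed unit code points. The text's "Type 1 indicates the unit of
# computing power resources" is read here as GFLOPS; TB is hypothetical.
UNIT_GFLOPS = 1
UNIT_TB = 2

def encode_resource_attr(quantity: int, unit_type: int) -> bytes:
    """Computing power resource attribute sketch:
    Total Length (1 octet) = length of the subsequent content,
    followed by quantity (4 octets) and unit type (1 octet)."""
    body = struct.pack("!IB", quantity, unit_type)
    return struct.pack("!B", len(body)) + body

# 200 GFLOPS of aggregated GPU computing power, as in the R1 example.
attr = encode_resource_attr(200, UNIT_GFLOPS)
# 300 TB of memory would be encode_resource_attr(300, UNIT_TB).
```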
  • the method further includes: filling a computing power encapsulation type into a border gateway protocol message, wherein the computing power encapsulation type is used to indicate an encapsulation type used for computing power traffic.
  • the encapsulation type includes at least one of the following: IPv4 type; IPv6 type; Multi-Protocol Label Switching type; Segment Routing based on IPv6 type; Bit Index Explicit Replication type.
  • the encapsulation type includes but is not limited to IPv4, IPv6, Multi-Protocol Label Switching (MPLS), Segment Routing IPv6 (SRv6), Bit Index Explicit Replication (BIER), etc.
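Purely as a sketch of the ordering (a computing power header carrying the target resource identifier, then the payload, behind a tunnel tag), not of any real IPv4/MPLS/SRv6/BIER header format; all code points below are assumptions:

```python
# Assumed encapsulation-type code points, one per listed tunnel family.
ENCAP_IPV4, ENCAP_IPV6, ENCAP_MPLS, ENCAP_SRV6, ENCAP_BIER = range(1, 6)

def encapsulate(flow: bytes, encap_type: int, target_id: bytes) -> bytes:
    """Prepend a minimal computing power header (identifier length +
    identifier) and a 1-octet tunnel tag. Real SRv6/MPLS/BIER headers are
    far richer; this only shows the layering order."""
    power_header = bytes([len(target_id)]) + target_id
    tunnel_tag = bytes([encap_type])
    return tunnel_tag + power_header + flow
```

The receiving first node would strip the tunnel header, read the target computing power resource identifier from the computing power header, and forward the decapsulated flow, as the fourth aspect describes.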
  • the computing power resource notification method further includes: filling the computing power extension group attribute into the border gateway protocol message, wherein the computing power extension group attribute is used to carry computing power site information.
  • the computing power extension group attribute information includes computing power site information.
  • the computing power GPU-1 and computing power GPU-2 of the R1 node belong to computing power site 1, and the computing power CPU-1 and computing power CPU-2 belong to computing power site 2.
  • the BGP message includes the computing power resource information corresponding to the computing power GPU-1 and computing power GPU-2, and carries the computing power site information through the computing power extended community attribute to indicate that the computing power GPU-1 and computing power GPU-2 in the current BGP message belong to computing power site 1.
  • When the computing power resources of computing power site 2 are announced, the BGP message includes the computing power resource information corresponding to the computing power CPU-1 and computing power CPU-2, and carries the computing power site information through the computing power extended community attribute to indicate that the computing power CPU-1 and computing power CPU-2 in the current BGP message belong to computing power site 2. Since different resources belong to different computing power sites and the computing power extended community attribute distinguishes the sites, computing power resource identifiers can be allocated per site.
  • in computing power site 1, GPU-1 and GPU-2 are assigned computing power resource identifiers 1 and 2 respectively, and in computing power site 2, CPU-1 and CPU-2 can also be assigned computing power resource identifiers 1 and 2 respectively; the same computing power resource identifier value can be set for computing power resources at different sites without affecting the accurate identification and announcement of computing power resources, thereby simplifying identifier allocation.
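The effect of scoping identifiers by computing power site can be illustrated with a toy table keyed on the (site, identifier) pair; all names are illustrative:

```python
# Because the extended community attribute carries the computing power
# site, the (site, identifier) pair is the effective key: identical
# identifier values at different sites do not collide.
routes: dict[tuple[int, int], str] = {}

def install(site: int, resource_id: int, next_hop: str) -> None:
    routes[(site, resource_id)] = next_hop

install(1, 1, "R1")  # GPU-1 at computing power site 1
install(1, 2, "R1")  # GPU-2 at computing power site 1
install(2, 1, "R1")  # CPU-1 at site 2 reuses identifier 1 without conflict
```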
  • Step S1300: Send a Border Gateway Protocol message to the second node.
  • When establishing a link with a neighbor, the first node sends a BGP message to the adjacent node.
  • the BGP message at this time is an OPEN message.
  • the OPEN message carries an AFI value representing the type of computing resources and a SAFI value representing the resource type of computing resources.
  • the first node uses the OPEN message to notify the neighboring node of its capability, informing that it has the capability to notify computing resources.
  • the first node receives the OPEN message from the neighboring node to confirm whether the neighboring node has the capability to notify computing resources.
  • After the link is established, the first node sends a BGP message to the neighboring nodes that have the capability to announce computing power resources.
  • the BGP message at this time is an UPDATE message.
  • the UPDATE message carries an AFI value that represents the computing power routing type and a SAFI value that represents the resource type of computing power resources, as well as computing power resource information connected to the first node.
  • the first node announces computing power resources to the neighboring node through the UPDATE message. When announcing computing power resources through the UPDATE message, it notifies only the neighboring nodes that have the capability to announce computing power resources, preventing nodes without this capability from receiving unsupported computing power traffic and causing processing errors. It can be understood that the second node and the neighboring node refer to the same entity.
  • the border gateway protocol message is an update message; the method further includes: configuring the next hop attribute as the loopback address of the first node; and filling the next hop attribute into the update message.
  • the loopback address is used as an identifier of the corresponding node, such as the loopback address of the first node can be used as the identifier of the first node.
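The OPEN-message capability negotiation described above resembles the BGP multiprotocol capability (capability code 1, RFC 4760); a sketch using the same hypothetical computing power AFI/SAFI as earlier:

```python
import struct

CAP_MULTIPROTOCOL = 1        # RFC 4760 multiprotocol capability code
AFI_COMPUTING_POWER = 0x4001  # hypothetical AFI, as assumed above

def mp_capability(afi: int, safi: int) -> bytes:
    """Capability TLV: code (1), length (1), AFI (2), reserved (1), SAFI (1)."""
    return struct.pack("!BBHBB", CAP_MULTIPROTOCOL, 4, afi, 0, safi)

def neighbor_supports(cap_bytes: bytes, afi: int, safi: int) -> bool:
    """Check a received OPEN capability for computing power announcement
    support, so UPDATEs are sent only to capable neighbors."""
    code, _length, n_afi, _res, n_safi = struct.unpack("!BBHBB", cap_bytes)
    return code == CAP_MULTIPROTOCOL and (n_afi, n_safi) == (afi, safi)
```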
  • FIG5 is a flow chart of a method for establishing a routing table provided in an embodiment of the present application.
  • the method for establishing a routing table can be applied to, but not limited to, network devices such as routers, servers, terminals, or network communication systems provided in the above embodiments.
  • the method for establishing a routing table is applied to the second node, and can include, but is not limited to, step S2100.
  • Step S2100: Generate a computing power routing table with the next hop being the first node according to the computing power resource notification message announced by the first node; wherein the computing power resource notification message is obtained according to the computing power resource notification method of any of the above embodiments.
  • the first node notifies computing power resources related information such as computing power reachability information, computing power encapsulation type information, computing power extended group attribute information, etc. through BGP messages as needed.
  • each neighbor with computing power resource notification capability forms a computing power routing table and selects an encapsulation method for computing power traffic.
  • the next hop in the computing power routing table can be the IP address of the service site that provides computing power resources, or the IP address of the BGP device connected to the service site that provides the computing power resources.
  • the first node represents a node that announces computing resources to neighboring nodes
  • the second node represents a neighboring node that receives computing resource announcements from other nodes.
  • Each node in the same network can be either a first node or a second node.
  • FIG6 is a flow chart of a computing power flow processing method provided in an embodiment of the present application.
  • the computing power flow processing method can be applied to, but not limited to, network devices such as routers, servers, terminals, or network communication systems provided in the above embodiments.
  • the computing power flow processing method is applied to the second node, and may include but is not limited to steps S3100, S3200, S3300, and S3400.
  • Step S3100 Receive computing power traffic sent to the computing power resources connected to the first node.
  • the second node receives the computing power flow, determines its target computing power resource according to the requirements of the configuration, controller, or orchestrator, and consults the computing power routing table. If the target computing power resource is in the computing power resource network connected to the second node, the flow is forwarded directly to that resource for processing; if the target computing power resource is on a remote device, that is, in a computing power resource network connected to another node, such as the computing power resource network connected to the first node, the computing power flow must be encapsulated before being sent to the target node.
  • Step S3200 Obtain computing power resource information and computing power encapsulation type according to the Border Gateway Protocol message notified by the first node.
  • the second node first establishes a computing power routing table according to the Border Gateway Protocol message announced by the first node; after receiving a computing power flow whose target computing power resource is in the computing power resource network connected to the first node, it obtains the computing power resource information and computing power encapsulation type of the first node from the computing power routing table.
  • Step S3300 Encapsulate the computing power flow according to the computing power resource information and the computing power encapsulation type.
  • the second node encapsulates the computing power flow according to the computing power resource information and the computing power encapsulation type obtained from the first node.
  • encapsulating the computing power flow includes at least one of the following: performing computing power head encapsulation and outer layer encapsulation on the computing power flow; performing outer layer encapsulation on the computing power flow.
  • the computing power head encapsulation of the computing power flow is omitted, and only the outer layer encapsulation of the computing power flow is performed.
  • performing computing power head encapsulation and outer layer encapsulation on the computing power flow includes: encapsulating, in the computing power head, at least one of the computing power resource identifier or the second node identifier carried in the computing power resource information; and performing outer layer encapsulation on the computing power flow according to the encapsulation type in the computing power encapsulation type.
  • the second node identifier is a loopback address of the second node.
  • the computing power head encapsulation includes a target computing power resource identifier, an encapsulation node identifier, and related information.
  • the target computing power resource identifier is the computing power resource identifier carried in the computing power resource information corresponding to the first node
  • the encapsulation node identifier is the second node identifier, that is, the information of the node that performs the computing power encapsulation.
  • the corresponding encapsulation node identifier can use the same identifier format as the target computing power resource identifier.
  • the destination IP address of the computing power traffic is set to the loopback address of the first node (target node) to perform outer layer encapsulation of the computing power traffic.
  • outer layer encapsulation of computing power flow includes: when the encapsulation type in the computing power encapsulation type is SRv6, encapsulating the computing power resource identifier as a parameter into the last segment in the segment list corresponding to the encapsulation type.
  • the computing power head includes computing power site information.
  • when the target computing power resource belongs to a computing power site and the computing power head is encapsulated, the computing power site information corresponding to the target computing power resource needs to be encapsulated together with it.
  • Step S3400 Sending encapsulated computing power traffic to the first node.
  • the border gateway protocol message is obtained according to the computing power resource notification method in the above embodiment.
  • the second node sends the encapsulated computing power traffic to the first node.
  • After the first node receives the encapsulated computing power traffic, it decapsulates the traffic according to its own computing power routing table and forwards the computing power traffic to the corresponding computing power resource in its connected computing power resource network for processing.
  • FIG7 is a flow chart of a computing power flow processing method provided by another embodiment of the present application.
  • the computing power flow processing method can be applied to, but not limited to, network devices such as routers, servers, terminals, or network communication systems provided by the above embodiments.
  • the computing power flow processing method is applied to the first node, and may include but is not limited to steps S4100, S4200, and S4300.
  • Step S4100 receiving the computing power flow sent by the second node.
  • the computing power flow is obtained according to the computing power flow processing method provided in the above embodiments.
  • the first node receives the encapsulated computing power flow sent by the second node.
  • Step S4200 Decapsulate the computing power flow to obtain target computing power resource information and decapsulated computing power flow.
  • the first node decapsulates the computing power flow to obtain target computing power resource information and decapsulated computing power flow.
  • the target computing power resource information includes a computing power resource identifier.
  • the target computing power resource information includes a computing power site identifier.
  • Step S4300 forward the decapsulated computing traffic to the target computing resource according to the computing routing table and the target computing resource information.
  • the first node forwards the decapsulated computing traffic to the corresponding target computing resource in the computing resource network connected to the first node for processing according to its own computing routing table and the target computing resource information.
  • when the target computing power resource information includes a computing power resource identifier, the first node sends the decapsulated computing power flow to the computing power resource corresponding to that identifier for processing.
  • when the target computing power resource information includes a computing power site identifier, the first node forwards the decapsulated computing power traffic to the computing power site corresponding to that identifier, where it is processed by the computing power resources in that site.
  • the first node represents the target node of the computing power flow
  • the second node represents the node that encapsulates and sends the computing power flow.
  • the first node can have one or more, and the second node can also have one or more, which is not specifically limited in this application.
  • Figure 8 is a schematic diagram of a BGP network system provided in an example of the present application. As shown in Figure 8, the network includes nodes R1, R2, R3, R4, R5, and R6, where nodes R1, R2, R4, and R5 each have a corresponding connected computing power resource network.
  • R1 node, R2 node, R4 node and R5 node all support computing power routing and are connected to their corresponding computing power resource networks as computing power gateway devices.
  • R1 node, R2 node, R3 node, R4 node, R5 node and R6 node will form a BGP-based MESH network.
  • Nodes R1, R2, R4, and R5 add the computing power capability announcement to the BGP OPEN message, which is specifically manifested as follows:
  • the value of the advertised capability code is 1, indicating the Multiprotocol Extensions capability (Multiprotocol BGP).
  • this is followed by the AFI and SAFI values indicating computing power. Assume the AFI value indicating computing power is 32.
  • After each node announces its capabilities, it also learns the capabilities of the other nodes. For example, node R1 learns that nodes R2, R4, and R5 all have the computing power resource announcement capability, while nodes R3 and R6 do not. Therefore, when announcing computing power resources, node R1 announces only to nodes R2, R4, and R5, not to nodes R3 and R6, preventing R3 and R6 from receiving unsupported computing power traffic and producing processing errors. In addition, nodes R2, R4, and R5 can learn the computing power resource types supported by node R1 from the SAFI value it announces. A controller or orchestrator in the network can obtain all computing power resource information in the network by connecting to any one of nodes R1, R2, R4, and R5.
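The capability-based filtering described above can be illustrated with a minimal Python sketch. The capability code 1 and AFI value 32 follow the example in this document; the data structures and the function name `eligible_neighbors` are illustrative assumptions, not part of any real BGP implementation.

```python
# Hypothetical sketch of capability-based neighbor filtering.
CAP_MULTIPROTOCOL = 1   # Multiprotocol Extensions capability code (from the example)
AFI_COMPUTING = 32      # assumed AFI value indicating computing power routes

def eligible_neighbors(neighbors):
    """Return only neighbors that advertised the computing power AFI in their
    OPEN message, so unsupported nodes never receive such announcements."""
    return [n for n in neighbors
            if (CAP_MULTIPROTOCOL, AFI_COMPUTING) in n["capabilities"]]

neighbors = [
    {"name": "R2", "capabilities": {(CAP_MULTIPROTOCOL, AFI_COMPUTING)}},
    {"name": "R3", "capabilities": set()},   # no computing power capability
    {"name": "R4", "capabilities": {(CAP_MULTIPROTOCOL, AFI_COMPUTING)}},
    {"name": "R5", "capabilities": {(CAP_MULTIPROTOCOL, AFI_COMPUTING)}},
    {"name": "R6", "capabilities": set()},
]
targets = [n["name"] for n in eligible_neighbors(neighbors)]
```

With the capability sets above, `targets` contains only R2, R4, and R5, matching the behavior described for node R1.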
  • the computing resources connected to the R1 node include GPU and CPU.
  • the computing resources shown in Figure 3 may be concentrated on one device or distributed across multiple devices. Assuming these computing resources can directly provide computing services externally, their identifiers must be announced individually as computing power resources. Assuming computing resources are distinguished by 64-bit identifiers (for example, computing power GPU-1 is represented by 1, GPU-2 by 2, CPU-1 by 3, and CPU-2 by 4), the R1 node announces them according to these 64-bit numerical computing power resource identifiers, as shown in Figure 9.
  • Type is 1, indicating a numerical value
  • Sub-Type indicates the length of the value of the specific computing resource identifier.
  • when Sub-Type is 2, the length of the corresponding computing resource identifier is 64 bits (Sub-Type 1 indicates a 32-bit identifier, Sub-Type 3 a 128-bit identifier, and so on); Sub-Type is followed by the Length field, which is the sum of the lengths of all computing resource identifiers under the corresponding Sub-Type.
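The Type / Sub-Type / Length layout described above can be sketched as follows. This is a minimal illustration assuming one-byte Type and Sub-Type fields and a two-byte Length field; the actual field widths are not specified in the text, and the function name is hypothetical.

```python
import struct

def encode_numeric_ids(ids, id_bits=64):
    """Encode numerical computing power resource identifiers as
    Type (1 = numerical) | Sub-Type (length code) | Length | values."""
    sub_type = {32: 1, 64: 2, 128: 3}[id_bits]   # Sub-Type 2 => 64-bit ids
    values = b"".join(i.to_bytes(id_bits // 8, "big") for i in ids)
    # Length is the total length of all identifier values under this Sub-Type
    return struct.pack("!BBH", 1, sub_type, len(values)) + values

# GPU-1, GPU-2, CPU-1, CPU-2 from the example, identifiers 1..4
nlri = encode_numeric_ids([1, 2, 3, 4])
```

Here four 64-bit identifiers yield 32 bytes of values after the 4-byte header, matching the Length semantics described above.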
  • When announcing the computing resources of the R1 node, the BGP UPDATE message must also carry the next hop (NEXT-HOP) attribute, with its value set to the loopback address of the R1 node (it can usually also be the BGP neighbor establishment address); in addition, the message can optionally carry the computing resource attribute value, where the computing resource attribute includes the quantity of computing resources and their unit.
  • the computing resource attribute value can be used by the controller or orchestrator as a computing power measurement.
  • the R1 node can complete the computing power resource notification and generate a local computing power routing table leading to each computing power resource.
  • the R1 node connects computing power GPU-1, computing power GPU-2, computing power CPU-1 and computing power CPU-2 through different interfaces.
  • when a computing power resource is withdrawn, for example when computing power GPU-2 in Figure 3 is no longer available and can no longer provide computing power services
  • the R1 node sends a new UPDATE message to withdraw the route corresponding to the computing power resource identifier of computing power GPU-2;
  • the message format follows the format above in this example; the only difference is that it is sent as a withdrawn route of the BGP protocol.
  • After receiving the announcement from R1, the other nodes such as R2, R4, and R5 establish a computing power routing table whose next hop is R1, its content being the R1 node mapped to computing power resource identifiers 1 to 4.
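The routing table built by R2 from R1's announcement can be sketched minimally. The dictionary structure and function name are illustrative assumptions; a real implementation would store full path attributes per route.

```python
def install_routes(table, ids, next_hop, encap=None):
    """Install one routing entry per announced computing power resource
    identifier, all pointing at the announcing node's loopback address."""
    for rid in ids:
        table[rid] = {"next_hop": next_hop, "encap": encap}
    return table

# R2 installs identifiers 1..4 learned from R1, next hop = R1's loopback
r2_table = install_routes({}, [1, 2, 3, 4], next_hop="R1-loopback")
```

A route withdrawal for GPU-2 would then simply delete the entry for identifier 2.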
  • By setting AFI and SAFI to identify computing power messages and the resource types of computing resources, and by using numerical computing resource identifiers, existing addresses such as IPv4 and IPv6 addresses can be reused, non-computing message announcements are not confused with computing ones, and computing resource announcements can be made accurately.
  • numerical computing resource identifiers are used instead of IPv4 and IPv6 addresses for identification, which saves limited IPv4 and IPv6 addresses, simplifies the planning and management of computing resources, and makes computing resource announcements simpler and more efficient.
  • The encapsulation process of the computing power flow is further explained below. Assume the R2 node receives a computing power flow and, according to the requirements of the configuration (or controller, or orchestrator), determines that the flow must be sent to the resource with computing power resource identifier 3 (that is, computing power CPU-1 connected to the R1 node). The R2 node first encapsulates the flow with a computing power head, as shown in Figure 10.
  • the computing power header includes the target computing power resource identifier, the encapsulation node identifier, the extended TLV length, and the extended TLV.
  • the structure of the target computing power resource identifier and the encapsulation node identifier can refer to the structure of Figure 11, that is, Type, Sub-Type and Value, where Type is the type of the computing power resource identifier, which is consistent with the computing power resource identifier previously notified. If the computing power resource identifier is a numerical type, Type is set to 1; Sub-Type is set to 2, indicating that the length of the computing power resource identifier is 64 bits; Value is 3, indicating the computing power resource identifier value of the computing power CPU-1.
  • the encapsulation node identifier corresponds to the node information for implementing computing power encapsulation, that is, the R2 node.
  • the R2 node can be represented by its loopback address, and the encapsulation node identifier can use the same identifier structure as the target computing power resource identifier.
  • the loopback address of the R2 node is used as the encapsulation node identifier, and the loopback address is an IPv6 address
  • the Type is set to 3 (a numerical computing power resource identifier is type 1, an IPv4 address type is 2, an IPv6 address type is 3, an MPLS label type is 4, and so on; these values are examples, and the actual values may differ). Since an IPv6 address is fixed at 128 bits, no Sub-Type identifier is needed, and the loopback address of the R2 node can be filled directly into the Value.
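The Type / Sub-Type / Value identifier structure used in the computing power head can be sketched as follows. The type codes (1 for numerical, 3 for IPv6) follow the example above, which itself notes that real deployments may use different values; field widths and the function name are illustrative assumptions.

```python
import ipaddress
import struct

def encode_identifier(value, id_type):
    """Encode an identifier for the computing power head in
    Type / Sub-Type / Value form (Sub-Type omitted for fixed-length IPv6)."""
    if id_type == 1:   # numerical identifier; Sub-Type 2 => 64-bit value
        return struct.pack("!BB", 1, 2) + value.to_bytes(8, "big")
    if id_type == 3:   # IPv6 address: fixed 128 bits, so no Sub-Type field
        return struct.pack("!B", 3) + ipaddress.IPv6Address(value).packed
    raise ValueError("unsupported identifier type")

target = encode_identifier(3, id_type=1)                  # CPU-1's identifier
encap_node = encode_identifier("2001:db8::2", id_type=3)  # assumed R2 loopback
```

The target identifier occupies 10 bytes (Type, Sub-Type, 64-bit value) and the IPv6 encapsulation node identifier 17 bytes (Type plus a 128-bit address).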
  • the outer encapsulation is performed.
  • the destination IP address of the outer encapsulation is set to the loopback address of the R1 node, and the next header (Next Header) field of the outer encapsulation is set to the value representing the computing power flow, such as 160. Therefore, the computing power flow can be forwarded to the R1 node through the network.
  • the intermediate nodes in the network only forward according to the destination IP address of the outermost encapsulation, and will not process the computing power flow carried.
  • When processing the message, the R1 node identifies the computing power head through the Next Header value of 160 in the outer encapsulation, and determines from the target computing power resource identifier in the computing power head that the traffic must be sent to the computing power resource with identifier 3 for processing. After removing the outer encapsulation and the computing power head encapsulation, the R1 node therefore forwards the decapsulated computing power flow to computing power CPU-1 for processing.
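The receive-side logic just described can be sketched as a small lookup. The Next Header value 160 comes from the example above; the packet representation and the function name `deliver` are simplifications for illustration only.

```python
NH_COMPUTING = 160  # Next Header value marking computing power traffic (example)

def deliver(packet, routing_table):
    """Identify computing power traffic by the outer Next Header value,
    then resolve the target resource from the local routing table."""
    if packet["next_header"] != NH_COMPUTING:
        return None                       # not computing power traffic
    rid = packet["header"]["target_id"]   # parsed from the computing power head
    return routing_table.get(rid)

# R1's local table: identifier -> locally connected resource
table = {1: "GPU-1", 2: "GPU-2", 3: "CPU-1", 4: "CPU-2"}
resource = deliver({"next_header": 160, "header": {"target_id": 3}}, table)
```

Intermediate nodes never execute this logic; as the text notes, they forward only on the outermost destination IP address.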
  • the computing power flow can be efficiently and accurately sent to the corresponding computing power resources for processing, thereby improving the processing efficiency of the computing power flow.
  • the transmission channel can also be specified through the path attributes of BGP.
  • When the R1 node announces computing power resources, it may want other nodes to send computing power traffic using SRv6 encapsulation. Therefore, when announcing computing power resources, in addition to carrying computing power reachability information, the R1 node also carries path attributes indicating SRv6 encapsulation. After receiving the announcement from the R1 node, the R2, R4, and R5 nodes use SRv6 encapsulation when encapsulating computing power traffic and sending it to the R1 node.
  • When the R2 node finds that a certain computing power flow must be sent to computing power GPU-1 connected to the R1 node for processing, it also finds that the R1 node has specified the SRv6 encapsulation type, so:
  • When encapsulating, the R2 node can first encapsulate the computing power head and then the outer SRv6 header, as in Example 2; alternatively, it can omit the computing power head and directly encapsulate the target computing power resource identifier as a parameter (Argument) into the last segment (Segment) of the segment list (Segment List) in SRv6, setting the lower 64 bits of that segment to the target computing power resource identifier.
  • the encapsulation of SRv6 is described in detail in the RFC8754 standard, which will not be repeated here.
  • The computing power flow can thus be forwarded segment by segment through SRv6 and finally reach the R1 node.
  • The R1 node reads the last segment of the Segment List in SRv6, extracts the target computing power resource identifier from it, searches the local computing power routing table, and completes delivery of the computing power flow to computing power GPU-1.
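The Argument embedding described above (placing the identifier in the lower 64 bits of the 128-bit last segment) can be sketched with Python's `ipaddress` module. The segment prefix used here is an illustrative documentation address, not one from the source.

```python
import ipaddress

def embed_argument(segment, resource_id):
    """Set the lower 64 bits of an SRv6 segment (a 128-bit IPv6 value) to the
    target computing power resource identifier, as in the example above."""
    base = int(ipaddress.IPv6Address(segment))
    value = ((base >> 64) << 64) | (resource_id & ((1 << 64) - 1))
    return str(ipaddress.IPv6Address(value))

# Identifier 1 (GPU-1) embedded into an assumed last segment of the SID list
last_seg = embed_argument("2001:db8:1::", 1)
```

On receipt, R1 would recover the identifier by masking the last segment with the same lower-64-bit mask and then look it up in its local computing power routing table.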
  • SRv6 is only an outer encapsulation method for the example of this application.
  • IPv4/IPv6, MPLS, BIER and other types of outer encapsulation methods can be specified through the path attributes of BGP. This application is not limited to the above-mentioned encapsulation types, and more encapsulation types can be introduced according to the actual deployment situation.
  • Based on Example 1, in some computing resource networks, computing resources that have been assigned IP addresses may coexist with computing resources that have only been assigned numerical computing resource identifiers.
  • FIG 12 is a schematic diagram of the R2 node and the computing power resource network connected to it provided in an example of the present application.
  • computing power server 1 and computing power server 2 have been assigned IPv4 addresses, while computing power GPU-11 and computing power GPU-12 have no IP addresses and are instead assigned computing power resource identifiers. Assume the computing power resource identifier of computing power GPU-11 is 11, and that of computing power GPU-12 is 12.
  • the computing power resource announcement message structure of the R2 node is shown in Figure 13.
  • the Total Length after AFI and SAFI indicates the length of all computing power reachable information.
  • the IP address type computing power resource identifier is encapsulated.
  • After receiving the announcement from the R2 node, the R1, R4, and R5 nodes establish a computing power routing table locally and distinguish whether each computing power resource is of numerical type or IPv4 address type according to the computing power resource type (TYPE) field.
  • the computing power resource notification method in this example can also be referred to, so as to ensure that computing power resources can still be accurately notified in a mixed computing power resource deployment scenario.
  • the computing power GPU-1 and computing power GPU-2 of the R1 node belong to computing power site 1, and the computing power CPU-1 and computing power CPU-2 belong to computing power site 2.
  • When the R1 node announces computing power resources, it can carry the computing power site information through the computing power extended community attribute. That is, when announcing the computing power resource identifiers of computing power GPU-1 and GPU-2, the R1 node uses the computing power extended community attribute to carry the information of computing power site 1, announcing that GPU-1 and GPU-2 belong to computing power site 1.
  • Similarly, the computing power extended community attribute carries the information of computing power site 2 to announce that computing power CPU-1 and CPU-2 belong to computing power site 2. Since the resources belong to different computing power sites, computing power resource identifiers can be allocated per site. For example, identifiers 1 and 2 are allocated to GPU-1 and GPU-2 in computing power site 1, and identifiers 1 and 2 are likewise allocated to CPU-1 and CPU-2 in computing power site 2. With these settings, computing power resource identifiers can be reused and still announced accurately, simplifying the planning and management of computing power resources.
  • After receiving the announcement from R1, the R2, R4, and R5 nodes each establish two computing power routing tables, one for computing power site 1 and one for computing power site 2.
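The site-scoped identifier reuse described above amounts to keying lookups by (site, identifier) rather than by identifier alone. A minimal sketch, with illustrative structures and a hypothetical function name:

```python
def install_site_routes(tables, site, ids, next_hop):
    """Install routes into the per-site routing table, so the same numerical
    identifier can be reused in different computing power sites."""
    tables.setdefault(site, {}).update({rid: next_hop for rid in ids})
    return tables

tables = {}
install_site_routes(tables, site=1, ids=[1, 2], next_hop="R1")  # GPU-1, GPU-2
install_site_routes(tables, site=2, ids=[1, 2], next_hop="R1")  # CPU-1, CPU-2
```

Identifiers 1 and 2 appear in both tables without conflict, because the computing power site identifier carried in the head selects which table is searched.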
  • When the R2 node receives a computing power flow, it determines, according to the requirements of the configuration (or controller, or orchestrator), that the flow must be sent to computing power GPU-1 in computing power site 1 connected to the R1 node for processing.
  • When the R2 node encapsulates the computing power head, the target computing power resource identifier and the encapsulation node identifier are filled in as in Example 2.
  • the subsequent extended TLV length is set to the total length of the TLVs that need to be carried subsequently.
  • the Type in the extended TLV is set to the value that identifies the computing power site (assume 100), and the Length is set to 1 byte (the specific value of Length is set according to the number of computing power sites).
  • For the outer encapsulation, refer to Example 2 or Example 3, which will not be repeated here.
  • After receiving the computing power flow, the R1 node decapsulates the computing power head. If the head carries the computing power site identifier value 1, the flow must be sent to the computing power resources of computing power site 1 for processing. The R1 node then searches the computing power routing table corresponding to computing power site 1 and sends the flow to computing power GPU-1 for processing. This limits the scope of computing power resources and simplifies the allocation of their identifiers.
  • FIG14 is a schematic diagram of the structure of a communication device provided in an embodiment of the present application.
  • the communication device 2000 includes a memory 2100 and a processor 2200.
  • there may be one or more memories 2100 and processors 2200; FIG14 takes one memory 2101 and one processor 2201 as an example. The memory 2101 and the processor 2201 in the network device can be connected via a bus or by other means; FIG14 takes the connection via a bus as an example.
  • the memory 2101 is a computer-readable storage medium that can be used to store software programs, computer executable programs, and modules, such as program instructions/modules corresponding to the method provided in any embodiment of the present application.
  • the processor 2201 implements the method provided in any of the above embodiments by running the software programs, instructions, and modules stored in the memory 2101.
  • the memory 2101 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application required for at least one function.
  • the memory 2101 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device or other non-volatile solid-state storage device.
  • the memory 2101 may further include memories remotely arranged relative to the processor 2201, and these remote memories may be connected to the device via a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • An embodiment of the present application also provides a computer-readable storage medium storing computer-executable instructions, which are used to execute a computing power resource notification method, a routing table establishment method, or a computing power traffic processing method as provided in any embodiment of the present application.
  • An embodiment of the present application also provides a computer program product, including a computer program or computer instructions, which are stored in a computer-readable storage medium.
  • a processor of a computer device reads the computer program or computer instructions from the computer-readable storage medium, and the processor executes the computer program or computer instructions, so that the computer device executes the computing power resource notification method, routing table establishment method, or computing power traffic processing method provided in any embodiment of the present application.
  • the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, a physical component may have multiple functions, or a function or step may be performed by several physical components in cooperation.
  • Some physical components or all physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor or a microprocessor, or implemented as hardware, or implemented as an integrated circuit, such as an application-specific integrated circuit.
  • a processor such as a central processing unit, a digital signal processor or a microprocessor
  • Such software may be distributed on a computer-readable medium, which may include a computer storage medium (or non-transitory medium) and a communication medium (or temporary medium).
  • computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules or other data).
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tapes, disk storage or other magnetic storage devices, or any other medium that can be used to store desired information and can be accessed by a computer.
  • communication media typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
  • a component can be, but is not limited to, a process running on a processor, a processor, an object, an executable file, an execution thread, a program, or a computer.
  • an application running on a computing device and a computing device can be a component.
  • One or more components can reside in a process or execution thread, and components may be located on one computer or distributed between two or more computers.
  • the components may be executed from various computer-readable media having various data structures stored thereon.
  • Components may communicate through local or remote processes, for example based on signals having one or more data packets (e.g., data from one component interacting with another component in a local system, in a distributed system, or across a network such as the Internet, interacting with other systems via signals).


Abstract

This application provides a computing power resource notification method, a routing table establishment method, a computing power traffic processing method, a communication device, and a computer-readable storage medium. The computing power resource notification method includes: setting the address family type in a Border Gateway Protocol message to a computing power routing type, where a computing power route is a route corresponding to a computing power resource; filling computing power resource information connected to a first node into the Border Gateway Protocol message; and sending the Border Gateway Protocol message to a second node.

Description

算力资源通告方法、算力流量处理方法、通信设备及介质
相关申请的交叉引用
本申请基于申请号为202211398346.2、申请日为2022年11月09日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本申请实施例涉及通信技术领域,尤其涉及一种算力资源通告方法、路由表建立方法、算力流量处理方法、通信设备及计算机可读存储介质。
背景技术
相关技术中,在算力网络中的算力资源分布在分离的算力节点,通过边界网关协议(Border Gateway Protocol,BGP)或各类内部网关协议(Interior Gateway Protocol,IGP)对算力路由进行发布与通告,实现分布式的算力资源发现与调用。
但算力资源类型较多,大到可以为服务器,小的也可以为一个CPU,若是对全部资源采用互联网协议第四版(Internet Protocol version 4,IPv4)或互联网协议第六版(Internet Protocol version 6,IPv6)地址进行分配,会对有限的IP地址造成大量的浪费,提升了算力资源的管理和规划部署的难度。因此,如何降低算力资源的管理和规划部署难度是一个亟待解决的问题。
发明内容
本申请实施例提供一种算力资源通告方法、路由表建立方法、算力流量处理方法、通信设备及计算机可读存储介质,旨在降低算力资源的管理和规划部署难度。
第一方面,本申请实施例提供一种算力资源通告方法,应用于第一节点,所述方法包括:将边界网关协议消息中的地址族类型设置为算力路由类型,其中,所述算力路由为算力资源所对应的路由;将所述第一节点连接的算力资源信息填充至所述边界网关协议消息中;向第二节点发送所述边界网关协议消息。
第二方面,本申请实施例提供一种路由表建立方法,应用于第二节点,所述方法包括:根据第一节点通告的算力资源通告消息,生成下一跳为所述第一节点的算力路由表;其中,所述算力资源通告消息根据第一方面所述的算力资源通告方法得到。
第三方面,本申请实施例提供一种算力流量处理方法,应用于第二节点,所述方法包括:接收发往第一节点所连接的算力资源的算力流量;根据所述第一节点通告的边界网关协议消息,得到算力资源信息与算力封装类型;根据所述算力资源信息与所述算力封装类型,对所述算力流量进行封装;向所述第一节点发送封装后的算力流量;其中,所述边界网关协议消息根据第一方面所述的算力资源通告方法得到。
第四方面,本申请实施例提供一种算力流量处理方法,应用于第一节点,所述方法包括:接收第二节点发送的算力流量,其中,所述算力流量根据第三方面任一项所述的算力流量处理方法得到;对所述算力流量进行解封装,得到目标算力资源信息与解封装后的算力流量; 根据算力路由表与所述目标算力资源信息,向目标算力资源转发所述解封装后的算力流量。
第五方面,本申请实施例提供一种通信设备,包括:至少一个处理器;至少一个存储器,用于存储至少一个程序;当至少一个所述程序被至少一个所述处理器执行时实现如第一方面所述的算力资源通告方法,或,如第二方面所述的路由表建立方法,或,如第三方面或第四方面所述的算力流量处理方法。
第六方面,本申请实施例提供一种计算机可读存储介质,其中存储有处理器可执行的程序,所述处理器可执行的程序被处理器执行时用于实现如第一方面所述的算力资源通告方法,或,如第二方面所述的路由表建立方法,或,如第三方面或第四方面所述的算力流量处理方法。
在本申请实施例中,通过将BGP中的地址族类型设置为算力路由类型,并将节点对应的算力资源信息填充至BGP消息中,进行算力资源通告,实现对算力资源的准确通告,避免与其他非算力路由混淆。通过基于算力资源标识,将算力流量进行封装,从而引导到正确的算力资源对其进行处理,进一步简化算力资源的规划管理,并使得算力资源的通告更加简单高效。
附图说明
图1为本申请一实施例提供的一种网络通信系统的示意图;
图2为本申请一实施例提供的算力资源通告方法的流程图;
图3为本申请一示例提供的R1节点及连接的算力资源网络的示意图;
图4为本申请一示例提供的BGP消息格式示意图;
图5为本申请一实施例提供的路由表建立方法的流程图;
图6为本申请一实施例提供的算力流量处理方法的流程图;
图7为本申请另一实施例提供的算力流量处理方法的流程图;
图8为本申请一示例提供的BGP网络系统的示意图;
图9为本申请一示例提供的R1节点通告算力资源时的消息结构示意图;
图10为本申请一示例提供的算力头结构示意图;
图11为本申请一示例提供的目标算力资源标识和封装节点标识的结构示意图;
图12为本申请一示例提供的R2节点及其连接的算力资源网络的示意图;
图13为本申请一示例提供的R2节点的算力资源通告消息结构;
图14为本申请一实施例提供的通信设备的结构示意图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅用以解释本申请,并不用于限定本申请。
需要说明的是,虽然在装置示意图中进行了功能模块划分,在流程图中示出了逻辑顺序,但是在某些情况下,可以以不同于装置中的模块划分,或流程图中的顺序执行所示出或描述的步骤。说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。
本申请实施例中,“进一步地”、“示例性地”或者“可选地”等词用于表示作为例子、例证或说明,不应被解释为比其它实施例或设计方案更优选或更具有优势。使用“进一步地”、“示例性地”或者“可选地”等词旨在以具体方式呈现相关概念。
相关技术中,算力网络是一种根据业务需求,在云、边、端之间按需分配和灵活调度计算资源、存储资源以及网络资源的新型信息基础设施。算力网络的本质是一种算力资源服务,未来企业客户或者个人用户不仅需要网络和云,也需要把计算任务通过最优网络路径调度到最优的计算节点。算力资源可能分布在网络中的任何位置,高效的算力网络会使用联网技术将算力资源进行整合。
算力网络包括算网基础设施层、编排管理层、运营服务层,是“连接+算力+能力”的新型信息服务体系。为了实现算力调度,将算力资源所对应的路由进行通告是关键的一环。为了方便描述,在后文均用“算力路由”的简写来代替“算力资源所对应的路由”。算力路由可通告给分布式的路由器节点,也可以通告给集中式的控制器设备,控制器根据这些路由信息,对用户的算力请求进行正确调度。
边界网关协议(Border Gateway Protocol,BGP)是一种对网络提供互联的重要路由协议。可通过维护IP路由表或“前缀”表来实现可达性,属于矢量路由协议。BGP基于路径、网络策略或规则集来决定路由。
在相关技术的算力网络中,算力资源分布在分离的算力节点中,BGP或各类内部网关协议(Interior Gateway Protocol,IGP)可以将算力节点连接起来,进一步地,可以通过BGP和IGP对算力路由进行发布与通告,从而实现分布式的算力资源发现与调用。目前,算力资源均通过IPv4或者IPv6路由来进行标识,从而实现在BGP/IGP协议中的路由通告。
但算力资源类型广泛,可能是服务器,也可能仅仅是一个CPU,如果为了正确通告这些算力资源,对这些资源全部分配IPv4或者IPv6地址,会造成有限IP地址的大量浪费,增大了算力资源的管理和规划部署难度。另外,将已有的IP地址分配给算力资源,对算力资源进行通告,极易与非算力路由通告产生混淆,由此造成设备处理困难,并且可能引起路由安全问题。
为了解决上述问题,本申请提供一种算力资源通告方法、路由表建立方法、算力流量处理方法、通信设备及计算机可读存储介质。通过将BGP中的地址族类型设置为算力路由类型,并将节点对应的算力资源信息填充至BGP消息中,进行算力资源通告,实现对算力资源的准确通告,避免与其他非算力路由混淆。通过增加能够唯一区分的数值标识作为算力资源标识,对算力资源进行区分,不局限于IPv4或IPv6地址,减少IPv4或IPv6地址的浪费,并能够降低算力资源的管理与规划部署难度。本申请还通过基于算力资源标识,将算力流量进行封装,其中,算力流量为需要提供算力资源服务的流量数据,从而将算力流量引导到正确的算力资源对其进行处理,进一步简化算力资源的规划管理,并使得算力资源的通告更加简单高效。
图1是本申请实施例提供的一种网络通信系统的示意图,如图所示,在实施例中,示例性地,通信系统包括第一网络设备110、第二网络设备120、第三网络设备130、第四网络设备140、第五网络设备150、第六网络设备160;各网络设备之间相互通信连接。
可以理解的是,本实施例的通信系统中的设备数量及设备间通信关系能够根据实际需求进行扩展和变化,在此不做具体限定。
本申请实施例的技术方案可以应用于各种通信系统,例如:算力网络(Computing Aware Network)、IP承载网(IP Bearer Network)、宽带码分多址移动通信系统(Wideband Code Division Multiple Access,WCDMA)、演进的全球陆地无线接入网络(Evolved Universal Terrestrial Radio Access Network,E-UTRAN)系统、下一代无线接入网络(Next Generation Radio Access Network,NG-RAN)系统、长期演进(Long Term Evolution,LTE)系统、全球互联微波接入(Worldwide Interoperability For Microwave Access,WiMAX)通信系统、第五代(5th Generation,5G)系统,如新一代无线接入技术(New Radio Access Technology,NR),及未来的通信系统,如6G系统等。
本申请实施例的技术方案可以应用于各种通信技术,例如微波通信、光波通信、毫米波通信等。本申请实施例对采用的具体技术和具体设备形态不做限定。
本申请实施例中的第一网络设备110、第二网络设备120、第三网络设备130、第四网络设备140、第五网络设备150、第六网络设备160,后续统称为网络设备,可以是任意一种具备通信能力的网络设备,网络设备可以是路由器、服务器、具备通信功能的汽车、智能汽车、手机(Mobile Phone)、穿戴式设备、平板电脑(Pad)、带收发功能的电脑、虚拟现实(Virtual Reality,VR)设备、增强现实(Augmented Reality,AR)设备、工业控制(Industrial Control)中的网络设备、无人驾驶(Self driving)中的网络设备、远程手术(Remote Medical Surgery)中的网络设备、智能电网(Smart Grid)中的网络设备、运输安全(Transportation Safety)中的网络设备、智慧城市(Smart City)中的网络设备、智慧家庭(Smart Home)中的网络设备、车载通信网络中的网络设备等。本申请实施例对网络设备所采用的具体技术和具体设备形态不做限定。
图2为本申请一实施例提供的算力资源通告方法的流程图。如图2所示,该算力资源通告方法可以但不限于应用于路由器、服务器、终端等网络设备,或上述实施例提供的网络通信系统。在图2的实施例中,该算力资源通告方法应用于第一节点,可以包括但不限于步骤S1100、S1200及S1300。
步骤S1100:将边界网关协议消息中的地址族类型设置为算力路由类型。其中,算力路由为算力资源所对应的路由。在BGP的地址族标识的类型中新增用于表征算力路由的类型,当接收到携带该地址族的BGP消息时,能够根据地址族类型识别该BGP消息为算力相关消息,实现算力资源的准确通告,避免与非算力路由之间产生混乱。
在一实施例中,边界网关协议包括地址族标识与子地址族标识;步骤S1100包括:将地址族标识的值设置为算力路由类型对应的地址族标识值;将子地址族标识的值设置为算力资源的资源类型对应的资源类型值。
在一实施例中,算力资源的资源类型至少包括以下之一:无类型算力资源;硬件类型算力资源;软件类型算力资源。
在BGP消息中的地址族标识(Address Family Identifiers,AFI)中新增算力路由类型对应的地址族标识值,如设置AFI=32,当读取到BGP消息的AFI值为32时,表征该BGP消息为算力通告相关消息。可以理解的是,AFI的取值可以为其他未被占用的标识数值,在本申请中不做具体限定。
在BGP消息中的子地址族标识(Subsequent Address Family Identifiers,SAFI)中新增算力资源的资源类型对应的子地址族标识值,用于表征该算力路由对应的算力资源的资源类型。
算力资源的类型可分为硬件类,如图形处理器(Graphics Processing Unit,GPU)、中央处理器(Central Processing Unit,CPU)、内存等,还可包含软件类,即可提供特定功能的服务,通过设置不同的SAFI值标识不同种类的算力资源,示例性地,用SAFI=10表示无类型的算力资源,用SAFI=20表示硬件类的算力资源,用SAFI=30表示软件类的算力资源等,其中,无类型的算力资源适用于较为简单的网络,硬件类和软件类的算力资源则可适用于复杂算力网络场景。可以理解的是,SAFI的取值可以为其他未被占用的标识数值,在本申请中不做具体限定。
根据AFI的算力资源标识值确认当前BGP消息为算力资源通告消息,根据SAFI的算力资源类型标识值确认当前BGP消息携带的算力资源的资源类型,与其他非算力路由区分,避免混淆,能够更加准确的进行算力资源通告,并且能够简化算力资源的规划管理。
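上述根据AFI、SAFI识别算力资源通告消息的判定逻辑,可用如下Python草图示意。其中AFI=32、SAFI=10/20/30沿用本文的示例取值,并非IANA正式分配值,字段宽度按MP-BGP惯例假设为2字节AFI加1字节SAFI:

```python
import struct

# 本文示例取值(假设值,非IANA正式分配)
AFI_COMPUTING = 32   # 地址族:算力路由类型
SAFI_UNTYPED = 10    # 无类型算力资源
SAFI_HARDWARE = 20   # 硬件类型算力资源
SAFI_SOFTWARE = 30   # 软件类型算力资源

def encode_afi_safi(afi, safi):
    """按MP-BGP惯例打包2字节AFI与1字节SAFI。"""
    return struct.pack("!HB", afi, safi)

def classify(header):
    """根据AFI/SAFI判断消息是否为算力资源通告消息及其资源类型。"""
    afi, safi = struct.unpack("!HB", header[:3])
    if afi != AFI_COMPUTING:
        return "非算力路由"
    return {SAFI_UNTYPED: "无类型算力资源",
            SAFI_HARDWARE: "硬件类型算力资源",
            SAFI_SOFTWARE: "软件类型算力资源"}.get(safi, "未知算力资源类型")
```

收到BGP消息时先按AFI区分算力与非算力路由,再按SAFI细分资源类型,与上文的处理顺序一致。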
步骤S1200:将第一节点连接的算力资源信息填充至边界网关协议消息中。
其中,与第一节点连接的算力资源信息可以为第一节点自身的算力资源的信息,如第一节点自身的CPU、GPU或其他具备算力的模块或设备,也可以为与第一节点进行通信连接的其他网络设备的算力资源的信息,如与第一节点连接的服务器的信息等;可以理解的是,第一节点连接的算力资源信息包括与第一节点进行通信连接的内部或外部的具备算力的芯片、模块、设备等算力资源的信息,在此不做具体限定。
在一实施例中,算力资源信息包括至少一个算力可达信息,算力可达信息包括算力资源标识类型字段、算力资源标识长度字段与算力资源标识值字段。其中,一个算力可达信息与一个算力资源对应。
在一实施例中,将算力资源信息填充至BGP的UPDATE消息中,采用TLV(Type,Length,Value)格式表达算力可达信息,其中,Type对应算力资源标识类型字段,Length对应算力资源标识长度字段,Value对应算力资源标识值字段。
在一实施例中,若算力资源信息中包括多个算力可达信息,每个算力可达信息中的算力资源标识类型或算力资源标识值不相同。
在一实施例中,每个算力可达信息的算力资源标识类型不同,算力资源标识值不同。
在另一实施例中,每个算力可达信息的算力资源标识类型不同,算力资源标识值相同。
在另一实施例中,每个算力可达信息的算力资源标识类型相同,算力资源标识值不同。
通过算力资源标识类型和算力资源标识值的设置,两者其中一个不同,即可对算力可达信息做出区分,使得算力资源标识值的配置更加灵活,简化了算力资源的规划管理。
在一实施例中,算力资源标识类型至少包括以下之一:数值型;IPv4地址类型;IPv6地址类型;多协议标签交换类型。其中,数值型即直接采用可唯一指代的数值作为算力资源标识,IPv4地址类型即采用IPv4地址作为算力资源标识,IPv6地址类型即采用IPv6地址作为算力资源标识,多协议标签交换类型即采用多协议标签交换技术生成的数值作为算力资源标识。由于在BGP消息中设置了能够标识消息为算力类型的AFI值,因此能够与非算力路由进行区分,实现IPv4地址、IPv6地址的复用。
可以理解的是,为了兼容算力资源标识类型混杂的部署场景,不同类型的算力资源标识可以一并进行算力资源通告。
在一实施例中,若算力资源标识类型为数值型,算力资源标识类型对应的算力资源标识长度至少包括以下之一:32位、64位、128位、256位、512位。可以理解的是,算力资源标识长度可以根据算力资源的数量,即数值型算力资源标识的数量进行设置,数值型算力资源标识的数量较少时,可以设置较短的算力资源标识长度,数值型算力资源标识的数量较多时,可以设置较长的算力资源标识长度,本申请不对算力资源标识长度做具体限定。
在一实施例中,算力资源信息包括算力资源属性,算力资源属性至少包括以下之一:算力资源的数量;算力资源的单位。在部分场景中,如算力交易场景中,算力资源通告可以不通告算力资源标识,仅通告算力资源的类型和数量,则算力资源信息包括算力资源属性,算力资源属性包括算力资源的数量和算力资源的单位;其中,算力资源的数量为相同种类的汇总的算力,算力资源的单位为对应种类的算力资源的算力单位。
示例性地,图3为本申请一示例提供的R1节点及连接的算力资源网络的示意图,如图3所示,R1节点连接的算力网络包括算力GPU-1、算力GPU-2、算力CPU-1和算力CPU-2,R1节点分别与算力GPU-1、算力GPU-2、算力CPU-1和算力CPU-2进行通信连接。对于R1节点,可将连接的算力GPU-1和算力GPU-2进行打包通告,还可以将算力CPU-1和算力CPU-2能支持的服务能力打包通告;此时,R1节点不需要单独通告每个GPU和CPU的算力资源标识,通过携带硬件类型算力资源对应的SAFI和算力资源类型对应的AFI的BGP消息,并写入对应的算力资源属性,进行算力资源通告。假设R1节点所连的GPU总共可以对外提供200 GFLOPS(Giga Floating-point Operations Per Second,每秒10亿次浮点运算次数)的算力资源,此时的BGP消息如图4所示,AFI=32表示消息类型为算力资源类型,SAFI=20表示硬件类型算力资源,总长度(Total Length)表示后续所携带的内容长度,200表示所通告的算力资源的能力数量,Type 1(GFLOPS)表示算力资源的单位。
软件型算力资源同样可以采用上述的方式进行通告,假设R1节点连接的CPU可以对外提供图形计算或者人工智能计算等服务,可以采用其中AFI=32、SAFI=30的BGP消息进行通告,AFI=32表示消息类型为算力资源类型,SAFI=30表示软件型算力资源,BGP消息中携带算力资源属性分别写入对应的算力资源的数量和算力资源的单位。
可以理解的是,在其他示例中,硬件型算力资源还可包括内存等其他具备运算能力的硬件,其相应的通告类似,在算力资源属性中携带不同的算力资源的单位用以区分不同的类型;如算力资源为内存,内存为300,在其对应的算力资源属性中,算力资源的数量写入300,算力资源的单位写入TB。
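按图4的字段顺序,汇总式算力资源属性(数量+单位)的打包方式可用如下草图示意。其中数量字段取4字节、单位码取2字节以及单位编码值,均为本文之外的示意性假设:

```python
import struct

UNIT_GFLOPS = 1   # 对应图4中的"Type 1(GFLOPS)";TB等其他单位码为假设值
UNIT_TB = 2

def encode_capability(afi, safi, amount, unit):
    """打包汇总式算力通告:AFI、SAFI、总长度(Total Length),随后是数量与单位码。"""
    body = struct.pack("!IH", amount, unit)           # 假设:4字节数量 + 2字节单位码
    return struct.pack("!HBH", afi, safi, len(body)) + body

# "R1节点所连GPU对外提供200 GFLOPS硬件类算力"的通告内容
msg = encode_capability(32, 20, 200, UNIT_GFLOPS)
```

内存等其他硬件只需换用不同的单位码(如TB)即可,与上文的区分方式一致。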
在一实施例中,方法还包括:将算力封装类型填充至边界网关协议消息中,其中,算力封装类型用于指示算力流量使用的封装类型。
在一实施例中,封装类型至少包括以下之一:IPv4类型;IPv6类型;多协议标签交换类型;基于IPv6的段路由类型;位索引显式复制类型。其中,封装类型包括但不限于IPv4、IPv6、多协议标签交换(Multi-Protocol Label Switching,MPLS)、基于IPv6的段路由(Segment Routing IPv6,SRv6)、位索引显式复制(Bit Index Explicit Replication,BIER)等。
在一实施例中,算力资源通告方法还包括:将算力扩展团体属性填充至边界网关协议消息中,其中,算力扩展团体属性用于携带算力站点信息。具体地,算力扩展团体属性信息包括算力站点信息。通过算力扩展团体属性的设置,对多个算力资源组成的算力站点进行通告,实现对算力资源的范围限定,简化算力资源的标识分配,降低算力资源的规划管理难度。
示例性地,以图3所示的R1节点为例,如图3所示,R1节点的算力GPU-1、算力GPU-2属于算力站点1,算力CPU-1、算力CPU-2属于算力站点2。对算力站点1的算力资源进行通告时,BGP消息包括算力GPU-1、算力GPU-2对应的算力资源信息,并且通过算力扩展团体属性携带算力站点信息,以指示当前BGP消息中的算力GPU-1、算力GPU-2属于算力站点1。对算力站点2的算力资源进行通告时,BGP消息包括算力CPU-1、算力CPU-2对应的算力资源信息,并且通过算力扩展团体属性携带算力站点信息,以指示当前BGP消息中的算力CPU-1、算力CPU-2属于算力站点2。由于各资源所属的算力站点不同,且有算力扩展团体属性用于区分算力站点,因此算力资源标识的分配可基于算力站点来进行,例如,在算力站点1中为GPU-1和GPU-2分配算力资源标识分别为1和2,在算力站点2中为CPU-1和CPU-2也可分配算力资源标识分别为1和2;能够给不同站点的算力资源设置相同的算力资源标识值,同时不影响算力资源的准确识别与通告,简化算力资源的标识分配。
步骤S1300:向第二节点发送边界网关协议消息。
在邻居建链时,第一节点向建链的相邻节点发送BGP消息,此时的BGP消息为OPEN消息,OPEN消息中带有表征算力资源类型的AFI值和表征算力资源的资源类型的SAFI值,第一节点通过该OPEN消息向邻居节点进行能力通告,告知自身具备算力资源通告能力;同时,第一节点接收邻居节点的OPEN消息,确认邻居节点是否具备算力资源通告能力。
在建链完成后,第一节点向具备算力资源通告能力的邻居节点发送BGP消息,此时的BGP消息为UPDATE消息,UPDATE消息中带有表征算力路由类型的AFI值和表征算力资源的资源类型的SAFI值以及第一节点连接的算力资源信息,第一节点通过该UPDATE消息向邻居节点进行算力资源通告,在通过UPDATE消息进行算力资源通告时,仅向具备算力资源通告能力的邻居节点进行通告。避免不具备算力资源通告能力的邻居节点收到无法支持的算力流量,引起处理错误。可以理解的是,第二节点与邻居节点为同一指代。
在一实施例中,边界网关协议消息为更新消息;方法还包括:将下一跳属性配置为第一节点的回环地址;将下一跳属性填充至更新消息中。其中,回环地址用于作为对应节点的标识,如第一节点的回环地址则可以作为第一节点的标识。
图5为本申请一实施例提供的路由表建立方法的流程图。如图5所示,该路由表建立方法可以但不限于应用于路由器、服务器、终端等网络设备,或上述实施例提供的网络通信系统。在图5的实施例中,该路由表建立方法应用于第二节点,可以包括但不限于步骤S2100。
步骤S2100:根据第一节点通告的算力资源通告消息,生成下一跳为第一节点的算力路由表;其中,算力资源通告消息根据任一实施例的算力资源通告方法得到。
第一节点根据需要通过BGP消息将算力可达消息、算力封装类型信息、算力扩展团体属性信息等算力资源相关的信息进行通告,各具备算力资源通告能力的邻居接收到相关通告后形成算力路由表,同时为算力流量选择封装方式。算力路由表中的下一跳可以是提供算力资源的服务站点的IP地址,也可以是提供该算力资源的服务站点所连接的BGP设备的IP地址。
可以理解的是,在本申请中,第一节点表征向邻居节点进行算力资源通告的节点,第二节点表征接收其他节点发出的算力资源通告的邻居节点,在同一网络中的每一节点既可以是第一节点也可以是第二节点。
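步骤S2100中由通告生成算力路由表的过程,可用如下草图示意;表项结构(字典:标识到下一跳)与下一跳名称均为示意性假设:

```python
def build_route_table(announcements):
    """announcements: (下一跳, [算力资源标识])列表,分别取自UPDATE消息的
    NEXT-HOP属性与算力可达信息;返回 标识 -> 下一跳 的算力路由表。"""
    table = {}
    for next_hop, ids in announcements:
        for rid in ids:
            table[rid] = next_hop
    return table

# R1通告了标识1~4的算力资源,第二节点据此建立下一跳为R1的算力路由表
table = build_route_table([("R1-loopback", [1, 2, 3, 4])])
```

后续封装算力流量时即以该表完成“目标算力资源标识到下一跳”的查找。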
图6为本申请一实施例提供的算力流量处理方法的流程图。如图6所示,该算力流量处理方法可以但不限于应用于路由器、服务器、终端等网络设备,或上述实施例提供的网络通信系统。在图6的实施例中,该算力流量处理方法应用于第二节点,可以包括但不限于步骤S3100、S3200、S3300及S3400。
步骤S3100:接收发往第一节点所连接的算力资源的算力流量。
第二节点接收算力流量,根据配置、控制器或编排器的要求,确认算力流量的目标算力资源,检查算力路由表,若该算力流量的目标算力资源就在第二节点所连接的算力资源网络中,则直接转发给对应的目标算力资源进行处理;若该算力流量的目标算力资源在跨越多端的远端设备上,即在其他节点连接的算力资源网络中,如在第一节点所连接的算力资源网络中,则需要对算力流量进行封装后再向目标节点发送。
步骤S3200:根据第一节点通告的边界网关协议消息,得到算力资源信息与算力封装类型。第二节点先根据第一节点通告的边界网关协议消息,建立算力路由表;在第二节点接收到目标算力资源在第一节点连接的算力资源网络中的算力流量后,从算力路由表中获取第一节点的算力资源信息和算力封装类型。
步骤S3300:根据算力资源信息与算力封装类型,对算力流量进行封装。第二节点根据获取的第一节点的算力资源信息与算力封装类型,对算力流量进行封装。
在一实施例中,对算力流量进行封装包括至少以下之一:对算力流量进行算力头封装与外层封装;对算力流量进行外层封装。
在一实施例中,在一些场景,如算力封装类型中的封装类型为SRv6的情况下,省略对算力流量进行算力头封装,仅对算力流量进行外层封装。
在一实施例中,对算力流量进行算力头封装与外层封装包括:在算力头中封装算力资源信息中携带的算力资源标识或第二节点标识中的至少一个;根据算力封装类型中的封装类型,对算力流量进行外层封装。具体地,第二节点标识为第二节点的回环地址。
可以理解的是,在算力流量处理方法的相关实施例中,算力资源信息和算力封装类型对应的具体细节与算力资源通告方法中各实施例对应,在此不做赘述。
在一实施例中,算力头封装包括目标算力资源标识、封装节点标识以及相关信息。其中,目标算力资源标识为第一节点对应的算力资源信息中携带的算力资源标识,封装节点标识为第二节点标识,即执行算力封装的节点的信息。进一步地,在执行算力封装的节点具备算力资源标识时,对应的封装节点标识可以采用与目标算力资源标识相同的标识格式。
在一实施例中,将算力流量的目的IP地址设置为第一节点(目标节点)的回环地址,进行对算力流量的外层封装。
在一实施例中,对算力流量进行外层封装包括:在算力封装类型中的封装类型为SRv6的情况下,将算力资源标识作为参数封装至封装类型对应的段列表中的最后一个段中。
在一实施例中,算力头中包括算力站点信息。在目标算力资源存在算力站点时,在进行算力头封装时,需要将目标算力资源对应的算力站点信息一同封装。
步骤S3400:向第一节点发送封装后的算力流量。其中,边界网关协议消息根据上述实施例中的算力资源通告方法得到。第二节点向第一节点发送封装后的算力流量,第一节点接收到封装后的算力流量后,第一节点根据自身的算力路由表指示将封装的算力流量解封装后,将算力流量转发给其所连接的算力资源网络中对应的算力资源进行处理。
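步骤S3200至S3400的封装流程,可用如下草图示意。这里用字典表示报文结构,字段名与节点名均为示意性假设,并不代表实际报文格式:

```python
def encapsulate(flow, target_id, table, encap_node):
    """步骤S3200~S3400的示意:查算力路由表得到下一跳,再为算力流量
    加上算力头(目标算力资源标识+封装节点标识)与外层封装
    (目的地址为下一跳节点的回环地址)。"""
    next_hop = table[target_id]   # 目标标识未知时抛出KeyError
    return {"outer_dst": next_hop,
            "computing_header": {"target_id": target_id,
                                 "encap_node": encap_node},
            "payload": flow}

# R2将发往算力资源标识3(R1所连算力CPU-1)的流量封装后发往R1
pkt = encapsulate(b"task", 3, {3: "R1-loopback"}, "R2-loopback")
```

中间节点仅依据外层封装的目的地址转发,不感知内层的算力头与载荷。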
图7为本申请另一实施例提供的算力流量处理方法的流程图。如图7所示,该算力流量处理方法可以但不限于应用于路由器、服务器、终端等网络设备,或上述实施例提供的网络通信系统。在图7的实施例中,该算力流量处理方法应用于第一节点,可以包括但不限于步骤S4100、S4200及S4300。
步骤S4100:接收第二节点发送的算力流量。其中,算力流量根据上述各实施例提供的算力流量处理方法得到。第一节点接收第二节点发送的封装后的算力流量。
步骤S4200:对算力流量进行解封装,得到目标算力资源信息与解封装后的算力流量。第一节点对算力流量进行解封装,得到目标算力资源信息与解封装后的算力流量。
在一实施例中,目标算力资源信息包括算力资源标识。
在一实施例中,目标算力资源信息包括算力站点标识。
步骤S4300:根据算力路由表与目标算力资源信息,向目标算力资源转发解封装后的算力流量。第一节点根据自身的算力路由表与目标算力资源信息,将解封装后的算力流量转发至第一节点连接的算力资源网络中对应的目标算力资源进行处理。
在一实施例中,目标算力资源信息包括算力资源标识时,第一节点将解封装后的算力流量转发给算力资源标识对应的算力资源进行处理。
在一实施例中,目标算力资源信息包括算力站点标识时,第一节点将解封装后的算力流量转发给算力站点标识对应的算力站点,通过算力站点中的算力资源对解封装后的算力流量进行处理。
可以理解的是,算力流量处理方法的相关实施例中,第一节点表征算力流量的目标节点,第二节点表征对算力流量进行封装发送的节点,第一节点可以具有一个或多个,第二节点也可以具有一个或多个,在本申请中不做具体限定。
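步骤S4200至S4300在第一节点侧的解封装与转发,可用如下草图示意;报文结构沿用字典表示,接口名为假设的示例值:

```python
def decapsulate_and_forward(pkt, local_table):
    """步骤S4200~S4300的示意:剥离封装得到目标算力资源标识,
    再查本地算力路由表选择转发出口。"""
    target_id = pkt["computing_header"]["target_id"]
    return local_table[target_id], pkt["payload"]

# R1的本地算力路由表(假设):标识3(算力CPU-1)经接口if-cpu1可达
iface, flow = decapsulate_and_forward(
    {"computing_header": {"target_id": 3}, "payload": b"task"},
    {1: "if-gpu1", 2: "if-gpu2", 3: "if-cpu1", 4: "if-cpu2"})
```

解封装后的算力流量即按本地算力路由表送达对应的目标算力资源处理。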
为了进一步阐述本申请实施例提供的算力资源通告方法、路由表建立方法、算力流量处理方法,采用下述示例进行详细说明。
图8为本申请一示例提供的BGP网络系统的示意图。如图8所示,网络中包括R1节点、R2节点、R3节点、R4节点、R5节点和R6节点,其中,R1节点、R2节点、R4节点、R5节点均有对应连接的算力资源网络。
示例1:
在如图8所示的网络中,R1节点、R2节点、R4节点和R5节点均支持算力路由并作为算力网关设备与各自对应的算力资源网络相连接,R1节点、R2节点、R3节点、R4节点、R5节点、R6节点会形成一个基于BGP协议的MESH网络。其中,R1节点/R2节点/R4节点/R5节点在BGP的OPEN消息中,增加算力能力通告,具体表现为:
在OPEN消息中,通告能力编码(capability code)值为1,表示多协议边界网关协议(Multiprotocol BGP),在通告capability code 1后,跟随表示算力的AFI和SAFI值;假设表示算力AFI的值为32;通告的SAFI代表算力资源类型,支持无类型算力资源的通告,可设置SAFI=10;支持硬件类型算力资源的通告,可设置SAFI=20,支持软件类型算力资源的通告,可设置SAFI=30,可以理解的是,AFI和SAFI的值仅是示例性说明,并非具体限定。
在各节点进行能力通告后,各节点同时也获知了其他节点的能力情况,以R1节点为例,R1节点能够获知R2节点、R4节点和R5节点都具有算力资源通告能力,但R3节点和R6节点不具备该能力,因此R1节点在通告算力资源时,仅会通告给R2节点、R4节点和R5节点,而不会通告给R3节点和R6节点,由此避免R3节点和R6节点收到无法支持的算力流量而引起处理错误。并且,R2节点、R4节点和R5节点可以根据R1节点通告的SAFI值,获知R1节点支持的算力资源类型。网络中的控制器或者编排器,可以仅与R1节点、R2节点、R4节点、R5节点中的任意一台连接,即可获取网络中所有的算力资源信息。
如图3所示,R1节点所连接的算力资源,包括GPU与CPU,如图3中所示的算力资源可能集中在一台设备上,也可能分布在多台设备上。假设这些算力资源可直接对外提供算力服务,因此需分别对外通告其作为算力资源的标识,假设算力资源用64位标识进行区分,比如算力GPU-1用1来表示,算力GPU-2用2来表示,算力CPU-1用3来表示,算力CPU-2用4来表示。R1节点在通告算力资源时,将按照64位的数值型算力资源标识来通告,如图9所示。
具体表现为,如图9所示,在R1节点使用BGP的UPDATE消息通告算力资源时,携带表示算力的AFI和SAFI,其中,AFI=32表示该消息为算力资源通告消息,SAFI=10表示算力资源为无类型的算力资源;Total Length携带消息总长信息;在SAFI后,使用TLV(Type,Length,Value)的格式来表示具体的算力资源标识类型和值,在图9中,Type为1表示数值型的算力资源标识,Sub-Type表示具体的算力资源标识的值的长度,Sub-Type值为2则可对应算力资源标识的长度为64位(Sub-Type为1可表示算力资源标识的长度为32位,Sub-Type为3可表示算力资源标识的长度为128位,以此类推);Sub-Type后再跟具体的算力资源标识的长度之和(Length),其中,Length为对应的Sub-Type下,所有的算力资源的标识的长度之和。在本示例中,R1节点连接了算力资源1~4,因此算力资源标识的长度之和为4个64位,也就是32个字节(1字节=8位);在Length之后则是4个算力资源的具体算力资源标识值。
在通告R1节点的算力资源时,BGP的UPDATE消息中还需要携带下一跳(NEXT-HOP)属性,并且NEXT-HOP属性中的值设置为R1节点的回送(Loopback)地址(通常也可以是BGP的邻居建立地址);另外,还可以选择携带算力资源属性值,其中,算力资源属性包括算力资源的数量和算力资源的单位,该算力资源属性值可被控制器或者编排器用来做算力度量。
由此R1节点可完成算力资源通告,并生成通往各个算力资源的本地算力路由转发表,如图3中,R1节点通过不同的接口连接算力GPU-1、算力GPU-2、算力CPU-1和算力CPU-2。在某个算力资源被撤销时,如图3中的算力GPU-2不再可用,不能再提供算力服务时,R1节点将发送新的UPDATE消息撤销算力GPU-2的算力资源标识所对应路由;消息发送格式参照本示例上述格式,区别仅在于在BGP协议的撤销路由中发送。
其他节点如R2节点、R4节点和R5节点收到R1节点的通告后,将建立下一跳为R1节点的算力路由表,其内容为算力资源标识1~4对应的R1节点。
通过AFI、SAFI的设置对算力资源通告消息和算力资源的资源类型进行标识,以及采用数值型的算力资源标识,能够复用IPv4和IPv6地址等现有的地址,且不会与非算力消息通告混淆,准确进行算力资源通告,同时,通过数值型的算力资源标识代替IPv4和IPv6地址进行标识,节省了有限的IPv4和IPv6地址,能够简化算力资源的规划和管理,使得算力资源通告更加简单高效。
示例2:
以R2节点为例,进一步阐述算力流量的封装流程。假设R2节点收到某条算力流量,根据配置(或控制器、或编排器)的要求,确认该算力流量需要发送给算力资源标识为3的节点(即R1节点所连的算力CPU-1),则R2节点先给该算力流量封装算力头,如图10所示,算力头包括目标算力资源标识、封装节点标识、扩展TLV长度、扩展TLV,其中,目标算力资源标识和封装节点标识的结构可参考图11的结构,即Type、Sub-Type和Value,其中Type是算力资源标识的类型,该类型与在先通告的算力资源标识一致,如算力资源标识是数值型的,Type设置为1;Sub-Type设置为2,表示算力资源标识的长度是64位;Value为3,表示算力CPU-1的算力资源标识值。
封装节点标识对应实施算力封装的节点信息,即R2节点,R2节点可用其Loopback地址表示,封装节点标识可以采用与目标算力资源标识相同的标识结构。假设用R2节点的Loopback地址作为封装节点标识,且该Loopback地址为IPv6地址时,Type设置为3(数值型算力资源标识为1,IPv4地址类型算力资源标识为2,IPv6地址类型算力资源为3,MPLS标签类型算力资源标识为4等,此处的数值是示例,实际数值可与此不同),由于IPv6地址固定128位,因此无需设置Sub-Type标识,直接在Value里填入R2节点的Loopback地址即可。
完成算力头封装后,再进行外层封装,外层封装的目的IP地址设置为R1节点的Loopback地址,该外层封装的下一报头(Next Header)字段设置为表示算力流量的值,如160。从而该算力流量可以经过网络转发到R1节点,网络中间节点仅根据最外层封装的目的IP地址转发,并不会处理所携带的算力流量。R1节点在处理该报文时,通过外层封装中的Next Header值为160辨识出算力头,进一步根据算力头中的目标算力资源标识,确定该流量需要发送给“算力资源标识”为3的算力资源处理,因此R1节点去除外层封装和算力头封装后,将解封装的算力流量转发给算力CPU-1进行处理。
通过根据目标算力资源的相关消息,对算力流量进行封装后发送,将算力流量高效准确地发送至对应的算力资源进行处理,提升了算力流量的处理效率。
示例3:
在本申请中,还可以通过BGP的路径属性指定传输通道。例如,R1节点在通告算力资源时,希望其他节点能够以SRv6封装形式发送算力流量,因此在R1节点通告算力资源时,除了携带算力可达信息,还会携带表示SRv6封装的路径属性。R2节点、R4节点和R5节点在收到R1节点的通告后,会用SRv6封装对算力流量进行外层封装并发送给R1节点。
以R2节点为例,R2节点发现某算力流量需要发送给R1节点连接的算力GPU-1处理时,还发现R1节点指定了SRv6封装类型,因此:
R2节点在封装时可以如示例2,先完成算力头的封装,再封装外层SRv6头;或者,R2节点在封装时,可以不进行算力头的封装,而直接将目标算力资源标识作为参数(Argument)封装到SRv6中段列表(Segment List)的最后一个段(Segment)中,将该Segment的低64位设置为目标算力资源标识,SRv6的封装在RFC8754标准中有详述,在此不再赘述。算力流量由此可以通过SRv6的逐段转发,最终到达R1节点,R1节点读取SRv6中Segment List的最后一个Segment,从中读取到目标算力资源标识,查找本地的算力路由表,完成算力流量向算力GPU-1的发送。
可以理解的是,SRv6仅为本申请示例的一种外层封装方式,根据实际部署中的不同,还可以选择IPv4/IPv6、MPLS、BIER等类型的外层封装方式,采用何种封装类型可通过BGP的路径属性进行指定;本申请并不限定于上述几种封装类型,还可以根据实际部署情况引入更多的封装类型。
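将目标算力资源标识放入最后一个Segment低64位的构造方式,可用如下草图示意;其中2001:db8::前缀为文档地址示例,定位符取值为假设:

```python
import ipaddress

def last_segment_with_arg(locator, resource_id):
    """构造Segment List的最后一个Segment:高64位取自节点的定位符/功能部分,
    低64位替换为目标算力资源标识(即上文所述Argument的放置方式)。
    locator为IPv6地址,其低64位被忽略。"""
    base = int(ipaddress.IPv6Address(locator))
    seg = (base >> 64 << 64) | (resource_id & ((1 << 64) - 1))
    return str(ipaddress.IPv6Address(seg))

# 目标:算力GPU-1(标识1);定位符为假设的示例地址
seg = last_segment_with_arg("2001:db8:1:1::", 1)
```

R1收到报文后取出该Segment的低64位即可还原目标算力资源标识,再查本地算力路由表完成转发。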
示例4:
基于示例1,在某些算力资源网络中,可能存在已分配了IP地址的算力资源与只分配了数值型算力资源标识的算力资源共存的情况。
图12为本申请一示例提供的R2节点及其连接的算力资源网络的示意图,如图12所示,R2节点所连接的算力资源网络中,算力服务器1与算力服务器2已被分配IPv4地址,但算力GPU-11和算力GPU-12不具有IP地址,而是被分配了算力资源标识,假设算力GPU-11的算力资源标识为11,算力GPU-12的算力资源标识为12。
R2节点在通告其算力资源给R1节点、R4节点和R5节点时,需要将两种类型的算力资源标识一并通告。R2节点的算力资源通告消息结构如图13所示,AFI和SAFI后面的Total Length,表示了所有算力可达信息的长度。在封装数值型算力资源标识后,再封装IP地址类型的算力资源标识,如图13所示,TYPE=1表示数值型算力资源标识,后接用于表示所有数值型算力资源标识的长度之和Length,以及R2节点的算力GPU-11和算力GPU-12对应的算力资源标识的具体值。TYPE=2表示IPv4地址类型算力资源标识,后接用于表示所有IPv4地址类型算力资源标识的长度之和Length,以及R2节点的算力服务器1和算力服务器2对应的算力资源标识的具体值。
例如,TYPE=2表示IPv4地址类型算力资源标识,R2节点有算力服务器1和算力服务器2的IPv4地址需要通告,因此TYPE=2后的长度值等于8字节,表示后面有2个IPv4地址。其他流程同示例1,在此不再赘述。
R1节点、R4节点和R5节点在收到R2节点的通告后,在本地建立算力路由表,并根据算力资源类型(TYPE)区分算力资源是数值型还是IPv4地址类型。
可以理解的是,若算力服务器地址为IPv6地址或者其他类型,也能够参照本示例中的算力资源通告方式,从而保证在算力资源混杂部署场景中,仍能够准确地进行算力资源通告。
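图13所示“数值型块在前、IPv4地址型块在后”的混合通告,可用如下草图示意;IPv4块因标识定长32位而省略Sub-Type字段,这一点与具体字段宽度均为本文之外的假设,服务器地址为示例地址:

```python
import socket
import struct

def encode_mixed(numeric_ids, ipv4_addrs):
    """先封装TYPE=1(64位数值型标识)块,再封装TYPE=2(IPv4地址型)块;
    每块携带其值部分的总长度(图13中的Length)。"""
    num_vals = b"".join(i.to_bytes(8, "big") for i in numeric_ids)
    v4_vals = b"".join(socket.inet_aton(a) for a in ipv4_addrs)
    return (struct.pack("!BBH", 1, 2, len(num_vals)) + num_vals
            + struct.pack("!BH", 2, len(v4_vals)) + v4_vals)

# 算力GPU-11/GPU-12用数值标识11、12,两台算力服务器用IPv4地址通告
msg = encode_mixed([11, 12], ["192.0.2.1", "192.0.2.2"])
```

接收方按TYPE区分算力资源标识是数值型还是IPv4地址型,与示例4的处理一致。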
示例5:
以R1节点为例,如图3所示,R1节点的算力GPU-1、算力GPU-2属于算力站点1,算力CPU-1、算力CPU-2属于算力站点2,则R1节点在通告算力资源时,可以通过算力扩展团体属性来携带算力站点信息。即R1节点在通告算力资源时,会将算力GPU-1、算力GPU-2的算力资源标识进行通告,并通过算力扩展团体属性携带算力站点1的信息通告算力GPU-1、算力GPU-2属于算力站点1。当通告算力CPU-1和算力CPU-2的算力资源时,通过算力扩展团体属性携带算力站点2的信息通告算力CPU-1、算力CPU-2属于算力站点2。由于各资源所属的算力站点不同,算力资源标识的分配可基于算力站点来进行,例如,在算力站点1中为算力GPU-1和算力GPU-2分配算力标识1和2,在算力站点2中为算力CPU-1和算力CPU-2同样分配算力标识1和2。通过上述设置,实现了对算力资源标识的复用且仍能够准确通告,简化了算力资源的规划和管理。
R2节点、R4节点和R5节点在收到R1节点的通告后,R1节点、R2节点、R4节点和R5节点将分别为算力站点1和算力站点2建立两张算力路由表。
当R2节点收到某算力流量后,根据配置(或控制器,或编排器)的要求,确认该算力流量需要发送给R1节点连接的算力站点1中的算力GPU-1进行处理,则R2节点在封装算力头时,其目标算力资源标识与封装节点标识的填入参考示例2,其后的扩展TLV长度设置为后续需要携带的TLV总长度,扩展TLV中Type设置为标识算力站点的值,假设为100,Length设置为1个字节(Length的具体值根据算力站点的数量进行设置,此处假设算力站点数量在255个之内,设置为1个字节),然后在Value中填入算力站点的标识值1。外层封装参考示例2或示例3,在此不再赘述。
R1节点在收到该算力流量后,进行算力头的解封装处理,如果发现算力头中携带了算力站点标识值1,说明该算力流量需要发送给算力站点1的算力资源处理,则R1节点查找算力站点1对应的算力路由表,将该算力流量发送给算力GPU-1进行处理。实现了对算力资源的范围限定,可简化算力资源的标识分配。
图14是本申请一实施例提供的通信设备的结构示意图。如图14所示,该通信设备2000包括存储器2100、处理器2200。存储器2100、处理器2200的数量可以是一个或多个,图14中以一个存储器2101和一个处理器2201为例;网络设备中的存储器2101和处理器2201可以通过总线或其他方式连接,图14中以通过总线连接为例。
存储器2101作为一种计算机可读存储介质,可用于存储软件程序、计算机可执行程序以及模块,如本申请任一实施例提供的方法对应的程序指令/模块。处理器2201通过运行存储在存储器2101中的软件程序、指令以及模块实现上述任一实施例提供的方法。
存储器2101可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序。此外,存储器2101可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件或其他非易失性固态存储器件。在一些实例中,存储器2101进一步包括相对于处理器2201远程设置的存储器,这些远程存储器可以通过网络连接至设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
本申请一实施例还提供了一种计算机可读存储介质,存储有计算机可执行指令,该计算机可执行指令用于执行如本申请任一实施例提供的算力资源通告方法、路由表建立方法或算力流量处理方法。
本申请一实施例还提供了一种计算机程序产品,包括计算机程序或计算机指令,该计算机程序或计算机指令存储在计算机可读存储介质中,计算机设备的处理器从计算机可读存储介质读取计算机程序或计算机指令,处理器执行计算机程序或计算机指令,使得计算机设备执行如本申请任一实施例提供的算力资源通告方法、路由表建立方法或算力流量处理方法。
本申请实施例描述的系统架构以及应用场景是为了更加清楚的说明本申请实施例的技术方案,并不构成对于本申请实施例提供的技术方案的限定,本领域技术人员可知,随着系统架构的演变和新应用场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
本领域普通技术人员可以理解,上文中所公开方法中的全部或某些步骤、系统、设备中的功能模块/单元可以被实施为软件、固件、硬件及其适当的组合。
在硬件实施方式中,在以上描述中提及的功能模块/单元之间的划分不一定对应于物理组件的划分;例如,一个物理组件可以具有多个功能,或者一个功能或步骤可以由若干物理组件合作执行。某些物理组件或所有物理组件可以被实施为由处理器,如中央处理器、数字信号处理器或微处理器执行的软件,或者被实施为硬件,或者被实施为集成电路,如专用集成电路。这样的软件可以分布在计算机可读介质上,计算机可读介质可以包括计算机存储介质(或非暂时性介质)和通信介质(或暂时性介质)。如本领域普通技术人员公知的,术语计算机存储介质包括在用于存储信息(诸如计算机可读指令、数据结构、程序模块或其他数据)的任何方法或技术中实施的易失性和非易失性、可移除和不可移除介质。计算机存储介质包括但不限于RAM、ROM、EEPROM、闪存或其他存储器技术、CD-ROM、数字多功能盘(DVD)或其他光盘存储、磁盒、磁带、磁盘存储或其他磁存储装置、或者可以用于存储期望的信息并且可以被计算机访问的任何其他的介质。此外,本领域普通技术人员公知的是,通信介质通常包含计算机可读指令、数据结构、程序模块或者诸如载波或其他传输机制之类的调制数据信号中的其他数据,并且可包括任何信息递送介质。
在本说明书中使用的术语“部件”、“模块”、“系统”等用于表示计算机相关的实体、硬件、固件、硬件和软件的组合、软件、或执行中的软件。例如,部件可以是但不限于,在处理器上运行的进程、处理器、对象、可执行文件、执行线程、程序或计算机。通过图示,在计算设备上运行的应用和计算设备都可以是部件。一个或多个部件可驻留在进程或执行线程中,部件可位于一个计算机上或分布在2个或更多个计算机之间。此外,这些部件可从在上面存储有各种数据结构的各种计算机可读介质执行。部件可例如根据具有一个或多个数据分组(例如来自于与本地系统、分布式系统或网络间的另一部件交互的两个部件的数据,例如通过信号与其它系统交互的互联网)的信号通过本地或远程进程来通信。

Claims (25)

  1. 一种算力资源通告方法,应用于第一节点,所述方法包括:
    将边界网关协议消息中的地址族类型设置为算力路由类型,其中,所述算力路由为算力资源所对应的路由;
    将所述第一节点连接的算力资源信息填充至所述边界网关协议消息中;
    向第二节点发送所述边界网关协议消息。
  2. 根据权利要求1所述的算力资源通告方法,其中,所述算力资源信息包括至少一个算力可达信息,所述算力可达信息包括算力资源标识类型字段、算力资源标识长度字段与算力资源标识值字段。
  3. 根据权利要求2所述的算力资源通告方法,若所述算力资源信息中包括多个算力可达信息,每个所述算力可达信息中的算力资源标识类型或算力资源标识值不相同。
  4. 根据权利要求2所述的算力资源通告方法,其中,所述算力资源标识类型至少包括以下之一:
    数值型;
    IPv4地址类型;
    IPv6地址类型;
    多协议标签交换类型。
  5. 根据权利要求4所述的算力资源通告方法,其中,若所述算力资源标识类型为所述数值型,所述算力资源标识类型对应的所述算力资源标识长度至少包括以下之一:
    32位、64位、128位、256位、512位。
  6. 根据权利要求1所述的算力资源通告方法,其中,所述算力资源信息包括算力资源属性,所述算力资源属性至少包括以下之一:
    算力资源的数量;
    算力资源的单位。
  7. 根据权利要求1所述的算力资源通告方法,其中,所述方法还包括:
    将算力封装类型填充至所述边界网关协议消息中,其中,所述算力封装类型用于指示算力流量使用的封装类型。
  8. 根据权利要求7所述的算力资源通告方法,其中,所述封装类型至少包括以下之一:
    IPv4类型;
    IPv6类型;
    多协议标签交换类型;
    基于IPv6的段路由类型;
    位索引显式复制类型。
  9. 根据权利要求1所述的算力资源通告方法,其中,所述方法还包括:
    将算力扩展团体属性填充至所述边界网关协议消息中,其中,所述算力扩展团体属性用于携带算力站点信息。
  10. 根据权利要求9所述的算力资源通告方法,其中,所述算力扩展团体属性信息包括算力站点信息。
  11. 根据权利要求1至10任一项所述的算力资源通告方法,其中,所述边界网关协议包括地址族标识与子地址族标识;所述将边界网关协议消息中的地址族类型设置为算力路由类型,包括:
    将所述地址族标识的值设置为算力路由类型对应的地址族标识值;
    将子地址族标识的值设置为算力资源的资源类型对应的资源类型值。
  12. 根据权利要求11所述的算力资源通告方法,其中,所述算力资源的资源类型至少包括以下之一:
    无类型算力资源;
    硬件类型算力资源;
    软件类型算力资源。
  13. 根据权利要求1所述的算力资源通告方法,其中,所述边界网关协议消息为更新消息;所述方法还包括:
    将下一跳属性配置为所述第一节点的回环地址;
    将下一跳属性填充至所述更新消息中。
  14. 一种路由表建立方法,应用于第二节点,所述方法包括:
    根据第一节点通告的算力资源通告消息,生成下一跳为所述第一节点的算力路由表;其中,所述算力资源通告消息根据权利要求1至13任一项所述的算力资源通告方法得到。
  15. 一种算力流量处理方法,应用于第二节点,所述方法包括:
    接收发往第一节点所连接的算力资源的算力流量;
    根据所述第一节点通告的边界网关协议消息,得到算力资源信息与算力封装类型;
    根据所述算力资源信息与所述算力封装类型,对所述算力流量进行封装;
    向所述第一节点发送封装后的算力流量;其中,所述边界网关协议消息根据权利要求7或8所述的算力资源通告方法得到。
  16. 根据权利要求15所述的算力流量处理方法,其中,所述对算力流量进行封装包括至少以下之一:
    对所述算力流量进行算力头封装与外层封装;
    对所述算力流量进行外层封装。
  17. 根据权利要求16所述的算力流量处理方法,其中,所述对所述算力流量进行算力头封装与外层封装包括:
    在算力头中封装所述算力资源信息中携带的算力资源标识或第二节点标识中的至少一个;
    根据所述算力封装类型中的封装类型,对所述算力流量进行外层封装。
  18. 根据权利要求17所述的算力流量处理方法,其中,在所述算力资源信息中携带第二节点标识的情况下,所述第二节点标识为第二节点的回环地址。
  19. 根据权利要求16所述的算力流量处理方法,其中,所述对所述算力流量进行外层封装包括:
    在所述算力封装类型中的封装类型为SRv6的情况下,将所述算力资源标识作为参数封装至SRv6报文的段列表中的最后一个段中。
  20. 根据权利要求16至18任一项所述的算力流量处理方法,其中,所述算力头中包括算力节点信息。
  21. 一种算力流量处理方法,应用于第一节点,所述方法包括:
    接收第二节点发送的算力流量,其中,所述算力流量根据权利要求15至20任一项所述的算力流量处理方法得到;
    对所述算力流量进行解封装,得到目标算力资源信息与解封装后的算力流量;
    根据算力路由表与所述目标算力资源信息,向目标算力资源转发所述解封装后的算力流量。
  22. 根据权利要求21所述的算力流量处理方法,其中,所述目标算力资源信息包括算力资源标识;所述根据算力路由表与所述目标算力资源信息,向目标算力资源转发所述解封装后的算力流量,包括:
    将所述解封装后的算力流量转发给所述算力资源标识对应的算力资源。
  23. 根据权利要求21所述的算力流量处理方法,其中,所述目标算力资源信息包括算力节点标识;所述根据算力路由表与所述目标算力资源信息,向目标算力资源转发所述解封装后的算力流量,包括:
    将所述解封装后的算力流量转发给所述算力节点标识对应的算力节点。
  24. 一种通信设备,包括:
    至少一个处理器;
    至少一个存储器,用于存储至少一个程序;
    当至少一个所述程序被至少一个所述处理器执行时实现如权利要求1至13任意一项所述的算力资源通告方法,或,如权利要求14所述的路由表建立方法,或,如权利要求15至23任意一项所述的算力流量处理方法。
  25. 一种计算机可读存储介质,其中存储有处理器可执行的程序,所述处理器可执行的程序被处理器执行时用于实现如权利要求1至13任意一项所述的算力资源通告方法,或,如权利要求14所述的路由表建立方法,或,如权利要求15至23任意一项所述的算力流量处理方法。
PCT/CN2023/097507 2022-11-09 2023-05-31 算力资源通告方法、算力流量处理方法、通信设备及介质 WO2024098731A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211398346.2 2022-11-09
CN202211398346.2A CN118055066A (zh) 2022-11-09 2022-11-09 算力资源通告方法、算力流量处理方法、通信设备及介质

Publications (1)

Publication Number Publication Date
WO2024098731A1 true WO2024098731A1 (zh) 2024-05-16

Family

ID=91031843

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/097507 WO2024098731A1 (zh) 2022-11-09 2023-05-31 算力资源通告方法、算力流量处理方法、通信设备及介质

Country Status (2)

Country Link
CN (1) CN118055066A (zh)
WO (1) WO2024098731A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114978978A (zh) * 2022-06-07 2022-08-30 中国电信股份有限公司 一种算力资源调度方法、装置、电子设备及介质
CN115065637A (zh) * 2022-06-10 2022-09-16 亚信科技(中国)有限公司 传输算力资源信息的方法、装置和电子设备
CN115225722A (zh) * 2021-04-20 2022-10-21 中兴通讯股份有限公司 算力资源的通告方法及装置、存储介质、电子装置
WO2022227800A1 (zh) * 2021-04-30 2022-11-03 华为技术有限公司 一种通信方法及装置


Also Published As

Publication number Publication date
CN118055066A (zh) 2024-05-17

Similar Documents

Publication Publication Date Title
CN110912795B (zh) 一种传输控制方法、节点、网络系统及存储介质
WO2021179732A1 (zh) 报文封装方法、报文转发方法、通告方法、电子设备、和存储介质
US20230344754A1 (en) Message indication method and apparatus, and device and storage medium
CN107968750B (zh) 报文传输方法、装置及节点
JP7140910B2 (ja) 通信方法、デバイス、及びシステム
EP3148131B1 (en) Address information publishing method and apparatus
WO2020134139A1 (zh) 一种业务数据的转发方法、网络设备及网络系统
JP5656137B2 (ja) ボーダ・ゲートウェイ・プロトコル・ルートの更新方法およびシステム
US11405307B2 (en) Information transfer method and device
US20210044538A1 (en) Tunnel establishment method, apparatus, and system
WO2016131225A1 (zh) 报文转发处理方法、装置、控制器及路由转发设备
CN110896379B (zh) 报文的发送方法、绑定关系的通告方法、装置及存储介质
WO2021143279A1 (zh) 段路由业务处理方法和装置、路由设备及存储介质
WO2022184169A1 (zh) 报文转发方法、系统、存储介质及电子装置
WO2022110535A1 (zh) 一种报文发送方法、设备及系统
WO2021057530A1 (zh) 确定路由前缀与分段标识间映射关系的方法、装置及系统
CN107294859B (zh) 一种信息传递方法、装置及系统
CN114666267A (zh) 以太虚拟专用网的数据处理方法、设备及存储介质
WO2023274083A1 (zh) 路由发布和转发报文的方法、装置、设备和存储介质
JP2022136267A (ja) メッセージ生成方法および装置ならびにメッセージ処理方法および装置
CN112491706A (zh) 数据报文的处理方法及装置、存储介质、电子装置
EP4020903A1 (en) Method and apparatus for preventing traffic bypassing
WO2024098731A1 (zh) 算力资源通告方法、算力流量处理方法、通信设备及介质
CN114915519A (zh) 通信方法和通信装置
WO2023050981A1 (zh) 虚拟专用网络业务标识的分配方法、报文处理方法及装置