WO2020151461A1 - Method and device for routing data packets - Google Patents

Method and device for routing data packets (对数据包进行路由的方法和装置)

Info

Publication number
WO2020151461A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
computing
task type
data packet
performance
Prior art date
Application number
PCT/CN2019/129881
Other languages
English (en)
French (fr)
Inventor
孙丰鑫
庄冠华
王元伟
李峰
杨小敏
顾叔衡
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP19911896.9A (published as EP3905637A4)
Publication of WO2020151461A1
Priority to US17/380,383 (published as US20210352014A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/12 Shortest path evaluation
    • H04L 45/123 Evaluation of link metrics
    • H04L 45/70 Routing based on monitoring results
    • H04L 45/74 Address processing for routing
    • H04L 45/745 Address table lookup; Address filtering
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L 67/101 Server selection for load balancing based on network conditions
    • H04L 67/1012 Server selection for load balancing based on compliance of requirements or conditions with available server resources
    • H04L 67/1014 Server selection for load balancing based on the content of a request

Definitions

  • the present disclosure relates to the field of network communication technology, in particular to a method and device for routing data packets.
  • the routing node can determine the next hop routing node according to the current network conditions, and forward the data packet to the next hop routing node.
  • the data packet may carry information indicating target data to be obtained from a data node, instant messaging information, or information indicating that a computing node is required to perform a target type of computing task (a data packet carrying such information may be called a data packet of the computing task type), and so on.
  • multiple computing nodes may be able to perform the same type of computing task. A data packet of the computing task type only needs to be forwarded to any computing node that can perform the corresponding computing task; that computing node then performs the task and outputs the computation result.
  • for example, an image recognition type data packet carries the image to be recognized.
  • when the computing node receives the image recognition type data packet, it can acquire and recognize the image to be recognized, and return the recognition result to the node that initiated the task.
  • for example, the computing nodes that can execute computing task A include computing node B and computing node C, and, according to the current network conditions, it is decided to forward the data packet corresponding to computing task A to computing node C, which corresponds to the path with the best current network condition.
  • however, the computing node corresponding to the path with the best current network condition is not necessarily the best node for performing the computing task. If the above situation occurs, the node that initiated the data packet of the computing task type will need to wait a long time to obtain the desired result.
  • a method for routing data packets includes:
  • the address of the target node is determined as the destination address of the data packet, and the data packet is forwarded based on the destination address.
  • with the method provided by the embodiments of the present disclosure, in the process of routing a data packet, in addition to the current network conditions, the destination node is also determined based on the computing performance of each node that can perform the computing task indicated by the data packet of the computing task type. In this way, it can be ensured that the destination node completes the computing task quickly and feeds the computation result back to the node that initiated the data packet, thereby shortening the waiting time of the initiating node.
  • the computing performance includes computing delay
  • the link state includes the round-trip delay of data packets
  • for each other node, determine the sum of the computing delay corresponding to the other node and the packet round-trip delay between the local node and the other node;
  • the node corresponding to the minimum sum value is determined to be the target node.
  • the local node determines the nodes that can execute the computing task corresponding to each computing task type, as well as the computing delay each node needs to execute the computing task of each computing task type and the packet round-trip delay between each node and the local node. For each other node, it determines the sum of the computing delay corresponding to that node and the packet round-trip delay between the local node and that node, and, among the at least one other node, determines the node corresponding to the smallest sum as the target node.
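The selection rule in the bullets above can be illustrated with a short sketch. The table layout, field names, and numbers below are hypothetical; the source only specifies that, for each candidate node, the computing delay and the packet round-trip delay are summed and the node with the smallest sum becomes the target node.

```python
# Minimal sketch of the target-node selection rule described above.
# The routing-table structure and all values are hypothetical illustrations.

# first correspondence: task type -> {node: (computing delay ms, packet RTT ms)}
first_correspondence = {
    "task_type_1": {
        "node_local": (40.0, 0.0),
        "node_2":     (30.0, 12.0),
        "node_3":     (20.0, 8.0),
        "node_4":     (35.0, 20.0),
    },
}

def select_target_node(task_type: str) -> str:
    """Return the node with the smallest (computing delay + round-trip delay)."""
    candidates = first_correspondence[task_type]
    return min(candidates, key=lambda node: sum(candidates[node]))

print(select_target_node("task_type_1"))  # -> "node_3" with these sample numbers
```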
  • the method further includes:
  • before the local node is started, the first correspondence is not stored in the local node; the first correspondence needs to be established by the local node itself after it starts.
  • the other node and the local node belong to the same preset network area
  • the computing performance includes load information and computing delay
  • the computing performance corresponding to a computing task type includes:
  • the method also includes:
  • the operation delay corresponding to the current load information is determined as the operation delay corresponding to the at least one operation task type.
  • historical load information and related computing-delay data can be imported into and stored in the local node in advance.
  • the local node can fit these historical data to determine the relationship between load information and computing delay; then, once the current load information corresponding to at least one computing task type is determined, the computing delay corresponding to that load information can be determined, and thus the computing delay corresponding to the at least one computing task type.
  • the other node and the local node do not belong to the same preset network area
  • the computing performance includes computing delay
  • the receiving of the computing performance corresponding to the at least one computing task type returned by the other node includes: receiving the computing delay corresponding to the at least one computing task type returned by the other node.
  • an update count of the computing performance is also stored in the first correspondence, and the method further includes:
  • the queried computing task type and the corresponding update count carried in the computing performance query request are acquired, wherein the computing performance query request is used to indicate querying the computing performance of other nodes that belong to the same preset network area as the local node;
  • the computing performance corresponding to the queried computing task type and the determined update count are sent to that other node.
  • when the local node starts, the first correspondence can be initially established; however, because the computing delay is not fixed but changes dynamically over time according to specific conditions, the computing delay needs to be updated.
  • the method further includes:
  • the second computing task type corresponding to the target data packet is determined, and the target data packet is forwarded to the other node that belongs to the same preset network area as the local node;
  • a device for routing data packets includes at least one module configured to implement the method for routing data packets provided in the first aspect.
  • in a third aspect, a node is provided that includes a processor and a memory.
  • the processor is configured to execute instructions stored in the memory; by executing the instructions, the processor implements the method for routing data packets provided in the first aspect.
  • a computer-readable storage medium including instructions, which when the computer-readable storage medium runs on a node, cause the node to perform the method described in the first aspect.
  • a computer program product containing instructions which when the computer program product runs on a node, causes the node to execute the method described in the first aspect.
  • with the method provided by the embodiments of the present disclosure, in the process of routing a data packet, in addition to the current network conditions, the destination node is also determined based on the computing performance of each node that can perform the computing task indicated by the data packet of the computing task type. In this way, it can be ensured that the destination node completes the computing task quickly and feeds the computation result back to the node that initiated the data packet, thereby shortening the waiting time of the initiating node.
  • Fig. 1 is a schematic flowchart of a method for routing data packets according to an exemplary embodiment
  • Fig. 2 is a schematic flowchart showing a method for routing data packets according to an exemplary embodiment
  • Fig. 3 is a schematic diagram showing a network structure according to an exemplary embodiment
  • Fig. 4 is a schematic flow chart showing a method for routing data packets according to an exemplary embodiment
  • Fig. 5 is a schematic flow chart showing a method for routing data packets according to an exemplary embodiment
  • Fig. 6 is a schematic structural diagram showing a device for routing data packets according to an exemplary embodiment
  • Fig. 7 is a schematic diagram showing a structure of a node according to an exemplary embodiment.
  • An exemplary embodiment of the present disclosure provides a method for routing data packets. As shown in FIG. 1, the processing flow of the method may include the following steps:
  • Step S110: When a data packet of the computing task type is received, determine the first computing task type corresponding to the data packet.
  • in implementation, when the local node receives a data packet, it can determine the task type corresponding to the received data packet, which may indicate the need to obtain target data from a data node, transmit instant messaging information, or instruct a computing node to perform a target type of computing task (such a data packet can be called a data packet of the computing task type), and so on.
  • when the local node receives a data packet of the computing task type, it can determine the first computing task type corresponding to the data packet. In practical applications, when the local node receives a data packet, it can obtain the Internet Protocol (IP) address carried in the header of the data packet. The local node can then determine the type of the carried IP address: if the carried IP address is the IP address of some node, the data packet is forwarded based on that IP address; if the carried IP address corresponds to some computing task, it can be determined that the received data packet is a data packet of the computing task type.
  • since there can be many kinds of computing tasks, different computing tasks are distinguished by a computing task type identifier. The local node can obtain the computing task type identifier carried in the header of the data packet to determine the first computing task type corresponding to the data packet. It should be noted that the local node needs to run a new routing protocol so that, based on this protocol, it obtains the computing task type identifier carried in the packet header and routes the data packet based on that identifier.
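As a rough illustration of the dispatch logic in this step, the sketch below assumes a hypothetical packet representation whose header carries either an ordinary destination IP address or a computing task type identifier; the actual header layout and the new routing protocol are not specified in the source.

```python
# Hypothetical packet header: either a normal destination IP or a task-type identifier.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PacketHeader:
    dest_ip: Optional[str] = None        # ordinary destination address, if any
    task_type_id: Optional[str] = None   # computing task type identifier, if any

def route(header: PacketHeader) -> str:
    if header.task_type_id is not None:
        # computing-task-type packet: the target node is chosen from the first correspondence
        return f"route by task type {header.task_type_id}"
    # ordinary packet: forward based on the carried IP address
    return f"forward to {header.dest_ip}"

print(route(PacketHeader(task_type_id="image_recognition")))
print(route(PacketHeader(dest_ip="10.0.0.7")))
```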
  • Step S120: Based on the pre-acquired first correspondence between computing task types, other nodes, and computing performance, determine at least one other node corresponding to the first computing task type and the computing performance corresponding to the at least one other node.
  • new routing table entries can be added, including the type of computing task and computing performance.
  • a first correspondence between computing task types, other nodes, and computing performance may be pre-established in the local node, and, based on the first correspondence, the at least one other node corresponding to the first computing task type and the computing performance corresponding to the at least one other node are determined.
  • different computing nodes may each be able to perform one or more computing tasks, and the tasks that different computing nodes can perform may be the same or different. Therefore, it is first determined which nodes can perform the computing task of the first computing task type, and the optimal node is then selected from among those nodes.
  • for example, if a user wants the cloud to help identify all the persons in a target image, this can be achieved by sending a data packet corresponding to identifying all the persons in the target image.
  • when the local node receives the data packet for identifying all the persons in the target image, it can obtain the computing task type identifier in the data packet and, based on that identifier, search for the nodes that can execute the corresponding computing task, for example node A, node B, and node C. The computing performance corresponding to each of these nodes can then be determined.
  • the computing performance may include computing time delay and other parameter information that can reflect the execution capabilities of different nodes in performing computing tasks.
  • Step S130: Determine a target node among the at least one other node based on the computing performance corresponding to the at least one other node and the link state between the local node and the at least one other node.
  • the routing table entries may also include link states corresponding to different nodes, and the link states may include the round-trip delay of data packets between the local node and other nodes.
  • the local node can determine at least one other node corresponding to the first computing task type, and then determine the link state between the local node and the at least one other node. Based on factors such as the computing performance corresponding to at least one other node and the link state between the local node and at least one other node, the target node is determined in the at least one other node.
  • optionally, the computing performance includes computing delay and the link state includes packet round-trip delay, and step S130 may include: for each other node, determining the sum of the computing delay corresponding to that node and the packet round-trip delay between the local node and that node; and, among the at least one other node, determining the node corresponding to the smallest sum as the target node.
  • a correspondence relationship including the type of computing task, other nodes, computing delay, and round trip delay of data packets between the local node and other nodes can be established in advance.
  • the node corresponding to each computing task type may be a computing node that belongs to the same preset network area as the local node, or a routing node that does not belong to the same preset network area as the local node. If the node corresponding to the computing task type is a routing node M that does not belong to the same preset network area as the local node, the data packet needs to be forwarded to routing node M, and routing node M then forwards the data packet to a computing node that belongs to the same preset network area as routing node M.
  • Step S140: Determine the address of the target node as the destination address of the data packet, and forward the data packet based on the destination address.
  • the address of the target node can be queried, the address of the target node is determined as the destination address of the data packet, and the data packet is forwarded based on the destination address.
  • when other routing nodes receive a data packet of the computing task type whose destination address is the target node's address, they can forward the data packet based only on the network state and finally deliver it to the destination node.
  • when the destination node receives a data packet of the computing task type whose destination address is its own address, it can forward the data packet directly to its local computing node, or it can use the method provided by the embodiments of the present disclosure to re-check whether the sum of the computing delay corresponding to the local computing node and the packet round-trip delay is still the smallest; if not, the destination node is re-determined.
  • the data packet is finally processed by the computing node, and the processing result is returned to the routing node that belongs to the same preset network area as the computing node, and the routing node returns the processing result to the initiating node of the computing task according to the original path.
  • routing nodes can be laid out in a combination of distributed and centralized methods.
  • the superior routing node is the central controller of the subordinate routing node.
  • a subordinate routing node accepts control from its superior routing node and can obtain the node information of its peer (same-level) routing nodes directly from the superior routing node, so that lower-level routing nodes do not have to probe peer routing nodes one by one to obtain node information; this improves the efficiency of obtaining node information, and routing nodes at the same level can exchange routing information with each other.
  • routing nodes at adjacent upper and lower levels can be laid out in a centralized structure, and routing nodes at the same level can be laid out in a distributed structure. The higher the level, the fewer the routing nodes; the routing nodes converge as the level increases, so that the whole network composed of routing nodes finally takes a cone shape.
  • the nodes that execute the new routing protocol in the foregoing network can all be used as local nodes in the method provided in the embodiments of the present disclosure.
  • before the local node is started, the first correspondence is not stored in the local node; the first correspondence needs to be established by the local node itself after it starts.
  • optionally, the method provided by the embodiments of the present disclosure may further include: when the local node starts, for each other node, sending a computing task type query request to the other node, receiving at least one computing task type returned by the other node, sending the other node a computing performance query request corresponding to the at least one computing task type, and receiving the computing performance corresponding to the at least one computing task type returned by the other node; and, based on the at least one computing task type corresponding to each other node and the computing performance corresponding to the at least one computing task type, establishing the first correspondence between computing task types, other nodes, and computing performance.
  • when the local node starts, the superior node of the local node can detect that it has started and can send the node information of the local node's peer nodes to the local node, so that the local node can determine its peer nodes.
  • peer nodes include computing nodes that belong to the same preset network area as the local node, and routing nodes that do not belong to the same preset network area as the local node, such as node A, node B, node C, and node D in Table 1.
  • the local node can create Table 1 based on its peer nodes; at this point Table 1 contains only the other nodes, and the initial values of the other table entries are all 0.
  • the local node can send a computing task type query request to each other node, receive at least one computing task type returned by the other node, send the other node a computing performance query request corresponding to the at least one computing task type, and receive the computing performance corresponding to the at least one computing task type returned by the other node. The local node can then establish the first correspondence between computing task types, other nodes, and computing performance based on the at least one computing task type corresponding to each other node and the computing performance corresponding to the at least one computing task type.
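One possible shape of this startup procedure is sketched below. The helper names `query_task_types` and `query_performance` are hypothetical stand-ins for the computing task type query request and the computing performance query request; the source does not define their message format.

```python
# Sketch of building the first correspondence at startup.
# query_task_types / query_performance are hypothetical RPC-style helpers.

def build_first_correspondence(peer_nodes, query_task_types, query_performance):
    """first correspondence: task type -> {node: computing performance}."""
    correspondence = {}
    for node in peer_nodes:
        task_types = query_task_types(node)                 # computing task type query request
        performance = query_performance(node, task_types)   # computing performance query request
        for task_type in task_types:
            correspondence.setdefault(task_type, {})[node] = performance[task_type]
    return correspondence

# toy stand-ins for the two query requests
peers = ["node_A", "node_B"]
fake_types = {"node_A": ["task_1"], "node_B": ["task_1", "task_2"]}
fake_perf = {"node_A": {"task_1": 30.0}, "node_B": {"task_1": 25.0, "task_2": 40.0}}

table = build_first_correspondence(
    peers,
    lambda n: fake_types[n],
    lambda n, ts: {t: fake_perf[n][t] for t in ts},
)
print(table)  # {'task_1': {'node_A': 30.0, 'node_B': 25.0}, 'task_2': {'node_B': 40.0}}
```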
  • for computing nodes that belong to the same preset network area as the local node, the computing performance may include load information and computing delay
  • the step of receiving the computing performance corresponding to the at least one computing task type returned by the other node may specifically include: receiving the current load information corresponding to the at least one computing task type returned by the other node. The local node can then determine, according to a prestored second correspondence between load information and computing delay, the computing delay corresponding to the current load information as the computing delay corresponding to the at least one computing task type.
  • historical load information and related computing-delay data can be imported into and stored in the local node in advance.
  • the local node can fit these historical data to determine the relationship between load information and computing delay; then, once the current load information corresponding to at least one computing task type is determined, the computing delay corresponding to that load information can be determined, and thus the computing delay corresponding to the at least one computing task type.
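One simple way to realize the "fit the historical data" step is an ordinary least-squares line relating load to computing delay, as sketched below. The source does not prescribe a particular fitting method, and the sample values are invented for illustration.

```python
# Least-squares fit of computing delay (ms) against load, as one possible
# realization of the second correspondence between load and computing delay.
historical = [(0.2, 18.0), (0.4, 24.0), (0.6, 31.0), (0.8, 39.0)]  # (load, delay) samples

n = len(historical)
mean_x = sum(x for x, _ in historical) / n
mean_y = sum(y for _, y in historical) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in historical) / \
        sum((x - mean_x) ** 2 for x, _ in historical)
intercept = mean_y - slope * mean_x

def delay_for_load(load: float) -> float:
    """Estimate computing delay for the node's current load."""
    return intercept + slope * load

print(round(delay_for_load(0.5), 1))  # delay estimate at 50% load
```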
  • computing delay directly and clearly reflects a computing node's ability to execute a given computing task. Although many factors affect how a computing node executes a computing task, they are ultimately all reflected in the computing delay: the shorter the computing delay, the stronger the node's ability to execute that task. Factors that can affect a computing node's execution of a computing task include the performance of the Central Processing Unit (CPU), the performance of the Graphics Processing Unit (GPU), and the real-time load. In practical applications, some computing tasks place high requirements on CPU performance but not on GPU performance, while others place high requirements on GPU performance but not on CPU performance; for example, image recognition type computing tasks place high requirements on GPU performance.
  • for routing nodes that do not belong to the same preset network area as the local node, the computing performance may include computing delay
  • the step of receiving the computing performance corresponding to the at least one computing task type returned by the other node may include: receiving the computing delay corresponding to the at least one computing task type returned by the other node.
  • for a computing node N that belongs to the same preset network area as routing node M, routing node M maintains the computing performance of computing node N; for a routing node P that does not belong to the same preset network area as routing node M, since routing node P maintains the computing performance of the computing node Q that belongs to its own preset network area, routing node M can probe the computing performance of computing node Q directly from routing node P.
  • the packet round-trip delay between the local node and other nodes can be determined periodically, according to a preset cycle, through interactive methods such as Packet Internet Groper (PING).
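The round-trip measurement itself can use standard ICMP ping. The sketch below uses a plain UDP echo instead of raw ICMP (which typically needs elevated privileges) purely to illustrate a periodic round-trip measurement; the peer address and port are hypothetical, and the peer is assumed to echo probes back.

```python
# Measure packet round-trip delay to a peer with a simple UDP echo probe.
# A real deployment could equally use ICMP (PING); the address below is hypothetical.
import socket, time

def measure_rtt(peer=("192.0.2.10", 9000), timeout=1.0):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.monotonic()
        s.sendto(b"probe", peer)
        try:
            s.recvfrom(64)                        # peer is assumed to echo the probe
        except socket.timeout:
            return None                           # treat loss as "no measurement"
        return (time.monotonic() - start) * 1000  # round-trip delay in ms

rtt = measure_rtt()
print("RTT:", "timeout" if rtt is None else f"{rtt:.1f} ms")
```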
  • in the above manner, the first correspondence can be initially established when the local node starts.
  • however, because the computing delay is not fixed but changes dynamically over time according to specific conditions, the computing delay needs to be updated.
  • for a computing node that belongs to the same preset network area as the local node: whenever a target data packet whose destination address is another node in the same preset network area as the local node is received, the second computing task type corresponding to the target data packet is determined, and the target data packet is forwarded to that other node; when the computation result corresponding to the target data packet returned by that other node is received, the computing delay between the time point at which the target data packet was forwarded and the current time point is determined, and that computing delay is determined as the computing performance corresponding to the second computing task type; the computing performance corresponding to the second computing task type then replaces, in the first correspondence, the computing performance corresponding to the second computing task type of that other node in the same preset network area as the local node, and the update count of the replaced computing performance in the first correspondence is updated.
  • because the local node forwards data packets of the computing task type on behalf of computing nodes in its preset network area, the way a computing node executes the computing task corresponding to such a packet reflects the computing node's current status; the local node can collect these statistics and update the computing performance corresponding to the computing nodes that belong to the same preset network area.
  • for example, when a local node forwards an image-recognition data packet to a computing node in the same preset network area, it can record the time point of forwarding; when it receives the recognition result returned by the computing node, it can determine the computing delay between the time point at which the result was returned and the time point of forwarding, and thereby determine how long the computing node currently needs to perform an image-recognition computing task.
  • the first correspondence also stores an update count of the computing performance. Whenever the local node updates the computing delay of a computing node belonging to the same preset network area, the update count can be increased by 1; the initial value of the update count can be set to 0. Table 2 shows the correspondence between computing task type, other nodes, computing delay, the packet round-trip delay between the local node and other nodes, and the update count.
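A compact way to picture this bookkeeping: record the forwarding time, and when the result returns, write the measured delay into the first correspondence and increment the update count. The entry layout below is an illustrative simplification of Table 2, not the actual table format.

```python
# Sketch of updating computing performance for a same-area computing node.
import time

entry = {"compute_delay_ms": 20.0, "rtt_ms": 0.0, "update_count": 0}
pending = {}  # packet id -> time the packet was forwarded

def on_forward(packet_id: str):
    pending[packet_id] = time.monotonic()

def on_result(packet_id: str):
    elapsed_ms = (time.monotonic() - pending.pop(packet_id)) * 1000
    entry["compute_delay_ms"] = elapsed_ms   # replace the stored computing delay
    entry["update_count"] += 1               # update count starts at 0 and grows by 1

on_forward("pkt-1")
time.sleep(0.03)          # stand-in for the node doing the image-recognition work
on_result("pkt-1")
print(entry)
```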
  • the local node may send a detection packet (also referred to as a computing performance query request) to the routing node to obtain the computing performance of the node that needs to be updated.
  • an update count of the computing performance is also stored in the first correspondence.
  • the method provided in the embodiments of the present disclosure may further include: when a computing performance query request sent by any one of the other nodes is received, obtaining the queried computing task type and the corresponding update count carried in the request, where the computing performance query request is used to indicate querying the computing performance of other nodes that belong to the same preset network area as the local node; determining, in the first correspondence, the update count of the computing performance corresponding to the queried computing task type; and, if the determined update count is greater than the update count carried in the computing performance query request, sending the computing performance corresponding to the queried computing task type and the determined update count to that other node.
  • as shown in Fig. 4, if the local node currently needs to update the computing performance of any one of the other nodes, it can first determine, based on the first correspondence, all the computing task type identifiers related to that node and the corresponding update counts.
  • the computing task type identifiers and corresponding update counts related to that node are carried in a detection packet and sent to that node.
  • after receiving the detection packet sent by the local node, the other node determines the update counts corresponding to the computing task types of the computing nodes that belong to its own preset network area. If a determined update count is greater than the update count carried in the detection packet, it is determined that the computing performance corresponding to that computing task type needs to be updated. The other node then carries the computing performance corresponding to all the computing task types determined to need updating, together with the update counts recorded at that node, in a detection response packet and sends it to the local node. It should be noted that if the other node has a target computing task type that is not recorded in the first correspondence of the local node, it also needs to be sent to the local node so that the local node adds a record for the target computing task type.
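The exchange described above amounts to: the queried node returns only the entries whose local update count exceeds the count carried in the detection packet, plus any task types the probing node has never recorded. A minimal sketch, with all names hypothetical:

```python
# Responder side of the detection-packet exchange: return only entries that are
# newer than the prober's copy, plus task types the prober does not know about.
def build_probe_response(probe_counts, local_table):
    """probe_counts: {task_type: update count known to the prober}
    local_table:  {task_type: {"performance": ..., "update_count": int}}"""
    response = {}
    for task_type, record in local_table.items():
        known = probe_counts.get(task_type)
        if known is None or record["update_count"] > known:
            response[task_type] = record      # needs updating (or is brand new)
    return response

local = {
    "task_1": {"performance": 22.0, "update_count": 5},
    "task_2": {"performance": 40.0, "update_count": 2},
    "task_3": {"performance": 15.0, "update_count": 1},   # not in the prober's table
}
print(build_probe_response({"task_1": 5, "task_2": 1}, local))
# -> only task_2 (2 > 1) and task_3 (unknown to the prober) are returned
```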
  • as shown in Fig. 5, when a User Equipment (UE) device such as UE1 initiates a data packet of computing task type 1, the data packet arrives at node 1 (the local node), and node 1, acting as the management node, performs routing processing for the data packet.
  • node 1 looks up Table 3 and determines that the nodes that can perform the computing task corresponding to computing task type 1 are node local, node 2, node 3, and node 4. It computes, for each node, the sum of the computing delay and the packet round-trip delay, finds that the sum corresponding to node 3 is the smallest, and can forward the data packet to node 3.
  • when UE2 initiates a data packet of computing task type 2, the data packet arrives at node 1, which looks up Table 3 and determines that the nodes that can perform the computing task corresponding to computing task type 2 are node local, node 2, and node 3. It computes the sum of each node's computing delay and packet round-trip delay, finds that the sum corresponding to node local is the smallest, and can forward the data packet to node local.
  • after a period of time, when UE1 again initiates a data packet of computing task type 1, the data packet arrives at node 1. Because node 3 is carrying a heavier load at this time, its computing delay rises to 50 ms, so this time the sum corresponding to node 2 is the smallest, and node 1 can forward the data packet to node 2.
  • with the method provided by the embodiments of the present disclosure, in the process of routing a data packet, in addition to the current network conditions, the destination node is also determined based on the computing performance of each node that can perform the computing task indicated by the data packet of the computing task type. In this way, it can be ensured that the destination node completes the computing task quickly and feeds the computation result back to the node that initiated the data packet, thereby shortening the waiting time of the initiating node.
  • the device includes:
  • the determining module 610 is configured to: determine the first computing task type corresponding to the data packet when a data packet of the computing task type is received; determine, based on the pre-acquired first correspondence between computing task types, other nodes, and computing performance, the at least one other node corresponding to the first computing task type and the computing performance corresponding to the at least one other node; and determine the target node among the at least one other node based on the computing performance corresponding to the at least one other node and the link state between the local node and the at least one other node. It can specifically implement the determining functions in steps S110-S130 above, as well as other implicit steps.
  • the sending module 620 is configured to determine the address of the target node as the destination address of the data packet, and forward the data packet based on the destination address. It can specifically implement the sending function in step S140 above, as well as other implicit steps.
  • the computing performance includes computing delay
  • the link state includes the round-trip delay of data packets
  • the determining module 610 is configured to:
  • for each other node, determine the sum of the computing delay corresponding to the other node and the packet round-trip delay between the local node and the other node;
  • the node corresponding to the minimum sum value is determined to be the target node.
  • the device further includes:
  • the receiving module is configured to, when the local node starts, for each other node, send a computing task type query request to the other node, receive at least one computing task type returned by the other node, send the other node a computing performance query request corresponding to the at least one computing task type, and receive the computing performance corresponding to the at least one computing task type returned by the other node;
  • the establishment module is configured to establish a first correspondence between the computing task type, other nodes, and computing performance based on at least one computing task type corresponding to each other node and computing performance corresponding to the at least one computing task type.
  • the other node and the local node belong to the same preset network area
  • the computing performance includes load information and computing time delay
  • the receiving module is configured to receive the current load information corresponding to the at least one computing task type returned by the other node;
  • the determining module 610 is further configured to determine, according to the prestored second correspondence between load information and computing delay, the computing delay corresponding to the current load information as the computing delay corresponding to the at least one computing task type.
  • the other node and the local node do not belong to the same preset network area
  • the computing performance includes computing delay
  • the receiving module is configured to: receive the computing delay corresponding to the at least one computing task type returned by the other node.
  • the device further includes:
  • the obtaining module is configured to, when a computing performance query request sent by any one of the other nodes is received, obtain the queried computing task type and the corresponding update count carried in the computing performance query request, wherein the computing performance query request is used to indicate querying the computing performance of other nodes that belong to the same preset network area as the local node;
  • the determining module 610 is further configured to determine, in the first correspondence, the update count of the computing performance corresponding to the queried computing task type;
  • the sending module 620 is further configured to, when the determined update count is greater than the update count carried in the computing performance query request, send the computing performance corresponding to the queried computing task type and the determined update count to that other node.
  • the determining module 610 is further configured to: whenever a target data packet whose destination address is another node in the same preset network area as the local node is received, determine the second computing task type corresponding to the target data packet and forward the target data packet to that other node; and, when the computation result corresponding to the target data packet returned by that other node in the same preset network area as the local node is received, determine the computing delay between the time point of forwarding the target data packet and the current time point, and determine that computing delay as the computing performance corresponding to the second computing task type;
  • the device also includes:
  • an update module, configured to replace, with the computing performance corresponding to the second computing task type, the computing performance corresponding to the second computing task type of the other node belonging to the same preset network area as the local node in the first correspondence, and to update the update count of the replaced computing performance in the first correspondence.
  • determining module 610 and sending module 620 may be implemented by a processor, or implemented by a processor in cooperation with a memory and a transceiver.
  • with the device provided by the embodiments of the present disclosure, in the process of routing a data packet, in addition to the current network conditions, the computing performance of each node that can perform the computing task indicated by the data packet of the computing task type is also considered when determining the destination node. In this way, it can be ensured that the destination node completes the computing task quickly and feeds the computation result back to the node that initiated the data packet, thereby shortening the waiting time of the initiating node.
  • it should be noted that, when the device for routing data packets provided in the above embodiments routes data packets, the division into the above functional modules is used only as an example for description.
  • in practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the node can be divided into different functional modules to complete all or part of the functions described above.
  • the device for routing a data packet provided in the above-mentioned embodiment belongs to the same concept as the embodiment of the method for routing a data packet. For the specific implementation process, refer to the method embodiment, which will not be repeated here.
  • the node 700 may include a processor 710, a memory 740, and a transceiver 730, and the transceiver 730 may be connected to the processor 710, as shown in Fig. 7.
  • the transceiver 730 may include a receiver and a transmitter, and may be used to receive or send messages or data.
  • the transceiver 730 may be a network card.
  • the node 700 may also include an acceleration component (which may be referred to as an accelerator). When the acceleration component is a network acceleration component, the acceleration component may be a network card.
  • the processor 710 may be the control center of the node 700, and uses various interfaces and lines to connect various parts of the entire node 700, such as the transceiver 730.
  • the processor 710 may be a central processing unit (CPU).
  • the processor 710 may include one or more processing units.
  • the processor 710 may also be a digital signal processor, an application specific integrated circuit, a field programmable gate array, or other programmable logic devices.
  • the node 700 may further include a memory 740, and the memory 740 may be used to store software programs and modules.
  • the processor 710 reads the software codes and modules stored in the memory to execute various functional applications and data processing of the node 700.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure relates to a method and device for routing data packets, and belongs to the field of network communication technology. The method includes: when a data packet of the computing task type is received, determining a first computing task type corresponding to the data packet; determining, based on a pre-acquired first correspondence between computing task types, other nodes, and computing performance, at least one other node corresponding to the first computing task type and the computing performance corresponding to the at least one other node; determining a target node among the at least one other node based on the computing performance corresponding to the at least one other node and the link state between the local node and each of the at least one other node; and determining the address of the target node as the destination address of the data packet and forwarding the data packet based on the destination address. With the present disclosure, it can be ensured that the destination node completes the computing task quickly and feeds the computation result back to the node that initiated the data packet.

Description

对数据包进行路由的方法和装置
本申请要求于2019年1月22日提交的申请号为201910057402.8、发明名称为“对数据包进行路由的方法和装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本公开是关于网络通信技术领域,尤其是关于一种对数据包进行路由的方法和装置。
背景技术
当路由节点接收到任一数据包时,路由节点可以根据当前的网络状况,确定下一跳路由节点,并将数据包转发到下一跳路由节点。其中,数据包可以携带用于指示需要从数据节点获取的目标数据的信息、携带即时通信信息或者携带用于指示需要运算节点执行目标类型的运算任务的信息(携带这种信息的数据包可称为运算任务类型的数据包)等。
可以执行同一类型运算任务的运算节点可以有多个,对于运算任务类型的数据包,只要将其转发至任一可以执行对应的运算任务的运算节点,该运算节点就可以执行运算任务,输出运算结果。例如,图像识别类型的数据包中携带有待识别图像,当运算节点接收到图像识别类型的数据包时,可以获取并识别待识别图像,将识别结果返回任务的发起节点。
在实现本公开的过程中,发明人发现至少存在以下问题:
在相关技术中,在对数据包进行路由的过程中,完全是根据当前的网络状况,确定如何转发数据包的。例如,可以执行运算任务A的运算节点有运算节点B和运算节点C,根据当前的网络状况,确定将运算任务A对应的数据包转发至当前的网络状况最佳的路径对应的运算节点C。然而,在实际应用中,当前的网络状况最佳的路径对应的运算节点不一定是最优的执行运算任务的节点。如果发生上述情况,会造成运算任务类型的数据包的发起节点需要等待很长时间,才能获得想要的结果。
发明内容
为了克服相关技术中存在的问题,本公开提供了以下技术方案:
第一方面,提供了一种对数据包进行路由的方法,所述方法包括:
当接收到运算任务类型的数据包时,确定所述数据包对应的第一运算任务类型;
基于预先获取的运算任务类型、其他节点和运算性能的第一对应关系,确定所述第一运算任务类型对应的至少一个其他节点和所述至少一个其他节点对应的运算性能;
基于所述至少一个其他节点对应的运算性能和本地节点分别与所述至少一个其他节点之间的链路状态,在所述至少一个其他节点中,确定目标节点;
将所述目标节点的地址确定为所述数据包的目的地址,基于所述目的地址,对所述数据包进行转发。
通过本公开实施例提供的方法,在对数据包进行路由的过程中,除了考虑当前的网络状况之外,还根据能够执行运算任务类型的数据包指示的运算任务的每个节点的运算性能,确 定目的节点。这样,可以保证目的节点能够快速完成运算任务并将运算结果反馈给数据包的发起节点,从而缩短数据包的发起节点的等待时间。
在一种可能的实现方式中,所述运算性能包括运算时延,所述链路状态包括数据包往返时延,所述基于所述至少一个其他节点对应的运算性能和本地节点分别与所述至少一个其他节点之间的链路状态,在所述至少一个其他节点中,确定目标节点,包括:
对于每个其他节点,确定所述其他节点对应的运算时延和本地节点与所述其他节点之间的数据包往返时延的和值;
在所述至少一个其他节点中,确定最小和值对应的节点为目标节点。
本地节点确定能够执行每种运算任务类型对应的运算任务的节点、以及每个节点执行每种运算任务类型的运算任务所需的运算时延和本地节点之间的数据包往返时延。对于每个其他节点,确定其他节点对应的运算时延和本地节点与其他节点之间的数据包往返时延的和值,在至少一个其他节点中,确定最小和值对应的节点为目标节点。
在一种可能的实现方式中,所述方法还包括:
当所述本地节点启动时,对于每个其他节点,向所述其他节点发送运算任务类型查询请求,接收所述其他节点返回的至少一个运算任务类型,向所述其他节点发送所述至少一个运算任务类型对应的运算性能查询请求,接收所述其他节点返回的所述至少一个运算任务类型对应的运算性能;
基于每个其他节点分别对应的至少一个运算任务类型和所述至少一个运算任务类型对应的运算性能,建立运算任务类型、其他节点和运算性能的第一对应关系。
在本地节点未启动之前,本地节点中未存储有第一对应关系,第一对应关系需要在本地节点启动之后,自行建立。
在一种可能的实现方式中,所述其他节点和所述本地节点属于同一预设网络区域内,所述运算性能包括负载信息和运算时延,所述接收所述其他节点返回的所述至少一个运算任务类型对应的运算性能,包括:
接收所述其他节点返回的所述至少一个运算任务类型对应的当前的负载信息;
所述方法还包括:
根据预先存储的负载信息和运算时延的第二对应关系,确定所述当前的负载信息对应的运算时延,作为所述至少一个运算任务类型对应的运算时延。
可以预先在本地节点中导入并存储历史负载信息和运算时延相关数据,本地节点可以对这些历史数据进行拟合,以确定负载信息和运算时延之间的关系,进而,当确定了至少一个运算任务类型对应的当前的负载信息时,就可以确定当前的负载信息对应的运算时延。进而,就可以确定至少一个运算任务类型对应的运算时延。
在一种可能的实现方式中,所述其他节点和所述本地节点不属于同一预设网络区域内,所述运算性能包括运算时延,所述接收所述其他节点返回的所述至少一个运算任务类型对应的运算性能,包括:
接收所述其他节点返回的所述至少一个运算任务类型对应的运算时延。
在一种可能的实现方式中,所述第一对应关系中还存储有运算性能的更新次数,所述方法还包括:
当接收到所述其他节点中的任一其他节点发送的运算性能查询请求时,获取所述运算性 能查询请求中携带的查询运算任务类型和对应的更新次数,其中,所述运算性能查询请求用于指示查询和所述本地节点属于同一预设网络区域内的其他节点的运算性能;
在所述第一对应关系中,确定所述查询运算任务类型对应的运算性能的更新次数;
如果确定出的更新次数大于所述运算性能查询请求中携带的更新次数,则向所述任一其他节点发送所述查询运算任务类型对应的运算性能和确定出的更新次数。
在本地节点启动时,可以初步建立第一对应关系,但是由于运算时延不是固定不变的,而是随着时间的推移根据具体情况动态变化的,因此需要对运算时延进行更新。
在一种可能的实现方式中,所述方法还包括:
每当接收到目的地址为和所述本地节点属于同一预设网络区域内的其他节点的目标数据包时,确定所述目标数据包对应的第二运算任务类型,并向和所述本地节点属于同一预设网络区域内的其他节点转发所述目标数据包;
当接收到和所述本地节点属于同一预设网络区域内的其他节点返回的所述目标数据包对应的运算结果时,确定转发所述目标数据包的时间点和当前时间点之间的运算时延,将所述运算时延确定为所述第二运算任务类型对应的运算性能;
用所述第二运算任务类型对应的运算性能,替换所述第一对应关系中和所述本地节点属于同一预设网络区域内的其他节点对应的所述第二运算任务类型对应的运算性能,并对所述第一对应关系中替换后的运算性能的更新次数进行更新。
第二方面,提供了一种对数据包进行路由的装置,该装置包括至少一个模块,该至少一个模块用于实现上述第一方面所提供的对数据包进行路由的方法。
第三方面,提供了一种节点,该节点包括处理器、存储器,处理器被配置为执行存储器中存储的指令;处理器通过执行指令来实现上述第一方面所提供的对数据包进行路由的方法。
第四方面,提供了计算机可读存储介质,包括指令,当所述计算机可读存储介质在节点上运行时,使得所述节点执行上述第一方面所述的方法。
第五方面,提供了一种包含指令的计算机程序产品,当所述计算机程序产品在节点上运行时,使得所述节点执行上述第一方面所述的方法。
本公开的实施例提供的技术方案可以包括以下有益效果:
通过本公开实施例提供的方法,在对数据包进行路由的过程中,除了考虑当前的网络状况之外,还根据能够执行运算任务类型的数据包指示的运算任务的每个节点的运算性能,确定目的节点。这样,可以保证目的节点能够快速完成运算任务并将运算结果反馈给数据包的发起节点,从而缩短数据包的发起节点的等待时间。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本公开。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用于解释本公开的原理。在附图中:
图1是根据一示例性实施例示出的一种对数据包进行路由的方法的流程示意图;
图2是根据一示例性实施例示出的一种对数据包进行路由的方法的流程示意图;
图3是根据一示例性实施例示出的一种网络结构示意图;
图4是根据一示例性实施例示出的一种对数据包进行路由的方法的流程示意图;
图5是根据一示例性实施例示出的一种对数据包进行路由的方法的流程示意图;
图6是根据一示例性实施例示出的一种对数据包进行路由的装置的结构示意图;
图7是根据一示例性实施例示出的一种节点的结构示意图。
通过上述附图,已示出本公开明确的实施例,后文中将有更详细的描述。这些附图和文字描述并不是为了通过任何方式限制本公开构思的范围,而是通过参考特定实施例为本领域技术人员说明本公开的概念。
具体实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本公开相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本公开的一些方面相一致的装置和方法的例子。
本公开一示例性实施例提供了一种对数据包进行路由的方法,如图1所示,该方法的处理流程可以包括如下的步骤:
步骤S110,当接收到运算任务类型的数据包时,确定数据包对应的第一运算任务类型。
在实施中,当本地节点接收到数据包时,可以确定接收到的数据包对应的任务类型,包括指示需要从数据节点获取目标数据、传输即时通信信息或者指示需要运算节点执行目标类型的运算任务(这种数据包可称为运算任务类型的数据包)等。
当本地节点接收到运算任务类型的数据包时,可以确定数据包对应的第一运算任务类型。在实际应用中,当本地节点接收到数据包时,可以获取数据包的包头中携带的互联网协议地址(Internet Protocol Address,IP)。接着,本地节点可以确定携带的IP地址的类型,如果携带的IP地址为任一节点的IP地址,则基于任一节点的IP地址,对数据包进行转发。如果携带的IP地址对应于任一运算任务,则可以确定接收到的数据包为运算任务类型的数据包。
由于运算任务可以为多种,所以需要通过运算任务类型标识对不同的运算任务进行区分,本地节点可以获取数据包的包头中携带的运算任务类型标识,确定数据包对应的第一运算任务类型。需要说明的是,在本地节点中,需要运行新的路由协议,以基于新的路由协议获取数据包的包头中携带的运算任务类型标识,并基于运算任务类型标识,对数据包进行路由处理。
步骤S120,基于预先获取的运算任务类型、其他节点和运算性能的第一对应关系,确定第一运算任务类型对应的至少一个其他节点和至少一个其他节点对应的运算性能。
在实施中,可以增加新的路由表项,包括运算任务类型和运算性能。可以在本地节点中预先建立运算任务类型、其他节点和运算性能的第一对应关系,基于第一对应关系,确定第一运算任务类型对应的至少一个其他节点和至少一个其他节点对应的运算性能。
不同运算节点可以执行的运算任务可以为一种也可以为多种,并且不同运算节点可以执行的运算任务可以相同也可以不同。因此,首先可以确定哪些节点可以执行第一运算任务类型的运算任务,其次再在这些能够执行第一运算任务类型的运算任务的节点中,选取最优的一个节点。
例如,用户想让云端帮助识别目标图像中的所有人物,可以通过发送识别目标图像中的所有人物对应的数据包实现。当本地节点接收到识别目标图像中的所有人物对应的数据包时,可以获取数据包中的运算任务类型标识。基于运算任务类型标识,查找可以执行该运算任务类型标识对应的运算任务的节点,包括节点A、节点B和节点C。接着,可以分别确定这些节点对应的运算性能。其中,运算性能可以包括运算时延等可以体现不同节点在执行运算任务上的执行能力的参数信息。
步骤S130,基于至少一个其他节点对应的运算性能和本地节点分别与至少一个其他节点之间的链路状态,在至少一个其他节点中,确定目标节点。
在实施中,路由表项中还可以包括不同节点对应的链路状态,链路状态可以包括本地节点与其他节点之间的数据包往返时延。在实际应用中,本地节点可以确定第一运算任务类型对应的至少一个其他节点,进而确定本地节点分别与至少一个其他节点之间的链路状态。综合至少一个其他节点对应的运算性能和本地节点分别与至少一个其他节点之间的链路状态等因素,在至少一个其他节点中,确定目标节点。
可选地,运算性能包括运算时延,链路状态包括数据包往返时延,步骤S130可以包括:对于每个其他节点,确定其他节点对应的运算时延和本地节点与其他节点之间的数据包往返时延的和值;在至少一个其他节点中,确定最小和值对应的节点为目标节点。
在实施中,如表1所示,可以预先建立包括运算任务类型、其他节点、运算时延、本地节点与其他节点之间的数据包往返时延的对应关系。
表1
可以基于表1,确定能够执行每种运算任务类型对应的运算任务的节点、以及每个节点执行每种运算任务类型的运算任务所需的运算时延和本地节点之间的数据包往返时延。对于每个其他节点,确定其他节点对应的运算时延和本地节点与其他节点之间的数据包往返时延的和值,在至少一个其他节点中,确定最小和值对应的节点为目标节点。
如图2所示,每种运算任务类型对应的节点可以是与本地节点属于同一预设网络区域内的运算节点,也可以是与本地节点不属于同一预设网络区域内的路由节点。如果运算任务类型对应的节点是与本地节点不属于同一预设网络区域内的路由节点M,则需要将数据包转发至路由节点M,再由路由节点M将数据包转发至与路由节点M属于同一预设网络区域内的运算节点。
步骤S140,将目标节点的地址确定为数据包的目的地址,基于目的地址,对数据包进行转发。
在实施中,在确定目标节点后,可以查询目标节点的地址,将目标节点的地址确定为数据包的目的地址,基于目的地址,对数据包进行转发。其他路由节点在接收到以目标节点的地址为目的地址的运算任务类型的数据包时,可以只基于网络状态,对数据包进行转发,最 终将数据包转发到目的节点上。
目的节点在接收到以自己的地址为目的地址的运算任务类型的数据包时,可以将数据包直接转发本地的运算节点,也可以基于本公开实施例提供的方法,重新确定本地的运算节点对应的运算延时和数据包往返时延的和值是不是依然是最小的,如果不是,则重新确定目的节点。
数据包最终由运算节点进行处理,将处理结果返回给与运算节点属于同一预设网络区域的路由节点,再由路由节点按照原路径将处理结果返回给运算任务的发起节点。
在所有路由节点启动之前,可以为这些路由节点进行区域规划以及级别规划。如图3所示,可以采用分布式与中心式相结合的方式对路由节点进行布局。路由节点之间可以存在上下级的关系,上级路由节点为下级路由节点的中心控制器,下级路由节点可以接受上级路由节点的控制,下级路由节点可以直接从上级路由节点中获取同级路由节点的节点信息,以避免下级路由节点再去挨个探测同级路由节点获取节点信息,这样可以提高获取节点信息的效率,互为同级的路由节点之间可以互相交换路由信息。
上下级路由节点可以采用中心式的结构进行布局,同级路由节点可以采用分布式的结构进行布局。随着层级越来越高,路由节点的数量越来越少,路由节点随着层级的增加呈收敛状态,最终整个由路由节点组成的网络呈锥形状。
上述网络中执行新的路由协议的节点,都可以作为本公开实施例提供的方法中的本地节点。在本地节点未启动之前,本地节点中未存储有第一对应关系,第一对应关系需要在本地节点启动之后,自行建立。
可选地,本公开实施例提供的方法还可以包括:当本地节点启动时,对于每个其他节点,向其他节点发送运算任务类型查询请求,接收其他节点返回的至少一个运算任务类型,向其他节点发送至少一个运算任务类型对应的运算性能查询请求,接收其他节点返回的至少一个运算任务类型对应的运算性能;基于每个其他节点分别对应的至少一个运算任务类型和至少一个运算任务类型对应的运算性能,建立运算任务类型、其他节点和运算性能的第一对应关系。
在实施中,当本地节点启动时,本地节点的上级节点可以检测到本地节点启动,上级节点可以将本地节点的同级节点的节点信息发送至本地节点,这样本地节点可以确定同级节点。其中,同级节点包括和本地节点属于同一预设网络区域的运算节点、和本地节点不属于同一预设网络区域的路由节点,如表1中的节点A、节点B、节点C、节点D。
本地节点可以基于同级节点,建立表1,此时表1中只有其他节点,其他表项的初始值全部为0。本地节点可以向其他节点发送运算任务类型查询请求,接收其他节点返回的至少一个运算任务类型,向其他节点发送至少一个运算任务类型对应的运算性能查询请求,接收其他节点返回的至少一个运算任务类型对应的运算性能。接着,本地节点可以基于每个其他节点分别对应的至少一个运算任务类型和至少一个运算任务类型对应的运算性能,建立运算任务类型、其他节点和运算性能的第一对应关系。
对于和本地节点属于同一预设网络区域的运算节点,运算性能可以包括负载信息和运算时延,接收其他节点返回的至少一个运算任务类型对应的运算性能的步骤具体可以包括:接收其他节点返回的至少一个运算任务类型对应的当前的负载信息。接着,本地节点可以根据预先存储的负载信息和运算时延的第二对应关系,确定当前的负载信息对应的运算时延,作 为至少一个运算任务类型对应的运算时延。
可以预先在本地节点中导入并存储历史负载信息和运算时延相关数据,本地节点可以对这些历史数据进行拟合,以确定负载信息和运算时延之间的关系,进而,当确定了至少一个运算任务类型对应的当前的负载信息时,就可以确定当前的负载信息对应的运算时延。进而,就可以确定至少一个运算任务类型对应的运算时延。
运算时延可以简单明了的直接反应运算节点的执行某一运算任务的执行能力,虽然影响运算节点执行某一运算任务的因素有很多,但最终都可以直接反应在运算时延上。运算时延越短,证明运算节点执行某一运算任务的执行能力越强。可以影响运算节点执行某一运算任务的因素包括中央处理器(Central Processing Unit,CPU)的性能、图形处理器(Graphics Processing Unit,GPU)的性能、实时负载等。在实际应用中,有些运算任务对CPU的性能要求较高,对GPU的性能要求不高。有些运算任务对GPU的性能要求较高,对CPU的性能要求不高。例如,图像识别类型的运算任务对GPU的性能要求较高。
对于和本地节点不属于同一预设网络区域的路由节点,运算性能可以包括运算时延,接收其他节点返回的至少一个运算任务类型对应的运算性能的步骤可以包括:接收其他节点返回的至少一个运算任务类型对应的运算时延。
和路由节点M属于同一预设网络区域的运算节点N,可以由路由节点M维护运算节点N的运算性能,而和路由节点M不属于同一预设网络区域的路由节点P,由于路由节点P维护着和自己属于同一预设网络区域的运算节点Q的运算性能,因此路由节点M可以直接从路由节点P中探测运算节点Q的运算性能。
对于本地节点和其他节点之间的数据包往返时延,可以按照预设的周期,通过因特网包探索器(Packet Internet Groper,PING)等交互方式,确定本地节点和其他节点之间的数据包往返时延。
通过上述方式,在本地节点启动时,可以初步建立第一对应关系,但是由于运算时延不是固定不变的,而是随着时间的推移根据具体情况动态变化的,因此需要对运算时延进行更新。
对于和本地节点属于同一预设网络区域的运算节点,每当接收到目的地址为和本地节点属于同一预设网络区域内的其他节点的目标数据包时,确定目标数据包对应的第二运算任务类型,并向和本地节点属于同一预设网络区域内的其他节点转发目标数据包;当接收到和本地节点属于同一预设网络区域内的其他节点返回的目标数据包对应的运算结果时,确定转发目标数据包的时间点和当前时间点之间的运算时延,将运算时延确定为第二运算任务类型对应的运算性能;用第二运算任务类型对应的运算性能,替换第一对应关系中和本地节点属于同一预设网络区域内的其他节点对应的第二运算任务类型对应的运算性能,并对第一对应关系中替换后的运算性能的更新次数进行更新。
由于本地节点需要为属于同一预设网络区域的运算节点转发运算任务类型的数据包,运算节点在执行运算任务类型的数据包对应的运算任务时,可以反映当前的运算节点的状况,本地节点可以统计这些状况,并对属于同一预设网络区域的运算节点对应的运算新能进行更新。
例如,当本地节点向属于同一预设网络区域的运算节点转发执行图像识别的数据包时,可以记录转发的时间点,当收到运算节点返回的识别结果时,可以确定返回识别结果的时间 点和转发的时间点之间的运算时延,进而就可以确定当前运算节点执行图像识别的运算任务需要多长时间。第一对应关系中还存储有运算性能的更新次数,每当本地节点对属于同一预设网络区域的运算节点的运算时延进行更新时,可以将更新次数加1。更新次数的初始值可以设置为0。如表2所示,是运算任务类型、其他节点、运算时延、本地节点与其他节点之间的数据包往返时延、更新次数的对应关系。
表2
对于和本地节点不属于同一预设网络区域的路由节点,本地节点可以向路由节点发送探测包(也可称为运算性能查询请求),以获取需要更新的节点的运算性能。第一对应关系中还存储有运算性能的更新次数,本公开实施例提供的方法还可以包括:当接收到其他节点中的任一其他节点发送的运算性能查询请求时,获取运算性能查询请求中携带的查询运算任务类型和对应的更新次数,其中,运算性能查询请求用于指示查询和本地节点属于同一预设网络区域内的其他节点的运算性能;在第一对应关系中,确定查询运算任务类型对应的运算性能的更新次数;如果确定出的更新次数大于运算性能查询请求中携带的更新次数,则向任一其他节点发送查询运算任务类型对应的运算性能和确定出的更新次数。
如图4所示,如果当前本地节点需要对其他节点中的任一其他节点的运算新能进行更新,首先可以基于第一对应关系,确定所有与任一其他节点相关的运算任务类型标识和对应的更新次数。将与任一其他节点相关的运算任务类型标识和对应的更新次数携带在探测包中,发送至任一其他节点。
任一其他节点在接收到本地节点发送的探测包后,确定和任一其他节点属于同一预设网络区域的运算节点对应的运算任务类型对应的更新次数。如果确定出的更新次数大于探测包中携带的更新次数,则确定对应的运算任务类型对应的运算性能需要更新。任一其他节点将所有确定出的需要更新的运算任务类型对应的运算性能和任一其他节点中记录的更新次数携带在探测回包中,发送至本地节点。需要说明的是,如果任一其他节点中存在目标运算任务类型未记录在本地节点的第一对应关系中,也需要发送至本地节点,以使本地节点增加一条关于目标运算任务类型的记录。
如表3所示,是运算任务类型、其他节点、运算时延、本地节点与其他节点之间的数据包往返时延、更新次数的对应关系。
表3
如图5所示,当用户设备(User Equipment,UE)设备如UE1发起运算任务类型1的数据包时,数据包到达节点1(本地节点),节点1作为管理节点,为数据包进行路由处理。节点1查找表3,确定能够执行运算任务类型1对应的运算任务的节点有节点local、节点2、节点3和节点4。计算每个节点的运算时延与数据包往返时延的和值,发现节点3对应的和值最小,节点1可以将数据包转发至节点3。
当UE2发起运算任务类型2的数据包时,数据包到达节点1,节点1查找表3,确定能够执行运算任务类型2对应的运算任务的节点有节点local、节点2和节点3。计算每个节点的运算时延与数据包往返时延的和值,发现节点local对应的和值最小,节点1可以将数据包转发至节点local。
经过一段时间之后,当UE1再次发起运算任务类型1的数据包时,数据包到达节点1,由于此时节点3带的负载较多,运算时延上升至50ms,所以本次节点2对应的和值最小,节点1可以将数据包转发至节点2。
通过本公开实施例提供的方法,在对数据包进行路由的过程中,除了考虑当前的网络状况之外,还根据能够执行运算任务类型的数据包指示的运算任务的每个节点的运算性能,确定目的节点。这样,可以保证目的节点能够快速完成运算任务并将运算结果反馈给数据包的发起节点,从而缩短数据包的发起节点的等待时间。
本公开又一示例性实施例提供了一种对数据包进行路由的装置,如图6所示,该装置包括:
确定模块610,用于当接收到运算任务类型的数据包时,确定所述数据包对应的第一运算任务类型;基于预先获取的运算任务类型、其他节点和运算性能的第一对应关系,确定所述第一运算任务类型对应的至少一个其他节点和所述至少一个其他节点对应的运算性能;基于所述至少一个其他节点对应的运算性能和本地节点分别与所述至少一个其他节点之间的链路状态,在所述至少一个其他节点中,确定目标节点,具体可以实现上述步骤S110-130中的确定功能,以及其他隐含步骤。
发送模块620,用于将所述目标节点的地址确定为所述数据包的目的地址,基于所述目的地址,对所述数据包进行转发,具体可以实现上述步骤S140中的发送功能,以及其他隐含步骤。
可选地,所述运算性能包括运算时延,所述链路状态包括数据包往返时延,所述确定模 块610,用于:
对于每个其他节点,确定所述其他节点对应的运算时延和本地节点与所述其他节点之间的数据包往返时延的和值;
在所述至少一个其他节点中,确定最小和值对应的节点为目标节点。
可选地,所述装置还包括:
接收模块,用于当所述本地节点启动时,对于每个其他节点,向所述其他节点发送运算任务类型查询请求,接收所述其他节点返回的至少一个运算任务类型,向所述其他节点发送所述至少一个运算任务类型对应的运算性能查询请求,接收所述其他节点返回的所述至少一个运算任务类型对应的运算性能;
建立模块,用于基于每个其他节点分别对应的至少一个运算任务类型和所述至少一个运算任务类型对应的运算性能,建立运算任务类型、其他节点和运算性能的第一对应关系。
可选地,所述其他节点和所述本地节点属于同一预设网络区域内,所述运算性能包括负载信息和运算时延,所述接收模块,用于接收所述其他节点返回的所述至少一个运算任务类型对应的当前的负载信息;
所述确定模块610,还用于根据预先存储的负载信息和运算时延的第二对应关系,确定所述当前的负载信息对应的运算时延,作为所述至少一个运算任务类型对应的运算时延。
可选地,所述其他节点和所述本地节点不属于同一预设网络区域内,所述运算性能包括运算时延,所述接收模块,用于:
接收所述其他节点返回的所述至少一个运算任务类型对应的运算时延。
可选地,所述第一对应关系中还存储有运算性能的更新次数,所述装置还包括:
获取模块,用于当接收到所述其他节点中的任一其他节点发送的运算性能查询请求时,获取所述运算性能查询请求中携带的查询运算任务类型和对应的更新次数,其中,所述运算性能查询请求用于指示查询和所述本地节点属于同一预设网络区域内的其他节点的运算性能;
所述确定模块610,还用于在所述第一对应关系中,确定所述查询运算任务类型对应的运算性能的更新次数;
所述发送模块620,还用于当确定出的更新次数大于所述运算性能查询请求中携带的更新次数时,向所述任一其他节点发送所述查询运算任务类型对应的运算性能和确定出的更新次数。
可选地,所述确定模块610,还用于每当接收到目的地址为和所述本地节点属于同一预设网络区域内的其他节点的目标数据包时,确定所述目标数据包对应的第二运算任务类型,并向和所述本地节点属于同一预设网络区域内的其他节点转发所述目标数据包;当接收到和所述本地节点属于同一预设网络区域内的其他节点返回的所述目标数据包对应的运算结果时,确定转发所述目标数据包的时间点和当前时间点之间的运算时延,将所述运算时延确定为所述第二运算任务类型对应的运算性能;
所述装置还包括:
更新模块,用于用所述第二运算任务类型对应的运算性能,替换所述第一对应关系中和所述本地节点属于同一预设网络区域内的其他节点对应的所述第二运算任务类型对应的运算性能,并对所述第一对应关系中替换后的运算性能的更新次数进行更新。
需要说明的是,上述确定模块610和发送模块620可以由处理器实现,或者由处理器配合存储器、收发器来实现。
通过本公开实施例提供的装置,在对数据包进行路由的过程中,除了考虑当前的网络状况之外,还根据能够执行运算任务类型的数据包指示的运算任务的每个节点的运算性能,确定目的节点。这样,可以保证目的节点能够快速完成运算任务并将运算结果反馈给数据包的发起节点,从而缩短数据包的发起节点的等待时间。
需要说明的是:上述实施例提供的对数据包进行路由的装置在对数据包进行路由时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将节点的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的对数据包进行路由的装置与对数据包进行路由的方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
节点700可以包括处理器710、存储器740和收发器730,收发器730可以与处理器710连接,如图7所示。收发器730可以包括接收器和发送器,可以用于接收或者发送消息或数据,收发器730可以是网卡。节点700还可以包括加速部件(可称为加速器),当加速部件为网络加速部件时,加速部件可以为网卡。处理器710可以是节点700的控制中心,利用各种接口和线路连接整个节点700的各个部分,如收发器730等。在本公开实施例中,处理器710可以是中央处理器(Central Processing Unit,CPU),可选的,处理器710可以包括一个或多个处理单元。处理器710还可以是数字信号处理器、专用集成电路、现场可编程门阵列或者其他可编程逻辑器件等。节点700还可以包括存储器740,存储器740可用于存储软件程序以及模块,处理器710通过读取存储在存储器的软件代码以及模块,从而执行节点700的各种功能应用以及数据处理。
本领域技术人员在考虑说明书及实践这里公开的公开后,将容易想到本公开的其它实施方案。本申请旨在涵盖本公开的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本公开的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开的真正范围和精神由权利要求指出。
应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本公开的范围仅由所附的权利要求来限制。

Claims (23)

  1. A method for routing a data packet, wherein the method comprises:
    when a data packet of a computing task type is received, determining a first computing task type corresponding to the data packet;
    determining, based on a pre-obtained first correspondence among computing task types, other nodes, and computation performance, at least one other node corresponding to the first computing task type and computation performance corresponding to the at least one other node;
    determining a target node among the at least one other node based on the computation performance corresponding to the at least one other node and link states between a local node and each of the at least one other node; and
    determining an address of the target node as a destination address of the data packet, and forwarding the data packet based on the destination address.
  2. The method according to claim 1, wherein the computation performance comprises a computation delay, the link state comprises a packet round-trip delay, and the determining a target node among the at least one other node based on the computation performance corresponding to the at least one other node and the link states between the local node and each of the at least one other node comprises:
    for each other node, determining a sum of the computation delay corresponding to the other node and the packet round-trip delay between the local node and the other node; and
    determining, among the at least one other node, the node corresponding to the smallest sum as the target node.
  3. The method according to claim 1, wherein the method further comprises:
    when the local node starts up, for each other node, sending a computing task type query request to the other node, receiving at least one computing task type returned by the other node, sending a computation performance query request for the at least one computing task type to the other node, and receiving computation performance, returned by the other node, corresponding to the at least one computing task type; and
    establishing, based on the at least one computing task type corresponding to each other node and the computation performance corresponding to the at least one computing task type, the first correspondence among computing task types, other nodes, and computation performance.
  4. The method according to claim 3, wherein the other node and the local node belong to a same preset network area, the computation performance comprises load information and a computation delay, and the receiving computation performance, returned by the other node, corresponding to the at least one computing task type comprises:
    receiving current load information, returned by the other node, corresponding to the at least one computing task type;
    and the method further comprises:
    determining, according to a pre-stored second correspondence between load information and computation delay, the computation delay corresponding to the current load information as the computation delay corresponding to the at least one computing task type.
  5. The method according to claim 3, wherein the other node and the local node do not belong to a same preset network area, the computation performance comprises a computation delay, and the receiving computation performance, returned by the other node, corresponding to the at least one computing task type comprises:
    receiving the computation delay, returned by the other node, corresponding to the at least one computing task type.
  6. The method according to claim 1, wherein the first correspondence further stores an update count of the computation performance, and the method further comprises:
    when a computation performance query request sent by any one of the other nodes is received, obtaining a queried computing task type and a corresponding update count carried in the computation performance query request, wherein the computation performance query request is used to query computation performance of other nodes that belong to a same preset network area as the local node;
    determining, in the first correspondence, the update count of the computation performance corresponding to the queried computing task type; and
    if the determined update count is greater than the update count carried in the computation performance query request, sending, to the any one other node, the computation performance corresponding to the queried computing task type and the determined update count.
  7. The method according to claim 6, wherein the method further comprises:
    each time a target data packet whose destination address is an other node belonging to the same preset network area as the local node is received, determining a second computing task type corresponding to the target data packet, and forwarding the target data packet to the other node belonging to the same preset network area as the local node;
    when a computation result, corresponding to the target data packet, returned by the other node belonging to the same preset network area as the local node is received, determining a computation delay between the time point at which the target data packet was forwarded and the current time point, and determining the computation delay as the computation performance corresponding to the second computing task type; and
    replacing, with the computation performance corresponding to the second computing task type, the computation performance corresponding to the second computing task type of the other node belonging to the same preset network area as the local node in the first correspondence, and updating the update count of the replaced computation performance in the first correspondence.
  8. An apparatus for routing a data packet, wherein the apparatus comprises:
    a determining module, configured to: when a data packet of a computing task type is received, determine a first computing task type corresponding to the data packet; determine, based on a pre-obtained first correspondence among computing task types, other nodes, and computation performance, at least one other node corresponding to the first computing task type and computation performance corresponding to the at least one other node; and determine a target node among the at least one other node based on the computation performance corresponding to the at least one other node and link states between a local node and each of the at least one other node; and
    a sending module, configured to determine an address of the target node as a destination address of the data packet, and forward the data packet based on the destination address.
  9. The apparatus according to claim 8, wherein the computation performance comprises a computation delay, the link state comprises a packet round-trip delay, and the determining module is configured to:
    for each other node, determine a sum of the computation delay corresponding to the other node and the packet round-trip delay between the local node and the other node; and
    determine, among the at least one other node, the node corresponding to the smallest sum as the target node.
  10. The apparatus according to claim 8, wherein the apparatus further comprises:
    a receiving module, configured to: when the local node starts up, for each other node, send a computing task type query request to the other node, receive at least one computing task type returned by the other node, send a computation performance query request for the at least one computing task type to the other node, and receive computation performance, returned by the other node, corresponding to the at least one computing task type; and
    an establishing module, configured to establish, based on the at least one computing task type corresponding to each other node and the computation performance corresponding to the at least one computing task type, the first correspondence among computing task types, other nodes, and computation performance.
  11. The apparatus according to claim 10, wherein the other node and the local node belong to a same preset network area, the computation performance comprises load information and a computation delay, and the receiving module is configured to receive current load information, returned by the other node, corresponding to the at least one computing task type;
    and the determining module is further configured to determine, according to a pre-stored second correspondence between load information and computation delay, the computation delay corresponding to the current load information as the computation delay corresponding to the at least one computing task type.
  12. The apparatus according to claim 10, wherein the other node and the local node do not belong to a same preset network area, the computation performance comprises a computation delay, and the receiving module is configured to:
    receive the computation delay, returned by the other node, corresponding to the at least one computing task type.
  13. The apparatus according to claim 8, wherein the first correspondence further stores an update count of the computation performance, and the apparatus further comprises:
    an obtaining module, configured to: when a computation performance query request sent by any one of the other nodes is received, obtain a queried computing task type and a corresponding update count carried in the computation performance query request, wherein the computation performance query request is used to query computation performance of other nodes that belong to a same preset network area as the local node;
    the determining module is further configured to determine, in the first correspondence, the update count of the computation performance corresponding to the queried computing task type; and
    the sending module is further configured to: when the determined update count is greater than the update count carried in the computation performance query request, send, to the any one other node, the computation performance corresponding to the queried computing task type and the determined update count.
  14. The apparatus according to claim 13, wherein the determining module is further configured to: each time a target data packet whose destination address is an other node belonging to the same preset network area as the local node is received, determine a second computing task type corresponding to the target data packet, and forward the target data packet to the other node belonging to the same preset network area as the local node; and when a computation result, corresponding to the target data packet, returned by the other node belonging to the same preset network area as the local node is received, determine a computation delay between the time point at which the target data packet was forwarded and the current time point, and determine the computation delay as the computation performance corresponding to the second computing task type;
    and the apparatus further comprises:
    an updating module, configured to replace, with the computation performance corresponding to the second computing task type, the computation performance corresponding to the second computing task type of the other node belonging to the same preset network area as the local node in the first correspondence, and to update the update count of the replaced computation performance in the first correspondence.
  15. A node, wherein the node comprises a processor, a memory, and a transceiver, wherein:
    the processor is configured to: when the transceiver is controlled to receive a data packet of a computing task type, determine a first computing task type corresponding to the data packet; determine, based on a first correspondence among computing task types, other nodes, and computation performance that is pre-obtained in the memory, at least one other node corresponding to the first computing task type and computation performance corresponding to the at least one other node; and determine a target node among the at least one other node based on the computation performance corresponding to the at least one other node and link states between a local node and each of the at least one other node; and
    the transceiver is configured to determine an address of the target node as a destination address of the data packet, and forward the data packet based on the destination address.
  16. The node according to claim 15, wherein the computation performance comprises a computation delay, the link state comprises a packet round-trip delay, and the processor is configured to:
    for each other node, determine a sum of the computation delay corresponding to the other node and the packet round-trip delay between the local node and the other node; and
    determine, among the at least one other node, the node corresponding to the smallest sum as the target node.
  17. The node according to claim 15, wherein the transceiver is configured to: when the local node starts up, for each other node, send a computing task type query request to the other node, receive at least one computing task type returned by the other node, send a computation performance query request for the at least one computing task type to the other node, and receive computation performance, returned by the other node, corresponding to the at least one computing task type; and
    the processor is configured to establish, based on the at least one computing task type corresponding to each other node and the computation performance corresponding to the at least one computing task type, the first correspondence among computing task types, other nodes, and computation performance.
  18. The node according to claim 17, wherein the other node and the local node belong to a same preset network area, the computation performance comprises load information and a computation delay, and the transceiver is configured to receive current load information, returned by the other node, corresponding to the at least one computing task type;
    and the processor is configured to determine, according to a second correspondence between load information and computation delay that is pre-stored in the memory, the computation delay corresponding to the current load information as the computation delay corresponding to the at least one computing task type.
  19. The node according to claim 17, wherein the other node and the local node do not belong to a same preset network area, the computation performance comprises a computation delay, and the transceiver is configured to:
    receive the computation delay, returned by the other node, corresponding to the at least one computing task type.
  20. The node according to claim 15, wherein the first correspondence further stores an update count of the computation performance, and the processor is configured to: when a computation performance query request sent by any one of the other nodes is received, obtain a queried computing task type and a corresponding update count carried in the computation performance query request, wherein the computation performance query request is used to query computation performance of other nodes that belong to a same preset network area as the local node; and determine, in the first correspondence, the update count of the computation performance corresponding to the queried computing task type; and
    the transceiver is configured to: when the determined update count is greater than the update count carried in the computation performance query request, send, to the any one other node, the computation performance corresponding to the queried computing task type and the determined update count.
  21. The node according to claim 20, wherein the processor is further configured to:
    each time a target data packet whose destination address is an other node belonging to the same preset network area as the local node is received, determine a second computing task type corresponding to the target data packet, and forward the target data packet to the other node belonging to the same preset network area as the local node;
    when a computation result, corresponding to the target data packet, returned by the other node belonging to the same preset network area as the local node is received, determine a computation delay between the time point at which the target data packet was forwarded and the current time point, and determine the computation delay as the computation performance corresponding to the second computing task type; and
    replace, with the computation performance corresponding to the second computing task type, the computation performance corresponding to the second computing task type of the other node belonging to the same preset network area as the local node in the first correspondence, and update the update count of the replaced computation performance in the first correspondence.
  22. A computer-readable storage medium comprising instructions, wherein when the computer-readable storage medium runs on a node, the node is caused to perform the method according to any one of claims 1 to 7.
  23. A computer program product containing instructions, wherein when the computer program product runs on a node, the node is caused to perform the method according to any one of claims 1 to 7.
PCT/CN2019/129881 2019-01-22 2019-12-30 Data packet routing method and apparatus WO2020151461A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19911896.9A EP3905637A4 (en) 2019-01-22 2019-12-30 METHOD AND DEVICE FOR DIVERSION OF DATA PACKETS
US17/380,383 US20210352014A1 (en) 2019-01-22 2021-07-20 Data packet routing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910057402.8 2019-01-22
CN201910057402.8A CN111464442B (zh) 2019-01-22 2019-01-22 Data packet routing method and apparatus

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/380,383 Continuation US20210352014A1 (en) 2019-01-22 2021-07-20 Data packet routing method and apparatus

Publications (1)

Publication Number Publication Date
WO2020151461A1 true WO2020151461A1 (zh) 2020-07-30

Family

ID=71680076

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/129881 WO2020151461A1 (zh) 2019-01-22 2019-12-30 Data packet routing method and apparatus

Country Status (4)

Country Link
US (1) US20210352014A1 (zh)
EP (1) EP3905637A4 (zh)
CN (1) CN111464442B (zh)
WO (1) WO2020151461A1 (zh)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140207950A1 (en) * 2012-07-09 2014-07-24 Parentsware Llc Schedule and location responsive agreement compliance controlled information throttle
CN104348886A (zh) * 2013-08-08 2015-02-11 联想(北京)有限公司 Information processing method and electronic device

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101309201B (zh) * 2007-05-14 2012-05-23 华为技术有限公司 Route processing method, route processor and router
US8059650B2 (en) * 2007-10-31 2011-11-15 Aruba Networks, Inc. Hardware based parallel processing cores with multiple threads and multiple pipeline stages
US20100223364A1 (en) * 2009-02-27 2010-09-02 Yottaa Inc System and method for network traffic management and load balancing
EP2502403A2 (en) * 2009-11-18 2012-09-26 Yissum Research Development Company of the Hebrew University of Jerusalem, Ltd. Communication system and method for managing data transfer through a communication network
CN102136989B (zh) * 2010-01-26 2014-03-12 华为技术有限公司 Packet transmission method, system and device
US8797913B2 (en) * 2010-11-12 2014-08-05 Alcatel Lucent Reduction of message and computational overhead in networks
CN104767682B (zh) * 2014-01-08 2018-10-02 腾讯科技(深圳)有限公司 Routing method and system, and method and apparatus for distributing routing information
CN105141541A (zh) * 2015-09-23 2015-12-09 浪潮(北京)电子信息产业有限公司 Task-based dynamic load balancing scheduling method and apparatus
US9935893B2 (en) * 2016-03-28 2018-04-03 The Travelers Indemnity Company Systems and methods for dynamically allocating computing tasks to computer resources in a distributed processing environment
US10153964B2 (en) * 2016-09-08 2018-12-11 Citrix Systems, Inc. Network routing using dynamic virtual paths in an overlay network
CN107846358B (zh) * 2016-09-19 2020-07-10 北京金山云网络技术有限公司 Data transmission method, apparatus and network system
CN106789661B (zh) * 2016-12-29 2019-10-11 北京邮电大学 Information forwarding method and space-based information network system
CN107087014B (zh) * 2017-01-24 2020-12-15 无锡英威腾电梯控制技术有限公司 Load balancing method and controller thereof
CN107634872B (zh) * 2017-08-29 2021-02-26 深圳市米联科信息技术有限公司 Method and apparatus for fast and accurate measurement of network link quality
EP3831021A1 (en) * 2018-07-27 2021-06-09 Gotenna Inc. VINEtm ZERO-CONTROL ROUTING USING DATA PACKET INSPECTION FOR WIRELESS MESH NETWORKS
US20200186478A1 (en) * 2018-12-10 2020-06-11 XRSpace CO., LTD. Dispatching Method and Edge Computing System


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3905637A4

Also Published As

Publication number Publication date
EP3905637A4 (en) 2022-02-16
CN111464442A (zh) 2020-07-28
EP3905637A1 (en) 2021-11-03
US20210352014A1 (en) 2021-11-11
CN111464442B (zh) 2022-11-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19911896

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019911896

Country of ref document: EP

Effective date: 20210726