WO2023185976A1 - Computing power processing method, device and computing power node - Google Patents

Computing power processing method, device and computing power node

Info

Publication number
WO2023185976A1
Authority
WO
WIPO (PCT)
Prior art keywords
computing power
node
information
terminal
Prior art date
Application number
PCT/CN2023/084951
Other languages
English (en)
French (fr)
Inventor
王晓云 (WANG Xiaoyun)
杜宗鹏 (DU Zongpeng)
陆璐 (LU Lu)
Original Assignee
中国移动通信有限公司研究院 (China Mobile Communication Co., Ltd. Research Institute)
中国移动通信集团有限公司 (China Mobile Communications Group Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 中国移动通信有限公司研究院, 中国移动通信集团有限公司 filed Critical 中国移动通信有限公司研究院
Publication of WO2023185976A1 publication Critical patent/WO2023185976A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load

Definitions

  • the present disclosure relates to the field of communication technology, and in particular to a computing power processing method, device and computing power node.
  • the core mechanism of the computing power network is joint traffic scheduling based on both network conditions and computing power conditions. This balances the network load and the computing power load, makes better use of computing and network resources, and supports strategies such as "East Data, West Computing" and compliance with the "dual carbon" goals.
  • the current network is still a system designed for human cognition.
  • the frame rate of video content is selected with human visual perception of moving objects in mind, and is typically defined as 30 frames per second.
  • collected audio likewise exploits the masking-effect mechanism of the human auditory system.
  • for human cognition such encoding quality can be considered fine, but it is far from sufficient for use cases that require more than human capabilities, such as robotic monitoring systems that detect anomalies from sounds above the humanly audible frequency range.
  • the human response time to a visual event is about 100 ms, so many applications are designed around this delay. Applications that do not involve humans, however, such as emergency parking systems, need to shorten the response time further.
  • in a centralized architecture, the decision-making point is at an upper-layer node, which has the following shortcomings:
  • the upper-layer node must pull through all lower layers, and/or centralized decision-making has some inherent problems (computing network information is stored centrally, so there is a single point of failure; computing power service access decisions are processed centrally, so there is a performance bottleneck; computing services are scheduled according to the principle of proximity, so central decision-making brings extra network delay).
  • the decision point can be at the entry node (Ingress Router), which is closer to the customer. It has the following shortcomings:
  • the purpose of the embodiments of the present disclosure is to provide a computing power processing method, device and computing power node, to solve the centralization problem in the centralized architecture of related-art computing power networks and the cross-layer design problem in the distributed architecture.
  • the present disclosure provides a computing power processing method, which is applied to the first computing power node, including:
  • Receive a computing power request sent by a terminal; the terminal accesses the first computing power node according to an anycast IP address;
  • Determine at least one computing power node that can provide computing power to the terminal according to the computing power request and locally stored computing power record information; wherein the computing power record information includes: information of multiple computing power nodes;
  • Send computing power feedback information to the terminal; the computing power feedback information includes: the determined IP address of the at least one computing power node.
  • the method further includes:
  • the computing power record information is constructed by interacting with the second computing power node in the target computing power network through application layer information, including:
  • the computing power record information stored locally by the first computing power node includes selectively stored information of at least one computing power node.
  • the method also includes:
  • the information of at least one computing power node included in the computing power record information is selectively stored in the computing power record information stored locally in the first computing power node.
  • methods of selectively storing the information of at least one computing power node include:
  • according to the location information of the computing power node, selectively store the information of computing power nodes whose distance from the first computing power node is less than a threshold;
  • according to the computing power type of the computing power node, selectively store the information of computing power nodes of a preset type.
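The two selective-storage rules above can be sketched as a simple filter. This is an illustrative sketch only: the record fields (`distance`, `type`), the threshold value, and the type labels are assumptions, not details fixed by the disclosure.

```python
# Hypothetical selective-storage filter for the local computing power record
# information. Field names, threshold, and type labels are assumptions.
DISTANCE_THRESHOLD = 10.0            # assumed distance units
PRESET_TYPES = {"AI-inference", "ultra-large"}

def select_nodes(candidates, by="location"):
    """Keep only the candidate nodes worth recording locally."""
    if by == "location":
        # Rule 1: nodes whose distance to the first node is below a threshold.
        return [n for n in candidates if n["distance"] < DISTANCE_THRESHOLD]
    # Rule 2: nodes of a preset computing power type.
    return [n for n in candidates if n["type"] in PRESET_TYPES]

candidates = [
    {"id": "node2", "distance": 3.0, "type": "near-end"},
    {"id": "node5", "distance": 40.0, "type": "ultra-large"},
]
near = select_nodes(candidates, by="location")    # keeps node2
typed = select_nodes(candidates, by="type")       # keeps node5
```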
  • At least one computing power node that can provide computing power to the terminal is determined, including:
  • according to the computing power request, determine whether the first computing power node can provide services for the terminal;
  • if the first computing power node can provide services for the terminal, determine the at least one computing power node that can provide computing power to the terminal as the first computing power node;
  • if the first computing power node cannot provide services for the terminal, determine at least one computing power node that can provide computing power to the terminal based on the locally stored computing power record information.
  • At least one computing power node that can provide computing power to the terminal is determined based on locally stored computing power record information, including:
  • Receive a computing power response sent by at least one computing power node; the computing power response is used to indicate that the computing power node can provide services for the terminal, or the computing power response is used to indicate the IP address of a computing power node that can provide services for the terminal.
  • the computing power feedback information also includes:
  • computing power information of the computing power node; and/or,
  • location information of the computing power node.
  • the method also includes:
  • when a third computing power node leaves the target computing power network, the information of the third computing power node is deleted from the locally stored computing power record information.
  • Embodiments of the present disclosure also provide a computing power processing device, applied to the first computing power node, including:
  • the first receiving module is used to receive the computing power request sent by the terminal; the terminal accesses the first computing power node according to the anycast IP address;
  • a first determination module configured to determine at least one computing power node that can provide computing power to the terminal according to the computing power request and locally stored computing power record information; wherein the computing power record information includes: information of multiple computing power nodes;
  • the first sending module is configured to send computing power feedback information to the terminal, where the computing power feedback information includes: the determined IP address of the at least one computing power node.
  • Embodiments of the present disclosure also provide a computing power node.
  • the computing power node is a first computing power node.
  • the first computing power node includes a processor and a transceiver.
  • the transceiver receives and sends data under the control of the processor.
  • the processor is used to perform the following operations:
  • Receive a computing power request sent by a terminal; the terminal accesses the first computing power node according to an anycast IP address;
  • Determine at least one computing power node that can provide computing power to the terminal according to the computing power request and locally stored computing power record information; wherein the computing power record information includes: information of multiple computing power nodes;
  • Send computing power feedback information to the terminal; the computing power feedback information includes: the determined IP address of the at least one computing power node.
  • the processor is further configured to perform the following operations:
  • the computing power record information stored locally in the first computing power node is constructed.
  • the computing power record information stored locally in the first computing power node includes selectively stored information of at least one computing power node.
  • the processor is further configured to perform the following operations:
  • the information of at least one computing power node included in the computing power record information is selectively stored in the computing power record information stored locally in the first computing power node.
  • methods of selectively storing the information of at least one computing power node include:
  • according to the location information of the computing power node, selectively store the information of computing power nodes whose distance from the first computing power node is less than a threshold;
  • according to the computing power type of the computing power node, selectively store the information of computing power nodes of a preset type.
  • the processor is further configured to perform the following operations:
  • according to the computing power request, determine whether the first computing power node can provide services for the terminal;
  • if the first computing power node can provide services for the terminal, determine the at least one computing power node that can provide computing power to the terminal as the first computing power node;
  • if the first computing power node cannot provide services for the terminal, determine at least one computing power node that can provide computing power to the terminal based on the locally stored computing power record information.
  • the processor is further configured to perform the following operations:
  • Receive a computing power response sent by at least one computing power node; the computing power response is used to indicate that the computing power node can provide services for the terminal, or the computing power response is used to indicate the IP address of a computing power node that can provide services for the terminal.
  • the computing power feedback information also includes:
  • computing power information of the computing power node; and/or,
  • location information of the computing power node.
  • the processor is further configured to perform the following operations:
  • the information of the third computing power node is deleted from the locally stored computing power record information.
  • Embodiments of the present disclosure also provide a computing power node, including a memory, a processor, and a program stored on the memory and executable on the processor. When the processor executes the program, the steps of the above computing power processing method are implemented.
  • Embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, the steps in the computing power processing method as described above are implemented.
  • in the embodiments of the present disclosure, the computing power nodes in the target computing power network interact at the application layer to determine the locally stored computing power record information, and use this record information to help terminals find suitable computing power nodes and complete the related computing power tasks. On the one hand, the target computing power network is a decentralized architecture, which avoids the problems of centralized decision-making; on the other hand, because the computing power nodes interact at the application layer, the complicated cross-layer design of distributed computing power networks based on the network layer is also avoided.
  • Figure 1 shows a step flow chart of a computing power processing method provided by an embodiment of the present disclosure
  • Figure 2 shows an example diagram of the target computing power network in the computing power processing method provided by the embodiment of the present disclosure
  • Figure 3 shows another example diagram of the target computing power network in the computing power processing method provided by the embodiment of the present disclosure
  • Figure 4 shows a schematic structural diagram of a computing power processing device provided by an embodiment of the present disclosure
  • Figure 5 shows a schematic structural diagram of a computing power node provided by an embodiment of the present disclosure.
  • an embodiment of the present disclosure provides a computing power processing method, applied to the first computing power node, including:
  • Step 101 Receive the computing power request sent by the terminal; the terminal accesses the first computing power node according to the anycast IP address;
  • Step 102 Determine at least one computing power node that can provide computing power to the terminal according to the computing power request and locally stored computing power record information; wherein the computing power record information includes: information of multiple computing power nodes;
  • the information of a computing power node includes: the identification of the computing power node, the Internet Protocol (IP) address of the computing power node, the load of the computing power node, the location of the computing power node, the distance between the computing power node and the first computing power node, the computing power types supported by the computing power node, the total computing power of the computing power node, the available computing power of the computing power node, information about the computing power services currently provided by the computing power node, and so on.
  • Step 103 Send computing power feedback information to the terminal, where the computing power feedback information includes: the determined IP address of the at least one computing power node.
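Steps 101–103 can be illustrated with a minimal sketch of the first node's request handling. The record fields and the simple capacity check are assumptions; the disclosure does not fix a particular matching rule.

```python
# Illustrative sketch of steps 101-103: the first computing power node
# receives a request, checks whether it can serve it itself, and otherwise
# consults its locally stored computing power record information.
def handle_request(local_node, records, request):
    # Step 102: can the first node itself provide the service?
    if local_node["available"] >= request["required"]:
        chosen = [local_node]
    else:
        # Otherwise pick candidates from the local record information.
        chosen = [n for n in records if n["available"] >= request["required"]]
    # Step 103: feedback contains the IP address(es) of the chosen node(s).
    return {"ips": [n["ip"] for n in chosen]}

local = {"id": "node1", "ip": "10.0.0.1", "available": 2}
records = [{"id": "node5", "ip": "10.0.0.5", "available": 100}]
feedback = handle_request(local, records, {"required": 8})
```

When the local node has enough available computing power, the same call simply returns the local node's own IP.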
  • the computing power processing method provided by the embodiments of the present disclosure aims to provide services near the terminal while ensuring the serving nodes have sufficient computing power, so as to better realize load balancing (LB).
  • the terminal is connected to a nearby computing power node 1.
  • computing power node 1 stores computing power record information (including information about surrounding computing power nodes) on demand or by type, and schedules according to the terminal's computing power requirements and the computing power record information it stores.
  • the information of computing power node 2 includes: it is closer to computing power node 1 and has a smaller load; the information of computing power node 3 includes: it is closer to computing power node 1; the information of computing power node 4 includes: it is close to computing power node 1 and supports service 1, 2 or 3; the information of computing power node 5 includes: it is far from computing power node 1, and its available computing power or total computing power is very large.
  • the computing power feedback information also includes:
  • the computing power information of the computing power node (such as the total computing power of the computing power node, the available computing power of the computing power node, etc.); and/or,
  • the location information of the computing power node.
  • the first computing power node should confirm that the services currently supported by the fed-back computing power node match the services requested by the terminal.
  • optionally, before receiving the computing power request sent by the terminal, the method further includes:
  • the target computing power network can also be called a computing power alliance. All computing power nodes joining the computing power alliance publish the same anycast identifier (ID) or IP address to support terminal access.
  • interacting with the second computing power node in the target computing power network through application layer information to construct the computing power record information includes:
  • the computing power record information stored locally in the first computing power node is constructed.
  • the computing power record information stored locally in the first computing power node includes selectively stored information of at least one computing power node.
  • the process for a computing power node to join the target computing power network includes:
  • a new computing power node (such as computing power node 2, computing power node 3 or computing power node 4) sends a request to join the target computing power network in an anycast manner;
  • Computing power node 1, which receives the join request, verifies it; after the request passes verification, node 1 sends the information of nearby computing power nodes it has collected, that is, the computing power record information it stores, to computing power node 2, computing power node 3 or computing power node 4;
  • the computing power node 2, computing power node 3 or computing power node 4 that receives the computing power record information organizes it and selectively records the information of the computing power nodes; for example, it records with priority the information of the computing power nodes closest to itself.
  • the method further includes:
  • the information of at least one computing power node included in the computing power record information is selectively stored in the computing power record information stored locally in the first computing power node.
  • for example, the computing power node 2 that receives the computing power record information selectively records the information of the computing power nodes;
  • computing power node 2 then further sends an information query request to computing power node 3;
  • this cycle continues until the number of computing power nodes in the locally stored computing power record information reaches a preset value, at which point the construction of the locally stored computing power record information is complete.
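The join-and-query cycle above can be sketched as follows. Everything here is an illustrative assumption: the preset table size, the first-come recording rule, and the `neighbor_tables` lookup that stands in for the information query requests between nodes.

```python
# Sketch of building the local computing power record information: a new
# node receives records from the node it joined through, then keeps
# querying recorded nodes until its table reaches a preset size.
PRESET_TABLE_SIZE = 3

def build_records(seed_records, neighbor_tables):
    """neighbor_tables maps node id -> that node's own record information."""
    table = []
    pending = list(seed_records)     # records received from the join step
    seen = set()
    while pending and len(table) < PRESET_TABLE_SIZE:
        node = pending.pop(0)
        if node["id"] in seen:
            continue
        seen.add(node["id"])
        table.append(node)           # selectively record (here: first-come)
        # Ask the recorded node for its own record information in turn.
        pending.extend(neighbor_tables.get(node["id"], []))
    return table

seed = [{"id": "node2"}]
neighbor_tables = {"node2": [{"id": "node3"}], "node3": [{"id": "node4"}]}
table = build_records(seed, neighbor_tables)
```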
  • in this way, each computing power node selectively stores some information about surrounding computing power nodes according to the above operations (such as sending a first request to join the target computing power network and/or sending an information query request), thereby obtaining distributed computing power record information (optionally implemented as a computing power record table) to support distributed computing power scheduling.
  • distributed computing power scheduling specifically means: decisions are made according to the local optimal principle, and each service interacts with computing power nodes directly; this gives strong scalability.
  • a method of selectively storing information of at least one computing power node includes:
  • according to the location information of the computing power node, selectively store the information of computing power nodes whose distance from the first computing power node is less than a threshold; a computing power node in the computing power alliance does not need to store information about all other nodes in the alliance; it only needs to store part of the information, and computing power addressing supports recursive search.
  • ways to selectively store information about at least one computing power node include:
  • according to the computing power type of the computing power node, selectively store the information of computing power nodes of a preset type; computing power nodes in the computing power alliance can store information by type according to the computing power capability information of other nodes, for better business matching; for example, related types include remote ultra-large computing power nodes, near-end computing power nodes, and nodes that support Artificial Intelligence (AI) inference.
  • the computing power type is used to better match computing power demand with computing power supply, and having more type identifiers is not excluded.
  • information of a preset type of computing power node may also be selectively stored according to the ID of the computing power service supported by the computing power node; for example, the relevant service is a face recognition service or a vehicle recognition service.
  • in this way, each computing power node stores the capability information of several other computing power nodes according to a certain screening method, such as storing the information of several nearby candidate nodes, or storing the information of several candidate nodes by computing power type; for example, each node stores information about 3 or 4 surrounding nodes.
  • optionally, the computing power node sorts the surrounding computing power nodes through a certain algorithm and stores them in the computing power record information; nodes ranked first are recommended first. The ranking can depend on distance, computing power idleness, etc., which is not specifically limited here.
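One possible ranking, in the spirit of the "distance, computing power idleness, etc." criteria above, is a weighted score; the weights and field names are illustrative assumptions, since the disclosure deliberately leaves the algorithm open.

```python
# Hypothetical ranking of surrounding nodes: lower score = recommended
# earlier, favoring nearby and lightly loaded nodes. Weights are assumed.
def rank(records, w_distance=1.0, w_load=1.0):
    return sorted(
        records,
        key=lambda n: w_distance * n["distance"] + w_load * n["load"],
    )

records = [
    {"id": "node3", "distance": 2.0, "load": 0.9},
    {"id": "node2", "distance": 3.0, "load": 0.1},
]
ordered = rank(records)   # node3 scores 2.9, node2 scores 3.1
```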
  • the method further includes:
  • when a third computing power node leaves the target computing power network, the information of the third computing power node is deleted from the locally stored computing power record information. For example, if the third computing power node leaves the network, other computing power nodes will find that it has not updated its status for a long time, and will therefore delete its information from the stored information of surrounding nodes.
  • step 102 includes:
  • according to the computing power request, determine whether the first computing power node can provide services for the terminal;
  • if the first computing power node can provide services for the terminal, determine the at least one computing power node that can provide computing power to the terminal as the first computing power node;
  • if the first computing power node cannot provide services for the terminal, determine at least one computing power node that can provide computing power to the terminal based on the locally stored computing power record information.
  • At least one computing power node that can provide computing power to the terminal is determined based on locally stored computing power record information, including:
  • Receive a computing power response sent by at least one computing power node; the computing power response is used to indicate that the computing power node can provide services for the terminal, or the computing power response is used to indicate the IP address of a computing power node that can provide services for the terminal.
  • the purpose of the terminal's computing power scheduling process is to find a node that is close enough (to meet the delay required by the business) and has sufficient computing power.
  • the terminal’s computing power scheduling process includes:
  • Step (1): For a new computing power request, the terminal finds the nearest computing power node 1 according to anycast and initiates a request; the request includes the required computing power information and optionally carries location information;
  • Step (2): Computing power node 1, which receives the request, verifies the computing power request; after the request passes verification, node 1 determines whether it can provide the service itself, or determines, based on the load and location of the computing power it has learned about, whether there are better nodes to provide the service;
  • Step (3): If it is determined that other nodes are expected to provide the service, node 1 sends one or more requests to surrounding nodes;
  • Step (4): The computing power node that receives such a request executes a process similar to steps (2)-(3) and continues to search for a suitable node, until a node accepts the task and feeds this back to computing power node 1;
  • Step (5): Computing power node 1 sends computing power feedback information to the terminal; the computing power information, location information, etc. are used for terminal evaluation;
  • Step (6): The terminal accesses the service according to the fed-back IP.
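The recursive search in steps (1)–(6) can be sketched end to end. The topology, capacity fields, and hop limit are illustrative assumptions; the disclosure only requires that the request is forwarded until some node accepts the task and its IP is fed back.

```python
# End-to-end sketch of steps (1)-(6): the request enters at the nearest
# node and is forwarded recursively along local record tables until a
# node accepts the task; that node's IP is fed back to the terminal.
def schedule(node_id, nodes, required, hops=8):
    """Return the IP of a node accepting the task, searching recursively."""
    node = nodes[node_id]
    # Step (2): can this node provide the service itself?
    if node["available"] >= required:
        return node["ip"]
    if hops == 0:
        return None   # assumed hop limit to bound the recursion
    # Steps (3)-(4): otherwise ask the nodes in the local record table.
    for neighbor_id in node["records"]:
        ip = schedule(neighbor_id, nodes, required, hops - 1)
        if ip is not None:
            return ip  # step (5): fed back toward computing power node 1
    return None

nodes = {
    "node1": {"ip": "10.0.0.1", "available": 1, "records": ["node2"]},
    "node2": {"ip": "10.0.0.2", "available": 2, "records": ["node5"]},
    "node5": {"ip": "10.0.0.5", "available": 64, "records": []},
}
ip = schedule("node1", nodes, required=16)   # step (1): terminal hits node1
```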
  • in this way, the embodiments of the present disclosure construct a decentralized, distributed decision-making computing power alliance (the target computing power network).
  • Nodes can join and leave freely; in the future, mechanisms such as computing power transactions, incentives, and security can be superimposed on this alliance.
  • the benefits of decentralization include: supporting computing power sharing across operators, vendors, and management entities. By disseminating computing power information through notifications at the application layer, each node records information about nearby computing power nodes, or about several computing power nodes of a specific type, to support distributed computing power addressing; a node does not need to maintain the information of all other nodes, and only needs to maintain, on demand, the information of the computing power nodes near the user.
  • an embodiment of the present disclosure also provides a computing power processing device, applied to the first computing power node, including:
  • the first receiving module 401 is used to receive the computing power request sent by the terminal; the terminal accesses the first computing power node according to the anycast IP address;
  • the first determination module 402 is configured to determine at least one computing power node that can provide computing power to the terminal according to the computing power request and locally stored computing power record information; wherein the computing power record information includes: information of multiple computing power nodes;
  • the first sending module 403 is configured to send computing power feedback information to the terminal, where the computing power feedback information includes: the determined IP address of the at least one computing power node.
  • the device further includes:
  • the construction module is used to interact with the second computing power node in the target computing power network through the information of the application layer to construct the computing power record information.
  • the building blocks include:
  • a first sub-module configured to send a first request to the second computing power node to join the target computing power network in an anycast manner
  • the second sub-module is configured to receive the computing power record information stored by the second computing power node and sent by the second computing power node after the first request has been verified;
  • the third sub-module is configured to construct the locally stored computing power record information of the first computing power node based on the computing power record information stored by the second computing power node.
  • the computing power record information stored locally by the first computing power node includes selectively stored information of at least one computing power node.
  • the device further includes:
  • the second sending module is used to send an information query request to at least one computing power node corresponding to the selective storage
  • the second receiving module is configured to receive an information query response sent by the computing power node, where the information query response includes: computing power record information stored by the computing power node;
  • a storage module configured to selectively store the information of at least one computing power node included in the computing power record information into the computing power record information stored locally on the first computing power node according to the information query response.
  • the method of selectively storing the information of at least one computing power node includes:
  • according to the location information of the computing power node, selectively store the information of computing power nodes whose distance from the first computing power node is less than a threshold;
  • according to the computing power type of the computing power node, selectively store the information of computing power nodes of a preset type.
  • the first determination module includes:
  • the fourth sub-module is used to determine whether the first computing power node can provide services for the terminal according to the computing power request;
  • a fifth submodule configured to determine at least one computing power node that can provide computing power to the terminal as the first computing power node when the first computing power node is able to provide services for the terminal;
  • the sixth submodule is configured to determine at least one node that can provide computing power to the terminal based on locally stored computing power record information when the first computing power node is unable to provide services for the terminal. Computing power node.
  • the sixth sub-module is further used for:
  • Receive a computing power response sent by at least one computing power node; the computing power response is used to indicate that the computing power node can provide services for the terminal, or the computing power response is used to indicate the IP address of a computing power node that can provide services for the terminal.
  • the computing power feedback information also includes:
  • computing power information of the computing power node; and/or,
  • location information of the computing power node.
  • the device further includes:
  • a deletion module configured to delete the information of the third computing power node from the locally stored computing power record information when the third computing power node leaves the target computing power network.
  • the computing power nodes in the target computing power network interact through the application layer to determine locally stored computing power record information, and use the record information to help the terminal find a suitable computing power node to complete its computing tasks. On the one hand, the target computing power network is a decentralized architecture, which avoids the problems of centralized decision-making; on the other hand, the nodes interact at the application layer, which also avoids the problem of cross-layer design.
  • the computing power processing device provided by the embodiments of the present disclosure is a device capable of executing the above computing power processing method; therefore, all embodiments of the above computing power processing method apply to this device and can achieve the same or similar beneficial effects.
  • an embodiment of the present disclosure also provides a computing power node.
  • the computing power node is a first computing power node.
  • the first computing power node includes a processor 500 and a transceiver 510.
  • the transceiver 510 receives and sends data under the control of the processor 500, which is configured to perform the following operations:
  • receive a computing power request sent by a terminal; the terminal accesses the first computing power node according to an anycast IP address;
  • determine, according to the computing power request and locally stored computing power record information, at least one computing power node capable of providing computing power to the terminal; wherein the computing power record information includes information of multiple computing power nodes;
  • send computing power feedback information to the terminal, the computing power feedback information including the IP address of the determined at least one computing power node.
  • the processor is also used to perform the following operation:
  • construct the computing power record information through application-layer information interaction with a second computing power node in a target computing power network.
  • the processor is also used to perform the following operations:
  • send, by anycast, to the second computing power node a first request to join the target computing power network;
  • receive the computing power record information stored by the second computing power node, sent after the second computing power node verifies the first request;
  • construct, according to the computing power record information stored by the second computing power node, the computing power record information locally stored by the first computing power node, the locally stored computing power record information including selectively stored information of at least one computing power node.
  • the processor is also used to perform the following operations:
  • send an information query request to the at least one computing power node corresponding to the selective storage;
  • receive an information query response sent by the computing power node, the information query response including the computing power record information stored by that node;
  • selectively store, according to the information query response, the information of at least one computing power node included in that record information into the computing power record information locally stored by the first computing power node.
  • the ways of selectively storing information of at least one computing power node include:
  • according to the location information of a computing power node, selectively store information of computing power nodes whose distance from the first computing power node is less than a threshold;
  • or, according to the computing power type of a computing power node, selectively store information of computing power nodes of a preset type.
  • the processor is also used to perform the following operations:
  • determine, according to the computing power request, whether the first computing power node can provide services for the terminal;
  • when the first computing power node can provide services for the terminal, determine the at least one computing power node capable of providing computing power to the terminal to be the first computing power node;
  • or, when the first computing power node cannot provide services for the terminal, determine at least one computing power node capable of providing computing power to the terminal according to the locally stored computing power record information.
  • the processor is also used to perform the following operations:
  • send the computing power request to one or more computing power nodes in the locally stored computing power record information;
  • receive a computing power response sent by at least one computing power node, the computing power response indicating that the responding computing power node can provide services for the terminal, or indicating the IP address of a computing power node that can provide services for the terminal.
  • the computing power feedback information further includes:
  • computing power information of computing power nodes; and/or,
  • location information of computing power nodes.
  • the processor is also used to perform the following operation:
  • when a third computing power node leaves the target computing power network, delete the information of the third computing power node from the locally stored computing power record information.
  • the computing power nodes in the target computing power network interact through the application layer to determine locally stored computing power record information, and use the record information to help the terminal find a suitable computing power node to complete its computing tasks. On the one hand, the target computing power network is a decentralized architecture, which avoids the problems of centralized decision-making; on the other hand, the nodes interact at the application layer, which also avoids the problem of cross-layer design.
  • the computing power node provided by the embodiments of the present disclosure is a node capable of executing the above computing power processing method; therefore, all embodiments of the above computing power processing method apply to this computing power node and can achieve the same or similar beneficial effects.
  • Embodiments of the present disclosure also provide a computing power node, including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the program, it implements each process of the computing power processing method embodiments described above and can achieve the same technical effect, which is not repeated here to avoid repetition.
  • Embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored.
  • When the program is executed by a processor, each process of the above computing power processing method embodiments is implemented and the same technical effect can be achieved, which is not repeated here to avoid repetition.
  • the computer-readable storage medium is, for example, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
  • embodiments of the present disclosure may be provided as methods, systems, or computer program products. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment that combines software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) embodying computer-usable program code therein.
  • These computer program instructions may also be stored in a computer-readable storage medium capable of directing a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable storage medium produce an article of manufacture including instruction means,
  • the instruction means implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing the computer or other programmable device to perform a series of operating steps to produce a computer-implemented process, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Power Sources (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure provides a computing power processing method, an apparatus and a computing power node. The method includes: a first computing power node receives a computing power request sent by a terminal, the terminal accessing the first computing power node according to an anycast IP address; according to the computing power request and locally stored computing power record information, the first computing power node determines at least one computing power node capable of providing computing power to the terminal, wherein the computing power record information includes information of multiple computing power nodes; and the first computing power node sends computing power feedback information to the terminal, the computing power feedback information including the IP address of the determined at least one computing power node.

Description

Computing power processing method, apparatus and computing power node
Cross-reference to related applications
This application claims priority to Chinese patent application No. 202210337151.0 filed in China on March 31, 2022, the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the field of communication technology, and in particular to a computing power processing method, an apparatus and a computing power node.
Background
The core mechanism of a computing power network is joint traffic scheduling based on both network conditions and computing power conditions, so that network load and computing load are kept relatively balanced, computing and network resources are better utilized, and strategies such as "channeling eastern data to western computing" and "carbon peaking and carbon neutrality" are supported.
For example, computer vision (CV) is a typical computing power network scenario. It is widely applied in smart cities, industrial production and other scenarios, effectively improving social governance and industrial capability. Real-time video analysis in CV requires a large amount of deep neural network computing resources.
Current networks are still systems designed for human perception. For example, the frame rate of video content is defined as 30 frames per second in view of human visual perception of moving objects, and audio capture exploits the masking effect of the human auditory system. For human perception, such coding quality can be considered fine quality, but it is far from sufficient for use cases that go beyond human capability; for example, a robot monitoring system can detect anomalies from sounds above the humanly audible frequency range. Humans respond to an event in roughly 100 ms, so many applications are designed around this latency, but applications beyond humans, such as emergency braking systems, require even shorter response times.
Existing technologies include centralized architectures of computing power networks and distributed architectures based on the network layer.
In a centralized architecture, the decision point is an upper-layer node, which has the following drawbacks:
coordination problems across upper-layer nodes, and/or some inherent problems of centralized decision-making (for example, computing and network information is stored centrally, creating a single point of failure; computing service access decisions are processed centrally, creating a performance bottleneck; computing services follow a proximity scheduling principle, so central decision-making introduces extra network latency).
In a distributed architecture scheduled at the network layer, the decision point can be at the ingress router, closer to the customer, which has the following drawback:
it is a cross-layer design that requires enhancing network protocols, which is difficult to promote. In current network architectures, the network layer should not perceive too much service information, for fear of increasing cost and reducing forwarding efficiency.
Summary
The purpose of embodiments of the present disclosure is to provide a computing power processing method, an apparatus and a computing power node, to solve the centralization problems of the centralized architecture and the cross-layer design problems of the distributed architecture in related computing power networks.
To solve the above problems, the present disclosure provides a computing power processing method, applied to a first computing power node, including:
receiving a computing power request sent by a terminal; the terminal accesses the first computing power node according to an anycast IP address;
determining, according to the computing power request and locally stored computing power record information, at least one computing power node capable of providing computing power to the terminal; wherein the computing power record information includes information of multiple computing power nodes;
sending computing power feedback information to the terminal, the computing power feedback information including the IP address of the determined at least one computing power node.
Optionally, before receiving the computing power request sent by the terminal, the method further includes:
constructing the computing power record information through application-layer information interaction with a second computing power node in a target computing power network.
Optionally, constructing the computing power record information through application-layer information interaction with the second computing power node in the target computing power network includes:
sending, by anycast, to the second computing power node a first request to join the target computing power network;
receiving the computing power record information stored by the second computing power node, sent after the second computing power node verifies the first request;
constructing, according to the computing power record information stored by the second computing power node, the computing power record information locally stored by the first computing power node, the locally stored computing power record information including selectively stored information of at least one computing power node.
Optionally, the method further includes:
sending an information query request to the at least one computing power node corresponding to the selective storage;
receiving an information query response sent by the computing power node, the information query response including the computing power record information stored by that node;
selectively storing, according to the information query response, the information of at least one computing power node included in that record information into the computing power record information locally stored by the first computing power node.
Optionally, the ways of selectively storing information of at least one computing power node include:
selectively storing, according to the location information of a computing power node, information of computing power nodes whose distance from the first computing power node is less than a threshold;
or,
selectively storing, according to the computing power type of a computing power node, information of computing power nodes of a preset type.
Optionally, determining, according to the computing power request and locally stored computing power record information, at least one computing power node capable of providing computing power to the terminal includes:
determining, according to the computing power request, whether the first computing power node can provide services for the terminal;
when the first computing power node can provide services for the terminal, determining the at least one computing power node capable of providing computing power to the terminal to be the first computing power node;
or, when the first computing power node cannot provide services for the terminal, determining at least one computing power node capable of providing computing power to the terminal according to the locally stored computing power record information.
Optionally, when the first computing power node cannot provide services for the terminal, determining at least one computing power node capable of providing computing power to the terminal according to the locally stored computing power record information includes:
sending the computing power request to one or more computing power nodes in the locally stored computing power record information;
receiving a computing power response sent by at least one computing power node, the computing power response indicating that the responding computing power node can provide services for the terminal, or indicating the IP address of a computing power node that can provide services for the terminal.
Optionally, the computing power feedback information further includes:
computing power information of computing power nodes; and/or,
location information of computing power nodes.
Optionally, the method further includes:
when a third computing power node leaves the target computing power network, deleting the information of the third computing power node from the locally stored computing power record information.
Embodiments of the present disclosure also provide a computing power processing apparatus, applied to a first computing power node, including:
a first receiving module, configured to receive a computing power request sent by a terminal; the terminal accesses the first computing power node according to an anycast IP address;
a first determining module, configured to determine, according to the computing power request and locally stored computing power record information, at least one computing power node capable of providing computing power to the terminal; wherein the computing power record information includes information of multiple computing power nodes;
a first sending module, configured to send computing power feedback information to the terminal, the computing power feedback information including the IP address of the determined at least one computing power node.
Embodiments of the present disclosure also provide a computing power node. The computing power node is a first computing power node that includes a processor and a transceiver; the transceiver receives and sends data under the control of the processor, and the processor is configured to perform the following operations:
receiving a computing power request sent by a terminal; the terminal accesses the first computing power node according to an anycast IP address;
determining, according to the computing power request and locally stored computing power record information, at least one computing power node capable of providing computing power to the terminal; wherein the computing power record information includes information of multiple computing power nodes;
sending computing power feedback information to the terminal, the computing power feedback information including the IP address of the determined at least one computing power node.
Optionally, the processor is further configured to perform the following operation:
constructing the computing power record information through application-layer information interaction with a second computing power node in a target computing power network.
Optionally, the processor is further configured to perform the following operations:
sending, by anycast, to the second computing power node a first request to join the target computing power network;
receiving the computing power record information stored by the second computing power node, sent after the second computing power node verifies the first request;
constructing, according to the computing power record information stored by the second computing power node, the computing power record information locally stored by the first computing power node, the locally stored computing power record information including selectively stored information of at least one computing power node.
Optionally, the processor is further configured to perform the following operations:
sending an information query request to the at least one computing power node corresponding to the selective storage;
receiving an information query response sent by the computing power node, the information query response including the computing power record information stored by that node;
selectively storing, according to the information query response, the information of at least one computing power node included in that record information into the computing power record information locally stored by the first computing power node.
Optionally, the ways of selectively storing information of at least one computing power node include:
selectively storing, according to the location information of a computing power node, information of computing power nodes whose distance from the first computing power node is less than a threshold;
or,
selectively storing, according to the computing power type of a computing power node, information of computing power nodes of a preset type.
Optionally, the processor is further configured to perform the following operations:
determining, according to the computing power request, whether the first computing power node can provide services for the terminal;
when the first computing power node can provide services for the terminal, determining the at least one computing power node capable of providing computing power to the terminal to be the first computing power node;
or, when the first computing power node cannot provide services for the terminal, determining at least one computing power node capable of providing computing power to the terminal according to the locally stored computing power record information.
Optionally, the processor is further configured to perform the following operations:
sending the computing power request to one or more computing power nodes in the locally stored computing power record information;
receiving a computing power response sent by at least one computing power node, the computing power response indicating that the responding computing power node can provide services for the terminal, or indicating the IP address of a computing power node that can provide services for the terminal.
Optionally, the computing power feedback information further includes:
computing power information of computing power nodes; and/or,
location information of computing power nodes.
Optionally, the processor is further configured to perform the following operation:
when a third computing power node leaves the target computing power network, deleting the information of the third computing power node from the locally stored computing power record information.
Embodiments of the present disclosure also provide a computing power node including a memory, a processor, and a program stored in the memory and executable on the processor; the processor implements the computing power processing method described above when executing the program.
Embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; the program, when executed by a processor, implements the steps of the computing power processing method described above.
The above technical solutions of the present disclosure have at least the following beneficial effects:
In the computing power processing method, apparatus and computing power node of embodiments of the present disclosure, computing power nodes in the target computing power network interact through the application layer to determine locally stored computing power record information, and use the record information to help terminals find suitable computing power nodes to complete their computing tasks. On the one hand, the target computing power network is a decentralized architecture, which avoids the problems of centralized decision-making; on the other hand, the nodes interact at the application layer, which also avoids the complexity of cross-layer design in network-layer-based distributed computing power networks.
Brief description of the drawings
Figure 1 is a flowchart of the steps of the computing power processing method provided by embodiments of the present disclosure;
Figure 2 is an example diagram of the target computing power network in the computing power processing method provided by embodiments of the present disclosure;
Figure 3 is another example diagram of the target computing power network in the computing power processing method provided by embodiments of the present disclosure;
Figure 4 is a schematic structural diagram of the computing power processing apparatus provided by embodiments of the present disclosure;
Figure 5 is a schematic structural diagram of the computing power node provided by embodiments of the present disclosure.
Detailed description
To make the technical problems, technical solutions and advantages addressed by the present disclosure clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
As shown in Figure 1, an embodiment of the present disclosure provides a computing power processing method, applied to a first computing power node, including:
Step 101: receiving a computing power request sent by a terminal; the terminal accesses the first computing power node according to an anycast IP address;
Step 102: determining, according to the computing power request and locally stored computing power record information, at least one computing power node capable of providing computing power to the terminal; wherein the computing power record information includes information of multiple computing power nodes;
Optionally, the information of a computing power node includes: the node's identifier, its Internet Protocol (IP) address, its load, its location, its distance from the first computing power node, the computing power types it supports, its total computing power, its available computing power, information about the computing power services it currently provides, and so on.
Step 103: sending computing power feedback information to the terminal, the computing power feedback information including the IP address of the determined at least one computing power node.
The computing power processing method provided by embodiments of the present disclosure aims to serve the terminal from a nearby node while ensuring the serving node has sufficient computing power, so that load balancing (LB) can be realized more fully.
For example, as shown in Figure 2, a terminal accesses a nearby computing power node 1. Node 1 stores computing power record information (including information of surrounding computing power nodes) on demand or by type, and schedules according to the terminal's computing power demand and its stored record information. For example, the information of node 2 indicates that it is close to node 1 and lightly loaded; the information of node 3 indicates that it is close to node 1; the information of node 4 indicates that it is close to node 1 and supports service 1, 2 or 3; the information of node 5 indicates that it is far from node 1 but has very large available or total computing power.
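The record-table-driven selection described above can be sketched as a small lookup. The following is a minimal illustrative sketch, not the normative record format of the method: the field names, values and the "nearby and lightly loaded first" policy are assumptions chosen only to show how node 1 might pick candidates from its locally stored record information.

```python
# Hypothetical computing power record table held by node 1 (illustrative values only).
RECORDS = {
    "node2": {"ip": "10.0.0.2", "distance_km": 5,   "load": 0.2, "services": {"s1"},       "avail": 40},
    "node3": {"ip": "10.0.0.3", "distance_km": 6,   "load": 0.7, "services": {"s1"},       "avail": 10},
    "node4": {"ip": "10.0.0.4", "distance_km": 8,   "load": 0.5, "services": {"s1", "s2"}, "avail": 30},
    "node5": {"ip": "10.0.0.5", "distance_km": 300, "load": 0.1, "services": {"s1"},       "avail": 900},
}

def pick_nodes(records, service, max_distance_km=50, k=2):
    """Return up to k candidate IPs: nearby nodes supporting the service, least loaded first."""
    candidates = [
        r for r in records.values()
        if service in r["services"] and r["distance_km"] < max_distance_km
    ]
    candidates.sort(key=lambda r: (r["load"], r["distance_km"]))
    return [r["ip"] for r in candidates[:k]]

print(pick_nodes(RECORDS, "s1"))  # ['10.0.0.2', '10.0.0.4']
```

Here the distant node 5 is filtered out despite its large capacity; a real policy could instead keep it as a fallback for compute-heavy, latency-tolerant requests.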
Optionally, the computing power feedback information further includes:
computing power information of computing power nodes (such as a node's total computing power and available computing power); and/or,
location information of computing power nodes.
Optionally, the first computing power node should confirm that the services currently supported by the fed-back computing power node match the service requested by the terminal.
In at least one optional embodiment of the present disclosure, before receiving the computing power request sent by the terminal, the method further includes:
constructing the computing power record information through application-layer information interaction with a second computing power node in a target computing power network.
In embodiments of the present disclosure, the target computing power network may also be called a computing power alliance. All computing power nodes that join the alliance publish the same anycast identity document (ID) or IP to support terminal access.
In at least one embodiment of the present disclosure, constructing the computing power record information through application-layer information interaction with the second computing power node in the target computing power network includes:
sending, by anycast, to the second computing power node a first request to join the target computing power network;
receiving the computing power record information stored by the second computing power node, sent after the second computing power node verifies the first request;
constructing, according to the computing power record information stored by the second computing power node, the computing power record information locally stored by the first computing power node, the locally stored computing power record information including selectively stored information of at least one computing power node.
As shown in Figure 2, the procedure for a computing power node to join the target computing power network includes:
a new computing power node (such as node 2, node 3 or node 4) joins by finding the nearest computing power node (such as node 1) via anycast and sending an alliance join request, which includes computing power information (total computing power, available computing power), plus latitude/longitude information or other types of location information, which is not specifically limited here;
node 1, which receives the join request, verifies it; after verification passes, it sends node 2, node 3 or node 4 the information it has collected about nearby computing power nodes, i.e. its own stored computing power record information;
node 2, node 3 or node 4, which receives the record information, organizes it and selectively records node information, for example giving priority to recording information of computing power nodes close to itself.
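The join flow above can be sketched in a few lines. This is an assumed simplification: the token-based verification step, the message fields and the one-dimensional distance are stand-ins for whatever verification scheme and location encoding an actual alliance would use.

```python
JOIN_TOKEN = "alliance-secret"  # stand-in for the alliance's real verification mechanism

def handle_join(receiver_records, join_request):
    """Nearest node (e.g. node 1) verifies a join request, then shares its record table."""
    if join_request.get("token") != JOIN_TOKEN:  # verification step
        raise PermissionError("join request failed verification")
    return dict(receiver_records)                # collected info about nearby nodes

def build_local_records(shared_records, my_location, threshold_km=50):
    """The new node selectively keeps only entries close to itself (storage by distance)."""
    def dist(a, b):  # toy 1-D distance; real nodes would use latitude/longitude
        return abs(a - b)
    return {nid: r for nid, r in shared_records.items()
            if dist(r["location"], my_location) < threshold_km}

node1_records = {"node3": {"ip": "10.0.0.3", "location": 12},
                 "node5": {"ip": "10.0.0.5", "location": 400}}
request = {"token": JOIN_TOKEN, "total": 100, "available": 80, "location": 10}
shared = handle_join(node1_records, request)
local = build_local_records(shared, my_location=request["location"])
print(sorted(local))  # only the nearby node survives the distance filter
```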
Continuing the above example, in embodiments of the present disclosure the method further includes:
sending an information query request to the at least one computing power node corresponding to the selective storage;
receiving an information query response sent by the computing power node, the information query response including the computing power record information stored by that node;
selectively storing, according to the information query response, the information of at least one computing power node included in that record information into the computing power record information locally stored by the first computing power node.
For example, as shown in Figure 3, after node 2 receives the record information and selectively records node information, if it has recorded the information of node 3, then node 2 further sends an information query request to node 3 and, based on node 3's response, selectively records the information of some or all of the nodes. This repeats until the number of computing power nodes in the locally stored record information reaches a preset value, at which point construction of the locally stored computing power record information is complete.
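The recursive construction loop described above, querying recorded nodes until the local table reaches a preset size, can be sketched as follows. The static `TABLES` topology is assumed data standing in for the information query responses that would actually arrive over the network.

```python
# Toy topology: the record table each node would return to an information query.
TABLES = {
    "node1": {"node2": {}, "node3": {}},
    "node2": {},
    "node3": {"node4": {}, "node5": {}},
    "node4": {"node6": {}},
    "node5": {},
}

def build_records(seed_table, preset_size):
    """Recursively query recorded nodes until the local table reaches preset_size."""
    local = dict(seed_table)
    pending = list(seed_table)          # nodes we still have to send a query to
    while pending and len(local) < preset_size:
        peer = pending.pop(0)
        for nid, info in TABLES.get(peer, {}).items():   # peer's query response
            if nid not in local and len(local) < preset_size:
                local[nid] = info       # selectively store the new entry
                pending.append(nid)     # and query it in turn later
    return local

local = build_records(TABLES["node1"], preset_size=4)
print(sorted(local))  # growth stops once the preset size is reached
```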
It should be noted that each computing power node follows the above operations (sending a first request to join the target computing power network and/or sending information query requests) to selectively store information about some surrounding computing power nodes, thereby obtaining distributed computing power record information (optionally implemented as a computing power record table) to support distributed computing power scheduling. Distributed computing power scheduling here means deciding according to a locally optimal principle, with each service interacting with computing power nodes; it scales well.
In at least one embodiment of the present disclosure, the ways of selectively storing information of at least one computing power node include:
selectively storing, according to the location information of a computing power node, information of computing power nodes whose distance from the first computing power node is less than a threshold; a computing power node in the alliance does not need to store information of all the other nodes in the alliance; it only needs to store part of it, and computing power addressing supports recursive lookup.
Alternatively, the ways of selectively storing information of at least one computing power node include:
selectively storing, according to the computing power type of a computing power node, information of computing power nodes of a preset type; nodes in the alliance can store other nodes' capability information by type for better service matching; for example, relevant types include remote very-large computing power, near-end computing power nodes, and nodes supporting artificial intelligence (AI) inference. The computing power type of a node is used to better match computing power demand and supply, and more type identifiers are not excluded.
For another example, information of computing power nodes of a preset type is selectively stored according to the IDs of the computing power services a node supports; for example, the relevant service is a face recognition service or a vehicle recognition service.
In embodiments of the present disclosure, each computing power node stores the capability information of several other computing power nodes according to some filtering method, such as storing the information of several nearby candidate nodes, or several candidate nodes by computing power type. For example, each node stores information of 3 or 4 surrounding nodes.
Optionally, a computing power node ranks the surrounding computing power nodes with some algorithm and stores them in the record information. Higher-ranked nodes are recommended first; the ranking can depend on distance, computing power idleness and so on, which is not specifically limited here.
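Since the method leaves the ranking algorithm open, the following sketch shows one plausible choice under stated assumptions: a linear score combining distance and idle computing power, with the weights `w_dist` and `w_idle` being illustrative parameters, not values specified by the method.

```python
def rank_and_store(candidates, k=3, w_dist=1.0, w_idle=100.0):
    """Keep the k best neighbors, ranked by closeness and idle computing power.
    The scoring weights are illustrative assumptions, not part of the method."""
    def score(r):
        idle = r["avail"] / r["total"]                   # fraction of computing power free
        return w_dist * r["distance_km"] - w_idle * idle  # lower score ranks higher
    ranked = sorted(candidates, key=score)
    return [r["id"] for r in ranked[:k]]

candidates = [
    {"id": "near_busy", "distance_km": 5,   "total": 100,  "avail": 5},
    {"id": "near_idle", "distance_km": 10,  "total": 100,  "avail": 90},
    {"id": "far_huge",  "distance_km": 300, "total": 1000, "avail": 900},
    {"id": "mid_idle",  "distance_km": 40,  "total": 100,  "avail": 70},
]
print(rank_and_store(candidates))  # ['near_idle', 'mid_idle', 'near_busy']
```

With these weights a nearby but saturated node ranks below a slightly farther idle one, matching the goal of recommending higher-ranked nodes first.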
As an optional embodiment, the method further includes:
when a third computing power node leaves the target computing power network, deleting the information of the third computing power node from the locally stored computing power record information. For example, when the third computing power node leaves the network, the other computing power nodes will find that it has not updated its status for a long time, and will therefore delete its information from their stored information about surrounding nodes.
In at least one embodiment of the present disclosure, step 102 includes:
determining, according to the computing power request, whether the first computing power node can provide services for the terminal;
when the first computing power node can provide services for the terminal, determining the at least one computing power node capable of providing computing power to the terminal to be the first computing power node;
or, when the first computing power node cannot provide services for the terminal, determining at least one computing power node capable of providing computing power to the terminal according to the locally stored computing power record information.
Here, when the first computing power node cannot provide services for the terminal, determining at least one computing power node capable of providing computing power to the terminal according to the locally stored computing power record information includes:
sending the computing power request to one or more computing power nodes in the locally stored computing power record information;
receiving a computing power response sent by at least one computing power node, the computing power response indicating that the responding computing power node can provide services for the terminal, or indicating the IP address of a computing power node that can provide services for the terminal.
The purpose of the terminal's computing power scheduling procedure is to find a node that is both close enough (meeting the latency requirement of the service) and has sufficient computing power. For example, the terminal's computing power scheduling procedure includes:
Step (1): for a new computing power request, find the nearest computing power node 1 via anycast and initiate a request that includes the required computing power information, optionally carrying location information;
Step (2): node 1, which receives the request, verifies it; after verification passes, it determines whether it can provide the service itself, or, based on the computing power load and location information it knows, whether a better node exists to provide the service;
Step (3): if it decides another node should provide the service, it sends one or more requests to surrounding nodes;
Step (4): a node receiving such a request performs a procedure similar to steps (2)-(3), continuing to look for a suitable node until some node accepts the task and reports back to node 1;
Step (5): node 1 feeds back to the terminal the IP of the node(s) providing computing power, which may be its own IP or another node's, and may be one or several IPs; if several, it optionally carries each node's computing power information, location information, etc. for the terminal to evaluate;
Step (6): the terminal accesses the service at the fed-back IP.
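Steps (2)-(5) above amount to a recursive search over each node's locally stored neighbors. The sketch below illustrates this under assumptions: the `NODES` topology and capacities are invented, and a single boolean `can_serve` stands in for the real load/latency decision of step (2).

```python
# Toy alliance: each node knows whether it can serve and which neighbors it recorded.
NODES = {
    "node1": {"ip": "10.0.0.1", "can_serve": False, "neighbors": ["node2", "node3"]},
    "node2": {"ip": "10.0.0.2", "can_serve": False, "neighbors": ["node4"]},
    "node3": {"ip": "10.0.0.3", "can_serve": False, "neighbors": []},
    "node4": {"ip": "10.0.0.4", "can_serve": True,  "neighbors": []},
}

def schedule(node_id, visited=None):
    """Steps (2)-(4): serve locally if possible, otherwise ask recorded neighbors."""
    visited = visited if visited is not None else set()
    visited.add(node_id)
    node = NODES[node_id]
    if node["can_serve"]:
        return node["ip"]              # computing power response: my own IP
    for nbr in node["neighbors"]:      # step (3): forward the request
        if nbr not in visited:
            ip = schedule(nbr, visited)
            if ip:
                return ip              # step (4): relay the accepting node's IP
    return None                        # no node in reach accepted the task

print(schedule("node1"))  # step (5): the IP fed back to the terminal
```

The `visited` set prevents a request from looping between nodes that have recorded each other, a concern any real distributed lookup of this kind would also have to address.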
In summary, embodiments of the present disclosure construct a decentralized, distributed-decision computing power alliance (the target computing power network) in which nodes are free to join and leave; mechanisms such as computing power trading, incentives and security can be layered on later. The benefits of decentralization include support for computing power sharing across operators, vendors and management entities. Computing power information is propagated by application-layer announcements, with each node recording information of nearby computing power nodes, or of several computing power nodes of specific types, to support distributed computing power addressing; a node does not need to maintain information of all the other nodes, only on-demand information of the computing power nodes near its users.
As shown in Figure 4, embodiments of the present disclosure also provide a computing power processing apparatus, applied to a first computing power node, including:
a first receiving module 401, configured to receive a computing power request sent by a terminal; the terminal accesses the first computing power node according to an anycast IP address;
a first determining module 402, configured to determine, according to the computing power request and locally stored computing power record information, at least one computing power node capable of providing computing power to the terminal; wherein the computing power record information includes information of multiple computing power nodes;
a first sending module 403, configured to send computing power feedback information to the terminal, the computing power feedback information including the IP address of the determined at least one computing power node.
As an optional embodiment, the apparatus further includes:
a construction module, configured to construct the computing power record information through application-layer information interaction with a second computing power node in a target computing power network.
As an optional embodiment, the construction module includes:
a first sub-module, configured to send, by anycast, to the second computing power node a first request to join the target computing power network;
a second sub-module, configured to receive the computing power record information stored by the second computing power node, sent after the second computing power node verifies the first request;
a third sub-module, configured to construct, according to the computing power record information stored by the second computing power node, the computing power record information locally stored by the first computing power node, the locally stored computing power record information including selectively stored information of at least one computing power node.
As an optional embodiment, the apparatus further includes:
a second sending module, configured to send an information query request to the at least one computing power node corresponding to the selective storage;
a second receiving module, configured to receive an information query response sent by the computing power node, the information query response including the computing power record information stored by that node;
a storage module, configured to selectively store, according to the information query response, the information of at least one computing power node included in that record information into the computing power record information locally stored by the first computing power node.
As an optional embodiment, the ways of selectively storing information of at least one computing power node include:
selectively storing, according to the location information of a computing power node, information of computing power nodes whose distance from the first computing power node is less than a threshold;
or,
selectively storing, according to the computing power type of a computing power node, information of computing power nodes of a preset type.
As an optional embodiment, the determining module includes:
a fourth sub-module, configured to determine, according to the computing power request, whether the first computing power node can provide services for the terminal;
a fifth sub-module, configured to determine, when the first computing power node can provide services for the terminal, the at least one computing power node capable of providing computing power to the terminal to be the first computing power node;
or, a sixth sub-module, configured to determine, when the first computing power node cannot provide services for the terminal, at least one computing power node capable of providing computing power to the terminal according to the locally stored computing power record information.
As an optional embodiment, the sixth sub-module is further configured to:
send the computing power request to one or more computing power nodes in the locally stored computing power record information;
receive a computing power response sent by at least one computing power node, the computing power response indicating that the responding computing power node can provide services for the terminal, or indicating the IP address of a computing power node that can provide services for the terminal.
As an optional embodiment, the computing power feedback information further includes:
computing power information of computing power nodes; and/or,
location information of computing power nodes.
As an optional embodiment, the apparatus further includes:
a deletion module, configured to delete, when a third computing power node leaves the target computing power network, the information of the third computing power node from the locally stored computing power record information.
In embodiments of the present disclosure, computing power nodes in the target computing power network interact through the application layer to determine locally stored computing power record information, and use the record information to help terminals find suitable computing power nodes to complete their computing tasks; on the one hand, the target computing power network is a decentralized architecture, which avoids the problems of centralized decision-making; on the other hand, the nodes interact at the application layer, which also avoids the problem of cross-layer design.
It should be noted that the computing power processing apparatus provided by embodiments of the present disclosure is an apparatus capable of executing the above computing power processing method; therefore, all embodiments of the above computing power processing method apply to this apparatus and can achieve the same or similar beneficial effects.
As shown in Figure 5, embodiments of the present disclosure also provide a computing power node. The computing power node is a first computing power node that includes a processor 500 and a transceiver 510; the transceiver 510 receives and sends data under the control of the processor 500, and the processor 500 is configured to perform the following operations:
receiving a computing power request sent by a terminal; the terminal accesses the first computing power node according to an anycast IP address;
determining, according to the computing power request and locally stored computing power record information, at least one computing power node capable of providing computing power to the terminal; wherein the computing power record information includes information of multiple computing power nodes;
sending computing power feedback information to the terminal, the computing power feedback information including the IP address of the determined at least one computing power node.
As an optional embodiment, the processor is further configured to perform the following operation:
constructing the computing power record information through application-layer information interaction with a second computing power node in a target computing power network.
As an optional embodiment, the processor is further configured to perform the following operations:
sending, by anycast, to the second computing power node a first request to join the target computing power network;
receiving the computing power record information stored by the second computing power node, sent after the second computing power node verifies the first request;
constructing, according to the computing power record information stored by the second computing power node, the computing power record information locally stored by the first computing power node, the locally stored computing power record information including selectively stored information of at least one computing power node.
As an optional embodiment, the processor is further configured to perform the following operations:
sending an information query request to the at least one computing power node corresponding to the selective storage;
receiving an information query response sent by the computing power node, the information query response including the computing power record information stored by that node;
selectively storing, according to the information query response, the information of at least one computing power node included in that record information into the computing power record information locally stored by the first computing power node.
As an optional embodiment, the ways of selectively storing information of at least one computing power node include:
selectively storing, according to the location information of a computing power node, information of computing power nodes whose distance from the first computing power node is less than a threshold;
or,
selectively storing, according to the computing power type of a computing power node, information of computing power nodes of a preset type.
As an optional embodiment, the processor is further configured to perform the following operations:
determining, according to the computing power request, whether the first computing power node can provide services for the terminal;
when the first computing power node can provide services for the terminal, determining the at least one computing power node capable of providing computing power to the terminal to be the first computing power node;
or, when the first computing power node cannot provide services for the terminal, determining at least one computing power node capable of providing computing power to the terminal according to the locally stored computing power record information.
As an optional embodiment, the processor is further configured to perform the following operations:
sending the computing power request to one or more computing power nodes in the locally stored computing power record information;
receiving a computing power response sent by at least one computing power node, the computing power response indicating that the responding computing power node can provide services for the terminal, or indicating the IP address of a computing power node that can provide services for the terminal.
As an optional embodiment, the computing power feedback information further includes:
computing power information of computing power nodes; and/or,
location information of computing power nodes.
As an optional embodiment, the processor is further configured to perform the following operation:
when a third computing power node leaves the target computing power network, deleting the information of the third computing power node from the locally stored computing power record information.
In embodiments of the present disclosure, computing power nodes in the target computing power network interact through the application layer to determine locally stored computing power record information, and use the record information to help terminals find suitable computing power nodes to complete their computing tasks; on the one hand, the target computing power network is a decentralized architecture, which avoids the problems of centralized decision-making; on the other hand, the nodes interact at the application layer, which also avoids the problem of cross-layer design.
It should be noted that the computing power node provided by embodiments of the present disclosure is a node capable of executing the above computing power processing method; therefore, all embodiments of the above computing power processing method apply to this computing power node and can achieve the same or similar beneficial effects.
Embodiments of the present disclosure also provide a computing power node including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it implements each process of the computing power processing method embodiments described above and can achieve the same technical effect, which is not repeated here to avoid repetition.
Embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, each process of the above computing power processing method embodiments is implemented and the same technical effect can be achieved, which is not repeated here to avoid repetition. The computer-readable storage medium is, for example, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
Those skilled in the art should understand that embodiments of the present disclosure may be provided as methods, systems, or computer program products. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product implemented on one or more computer-readable storage media (including but not limited to magnetic disk storage, optical storage, and the like) containing computer-usable program code.
The present disclosure is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present disclosure. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable storage medium capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable storage medium produce an article of manufacture including instruction means, the instruction means implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing the computer or other programmable device to perform a series of operating steps to produce a computer-implemented process, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
The above are preferred embodiments of the present disclosure. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principles described herein, and such improvements and refinements shall also fall within the protection scope of the present disclosure.

Claims (21)

  1. A computing power processing method, applied to a first computing power node, the method comprising:
    receiving a computing power request sent by a terminal; the terminal accesses the first computing power node according to an anycast IP address;
    determining, according to the computing power request and locally stored computing power record information, at least one computing power node capable of providing computing power to the terminal; wherein the computing power record information comprises information of multiple computing power nodes;
    sending computing power feedback information to the terminal, the computing power feedback information comprising the IP address of the determined at least one computing power node.
  2. The method according to claim 1, wherein before receiving the computing power request sent by the terminal, the method further comprises:
    constructing the computing power record information through application-layer information interaction with a second computing power node in a target computing power network.
  3. The method according to claim 2, wherein constructing the computing power record information through application-layer information interaction with the second computing power node in the target computing power network comprises:
    sending, by anycast, to the second computing power node a first request to join the target computing power network;
    receiving the computing power record information stored by the second computing power node, sent after the second computing power node verifies the first request;
    constructing, according to the computing power record information stored by the second computing power node, the computing power record information locally stored by the first computing power node, the locally stored computing power record information comprising selectively stored information of at least one computing power node.
  4. The method according to claim 3, further comprising:
    sending an information query request to the at least one computing power node corresponding to the selective storage;
    receiving an information query response sent by the computing power node, the information query response comprising the computing power record information stored by the computing power node;
    selectively storing, according to the information query response, the information of at least one computing power node comprised in the computing power record information into the computing power record information locally stored by the first computing power node.
  5. The method according to claim 3 or 4, wherein the ways of selectively storing information of at least one computing power node comprise:
    selectively storing, according to the location information of a computing power node, information of computing power nodes whose distance from the first computing power node is less than a threshold;
    or,
    selectively storing, according to the computing power type of a computing power node, information of computing power nodes of a preset type.
  6. The method according to claim 1, wherein determining, according to the computing power request and locally stored computing power record information, at least one computing power node capable of providing computing power to the terminal comprises:
    determining, according to the computing power request, whether the first computing power node can provide services for the terminal;
    when the first computing power node can provide services for the terminal, determining the at least one computing power node capable of providing computing power to the terminal to be the first computing power node;
    or, when the first computing power node cannot provide services for the terminal, determining at least one computing power node capable of providing computing power to the terminal according to the locally stored computing power record information.
  7. The method according to claim 6, wherein, when the first computing power node cannot provide services for the terminal, determining at least one computing power node capable of providing computing power to the terminal according to the locally stored computing power record information comprises:
    sending the computing power request to one or more computing power nodes in the locally stored computing power record information;
    receiving a computing power response sent by at least one computing power node, the computing power response indicating that the responding computing power node can provide services for the terminal, or indicating the IP address of a computing power node that can provide services for the terminal.
  8. The method according to claim 1, wherein the computing power feedback information further comprises:
    computing power information of computing power nodes; and/or,
    location information of computing power nodes.
  9. The method according to claim 2, further comprising:
    when a third computing power node leaves the target computing power network, deleting the information of the third computing power node from the locally stored computing power record information.
  10. A computing power processing apparatus, applied to a first computing power node, the apparatus comprising:
    a first receiving module, configured to receive a computing power request sent by a terminal; the terminal accesses the first computing power node according to an anycast IP address;
    a first determining module, configured to determine, according to the computing power request and locally stored computing power record information, at least one computing power node capable of providing computing power to the terminal; wherein the computing power record information comprises information of multiple computing power nodes;
    a first sending module, configured to send computing power feedback information to the terminal, the computing power feedback information comprising the IP address of the determined at least one computing power node.
  11. A computing power node, the computing power node being a first computing power node, the first computing power node comprising a processor and a transceiver, the transceiver receiving and sending data under the control of the processor, wherein the processor is configured to perform the following operations:
    receiving a computing power request sent by a terminal; the terminal accesses the first computing power node according to an anycast IP address;
    determining, according to the computing power request and locally stored computing power record information, at least one computing power node capable of providing computing power to the terminal; wherein the computing power record information comprises information of multiple computing power nodes;
    sending computing power feedback information to the terminal, the computing power feedback information comprising the IP address of the determined at least one computing power node.
  12. The computing power node according to claim 11, wherein the processor is further configured to perform the following operation:
    constructing the computing power record information through application-layer information interaction with a second computing power node in a target computing power network.
  13. The computing power node according to claim 12, wherein the processor is further configured to perform the following operations:
    sending, by anycast, to the second computing power node a first request to join the target computing power network;
    receiving the computing power record information stored by the second computing power node, sent after the second computing power node verifies the first request;
    constructing, according to the computing power record information stored by the second computing power node, the computing power record information locally stored by the first computing power node, the locally stored computing power record information comprising selectively stored information of at least one computing power node.
  14. The computing power node according to claim 13, wherein the processor is further configured to perform the following operations:
    sending an information query request to the at least one computing power node corresponding to the selective storage;
    receiving an information query response sent by the computing power node, the information query response comprising the computing power record information stored by the computing power node;
    selectively storing, according to the information query response, the information of at least one computing power node comprised in the computing power record information into the computing power record information locally stored by the first computing power node.
  15. The computing power node according to claim 13 or 14, wherein the ways of selectively storing information of at least one computing power node comprise:
    selectively storing, according to the location information of a computing power node, information of computing power nodes whose distance from the first computing power node is less than a threshold;
    or,
    selectively storing, according to the computing power type of a computing power node, information of computing power nodes of a preset type.
  16. The computing power node according to claim 11, wherein the processor is further configured to perform the following operations:
    determining, according to the computing power request, whether the first computing power node can provide services for the terminal;
    when the first computing power node can provide services for the terminal, determining the at least one computing power node capable of providing computing power to the terminal to be the first computing power node;
    or, when the first computing power node cannot provide services for the terminal, determining at least one computing power node capable of providing computing power to the terminal according to the locally stored computing power record information.
  17. The computing power node according to claim 16, wherein the processor is further configured to perform the following operations:
    sending the computing power request to one or more computing power nodes in the locally stored computing power record information;
    receiving a computing power response sent by at least one computing power node, the computing power response indicating that the responding computing power node can provide services for the terminal, or indicating the IP address of a computing power node that can provide services for the terminal.
  18. The computing power node according to claim 11, wherein the computing power feedback information further comprises:
    computing power information of computing power nodes; and/or,
    location information of computing power nodes.
  19. The computing power node according to claim 12, wherein the processor is further configured to perform the following operation:
    when a third computing power node leaves the target computing power network, deleting the information of the third computing power node from the locally stored computing power record information.
  20. A computing power node, comprising a memory, a processor, and a program stored in the memory and executable on the processor; wherein the processor, when executing the program, implements the computing power processing method according to any one of claims 1-9.
  21. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the computing power processing method according to any one of claims 1-9.
PCT/CN2023/084951 2022-03-31 2023-03-30 Computing power processing method and apparatus, and computing power node WO2023185976A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210337151.0A CN116932185A (zh) 2022-03-31 2022-03-31 Computing power processing method and apparatus, and computing power node
CN202210337151.0 2022-03-31

Publications (1)

Publication Number Publication Date
WO2023185976A1 true WO2023185976A1 (zh) 2023-10-05

Family

ID=88199364

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/084951 WO2023185976A1 (zh) 2022-03-31 2023-03-30 Computing power processing method and apparatus, and computing power node

Country Status (2)

Country Link
CN (1) CN116932185A (zh)
WO (1) WO2023185976A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112188547A (zh) * 2020-09-09 2021-01-05 China United Network Communications Group Co., Ltd. Service processing method and device
CN113810977A (zh) * 2020-06-11 2021-12-17 China Mobile Communication Co., Ltd Research Institute Method, system, node and medium for generating computing power topology
CN114048857A (zh) * 2021-10-22 2022-02-15 天工量信(苏州)科技发展有限公司 Computing power allocation method and device, and computing power server
CN114095577A (zh) * 2020-07-31 2022-02-25 China Mobile Communication Co., Ltd Research Institute Resource request method and apparatus, computing power network element node, and computing power application device


Also Published As

Publication number Publication date
CN116932185A (zh) 2023-10-24

Similar Documents

Publication Publication Date Title
JP7317984B2 (ja) Dynamic communication routing to different endpoints
Kar et al. Offloading using traditional optimization and machine learning in federated cloud–edge–fog systems: A survey
US11589300B2 (en) Intent-based service engine for a 5G or other next generation mobile core network
CN102739411B (zh) 提供证明服务
US9438601B2 (en) Operating group resources in sub-groups and nested groups
US11070639B2 (en) Network infrastructure system and method for data processing and data sharing using the same
WO2020258967A1 (zh) Industrial application service processing method and system
Alnawayseh et al. Smart congestion control in 5g/6g networks using hybrid deep learning techniques
JP2022536503A (ja) System and method for external system integration
WO2022001941A1 (zh) Network element management method, network management system, independent computing node, computer device, and storage medium
US20220270055A1 (en) Verifying meeting attendance via a meeting expense and verification controller
AU2023229593A1 (en) Systems and methods for managing interaction invitations
CN102484655A (zh) 专用网络中的公用机器人管理
JP2022553788A (ja) Dynamic communication routing to heterogeneous endpoints
Mostafavi et al. Edge computing for IoT: challenges and solutions
Jin et al. A congestion control method of SDN data center based on reinforcement learning
WO2023185976A1 (zh) Computing power processing method and apparatus, and computing power node
US20240064385A1 (en) Systems & methods for smart content streaming
KR102435830B1 (ko) Network infrastructure system and data processing method for data sharing and service optimization using the same
EP4320835B1 (en) Control network for mobile robots
US11153388B2 (en) Workflow engine framework for cross-domain extension
CN112100238B (zh) Ship remote maintenance system and remote maintenance management method
CN108848156B (zh) Access gateway processing method, apparatus and storage medium
US12047461B2 (en) Server initiated communication session join after endpoint status check failure
US20220270054A1 (en) Inferring meeting expense via a meeting expense and verification controller

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23778341

Country of ref document: EP

Kind code of ref document: A1