WO2021190482A1 - Computing power processing network system and computing power processing method - Google Patents

Computing power processing network system and computing power processing method Download PDF

Info

Publication number
WO2021190482A1
Authority
WO
WIPO (PCT)
Prior art keywords
computing power
information
processing layer
processing
service
Prior art date
Application number
PCT/CN2021/082304
Other languages
French (fr)
Chinese (zh)
Inventor
姚惠娟
耿亮
杜宗鹏
张晓秋
Original Assignee
中国移动通信有限公司研究院
中国移动通信集团有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国移动通信有限公司研究院, 中国移动通信集团有限公司 filed Critical 中国移动通信有限公司研究院
Publication of WO2021190482A1 publication Critical patent/WO2021190482A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU], to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5072 Grid computing
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/508 Monitor
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present disclosure relates to the field of communication technology, in particular to a computing power processing network system and computing power processing method, or to a computing power perception network or computing power network.
  • From tens of billions of smart terminals, to billions of home gateways worldwide, to thousands of edge clouds with computing capability in each city, and dozens of large cloud data centers (DCs) in each country, massive ubiquitous computing power connects to the Internet from everywhere, forming a development trend of deep integration of computing and networks.
  • the computing resources in the network are integrated into all corners of the network, so that each network node can become a resource provider.
  • The user's request can be satisfied by calling the nearest node's resources and is no longer limited to a specific node, avoiding wasted connections and network scheduling resources.
  • traditional networks only provide channels for data communication, based on connections, subject to fixed network addressing mechanisms, and are often unable to meet the higher and more stringent QoE requirements.
  • With the development of microservices, the traditional client-server model is deconstructed: server-side applications are decomposed into functional components deployed on the cloud platform and uniformly scheduled by the API gateway, enabling on-demand dynamic instantiation.
  • The business logic in the server is transferred to the client side; the client only needs to care about the computing function itself and does not need to care about computing resources such as servers, virtual machines, and containers, so as to realize the service function.
  • The new-generation network architecture facing the future network needs to consider the needs of network and computing integration and evolution, so as to realize global optimization of the network under the ubiquitous connection and computing power architecture, flexible scheduling of computing power, and reasonable distribution of services; however, traditional networks cannot realize the allocation of computing power resources.
  • the purpose of the embodiments of the present disclosure is to provide a computing power processing network system and computing power processing method, so as to solve the problem that the network in the related technology cannot realize the computing power resource allocation.
  • a computing power processing network system including:
  • a first processing layer, a second processing layer, and a third processing layer;
  • the first processing layer is used to obtain a service request for a business, and send the service request to the second processing layer;
  • the second processing layer is configured to obtain a computing power configuration strategy at least according to the service request, and send the computing power configuration strategy to the third processing layer;
  • the third processing layer is used to select a corresponding network path at least according to the computing power configuration strategy, and dispatch the service to the corresponding computing power network element node for processing;
  • the third processing layer is also used for:
  • At least the corresponding network path is selected according to the computing power configuration strategy and the network resource information, and the service is scheduled to the corresponding computing power network element node for processing.
  • the network system for computing power processing further includes: a fourth processing layer;
  • the fourth processing layer is used to obtain the state information of the computing power resource of the computing power network element node, and send the state information of the computing power resource to the second processing layer;
  • the second processing layer obtains the computing power configuration strategy at least according to the service request and the computing power resource status information.
  • the second processing layer is also used for: abstractly describing and representing the computing power resource status information to generate a computing power capability template, and generating a computing power service contract at least according to the computing power capability template and the service request.
  • the second processing layer is also used for: sending the computing power capability template and/or the computing power service contract to the corresponding computing power network element node.
  • the second processing layer is also used for:
  • Perform performance monitoring of the computing power resource status information of the computing power network element node and send at least one of the computing power resource performance, computing power billing management information, and computing power resource failure information to the corresponding computing power network element node .
  • the computing power resource status information includes at least one of the following: service ID; information of the central processing unit (CPU); number of service connections; information of the memory; information of the graphics processing unit (GPU); information of the hard disk.
  • the fourth processing layer sending the computing resource state information to the second processing layer includes:
  • the second processing layer sends a computing power measurement request message to the fourth processing layer, and receives a computing power status information response message fed back by the fourth processing layer; the computing power status information response message carries the computing power resource status information of the computing power network element node.
  • the fourth processing layer sending the computing resource state information to the second processing layer includes:
  • the fourth processing layer periodically or non-periodically reports the state information of the computing power resources of the computing power network element node to the second processing layer.
  • the computing power measurement request message includes at least one of the following: service ID; information of the central processing unit (CPU); number of service connections; information of the memory; information of the graphics processing unit (GPU); information of the hard disk.
  • the second processing layer carries the computing power measurement request message through operation and maintenance management OAM telemetry information.
  • the embodiment of the present disclosure also provides a computing power processing method, which is applied to a computing power processing network system, including:
  • At least the corresponding network path is selected according to the computing power configuration strategy, and the service is scheduled to the corresponding computing power network element node for processing.
  • the method further includes:
  • At least the corresponding network path is selected according to the computing power configuration strategy and the network resource information, and the service is scheduled to the corresponding computing power network element node for processing.
  • the method further includes: obtaining the computing power resource status information of the computing power network element node, and obtaining the computing power configuration strategy at least according to the service request and the computing power resource status information.
  • the method further includes: abstractly describing and representing the computing power resource status information to generate a computing power capability template, and generating a computing power service contract at least according to the computing power capability template and the service request.
  • the method further includes: sending the computing power capability template and/or the computing power service contract to the corresponding computing power network element node.
  • the method further includes:
  • Perform performance monitoring of the computing power resource status information of the computing power network element node and send at least one of the computing power resource performance, computing power billing management information, and computing power resource failure information to the corresponding computing power network element node .
  • the computing power resource status information includes at least one of the following: service ID; information of the central processing unit (CPU); number of service connections; information of the memory; information of the graphics processing unit (GPU); information of the hard disk.
  • the embodiments of the present disclosure also provide a network system for computing power processing, including a memory, a processor, and a program stored on the memory and capable of running on the processor; when the processor executes the program, the computing power processing method described above is implemented.
  • the embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the steps in the computing power processing method described above are realized.
  • The computing power processing network system and computing power processing method of the embodiments of the present disclosure interconnect dynamically distributed computing resources based on ubiquitous network connections. Through unified and coordinated scheduling of multi-dimensional resources such as network, storage, and computing power, massive services can call computing resources in different places in real time and on demand, realizing global optimization of connection and computing power in the network and providing a consistent user experience.
  • FIG. 1 shows a schematic structural diagram of a network system for computing power processing provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of obtaining state information of computing power resources in a network system for computing power processing provided by an embodiment of the present disclosure
  • FIG. 3 shows an example diagram of OAM telemetry information in a network system for computing power processing provided by an embodiment of the present disclosure
  • FIG. 4 shows a flowchart of the steps of a computing power processing method provided by an embodiment of the present disclosure.
  • an embodiment of the present disclosure provides a network system for computing power processing, including:
  • the first processing layer (also called the computing power service layer);
  • the second processing layer (also called the computing power platform layer);
  • the third processing layer (also called the computing power routing layer);
  • the first processing layer is used to obtain a service request for a business, and send the service request to the second processing layer;
  • the second processing layer is used to obtain a computing power configuration strategy at least according to the service request, and send the computing power configuration strategy to the third processing layer; for example, the service request includes parameters such as service ID, service type, service level, and delay, which are mapped to the corresponding computing power configuration strategy: CPU/GPU resource configuration requirements, storage configuration requirements, etc., as sketched below.
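  • For illustration only (not part of the original disclosure): the sketch below shows, under assumed field names (service_id, service_level, max_delay_ms, cpu_cores, gpu_units, storage_gb) and assumed mapping rules, how a second processing layer might map such a service request to a computing power configuration strategy.

```python
# Hypothetical sketch: mapping a service request to a computing power
# configuration strategy. All field names and thresholds are illustrative
# assumptions, not defined by the disclosure.
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    service_id: str
    service_type: str       # e.g. "video-render", "ai-inference"
    service_level: int      # smaller value = stricter SLA
    max_delay_ms: int

@dataclass
class ComputePolicy:
    cpu_cores: int
    gpu_units: int
    storage_gb: int

def map_request_to_policy(req: ServiceRequest) -> ComputePolicy:
    """Derive CPU/GPU/storage configuration requirements from the request."""
    if req.service_type == "ai-inference":
        base = ComputePolicy(cpu_cores=4, gpu_units=1, storage_gb=20)
    else:
        base = ComputePolicy(cpu_cores=2, gpu_units=0, storage_gb=10)
    # Stricter service levels or tighter delay budgets get more resources.
    if req.service_level <= 1 or req.max_delay_ms < 20:
        base.cpu_cores *= 2
    return base

policy = map_request_to_policy(
    ServiceRequest(service_id="svc-001", service_type="ai-inference",
                   service_level=1, max_delay_ms=10))
print(policy)
```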
  • the third processing layer is used to select the corresponding network path at least according to the computing power configuration strategy, and dispatch the service to the corresponding computing power network element node for processing.
  • the third processing layer is also used for:
  • At least the corresponding network path is selected according to the computing power configuration strategy and the network resource information, and the service is scheduled to the corresponding computing power network element node for processing.
  • the third processing layer can periodically or dynamically obtain the above network resource information from the network resource layer in Figure 1, or the network resource layer actively reports the network resource information, so that the third processing layer can select the corresponding network path according to the computing power configuration strategy and the current network resource information, and dispatch the service to the corresponding computing power network element node for processing.
  • the current network resource information can be understood as the latest network resource information obtained by the third processing layer, or the latest network resource information reported by the network resource layer.
  • the network path selected by the third processing layer is the current optimal network path, and the computing power network element node it schedules the service to is the current optimal computing power network element node, which is not specifically limited here.
  • the computing power network element node refers to a network device with computing power; it can further include computing power routing nodes (located at the computing power routing layer, responsible for the notification and transmission of computing resource information in the network, and may be router devices with computing power awareness capability) and computing nodes (devices with only computing capability, equivalent to the devices that process computing tasks in the network, such as server devices in data centers).
  • the computing power network element node is a network node set at the network resource layer as shown in FIG. 1.
  • the network resource layer is used to provide network infrastructure for information transmission, including access networks, metropolitan area networks, and backbone networks.
  • the computing power processing network system provided in the foregoing embodiments of the present disclosure may also be referred to as a computing-network-fusion-oriented system, a computing power perception network system, or a computing power network system, etc., which is not specifically limited here.
  • the network oriented to the integration of computing networks is logically divided into a first processing layer, a second processing layer, and a third processing layer.
  • the first processing layer, the second processing layer, and the third processing layer are divided according to logical functions.
  • the above-mentioned processing layers can be deployed on one device or on multiple devices; if deployed on one device, each processing layer can transmit information through internal interfaces; if deployed on multiple devices, each processing layer can achieve information transmission through signaling interaction.
  • the embodiments of the present disclosure do not limit the specific names of the first processing layer, the second processing layer, and the third processing layer, and the names of all layers capable of realizing their corresponding functions are applicable to the embodiments of the present disclosure.
  • the second processing layer may also be referred to as the computing power platform layer, computing power management equipment, computing power management node, computing power management layer, etc., which will not be enumerated here.
  • the system for computing network integration provided by the embodiments of the present disclosure is based on the ubiquitous computing power resources of the network.
  • the computing power platform layer completes the abstraction, modeling, control and management of computing power resources, and notifies the computing power routing layer.
  • the computing power routing layer comprehensively considers user needs, network resource status, and computing power resource status, and dispatches services to appropriate computing power network element nodes to achieve optimal resource utilization and ensure the ultimate user experience.
  • Method 1: computing power resource scheduling (that is, selecting the corresponding network path according to the computing power configuration strategy and dispatching the service to the corresponding computing power network element node for processing). This matches the business with the computing power so that the service is scheduled to an appropriate computing node for processing; that is, scheduling based on computing power can find the best destination computing node to serve the business.
  • Method 2: computing power resource scheduling plus network resource scheduling (that is, selecting the corresponding network path according to the computing power configuration strategy and the network resource information, and scheduling the service to the corresponding computing power network element node for processing). This combines computing power resource scheduling with the scheduling of existing network resource information in the network (for example, the network resource information includes bandwidth, delay, delay jitter, etc.).
  • The computing power resource scheduling schedules the business to an appropriate computing power node, and the network resource scheduling finds the optimal network path to the target service computing node. That is, network-based scheduling plus computing power scheduling (joint scheduling of computing power resources and network resources) provides the best user experience through the best network path and the best computing power for business processing; a joint-scheduling sketch follows below.
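  • For illustration only: a minimal sketch of the "Method 2" joint scheduling idea, assuming a toy node/path view and an arbitrary cost function; the structures and weights are illustrative assumptions, not defined by the disclosure.

```python
# Hypothetical sketch of joint computing power + network resource scheduling:
# pick the node whose computing power satisfies the policy and whose path
# metrics (bandwidth, delay, jitter) are best.
candidate_nodes = [
    {"node": "A", "free_cpu": 8, "free_gpu": 1,
     "path": {"bandwidth_mbps": 800, "delay_ms": 12, "jitter_ms": 2}},
    {"node": "B", "free_cpu": 16, "free_gpu": 0,
     "path": {"bandwidth_mbps": 400, "delay_ms": 5, "jitter_ms": 1}},
]

def satisfies(node, policy):
    return node["free_cpu"] >= policy["cpu"] and node["free_gpu"] >= policy["gpu"]

def path_cost(path):
    # Lower is better: penalize delay and jitter, reward bandwidth.
    return path["delay_ms"] + 2 * path["jitter_ms"] - 0.01 * path["bandwidth_mbps"]

def schedule(policy):
    feasible = [n for n in candidate_nodes if satisfies(n, policy)]
    if not feasible:
        return None  # no node matches the computing power configuration strategy
    return min(feasible, key=lambda n: path_cost(n["path"]))

print(schedule({"cpu": 4, "gpu": 1}))  # -> node "A" in this toy example
```

  • In this toy example, node B has the better path but cannot satisfy the GPU requirement, so node A is selected; this mirrors the idea of matching computing power first and then optimizing the network path.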
  • the computing power service layer supports application deconstruction into atomized functional components and an algorithm library, which is uniformly scheduled by the API gateway to realize the on-demand instantiation of atomized algorithms in ubiquitous computing resources.
  • the computing power service layer transfers the service request of the business or application to the computing power platform layer.
  • the computing power platform layer needs to complete the perception, measurement, and OAM management of computing power resources, making the computing power resources of the network perceivable, measurable, manageable, and controllable, which is conducive to realizing joint computing-network scheduling and improves the resource utilization of the operator's network.
  • the computing power routing layer, based on the abstracted and discovered computing resources, comprehensively considers the network status and computing resource status, and dispatches services to different computing power network element nodes flexibly and on demand.
  • Specific functions mainly include computing power routing identification, computing power routing control, computing power status network notification, computing power routing addressing, computing power routing forwarding, etc.
  • the network system for computing power processing further includes: a fourth processing layer (also referred to as a computing power resource layer);
  • the fourth processing layer is used to obtain computing resource status information of a computing power network element node, and send the computing resource status information to the second processing layer;
  • the second processing layer obtains the computing power configuration strategy at least according to the service request and the computing power resource status information.
  • the computing power resource status information is used to reflect the status and deployment location of the ubiquitously deployed computing power in the network. It can refer to capabilities such as the number of service connections, CPU/GPU computing power, deployment form (physical or virtual), deployment location (the corresponding IP address), storage capacity, and storage form; it can also refer to computing power abstracted from the above-mentioned basic computing resources, which reflects the currently available computing power, distribution location, and form of each node of the network. An illustrative record layout is sketched below.
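  • For illustration only: one possible record layout for the computing power resource status information listed above, with assumed field names; the disclosure does not prescribe a concrete encoding.

```python
# Hypothetical sketch of a computing power resource status record for one
# computing power network element node. Field names are illustrative assumptions.
from dataclasses import dataclass, asdict

@dataclass
class ComputeResourceStatus:
    node_id: str
    service_connections: int       # number of service connections
    cpu_flops: float               # available CPU computing power
    gpu_flops: float               # available GPU computing power
    deployment_form: str           # "physical" or "virtual"
    deployment_location: str       # e.g. the node's IP address
    storage_capacity_gb: int
    storage_form: str              # e.g. "SSD", "HDD", "object store"

status = ComputeResourceStatus(
    node_id="edge-dc-7", service_connections=120,
    cpu_flops=3.2e12, gpu_flops=1.5e13,
    deployment_form="virtual", deployment_location="10.0.7.1",
    storage_capacity_gb=2048, storage_form="SSD")
print(asdict(status))  # what the fourth processing layer might report upward
```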
  • each of the above-mentioned first, second, third, and fourth processing layers can be deployed on one device or on multiple devices; if deployed on one device, each processing layer can transmit information through internal interfaces; if deployed on multiple devices, each processing layer can achieve information transmission through signaling interaction.
  • the second processing layer (that is, the computing power platform layer) is also used for:
  • the computing power platform layer includes the “computing power modeling” sub-module.
  • this module forms the corresponding computing power capability template from information such as general algorithm or common usage requirements.
  • a number of computing power capability templates are combined with the business service request into a computing power service contract to meet the computing power demand of the business, as sketched below.
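  • For illustration only: a sketch, under assumed template and request shapes, of combining computing power capability templates with a business service request into a computing power service contract; the selection rule and the "per-use" billing field are assumptions.

```python
# Hypothetical sketch of the "computing power modeling" sub-module combining
# capability templates with a service request into a service contract.
def build_contract(service_request, templates):
    """Pick templates that can cover the request and bundle them as a contract."""
    selected = [t for t in templates
                if t["cpu_cores"] >= service_request["cpu_cores"]
                and t["gpu_units"] >= service_request["gpu_units"]]
    if not selected:
        raise ValueError("no capability template satisfies the request")
    return {
        "service_id": service_request["service_id"],
        "templates": [t["name"] for t in selected],
        "sla": {"max_delay_ms": service_request["max_delay_ms"]},
        "billing": "per-use",   # assumed charging rule for illustration
    }

templates = [
    {"name": "gpu-small", "cpu_cores": 4, "gpu_units": 1},
    {"name": "cpu-large", "cpu_cores": 16, "gpu_units": 0},
]
request = {"service_id": "svc-001", "cpu_cores": 4, "gpu_units": 1, "max_delay_ms": 10}
print(build_contract(request, templates))
```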
  • the second processing layer (that is, the computing power platform layer) is also used for:
  • the second processing layer first sends the computing power capability template and/or the computing power service contract to the third processing layer, and the third processing layer forwards it to the corresponding computing power network element node; or the second processing layer sends it directly to the computing power network element node. The specific sending path is not limited here.
  • the computing power capability template is mainly used to unify the format of request information between the computing power processing network system and the user equipment. For example, after the system receives a service request, the template converts the service request into information conforming to the system processing format for subsequent processing; or, after the user equipment receives the computing power template (which can be obtained through the computing power network element node), it first converts the relevant information of the service request into information conforming to the system processing format via the template before sending the service request, which facilitates the subsequent processing of the system and reduces its processing burden.
  • the computing power service contract is mainly generated based on the user's subscription information.
  • the network needs to provide the corresponding computing power service according to the computing power service contract.
  • after the user equipment learns the computing power service contract, it can learn which computing power network element nodes it can communicate with, the charging rules, etc., which is not specifically limited here.
  • the computing power platform layer includes the "computing power notification" sub-module.
  • the "computing power notification" sub-module is responsible for abstracting the actually deployed computing power resources through the computing power capability template and, together with information such as the computing power service contract, notifying the corresponding computing power network element node.
  • This sub-module includes sub-function modules such as notification of computing power service contracts, notification of computing power capabilities, and notification of computing power status.
  • Announcement of the computing power service contract refers to generating the computing power service demand based on the service request of the computing power service layer and notifying the corresponding computing power network element node.
  • Announcement of computing power capability refers to abstractly expressing the actually deployed computing resources through computing power capability templates and then notifying the corresponding computing power network element nodes.
  • the computing power status notification informs the corresponding network nodes of the real-time status of computing power resources through the I4 interface.
  • the second processing layer is also used for:
  • Perform performance monitoring of the computing power resource status information of the computing power network element node and send at least one of the computing power resource performance, computing power billing management information, and computing power resource failure information to the corresponding computing power network element node .
  • the second processing layer (that is, the computing power platform layer) includes the "computing power OAM" sub-module, which covers performance monitoring of the computing power resource layer, computing power billing management, and computing power resource fault management.
  • the computing power OAM sub-module mainly maintains the real-time status of computing power network element nodes, including capacity expansion, capacity reduction, and fault status. Based on this information, the computing power platform layer can, on the one hand, update the currently available computing power status in time and, on the other hand, perform failure recovery operations: for example, the computing power platform layer sends operation instructions such as restart and configuration to recover from and handle the failure; a minimal sketch follows below.
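  • For illustration only: a minimal sketch of the OAM behaviour described above, assuming a simple status-report format and assumed operation names ("restart", "reconfigure").

```python
# Hypothetical sketch: on a node status report, update the available
# computing power view; on failure, issue recovery operations.
available_power = {}   # node_id -> currently available computing power view

def handle_node_status(report, send_instruction):
    node = report["node_id"]
    if report["state"] in ("expanded", "reduced", "normal"):
        available_power[node] = report["available_cpu_flops"]
    elif report["state"] == "fault":
        available_power[node] = 0.0           # exclude the node from scheduling
        send_instruction(node, "restart")      # attempt recovery
        send_instruction(node, "reconfigure")

handle_node_status(
    {"node_id": "edge-dc-7", "state": "fault", "available_cpu_flops": 0.0},
    send_instruction=lambda node, op: print(f"send {op} to {node}"))
```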
  • the computing power resource status information includes at least one of the following: service ID; information of the central processing unit (CPU); number of service connections; information of the memory; information of the graphics processing unit (GPU); information of the hard disk.
  • the fourth processing layer sending the computing resource state information to the second processing layer includes:
  • the second processing layer sends a computing power measurement request message to the fourth processing layer, and receives a computing power status information response message fed back by the fourth processing layer; the computing power status information response message carries the computing power resource status information of the computing power network element node.
  • for example, the second processing layer is deployed on a computing power management device, and the fourth processing layer is deployed on a computing power node device.
  • the computing power management device actively sends a computing power measurement request message to the computing power node device, and the computing power node device sends a computing power status information response message to the computing power management device according to the computing power measurement request message, as sketched below.
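  • For illustration only: a sketch of the measurement request/response exchange between the computing power management device and the computing power node device, with assumed message field names; the actual message format is not specified here.

```python
# Hypothetical sketch of the exchange: the management device (second processing
# layer) requests a measurement, the node device (fourth processing layer)
# answers with its resource status for the requested fields.
def build_measurement_request(service_id, wanted_fields):
    return {"type": "compute-measurement-request",
            "service_id": service_id,
            "fields": wanted_fields}   # e.g. ["cpu", "memory", "gpu", "disk"]

def answer_measurement_request(request, local_status):
    return {"type": "compute-status-response",
            "service_id": request["service_id"],
            "status": {f: local_status.get(f) for f in request["fields"]}}

local = {"cpu": "8 cores free", "memory": "32 GiB free",
         "gpu": "1 unit free", "disk": "1 TiB free", "links": 120}
req = build_measurement_request("svc-001", ["cpu", "gpu", "disk"])
print(answer_measurement_request(req, local))
```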
  • the fourth processing layer sending the computing resource state information to the second processing layer includes:
  • the fourth processing layer periodically or non-periodically reports the state information of the computing power resources of the computing power network element node to the second processing layer.
  • the computing power measurement request message includes at least one of the following: service ID; information of the central processing unit (CPU); number of service connections; information of the memory; information of the graphics processing unit (GPU); information of the hard disk.
  • the second processing layer carries the computing power measurement request message through the telemetry (Telemetry) information of operation and maintenance management (OAM), so as to realize the computing power sensing workflow.
  • bits used in the OAM telemetry information (such as OAM-trace-type): bits 4-7;
  • Bit 7: defined as the bit indicating whether the computing power perception function is enabled;
  • Bit 4: defined as a bit for the computing power measurement request/response of the computing power status information;
  • Bit 5: defined as a bit for the computing power measurement request/response of the computing power status information.
  • a node data list (Node data list) is used as a variable-length list to carry status information such as the computing power resources and network resources of the computing power network element node itself.
  • computing resources refer to the supply capabilities of servers, processing and storage devices, virtual machines, etc.
  • Network resources include requirements such as network bandwidth, delay, and jitter; a sketch of carrying this information in OAM telemetry follows below.
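  • For illustration only: a sketch of carrying the computing power measurement request in OAM telemetry using bits 4-7 as described above, together with a node data list; the exact bit semantics and record layout are assumptions based on the text.

```python
# Hypothetical sketch of an OAM-trace-type-like flag field (bit 7 = computing
# power perception enabled, bits 4/5 = measurement request / status response)
# plus a variable-length node data list.
BIT_PERCEPTION_ENABLED = 1 << 7
BIT_MEASUREMENT_REQUEST = 1 << 4
BIT_STATUS_RESPONSE = 1 << 5

def build_trace_type(request: bool, response: bool) -> int:
    flags = BIT_PERCEPTION_ENABLED
    if request:
        flags |= BIT_MEASUREMENT_REQUEST
    if response:
        flags |= BIT_STATUS_RESPONSE
    return flags

def build_oam_packet(node_data_list):
    # node_data_list: per-node computing power / network resource records
    return {"oam_trace_type": build_trace_type(request=True, response=False),
            "node_data_list": node_data_list}

packet = build_oam_packet([
    {"node": "R1", "cpu_flops": 2.0e12, "bandwidth_mbps": 1000, "delay_ms": 3},
])
print(bin(packet["oam_trace_type"]), packet["node_data_list"])
```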
  • the system for computing network integration not only defines functional modules such as a computing power resource layer, a computing power platform layer, and a computing power routing layer, but also defines interfaces between some functional modules. As shown in Figure 1:
  • I1 interface: the interface defined between the computing power service layer and the computing power platform layer, used to transfer SLA (Service Level Agreement) requirements, computing power service deployment configuration information, etc.
  • I2 interface: used by the computing power modeling sub-module to transfer the computing power service contract, computing power capability template, and other information to the computing power notification sub-module.
  • I3 interface: used by the computing power OAM sub-module to transfer computing power resource performance monitoring, computing power billing management, computing power resource fault, and other information to the computing power notification sub-module.
  • I4 interface: used by the computing power platform layer to transfer computing power service contract information and computing power resource status notifications to the computing power routing layer.
  • I5 interface: the interface between the computing power resource layer and the computing power platform layer, mainly used for computing power resource registration management, computing power resource performance status and fault information transmission, etc.
  • the system for computing network convergence is a new type of network architecture: based on ubiquitous network connections and highly distributed computing nodes, it builds a new computing power-aware network infrastructure through automated service deployment, optimal routing, and load balancing, truly realizing ubiquitous networking, ubiquitous computing power, and ubiquitous intelligence.
  • Massive applications, massive functions, and massive computing resources form an open ecology in which massive applications can call computing resources in different places in real time and on demand, improving computing resource utilization efficiency and ultimately optimizing user experience, computing resource utilization, and network efficiency.
  • an embodiment of the present disclosure also provides a computing power processing method, which is applied to a computing power processing network system, including:
  • Step 41: obtain the service request of the business;
  • Step 42: obtain a computing power configuration strategy at least according to the service request mapping; for example, the service request includes parameters such as service ID, service type, service level, and delay, which are mapped to the corresponding computing power configuration strategy: CPU/GPU resource configuration requirements, storage configuration requirements, etc.;
  • Step 43: select the corresponding network path at least according to the computing power configuration strategy, and dispatch the service to the corresponding computing power network element node for processing. An end-to-end sketch of these steps follows below.
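  • For illustration only: an end-to-end sketch of steps 41-43 under the same illustrative assumptions as the earlier sketches; the helper shapes and defaults are assumptions, not part of the method as claimed.

```python
# Hypothetical pipeline: obtain the service request (step 41), map it to a
# computing power configuration strategy (step 42), then choose a computing
# power network element node and path (step 43).
def process_service(raw_request, resource_view):
    # Step 41: obtain the service request of the business.
    request = {"service_id": raw_request["id"],
               "cpu_cores": raw_request.get("cpu", 2),
               "gpu_units": raw_request.get("gpu", 0),
               "max_delay_ms": raw_request.get("delay_ms", 50)}
    # Step 42: map the request to a computing power configuration strategy.
    policy = {"cpu": request["cpu_cores"], "gpu": request["gpu_units"]}
    # Step 43: pick the node/path that satisfies the policy with lowest delay.
    feasible = [n for n in resource_view
                if n["free_cpu"] >= policy["cpu"] and n["free_gpu"] >= policy["gpu"]]
    best = min(feasible, key=lambda n: n["path_delay_ms"], default=None)
    return request["service_id"], policy, best

view = [{"node": "A", "free_cpu": 8, "free_gpu": 1, "path_delay_ms": 12},
        {"node": "B", "free_cpu": 2, "free_gpu": 0, "path_delay_ms": 5}]
print(process_service({"id": "svc-001", "cpu": 4, "gpu": 1, "delay_ms": 10}, view))
```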
  • the method further includes:
  • At least the corresponding network path is selected according to the computing power configuration strategy and the network resource information, and the service is scheduled to the corresponding computing power network element node for processing.
  • the third processing layer can periodically or dynamically obtain the above network resource information from the network resource layer in Figure 1, or the network resource layer actively reports the network resource information, so that the third processing layer can select the corresponding network path according to the computing power configuration strategy and the current network resource information, and dispatch the service to the corresponding computing power network element node for processing.
  • the current network resource information can be understood as the latest network resource information obtained by the third processing layer, or the latest network resource information reported by the network resource layer.
  • the network path selected by the third processing layer is the current optimal network path, and the computing power network element node it schedules the service to is the current optimal computing power network element node, which is not specifically limited here.
  • the computing power processing method is applied to the computing power processing network system shown in Figures 1-3, and the execution subjects of the above-mentioned steps 41, 42, and 43 may be the corresponding processing layers of the above computing power processing network system; for example, step 41 is executed by the first processing layer, step 42 by the second processing layer, and step 43 by the third processing layer.
  • the first processing layer, second processing layer, and third processing layer are divided according to logical functions; each of the above-mentioned processing layers can be deployed on one device or on multiple devices. If deployed on one device, information can be transmitted between the processing layers through internal interfaces; if deployed on multiple devices, information can be transmitted between the processing layers through signaling interaction.
  • the embodiments of the present disclosure do not limit the specific names of the first processing layer, the second processing layer, and the third processing layer, and the names of all layers capable of realizing their corresponding functions are applicable to the embodiments of the present disclosure.
  • Method 1: computing power resource scheduling (that is, selecting the corresponding network path according to the computing power configuration strategy and dispatching the service to the corresponding computing power network element node for processing). This matches the business with the computing power so that the service is scheduled to an appropriate computing node for processing; that is, scheduling based on computing power can find the best destination computing node to serve the business.
  • Method 2: computing power resource scheduling plus network resource scheduling (that is, selecting the corresponding network path according to the computing power configuration strategy and the network resource information, and scheduling the service to the corresponding computing power network element node for processing). This combines computing power resource scheduling with the scheduling of existing network resource information in the network (for example, the network resource information includes bandwidth, delay, delay jitter, etc.).
  • The computing power resource scheduling schedules the business to an appropriate computing power node, and the network resource scheduling finds the optimal network path to the target service computing node. That is, network-based scheduling plus computing power scheduling (joint scheduling of computing power resources and network resources) provides the best user experience through the best network path and the best computing power for business processing.
  • the method further includes: obtaining the computing power resource status information of the computing power network element node, and obtaining the computing power configuration strategy at least according to the service request and the computing power resource status information.
  • the computing power resource status information is used to reflect the status and deployment location of the ubiquitously deployed computing power in the network. It can refer to capabilities such as the number of service connections, CPU/GPU computing power, deployment form (physical or virtual), deployment location (the corresponding IP address), storage capacity, and storage form; it can also refer to computing power abstracted from the above-mentioned basic computing resources, which reflects the currently available computing power, distribution location, and form of each node of the network.
  • the method further includes: abstractly describing and representing the computing power resource status information to generate a computing power capability template, and generating a computing power service contract at least according to the computing power capability template and the service request.
  • the computing power processing network system in the embodiments of the present disclosure further includes: a fourth processing layer (also referred to as a computing power resource layer), and the fourth processing layer is used to implement computing power Collection and reporting of resource status information (reported to the second processing layer).
  • the second processing layer abstractly describes and expresses the state information of the computing power resources to generate a computing power capability template; and generates a computing power service contract according to the computing power capability template and the service request of the business.
  • the second processing layer first sends the computing power capability template and/or the computing power service contract to the third processing layer, and the third processing layer forwards it to the corresponding computing power network element node; or the second processing layer sends it directly to the computing power network element node. The specific sending path is not limited here.
  • the computing power capability template is mainly used to unify the format of request information between the computing power processing network system and the user equipment. For example, after the system receives a service request, the template converts the service request into information conforming to the system processing format for subsequent processing; or, after the user equipment receives the computing power template (which can be obtained through the computing power network element node), it first converts the relevant information of the service request into information conforming to the system processing format via the template before sending the service request, which facilitates the subsequent processing of the system and reduces its processing burden.
  • the computing power service contract is mainly generated based on the user's subscription information.
  • the network needs to provide the corresponding computing power service according to the computing power service contract.
  • after the user equipment learns the computing power service contract, it can learn which computing power network element nodes it can communicate with, the charging rules, etc., which is not specifically limited here.
  • the method further includes:
  • Send the computing power capability template and/or the computing power service contract to the corresponding computing power network element node. For example, the computing power capability template and/or the computing power service contract is sent to the third processing layer, and the third processing layer sends it to the corresponding computing power network element node.
  • the method further includes:
  • Perform performance monitoring of the computing power resource status information of the computing power network element node and send at least one of the computing power resource performance, computing power billing management information, and computing power resource failure information to the corresponding computing power network element node .
  • the computing power resource status information includes at least one of the following: service ID; information of the central processing unit (CPU); number of service connections; information of the memory; information of the graphics processing unit (GPU); information of the hard disk.
  • the computing power processing method provided by the embodiments of the present disclosure is based on ubiquitous network connections and highly distributed computing nodes; through automated service deployment, optimal routing, and load balancing, it builds a new computing power-aware network infrastructure, truly realizing ubiquitous networking, ubiquitous computing power, and ubiquitous intelligence.
  • Massive applications, massive functions, and massive computing resources form an open ecology in which massive applications can call computing resources in different places in real time and on demand, improving computing resource utilization efficiency and ultimately optimizing user experience, computing resource utilization, and network efficiency.
  • the embodiment of the present disclosure also provides a network system for computing power processing, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor; when the processor executes the program, the computing power processing method described above is implemented.
  • the embodiment of the present disclosure also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the various processes in the foregoing embodiments of the computing power processing method are realized and the same technical effects can be achieved. To avoid repetition, details are not repeated here.
  • the computer-readable storage medium such as read-only memory (Read-Only Memory, ROM for short), random access memory (Random Access Memory, RAM for short), magnetic disk, or optical disk, etc.
  • this application can be provided as methods, systems, or computer program products. Therefore, this application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, this application may adopt the form of a computer program product implemented on one or more computer-readable storage media (including but not limited to disk storage, optical storage, etc.) containing computer-usable program codes.
  • These computer program instructions can also be stored in a computer-readable storage medium that can guide a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable storage medium produce an article of manufacture including the instruction device,
  • the instruction device realizes the functions specified in one process or multiple processes in the flowchart and/or one block or multiple blocks in the block diagram.
  • These computer program instructions can also be loaded on a computer or other programmable data processing equipment, so that the computer or other programmable equipment executes a series of operation steps to produce computer-implemented processing, and thus the instructions executed on the computer or other programmable equipment provide steps for realizing the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
  • each of the above modules is only a division of logical functions, and may be fully or partially integrated into a physical entity in actual implementation, or may be physically separated.
  • these modules can all be implemented in the form of software called by processing elements; they can also be implemented in the form of hardware; some modules can be implemented in the form of calling software by processing elements, and some of the modules can be implemented in the form of hardware.
  • for example, the determining module may be a separately established processing element, or it may be integrated into a chip of the above-mentioned device for implementation.
  • in addition, it may also be stored in the memory of the above-mentioned device in the form of program code, which is called and executed by a certain processing element of the above-mentioned device.
  • each step of the above method or each of the above modules can be completed by an integrated logic circuit of hardware in the processor element or instructions in the form of software.
  • each module, unit, sub-unit, or sub-module may be one or more integrated circuits configured to implement the above method, for example: one or more application-specific integrated circuits (ASIC), or one or more microprocessors (digital signal processor, DSP), or one or more field-programmable gate arrays (Field Programmable Gate Array, FPGA), etc.
  • the processing element may be a general-purpose processor, such as a central processing unit (CPU) or other processors that can call program codes.
  • these modules can be integrated together and implemented in the form of a system-on-a-chip (SOC).

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Power Sources (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Provided in embodiments of the present disclosure are a computing power processing network system and a computing power processing method. The system comprises a first processing layer, a second processing layer and a third processing layer. The first processing layer is used to acquire a service request for a service. The second processing layer is used to acquire a computing power allocation policy at least according to the service request. The third processing layer is used to select a corresponding network path according to the computing power allocation policy, and to schedule the service to a corresponding computing power network element node for processing.

Description

Computing power processing network system and computing power processing method
Cross-reference to related applications
This application claims the priority of Chinese Patent Application No. 202010232182.0 filed in China on March 27, 2020, the entire content of which is incorporated herein by reference.
Technical field
The present disclosure relates to the field of communication technology, and in particular to a computing power processing network system and a computing power processing method, or to a computing power perception network or a computing power network.
Background
Under the general development trend of cloud computing and edge computing, the future society will have computing power of many different scales distributed at different distances close to users, providing various personalized services to users through the global network. From tens of billions of smart terminals, to billions of home gateways worldwide, to thousands of edge clouds with computing capability brought by future edge computing in each city, and dozens of large cloud data centers (DCs) in each country, massive ubiquitous computing power connects to the Internet from everywhere, forming a development trend of deep integration of computing and networks. The computing resources in the network are integrated into all corners of the network, so that every network node can become a resource provider; a user's request can be satisfied by calling the nearest node's resources and is no longer limited to a specific node, avoiding wasted connections and network scheduling resources. However, traditional networks only provide channels for data communication, are based on connections, are subject to fixed network addressing mechanisms, and are often unable to meet higher and more stringent QoE requirements. In addition, with the development of microservices, the traditional client-server model is deconstructed: server-side applications are decomposed into functional components deployed on the cloud platform and uniformly scheduled by the API gateway, enabling on-demand dynamic instantiation; the business logic in the server is transferred to the client side, and the client only needs to care about the computing function itself, without caring about computing resources such as servers, virtual machines, and containers, so as to realize the service function.
Therefore, a new-generation network architecture facing the future network needs to consider the needs of network and computing integration and evolution, so as to realize global optimization of the network under the ubiquitous connection and computing power architecture, flexible scheduling of computing power, and reasonable distribution of services; however, traditional networks cannot realize the allocation of computing power resources.
Summary of the invention
The purpose of the embodiments of the present disclosure is to provide a computing power processing network system and a computing power processing method, so as to solve the problem that the network in the related technology cannot realize computing power resource allocation.
In order to solve the foregoing problems, embodiments of the present disclosure provide a computing power processing network system, including:
a first processing layer, a second processing layer, and a third processing layer;
wherein the first processing layer is used to obtain a service request of a business and send the service request to the second processing layer;
the second processing layer is configured to obtain a computing power configuration strategy at least according to the service request, and send the computing power configuration strategy to the third processing layer;
the third processing layer is used to select a corresponding network path at least according to the computing power configuration strategy, and dispatch the service to the corresponding computing power network element node for processing.
Wherein, the third processing layer is also used for:
obtaining network resource information;
selecting the corresponding network path at least according to the computing power configuration strategy and the network resource information, and scheduling the service to the corresponding computing power network element node for processing.
Wherein, the computing power processing network system further includes: a fourth processing layer;
the fourth processing layer is used to obtain the computing power resource status information of the computing power network element node, and send the computing power resource status information to the second processing layer;
the second processing layer obtains the computing power configuration strategy at least according to the service request and the computing power resource status information.
Wherein, the second processing layer is also used for:
abstractly describing and representing the computing power resource status information to generate a computing power capability template;
generating a computing power service contract at least according to the computing power capability template and the service request.
Wherein, the second processing layer is also used for:
sending the computing power capability template and/or the computing power service contract to the corresponding computing power network element node.
Wherein, the second processing layer is also used for:
performing performance monitoring on the computing power resource status information of the computing power network element node, and sending at least one of the computing power resource performance, computing power billing management information, and computing power resource fault information to the corresponding computing power network element node.
Wherein, the computing power resource status information includes at least one of the following:
service ID;
information of the central processing unit (CPU);
number of service connections;
information of the memory;
information of the graphics processing unit (GPU);
information of the hard disk.
Wherein, the fourth processing layer sending the computing power resource status information to the second processing layer includes:
the second processing layer sends a computing power measurement request message to the fourth processing layer, and receives a computing power status information response message fed back by the fourth processing layer, where the computing power status information response message carries the computing power resource status information of the computing power network element node.
Wherein, the fourth processing layer sending the computing power resource status information to the second processing layer includes:
the fourth processing layer periodically or non-periodically reports the computing power resource status information of the computing power network element node to the second processing layer.
Wherein, the computing power measurement request message includes at least one of the following:
service ID;
information of the central processing unit (CPU);
number of service connections;
information of the memory;
information of the graphics processing unit (GPU);
information of the hard disk.
Wherein, the second processing layer carries the computing power measurement request message through operation and maintenance management (OAM) telemetry information.
本公开实施例还提供一种算力处理方法,应用于算力处理的网络系统,包括:The embodiment of the present disclosure also provides a computing power processing method, which is applied to a computing power processing network system, including:
获取业务的服务请求;Service request for obtaining business;
obtaining a computing power configuration strategy at least by mapping from the service request;
至少根据所述算力配置策略选择对应的网络路径,将业务调度到对应的算力网元节点进行处理。At least the corresponding network path is selected according to the computing power configuration strategy, and the service is scheduled to the corresponding computing power network element node for processing.
其中,所述方法还包括:Wherein, the method further includes:
获取网络资源信息;Obtain network resource information;
至少根据所述算力配置策略和网络资源信息选择对应的网络路径,将业务调度到对应的算力网元节点进行处理。At least the corresponding network path is selected according to the computing power configuration strategy and the network resource information, and the service is scheduled to the corresponding computing power network element node for processing.
其中,所述方法还包括:Wherein, the method further includes:
获取算力网元节点的算力资源状态信息;Obtain the state information of the computing power resources of the node of the computing power network element;
至少根据所述服务请求以及所述算力资源状态信息获得所述算力配置策略。Obtain the computing power allocation strategy at least according to the service request and the computing power resource status information.
其中,所述方法还包括:Wherein, the method further includes:
对所述算力资源状态信息进行抽象描述和表示,生成算力能力模板;Abstract description and representation of the state information of the computing power resources to generate a computing power capability template;
至少根据所述算力能力模板和所述服务请求,生成算力服务合约。Generate a computing power service contract at least according to the computing power capability template and the service request.
其中,所述方法还包括:Wherein, the method further includes:
将所述算力能力模板和/或所述算力服务合约发送至对应的算力网元节点。Send the computing power capability template and/or the computing power service contract to the corresponding computing power network element node.
其中,所述方法还包括:Wherein, the method further includes:
对算力网元节点的算力资源状态信息进行性能监控,并将算力资源的性能、算力计费管理信息以及算力资源故障信息中的至少一项发送给对应的算力网元节点。Perform performance monitoring of the computing power resource status information of the computing power network element node, and send at least one of the computing power resource performance, computing power billing management information, and computing power resource failure information to the corresponding computing power network element node .
其中,所述算力资源状态信息包括下述至少一项:Wherein, the computing resource status information includes at least one of the following:
服务ID;Service ID;
中央处理器CPU的信息;Information of the central processing unit CPU;
服务链接数;Number of service links;
内存的信息;Memory information;
图像处理器GPU的信息;Information of the image processor GPU;
硬盘的信息。Hard disk information.
An embodiment of the present disclosure further provides a network system for computing power processing, including a memory, a processor, and a program stored on the memory and executable on the processor, where the processor implements the computing power processing method described above when executing the program.
本公开实施例还提供一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现如上所述的算力处理方法中的步骤。The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the steps in the computing power processing method described above are realized.
本公开的上述技术方案至少具有如下有益效果:The above technical solutions of the present disclosure have at least the following beneficial effects:
The computing power processing network system and computing power processing method of the embodiments of the present disclosure interconnect dynamically distributed computing resources over ubiquitous network connections. Through unified and coordinated scheduling of multi-dimensional resources such as network, storage and computing power, massive services can invoke computing resources in different locations on demand and in real time, achieving global optimization of connectivity and computing power across the network and providing a consistent user experience.
附图说明Description of the drawings
图1表示本公开实施例提供的算力处理的网络系统的结构示意图;FIG. 1 shows a schematic structural diagram of a network system for computing power processing provided by an embodiment of the present disclosure;
图2表示本公开实施例提供的算力处理的网络系统中算力资源状态信息的获取示意图;FIG. 2 shows a schematic diagram of obtaining state information of computing power resources in a network system for computing power processing provided by an embodiment of the present disclosure;
图3表示本公开实施例提供的算力处理的网络系统中OAM的遥测信息的示例图;3 shows an example diagram of OAM telemetry information in a network system for computing power processing provided by an embodiment of the present disclosure;
图4表示本公开实施例提供的算力处理方法的步骤流程图。FIG. 4 shows a flowchart of the steps of a computing power processing method provided by an embodiment of the present disclosure.
具体实施方式Detailed ways
为使本公开要解决的技术问题、技术方案和优点更加清楚,下面将结合附图及具体实施例进行详细描述。In order to make the technical problems, technical solutions, and advantages to be solved by the present disclosure clearer, a detailed description will be given below with reference to the accompanying drawings and specific embodiments.
如图1所示,本公开实施例提供一种算力处理的网络系统,包括:As shown in FIG. 1, an embodiment of the present disclosure provides a network system for computing power processing, including:
第一处理层(也可称为算力服务层)、第二处理层(也可称为算力平台层)以及第三处理层(也可称为算力路由层);The first processing layer (also called the computing power service layer), the second processing layer (also called the computing power platform layer), and the third processing layer (also called the computing power routing layer);
其中,所述第一处理层用于获取业务的服务请求,并将所述服务请求发送至所述第二处理层;Wherein, the first processing layer is used to obtain a service request for a business, and send the service request to the second processing layer;
The second processing layer is configured to obtain a computing power configuration strategy at least according to the service request, and send the computing power configuration strategy to the third processing layer. For example, the service request includes parameters such as a service ID, a service type, a service level and a delay, which are mapped to a corresponding computing power configuration strategy: CPU/GPU resource configuration requirements, storage configuration requirements, and so on.
所述第三处理层用于至少根据所述算力配置策略选择对应的网络路径,将业务调度到对应的算力网元节点进行处理。The third processing layer is used to select the corresponding network path at least according to the computing power configuration strategy, and dispatch the service to the corresponding computing power network element node for processing.
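As a purely illustrative sketch (not part of the disclosed embodiments), the mapping performed by the second processing layer from service request parameters to a computing power configuration strategy could look as follows in Python; the field names, thresholds and resource figures are assumptions made only for illustration.

from dataclasses import dataclass

@dataclass
class ServiceRequest:
    service_id: str
    service_type: str     # e.g. "inference", "video"
    service_level: int    # SLA level, smaller means stricter
    max_delay_ms: int

@dataclass
class ComputeConfigStrategy:
    cpu_cores: int
    gpu_count: int
    storage_gb: int

def map_request_to_strategy(req: ServiceRequest) -> ComputeConfigStrategy:
    # Map service-level parameters onto CPU/GPU/storage configuration requirements.
    gpu_count = 1 if req.service_type == "inference" else 0
    cpu_cores = 8 if req.max_delay_ms < 20 else 4
    storage_gb = 100 if req.service_level <= 2 else 20
    return ComputeConfigStrategy(cpu_cores, gpu_count, storage_gb)

In practice the mapping rules would be derived from the operator's SLA policies rather than from fixed thresholds like these.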
其中,所述第三处理层还用于:Wherein, the third processing layer is also used for:
获取网络资源信息;Obtain network resource information;
至少根据所述算力配置策略和网络资源信息选择对应的网络路径,将业务调度到对应的算力网元节点进行处理。At least the corresponding network path is selected according to the computing power configuration strategy and the network resource information, and the service is scheduled to the corresponding computing power network element node for processing.
It should be noted that the third processing layer may obtain the above network resource information from the network resource layer in Figure 1 periodically or dynamically, or the network resource layer may actively report the network resource information, so that the third processing layer can select the corresponding network path according to the computing power configuration strategy and the current network resource information and schedule the service to the corresponding computing power network element node for processing. Here, the current network resource information can be understood as the network resource information most recently obtained by the third processing layer, or the network resource information most recently reported by the network resource layer.
可选地,第三处理层选择的网络路径为当前最优网络路径,其调度到的算力网元节点为当前最优算力网元节点,在此不做具体限定。Optionally, the network path selected by the third processing layer is the current optimal network path, and its scheduled computing power network element node is the current optimal computing power network element node, which is not specifically limited here.
A computing power network element node refers to a network device that has computing power. It may further include computing power routing nodes (located at the computing power routing layer and responsible for advertising and transmitting computing power resource information in the network, for example routers with computing power awareness) and computing power nodes (devices that only provide computing capability and handle computing tasks in the network, such as server devices in a data center).
本公开实施例中,算力网元节点为设置于如图1所示的网络资源层的网络节点。其中,网络资源层用于提供信息传输的网络基础设施,包括接入网、城域网和骨干网。In the embodiment of the present disclosure, the computing power network element node is a network node set at the network resource layer as shown in FIG. 1. Among them, the network resource layer is used to provide network infrastructure for information transmission, including access networks, metropolitan area networks, and backbone networks.
需要说明的是,本公开的上述实施例提供的算力处理的网络系统,也可以称为面向计算网络融合的系统、算力感知网络系统或算力网络系统等,在此不过具体限定。It should be noted that the computing power processing network system provided in the foregoing embodiments of the present disclosure may also be referred to as a computing network fusion-oriented system, computing power perception network system, or computing power network system, etc., which are not specifically limited here.
In order to realize the awareness, interconnection and coordinated scheduling of ubiquitous computing and services, the computing-network-convergence-oriented network is logically divided into the first processing layer, the second processing layer and the third processing layer. It should be noted that, in the embodiments of the present disclosure, the first, second and third processing layers are divided by logical function. In actual deployment, these processing layers may be deployed on one device or on multiple devices; if deployed on one device, the processing layers can exchange information through internal interfaces, and if deployed on multiple devices, they can exchange information through signaling interaction. Optionally, the embodiments of the present disclosure do not limit the specific names of the first, second and third processing layers, and any layer name that realizes the corresponding function is applicable to the embodiments of the present disclosure. For example, the second processing layer may also be called the computing power platform layer, computing power management device, computing power management node, computing power management layer, and so on, which are not enumerated one by one here.
本公开实施例提供的面向计算网络融合的系统,基于网络无处不在的算力资源,算力平台层完成对算力资源的抽象、建模、控制和管理,并通知到算力路由层,由算力路由层综合考虑用户需求、网络资源状况和算力资源状况,将业务调度到合适的算力网元节点,以实现资源利用率最优并保证极致的用户体验。The system for computing network integration provided by the embodiments of the present disclosure is based on the ubiquitous computing power resources of the network. The computing power platform layer completes the abstraction, modeling, control and management of computing power resources, and notifies the computing power routing layer. The computing power routing layer comprehensively considers user needs, network resource status, and computing power resource status, and dispatches services to appropriate computing power network element nodes to achieve optimal resource utilization and ensure the ultimate user experience.
本公开实施例中对于将业务调度到对应的算力网元节点至少包括两种方式:In the embodiments of the present disclosure, there are at least two methods for scheduling services to the corresponding computing power network element nodes:
Mode 1: computing power resource scheduling (that is, selecting the corresponding network path according to the computing power configuration strategy and scheduling the service to the corresponding computing power network element node for processing). This matches the service with the computing power and schedules the service to a suitable computing power node for processing; that is, computing-power-based scheduling finds the best destination service computing node.
Mode 2: computing power resource scheduling plus network resource scheduling (that is, selecting the corresponding network path according to the computing power configuration strategy and the network resource information, and scheduling the service to the corresponding computing power network element node for processing). Computing power resource scheduling is combined with scheduling based on the network resource information already available in the network (for example, bandwidth, delay, delay jitter, etc.). Computing power resource scheduling schedules the service to a suitable computing power node, while network resource scheduling finds the optimal network path to the destination service computing node. In other words, network-based scheduling combined with computing-power-based scheduling (joint scheduling of computing power resources and network resources) processes the service on the best computing power over the best network path, providing the best user experience.
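A minimal sketch of mode 2 (joint computing power and network resource scheduling) is given below, assuming each candidate node advertises its free resources and the candidate paths towards it; the dictionary keys and the scoring rule are assumptions made for illustration only.

def select_node_and_path(candidates, strategy, max_delay_ms):
    # candidates: list of dicts like
    #   {"node_id": "n1", "free_cpu": 16, "free_gpu": 1,
    #    "paths": [{"path_id": "p1", "delay_ms": 5, "bandwidth_mbps": 1000}]}
    # strategy: dict like {"cpu_cores": 8, "gpu_count": 1}
    best = None
    for node in candidates:
        if node["free_cpu"] < strategy["cpu_cores"]:
            continue  # node cannot satisfy the computing power configuration strategy
        if node["free_gpu"] < strategy["gpu_count"]:
            continue
        for path in node["paths"]:
            if path["delay_ms"] > max_delay_ms:
                continue  # path cannot satisfy the network requirement
            # Prefer low delay, then high bandwidth (assumed scoring rule).
            score = (path["delay_ms"], -path["bandwidth_mbps"])
            if best is None or score < best[0]:
                best = (score, node["node_id"], path["path_id"])
    return None if best is None else {"node_id": best[1], "path_id": best[2]}

Mode 1 corresponds to the same selection with the path-related checks removed.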
如图1所示,算力服务层支持应用解构成原子化功能组件并组成算法库,由API网关统一调度,实现原子化算法在泛在的算力资源中按需实例化。通过I1接口,算力服务层将业务或应用的服务请求传递给算力平台层。As shown in Figure 1, the computing power service layer supports application deconstruction into atomized functional components and an algorithm library, which is uniformly scheduled by the API gateway to realize the on-demand instantiation of atomized algorithms in ubiquitous computing resources. Through the I1 interface, the computing power service layer transfers the service request of the business or application to the computing power platform layer.
The computing power platform layer needs to complete the awareness, measurement and OAM management of computing power resources, so that the network can perceive, measure, manage and control computing power resources, which facilitates joint computing-network scheduling and improves the resource utilization of the operator's network.
算力路由层,基于抽象后的计算资源发现,综合考虑网络状况和计算资源状况,将业务灵活按需调度到不同的算力网元节点中。具体功能主要包括算力路由标识、算力路由控制、算力状态网络通告、算力路由寻址、算力路由转发等。The computing power routing layer, based on the abstracted computing resource discovery, comprehensively considers the network status and computing resource status, and dispatches services to different computing power network element nodes flexibly on demand. Specific functions mainly include computing power routing identification, computing power routing control, computing power status network notification, computing power routing addressing, computing power routing forwarding, etc.
作为一个可选实施例,所述算力处理的网络系统还包括:第四处理层(也可称为算力资源层);As an optional embodiment, the network system for computing power processing further includes: a fourth processing layer (also referred to as a computing power resource layer);
面向网络泛在部署的异构计算资源,所述第四处理层用于获取算力网元节点的算力资源状态信息,并将所述算力资源状态信息发送至所述第二处理层;For heterogeneous computing resources deployed ubiquitously on the network, the fourth processing layer is used to obtain computing resource status information of a computing power network element node, and send the computing resource status information to the second processing layer;
第二处理层至少根据所述服务请求以及所述算力资源状态信息获得所述算力配置策略。The second processing layer obtains the computing power configuration strategy at least according to the service request and the computing power resource status information.
The computing power resource status information reflects the status and deployment location of the computing capabilities ubiquitously deployed in the network. It may refer to capabilities such as the number of service connections, CPU/GPU computing power, deployment form (physical or virtual), deployment location (the corresponding IP address), storage capacity and storage form, or it may refer to computing capability abstracted from the above basic computing resources, which reflects the currently available computing capability of each node in the network as well as its distribution location and form.
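For illustration only, the computing power resource status information described above could be represented by a record such as the following; the field names are assumptions chosen to mirror the attributes listed in this embodiment, not a normative encoding.

from dataclasses import dataclass

@dataclass
class ComputeResourceStatus:
    service_id: str
    service_connections: int   # number of service connections
    cpu_info: str              # e.g. "16 cores, 35% load"
    gpu_info: str              # e.g. "2 GPUs, 8 GB each"
    memory_info: str           # e.g. "64 GB, 40% used"
    disk_info: str             # e.g. "2 TB SSD"
    deployment_form: str       # "physical" or "virtual"
    deployment_location: str   # IP address of the node
    storage_capacity_gb: int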
Similarly, the fourth processing layer is also divided by logical function. In actual deployment, the first, second, third and fourth processing layers may be deployed on one device or on multiple devices; if deployed on one device, the processing layers can exchange information through internal interfaces, and if deployed on multiple devices, they can exchange information through signaling interaction.
To meet the diverse computing requirements in the edge computing field, different applications are served by various computing power combinations, from single-core CPUs (central processing units) to multi-core CPUs, to CPU + GPU (graphics processing unit) + FPGA (field programmable gate array), restoring Moore's Law at the system level and driving computing innovation. Facing the various heterogeneous computing resources distributed in the network, the fourth processing layer needs to collect and report the computing power resource status information.
作为另一个可选实施例,所述第二处理层(即算力平台层)还用于:As another optional embodiment, the second processing layer (that is, the computing power platform layer) is also used for:
对所述算力资源状态信息进行抽象描述和表示,生成算力能力模板;Abstract description and representation of the state information of the computing power resources to generate a computing power capability template;
至少根据所述算力能力模板和所述服务请求,生成算力服务合约。Generate a computing power service contract at least according to the computing power capability template and the service request.
Facing heterogeneous computing resources, as shown in Figure 1, the computing power platform layer includes a "computing power modeling" sub-module. This sub-module first needs to study the measurement dimensions and metrology system of computing power resources, and forms corresponding computing power capability templates based on information such as the requirements of common algorithms and common usage patterns. Several computing power capability templates are combined with the service request of a business into a computing power service contract, which is used to satisfy the computing power demand of the business.
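The aggregation of reported node status into a computing power capability template, and the combination of templates with a service request into a computing power service contract, might be sketched as follows; this is an assumed, simplified illustration and not the actual modeling used by the platform layer.

def build_capability_template(status_reports):
    # status_reports: list of dicts like
    #   {"node_id": "n1", "cpu_cores": 16, "gpu_count": 1, "storage_gb": 500}
    return {
        "total_cpu_cores": sum(r["cpu_cores"] for r in status_reports),
        "total_gpu_count": sum(r["gpu_count"] for r in status_reports),
        "total_storage_gb": sum(r["storage_gb"] for r in status_reports),
        "nodes": [r["node_id"] for r in status_reports],
    }

def build_service_contract(templates, service_request):
    # Combine one or more capability templates with the business's service request.
    return {
        "service_id": service_request["service_id"],
        "service_level": service_request["service_level"],
        "granted_capabilities": templates,
        "billing_rule": "per-use",   # placeholder; the real rule comes from subscription data
    }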
It should be noted that "at least according to", as mentioned in the above embodiments of the present disclosure, can be understood as follows: in order to make the obtained result more accurate, those skilled in the art may, based on common means in the art, also refer to other parameters related to computing power to obtain better parameters, which are not enumerated one by one here.
进一步的,本公开的上述实施例中,所述第二处理层(即算力平台层)还用于:Further, in the above-mentioned embodiment of the present disclosure, the second processing layer (that is, the computing power platform layer) is also used for:
Send the computing power capability template and/or the computing power service contract to the corresponding computing power network element node. For example, the second processing layer first sends the computing power capability template and/or the computing power service contract to the third processing layer, which forwards it to the corresponding computing power network element node; or the second processing layer sends it directly to the computing power network element node. The specific sending path is not limited here.
Optionally, the computing power capability template is mainly used to unify the format of request information exchanged between the computing power processing network system and the user equipment. For example, after the computing power processing network system receives the service request of a business, it converts the service request into information conforming to the system's processing format according to the computing power capability template, which facilitates subsequent processing. Alternatively, after the user equipment receives the computing power capability template (which can be obtained through a computing power network element node), it first converts the relevant information of the service request into information conforming to the system's processing format through the computing power capability template before sending the service request, which facilitates subsequent processing by the system and reduces the processing burden on the system.
可选地,算力服务合约主要用来根据用户签约信息,生成相应的算力服务合约,一旦用户业务进来,网络需要根据算力服务合约提供相应的算力服务。另外,用户设备获知算力服务合约后可以获知其与哪个算力网元节点之间能够进行通信、计费规则等,在此不做具体限定。Optionally, the computing power service contract is mainly used to generate the corresponding computing power service contract based on the user's contract information. Once the user's business comes in, the network needs to provide the corresponding computing power service according to the computing power service contract. In addition, after the user equipment learns the computing power service contract, it can learn which computing power network element node it can communicate with, charging rules, etc., which is not specifically limited here.
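As an assumed illustration of the two optional uses described above, the capability template could be applied to normalize an incoming request into the system's internal format, and the service contract could then be consulted before the service is admitted; all names and rules below are illustrative assumptions.

def normalize_request(raw_request, template):
    # template defines the internal schema: field name -> default value.
    normalized = {field: raw_request.get(field, default)
                  for field, default in template["fields"].items()}
    normalized["schema_version"] = template["schema_version"]
    return normalized

def admit_service(normalized_request, contract):
    # Admit the service only if it is covered by an existing service contract.
    return normalized_request["service_id"] == contract["service_id"]

template = {"schema_version": 1,
            "fields": {"service_id": None, "service_type": "generic",
                       "service_level": 3, "max_delay_ms": 100}}
contract = {"service_id": "svc-42", "service_level": 2}
request = normalize_request({"service_id": "svc-42", "max_delay_ms": 10}, template)
print(admit_service(request, contract))   # True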
As shown in Figure 1, the computing power platform layer includes a "computing power notification" sub-module, which is responsible for abstractly representing the actually deployed computing power resources through the computing power templates and notifying them, together with information such as the computing power service contracts, to the corresponding computing power network element nodes. This sub-module includes sub-functions such as computing power service contract notification, computing power capability notification and computing power status notification. Computing power service contract notification means generating the computing power service demand according to the service request from the computing power service layer and notifying it to the corresponding computing power network element node. Computing power capability notification means that the actually deployed computing power resources are abstractly represented through the computing power templates and then notified to the corresponding computing power network element nodes. Computing power status notification reports the real-time status of computing power resources to the corresponding network nodes through the I4 interface.
进一步的,所述第二处理层还用于:Further, the second processing layer is also used for:
对算力网元节点的算力资源状态信息进行性能监控,并将算力资源的性能、算力计费管理信息以及算力资源故障信息中的至少一项发送给对应的算力网元节点。Perform performance monitoring of the computing power resource status information of the computing power network element node, and send at least one of the computing power resource performance, computing power billing management information, and computing power resource failure information to the corresponding computing power network element node .
As shown in Figure 1, the second processing layer (that is, the computing power platform layer) includes a "computing power OAM" sub-module, which covers performance monitoring of the computing power resource layer, computing power charging management, and fault management of computing power resources. In other words, the computing power OAM sub-module mainly maintains the real-time status of computing power network element nodes, including scale-up, scale-down and fault states. Based on this information, the computing power platform layer can, on the one hand, update the currently available computing power status in time and, on the other hand, perform fault recovery operations: for example, the computing power platform layer sends operation instructions such as restart or reconfiguration to recover from and handle faults.
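A simplified, assumed sketch of how the computing power OAM sub-module might react to node status events (scale-up, scale-down, fault) is shown below; the event structure and the recovery command are illustrative only.

def handle_node_event(available_cpu, event):
    # available_cpu: dict node_id -> currently available CPU cores
    # event: {"node_id": "n1", "kind": "scale_up" | "scale_down" | "fault",
    #         "delta_cpu_cores": 4}
    node = event["node_id"]
    if event["kind"] == "scale_up":
        available_cpu[node] = available_cpu.get(node, 0) + event["delta_cpu_cores"]
        return None
    if event["kind"] == "scale_down":
        available_cpu[node] = max(0, available_cpu.get(node, 0) - event["delta_cpu_cores"])
        return None
    if event["kind"] == "fault":
        available_cpu[node] = 0                          # stop scheduling onto the faulty node
        return {"node_id": node, "command": "restart"}   # assumed recovery instruction
    return None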
作为一个可选实施例,所述算力资源状态信息包括下述至少一项:As an optional embodiment, the computing resource status information includes at least one of the following:
服务ID;Service ID;
中央处理器CPU的信息;Information of the central processing unit CPU;
服务链接数;Number of service links;
内存的信息;Memory information;
图像处理器GPU的信息;Information of the image processor GPU;
硬盘的信息。Hard disk information.
作为另一个可选实施例,所述第四处理层将所述算力资源状态信息发送至所述第二处理层,包括:As another optional embodiment, the fourth processing layer sending the computing resource state information to the second processing layer includes:
The second processing layer sends a computing power measurement request message to the fourth processing layer, and receives a computing power status information response message fed back by the fourth processing layer, where the computing power status information response message carries the computing power resource status information of the computing power network element node.
For example, as shown in Figure 2, in actual deployment the second processing layer is deployed on a computing power management device and the fourth processing layer is deployed on a computing power node device. The computing power management device actively sends a computing power measurement request message to the computing power node device, and the computing power node device sends a computing power status information response message to the computing power management device according to the computing power measurement request message.
或者,所述第四处理层将所述算力资源状态信息发送至所述第二处理层,包括:Alternatively, the fourth processing layer sending the computing resource state information to the second processing layer includes:
所述第四处理层向所述第二处理层周期或非周期上报所述算力网元节点的算力资源状态信息。The fourth processing layer periodically or non-periodically reports the state information of the computing power resources of the computing power network element node to the second processing layer.
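The two reporting modes described above (on-demand request/response and periodic or aperiodic reporting) could be sketched with message builders such as the following; the message field names are assumptions, since the actual encoding is carried in OAM telemetry as described below.

def build_measurement_request(service_id, wanted=("cpu", "gpu", "memory", "disk", "connections")):
    return {"type": "measurement_request", "service_id": service_id, "wanted": list(wanted)}

def build_status_response(request, node_status):
    # node_status: dict keyed by the same item names used in "wanted".
    status = {item: node_status[item] for item in request["wanted"] if item in node_status}
    return {"type": "status_response", "service_id": request["service_id"], "status": status}

def build_periodic_report(node_id, node_status):
    # Unsolicited report sent by the fourth processing layer, periodically or aperiodically.
    return {"type": "status_report", "node_id": node_id, "status": node_status}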
可选地,所述算力测量请求消息包括下述至少一项:Optionally, the computing power measurement request message includes at least one of the following:
服务ID;Service ID;
中央处理器CPU的信息;Information of the central processing unit CPU;
服务链接数;Number of service links;
内存的信息;Memory information;
图像处理器GPU的信息;Information of the image processor GPU;
硬盘的信息。Hard disk information.
其中,所述第二处理层通过操作维护管理OAM的遥测(Telemetry)信息携带所述算力测量请求消息,实现算力感知的工作流程。Wherein, the second processing layer carries the computing power measurement request message through the telemetry (Telemetry) information of the operation and maintenance management OAM, so as to realize the computing power sensing workflow.
例如,如图3所示,利用OAM的遥测信息(例如OAM-trace-type)中未使用的比特位(bit位):bit4-7;For example, as shown in Figure 3, use the unused bits (bits) in OAM telemetry information (such as OAM-trace-type): bit4-7;
Bit 7: defined as the bit indicating whether the computing power awareness function is enabled.
Bit 4: defined as a bit for the computing power measurement request / computing power status information response.
Bit 5: defined as a bit for the computing power measurement request / computing power status information response.
For another example, the node data list (Node data list) is used as a variable-length list to carry status information such as the computing power resources and network resources of the local computing power network element node. Here, computing power resources refer to the supply capabilities of servers, processors, memory, storage devices, virtual machines and the like, and network resources include requirements such as network bandwidth, delay and delay jitter.
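For illustration, the bit usage described above could be manipulated as follows, assuming bit 0 is the least significant bit of the OAM-trace-type field; the constant and helper names are assumptions, not part of any OAM specification.

CP_AWARE_BIT    = 1 << 7   # Bit 7: computing power awareness function enabled
CP_REQUEST_BIT  = 1 << 4   # Bit 4: computing power measurement request / status response
CP_RESPONSE_BIT = 1 << 5   # Bit 5: computing power measurement request / status response

def make_trace_type(measurement_request=False, status_response=False):
    value = CP_AWARE_BIT
    if measurement_request:
        value |= CP_REQUEST_BIT
    if status_response:
        value |= CP_RESPONSE_BIT
    return value

def parse_trace_type(value):
    return {
        "cp_aware": bool(value & CP_AWARE_BIT),
        "measurement_request": bool(value & CP_REQUEST_BIT),
        "status_response": bool(value & CP_RESPONSE_BIT),
    }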
本公开实施例中提供的面向计算网络融合的系统不仅定义了算力资源层、算力平台层和算力路由层等功能模块,还给定义了部分功能模块之间的接口。如图1所示:The system for computing network integration provided in the embodiments of the present disclosure not only defines functional modules such as a computing power resource layer, a computing power platform layer, and a computing power routing layer, but also defines interfaces between some functional modules. As shown in Figure 1:
I1接口:定义在算力服务层与算力平台层之间的接口,用于传递SLA(服务等级协议)需求、算力服务部署配置信息等。I1 interface: The interface defined between the computing power service layer and the computing power platform layer, used to transfer SLA (Service Level Agreement) requirements, computing power service deployment configuration information, etc.
I2接口:用于算力建模子模块向算力通告子模块传递算力服务合约、算 力能力模版等信息。I2 interface: used for computing power modeling sub-module to transfer computing power service contract, computing power capability template and other information to computing power notification sub-module.
I3接口:用于算力OAM子模块向算力通告子模块传递算力资源性能监控、算力计费管理、算力资源故障等信息。I3 interface: used for the computing power OAM sub-module to transfer computing power resource performance monitoring, computing power billing management, computing power resource failure and other information to the computing power notification sub-module.
I4接口:用于算力平台层向算力路由层传递算力服务合约信息、算力资源的状态通告。I4 interface: used for the computing power platform layer to transfer computing power service contract information and computing power resource status notifications to the computing power routing layer.
I5接口:指算力资源层与算力平台层之间的接口,主要用于算力资源注册管理,算力资源的性能状态和故障信息传递等。I5 interface: refers to the interface between the computing power resource layer and the computing power platform layer, which is mainly used for computing power resource registration management, computing power resource performance status and fault information transmission, etc.
In summary, the computing-network-convergence-oriented system provided by the embodiments of the present disclosure is a new network architecture. Based on ubiquitous network connections and highly distributed computing nodes, it builds a new computing-power-aware network infrastructure through automated service deployment, optimal routing and load balancing, truly making the network reachable everywhere, computing power available everywhere and intelligence present everywhere. Massive applications, massive functions and massive computing resources form an open ecosystem in which massive applications can invoke computing resources in different locations on demand and in real time, improving the utilization efficiency of computing resources and ultimately optimizing user experience, computing resource utilization and network efficiency.
如图4所示,本公开实施例还提供一种算力处理方法,应用于算力处理的网络系统,包括:As shown in FIG. 4, an embodiment of the present disclosure also provides a computing power processing method, which is applied to a computing power processing network system, including:
步骤41,获取业务的服务请求;Step 41: Obtain the service request of the business;
Step 42: obtain a computing power configuration strategy at least by mapping from the service request. For example, the service request includes parameters such as a service ID, a service type, a service level and a delay, which are mapped to a corresponding computing power configuration strategy: CPU/GPU resource configuration requirements, storage configuration requirements, and so on.
步骤43,至少根据所述算力配置策略选择对应的网络路径,将业务调度到对应的算力网元节点进行处理。Step 43: Select the corresponding network path at least according to the computing power configuration strategy, and dispatch the service to the corresponding computing power network element node for processing.
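Steps 41 to 43 can be read as a single processing pipeline; the following orchestration sketch assumes the three functions are provided by the first, second and third processing layers respectively, and is illustrative only. The earlier sketches of the request-to-strategy mapping and the joint node/path selection could be plugged in as map_to_strategy and schedule.

def process_service(obtain_request, map_to_strategy, schedule):
    request = obtain_request()            # step 41: obtain the service request of the business
    strategy = map_to_strategy(request)   # step 42: map the request to a computing power configuration strategy
    return schedule(request, strategy)    # step 43: pick a network path and computing power node, then dispatch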
可选地,所述方法还包括:Optionally, the method further includes:
获取网络资源信息;Obtain network resource information;
至少根据所述算力配置策略和网络资源信息选择对应的网络路径,将业务调度到对应的算力网元节点进行处理。At least the corresponding network path is selected according to the computing power configuration strategy and the network resource information, and the service is scheduled to the corresponding computing power network element node for processing.
It should be noted that the third processing layer may obtain the above network resource information from the network resource layer in Figure 1 periodically or dynamically, or the network resource layer may actively report the network resource information, so that the third processing layer can select the corresponding network path according to the computing power configuration strategy and the current network resource information and schedule the service to the corresponding computing power network element node for processing. Here, the current network resource information can be understood as the network resource information most recently obtained by the third processing layer, or the network resource information most recently reported by the network resource layer.
可选地,第三处理层选择的网络路径为当前最优网络路径,其调度到的算力网元节点为当前最优算力网元节点,在此不做具体限定。Optionally, the network path selected by the third processing layer is the current optimal network path, and its scheduled computing power network element node is the current optimal computing power network element node, which is not specifically limited here.
Optionally, the computing power processing method is applied to the computing power processing network system described with reference to Figures 1 to 3, and the above steps 41, 42 and 43 may be executed by the corresponding processing layers of the computing power processing network system; for example, step 41 is executed by the first processing layer, step 42 by the second processing layer, and step 43 by the third processing layer.
It should be noted that the first, second and third processing layers are divided by logical function. In actual deployment, these processing layers may be deployed on one device or on multiple devices; if deployed on one device, the processing layers can exchange information through internal interfaces, and if deployed on multiple devices, they can exchange information through signaling interaction. Optionally, the embodiments of the present disclosure do not limit the specific names of the first, second and third processing layers, and any layer name that realizes the corresponding function is applicable to the embodiments of the present disclosure.
本公开实施例中对于将业务调度到对应的算力网元节点至少包括两种方式:In the embodiments of the present disclosure, there are at least two methods for scheduling services to the corresponding computing power network element nodes:
Mode 1: computing power resource scheduling (that is, selecting the corresponding network path according to the computing power configuration strategy and scheduling the service to the corresponding computing power network element node for processing). This matches the service with the computing power and schedules the service to a suitable computing power node for processing; that is, computing-power-based scheduling finds the best destination service computing node.
Mode 2: computing power resource scheduling plus network resource scheduling (that is, selecting the corresponding network path according to the computing power configuration strategy and the network resource information, and scheduling the service to the corresponding computing power network element node for processing). Computing power resource scheduling is combined with scheduling based on the network resource information already available in the network (for example, bandwidth, delay, delay jitter, etc.). Computing power resource scheduling schedules the service to a suitable computing power node, while network resource scheduling finds the optimal network path to the destination service computing node. In other words, network-based scheduling combined with computing-power-based scheduling (joint scheduling of computing power resources and network resources) processes the service on the best computing power over the best network path, providing the best user experience.
作为一个可选实施例,所述方法还包括:As an optional embodiment, the method further includes:
获取算力网元节点的算力资源状态信息;Obtain the state information of the computing power resources of the node of the computing power network element;
至少根据所述服务请求以及所述算力资源状态信息获得所述算力配置策略。Obtain the computing power allocation strategy at least according to the service request and the computing power resource status information.
The computing power resource status information reflects the status and deployment location of the computing capabilities ubiquitously deployed in the network. It may refer to capabilities such as the number of service connections, CPU/GPU computing power, deployment form (physical or virtual), deployment location (the corresponding IP address), storage capacity and storage form, or it may refer to computing capability abstracted from the above basic computing resources, which reflects the currently available computing capability of each node in the network as well as its distribution location and form.
作为一个可选实施例,所述方法还包括:As an optional embodiment, the method further includes:
对所述算力资源状态信息进行抽象描述和表示,生成算力能力模板;Abstract description and representation of the state information of the computing power resources to generate a computing power capability template;
至少根据所述算力能力模板和所述服务请求,生成算力服务合约。Generate a computing power service contract at least according to the computing power capability template and the service request.
Facing heterogeneous computing resources ubiquitously deployed in the network, the computing power processing network system in the embodiments of the present disclosure further includes a fourth processing layer (which may also be called the computing power resource layer). The fourth processing layer is used to collect the computing power resource status information and report it to the second processing layer. The second processing layer abstractly describes and represents the computing power resource status information to generate a computing power capability template, and generates a computing power service contract according to the computing power capability template and the service request of the business.
Send the computing power capability template and/or the computing power service contract to the corresponding computing power network element node. For example, the second processing layer first sends the computing power capability template and/or the computing power service contract to the third processing layer, which forwards it to the corresponding computing power network element node; or the second processing layer sends it directly to the computing power network element node. The specific sending path is not limited here.
Optionally, the computing power capability template is mainly used to unify the format of request information exchanged between the computing power processing network system and the user equipment. For example, after the computing power processing network system receives the service request of a business, it converts the service request into information conforming to the system's processing format according to the computing power capability template, which facilitates subsequent processing. Alternatively, after the user equipment receives the computing power capability template (which can be obtained through a computing power network element node), it first converts the relevant information of the service request into information conforming to the system's processing format through the computing power capability template before sending the service request, which facilitates subsequent processing by the system and reduces the processing burden on the system.
可选地,算力服务合约主要用来根据用户签约信息,生成相应的算力服务合约,一旦用户业务进来,网络需要根据算力服务合约提供相应的算力服 务。另外,用户设备获知算力服务合约后可以获知其与哪个算力网元节点之间能够进行通信、计费规则等,在此不做具体限定。Optionally, the computing power service contract is mainly used to generate the corresponding computing power service contract based on the user's contract information. Once the user's business comes in, the network needs to provide the corresponding computing power service according to the computing power service contract. In addition, after the user equipment learns the computing power service contract, it can learn which computing power network element node it can communicate with, charging rules, etc., which is not specifically limited here.
It should be noted that "at least according to", as mentioned in the above embodiments of the present disclosure, can be understood as follows: in order to make the obtained result more accurate, those skilled in the art may, based on common means in the art, also refer to other parameters related to computing power to obtain better parameters, which are not enumerated one by one here.
作为又一个可选实施例,所述方法还包括:As yet another optional embodiment, the method further includes:
将所述算力能力模板和/或所述算力服务合约发送至对应的算力网元节点。例如,将所述算力能力模板和/或所述算力服务合约发送给第三处理层,由所述第三处理层将所述算力能力模板和/或所述算力服务合约发送至对应的算力网元节点。Send the computing power capability template and/or the computing power service contract to the corresponding computing power network element node. For example, sending the computing power template and/or the computing power service contract to a third processing layer, and the third processing layer sends the computing power template and/or the computing power service contract to Corresponding computing power network element node.
进一步的,所述方法还包括:Further, the method further includes:
对算力网元节点的算力资源状态信息进行性能监控,并将算力资源的性能、算力计费管理信息以及算力资源故障信息中的至少一项发送给对应的算力网元节点。Perform performance monitoring of the computing power resource status information of the computing power network element node, and send at least one of the computing power resource performance, computing power billing management information, and computing power resource failure information to the corresponding computing power network element node .
作为一个可选实施例,所述算力资源状态信息包括下述至少一项:As an optional embodiment, the computing resource status information includes at least one of the following:
服务ID;Service ID;
中央处理器CPU的信息;Information of the central processing unit CPU;
服务链接数;Number of service links;
内存的信息;Memory information;
图像处理器GPU的信息;Information of the image processor GPU;
硬盘的信息。Hard disk information.
In summary, the computing power processing method provided by the embodiments of the present disclosure is based on ubiquitous network connections and highly distributed computing nodes, and builds a new computing-power-aware network infrastructure through automated service deployment, optimal routing and load balancing, truly making the network reachable everywhere, computing power available everywhere and intelligence present everywhere. Massive applications, massive functions and massive computing resources form an open ecosystem in which massive applications can invoke computing resources in different locations on demand and in real time, improving the utilization efficiency of computing resources and ultimately optimizing user experience, computing resource utilization and network efficiency.
An embodiment of the present disclosure further provides a network system for computing power processing, including a memory, a processor, and a computer program stored on the memory and executable on the processor. When the processor executes the program, it implements each process of the above embodiments of the computing power processing method and can achieve the same technical effect, which is not repeated here to avoid repetition.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, it implements each process of the above embodiments of the computing power processing method and can achieve the same technical effect, which is not repeated here to avoid repetition. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可读存储介质(包括但不限于磁盘存储器和光学存储器等)上实施的计算机程序产品的形式。Those skilled in the art should understand that the embodiments of the present application can be provided as methods, systems, or computer program products. Therefore, this application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, this application may adopt the form of a computer program product implemented on one or more computer-readable storage media (including but not limited to disk storage, optical storage, etc.) containing computer-usable program codes.
The present application is described with reference to flowcharts and/or block diagrams of the methods, devices (systems) and computer program products according to the embodiments of the present application. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable storage medium capable of directing a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable storage medium produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
需要说明的是,应理解以上各个模块的划分仅仅是一种逻辑功能的划分, 实际实现时可以全部或部分集成到一个物理实体上,也可以物理上分开。且这些模块可以全部以软件通过处理元件调用的形式实现;也可以全部以硬件的形式实现;还可以部分模块通过处理元件调用软件的形式实现,部分模块通过硬件的形式实现。例如,确定模块可以为单独设立的处理元件,也可以集成在上述装置的某一个芯片中实现,此外,也可以以程序代码的形式存储于上述装置的存储器中,由上述装置的某一个处理元件调用并执行以上确定模块的功能。其它模块的实现与之类似。此外这些模块全部或部分可以集成在一起,也可以独立实现。这里所述的处理元件可以是一种集成电路,具有信号的处理能力。在实现过程中,上述方法的各步骤或以上各个模块可以通过处理器元件中的硬件的集成逻辑电路或者软件形式的指令完成。It should be noted that it should be understood that the division of each of the above modules is only a division of logical functions, and may be fully or partially integrated into a physical entity in actual implementation, or may be physically separated. And these modules can all be implemented in the form of software called by processing elements; they can also be implemented in the form of hardware; some modules can be implemented in the form of calling software by processing elements, and some of the modules can be implemented in the form of hardware. For example, the determining module may be a separately established processing element, or it may be integrated in a chip of the above-mentioned device for implementation. In addition, it may also be stored in the memory of the above-mentioned device in the form of program code, which is determined by a certain processing element of the above-mentioned device. Call and execute the functions of the above-identified module. The implementation of other modules is similar. In addition, all or part of these modules can be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In the implementation process, each step of the above method or each of the above modules can be completed by an integrated logic circuit of hardware in the processor element or instructions in the form of software.
例如,各个模块、单元、子单元或子模块可以是被配置成实施以上方法的一个或多个集成电路,例如:一个或多个特定集成电路(Application Specific Integrated Circuit,ASIC),或,一个或多个微处理器(digital signal processor,DSP),或,一个或者多个现场可编程门阵列(Field Programmable Gate Array,FPGA)等。再如,当以上某个模块通过处理元件调度程序代码的形式实现时,该处理元件可以是通用处理器,例如中央处理器(Central Processing Unit,CPU)或其它可以调用程序代码的处理器。再如,这些模块可以集成在一起,以片上系统(system-on-a-chip,SOC)的形式实现。For example, each module, unit, sub-unit or sub-module may be one or more integrated circuits configured to implement the above method, for example: one or more application specific integrated circuits (ASIC), or one or Multiple microprocessors (digital signal processor, DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, FPGA), etc. For another example, when one of the above modules is implemented in the form of processing element scheduling program code, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or other processors that can call program codes. For another example, these modules can be integrated together and implemented in the form of a system-on-a-chip (SOC).
本公开的说明书和权利要求书中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本公开的实施例,例如除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。此外,说明书以及权利要求中使用“和/或”表示所连接对象的至少其中之一,例如A和/或B和/或C,表示包含单独A,单独B,单独C,以及A和B都存在,B和C都存在,A和C都存在,以及A、B和C都存在的7种情况。类似地,本说明书以及权利要求中使用“A和B中的至 少一个”应理解为“单独A,单独B,或A和B都存在”。The terms "first", "second", etc. in the specification and claims of the present disclosure are used to distinguish similar objects, and not necessarily used to describe a specific sequence or sequence. It should be understood that the data used in this way can be interchanged under appropriate circumstances, so that the embodiments of the present disclosure described herein are, for example, implemented in a sequence other than those illustrated or described herein. In addition, the terms "including" and "having" and any variations of them are intended to cover non-exclusive inclusions. For example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those clearly listed. Those steps or units may include other steps or units that are not clearly listed or are inherent to these processes, methods, products, or equipment. In addition, the use of "and/or" in the description and claims means at least one of the connected objects, such as A and/or B and/or C, which means that it includes A alone, B alone, C alone, and both A and B. Exist, B and C exist, A and C exist, and A, B, and C all exist in 7 situations. Similarly, the use of "at least one of A and B" in this specification and claims should be understood as "A alone, B alone, or both A and B exist".
The above are optional implementations of the present disclosure. It should be pointed out that those of ordinary skill in the art may make several improvements and refinements without departing from the principles described in the present disclosure, and these improvements and refinements shall also fall within the protection scope of the present disclosure.

Claims (20)

  1. A network system for computing power processing, comprising:
    a first processing layer, a second processing layer, and a third processing layer;
    wherein the first processing layer is configured to obtain a service request of a service and send the service request to the second processing layer;
    the second processing layer is configured to obtain a computing power configuration strategy at least according to the service request and send the computing power configuration strategy to the third processing layer; and
    the third processing layer is configured to select a corresponding network path at least according to the computing power configuration strategy, and schedule the service to a corresponding computing power network element node for processing.
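By way of illustration only and not as part of the claims, the following is a minimal Python sketch of the three-layer flow recited in claim 1: the first processing layer obtains a service request, the second processing layer maps it to a computing power configuration strategy, and the third processing layer schedules the service to a computing power network element node. All class names, fields, and the selection rule are hypothetical assumptions and are not prescribed by the claims.

from dataclasses import dataclass

@dataclass
class ServiceRequest:
    service_id: str
    required_cpu: int        # vCPU cores the service asks for (hypothetical field)
    max_latency_ms: float    # latency bound taken from the service request (hypothetical field)

class FirstProcessingLayer:
    """Obtains the service request and forwards it to the second layer."""
    def __init__(self, second_layer):
        self.second_layer = second_layer
    def submit(self, request):
        return self.second_layer.build_strategy(request)

class SecondProcessingLayer:
    """Derives a computing power configuration strategy from the request."""
    def __init__(self, third_layer):
        self.third_layer = third_layer
    def build_strategy(self, request):
        strategy = {"service_id": request.service_id,
                    "min_cpu": request.required_cpu,
                    "max_latency_ms": request.max_latency_ms}
        return self.third_layer.dispatch(strategy)

class ThirdProcessingLayer:
    """Selects a feasible node/path and schedules the service to it."""
    def __init__(self, nodes):
        self.nodes = nodes  # node_id -> {"free_cpu": ..., "latency_ms": ...}
    def dispatch(self, strategy):
        feasible = [(n, info) for n, info in self.nodes.items()
                    if info["free_cpu"] >= strategy["min_cpu"]
                    and info["latency_ms"] <= strategy["max_latency_ms"]]
        # Among feasible nodes, prefer the lowest-latency path (one possible policy).
        node_id, _ = min(feasible, key=lambda kv: kv[1]["latency_ms"])
        return node_id

nodes = {"edge-1": {"free_cpu": 8, "latency_ms": 5.0},
         "cloud-1": {"free_cpu": 64, "latency_ms": 30.0}}
system = FirstProcessingLayer(SecondProcessingLayer(ThirdProcessingLayer(nodes)))
print(system.submit(ServiceRequest("svc-001", required_cpu=4, max_latency_ms=10.0)))  # -> edge-1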
  2. The system according to claim 1, wherein the third processing layer is further configured to:
    obtain network resource information; and
    select the corresponding network path at least according to the computing power configuration strategy and the network resource information, and schedule the service to the corresponding computing power network element node for processing.
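As a further illustration only (not part of the claims), the sketch below shows one way the path selection of claim 2 might combine the computing power configuration strategy with network resource information. The path fields and the weighted score are hypothetical assumptions.

def select_path(strategy, candidate_paths):
    """candidate_paths: list of dicts with 'node', 'delay_ms', 'free_bandwidth_mbps'."""
    feasible = [
        p for p in candidate_paths
        if p["delay_ms"] <= strategy["max_latency_ms"]
        and p["free_bandwidth_mbps"] >= strategy["min_bandwidth_mbps"]
    ]
    if not feasible:
        return None
    # Simple weighted score: lower delay and more spare bandwidth are better (illustrative policy).
    return min(feasible, key=lambda p: p["delay_ms"] - 0.01 * p["free_bandwidth_mbps"])

strategy = {"max_latency_ms": 20.0, "min_bandwidth_mbps": 100}
paths = [
    {"node": "edge-1", "delay_ms": 5.0, "free_bandwidth_mbps": 200},
    {"node": "edge-2", "delay_ms": 8.0, "free_bandwidth_mbps": 800},
    {"node": "cloud-1", "delay_ms": 35.0, "free_bandwidth_mbps": 10000},
]
print(select_path(strategy, paths)["node"])  # -> edge-2 (best combined score)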
  3. The system according to claim 1 or 2, further comprising a fourth processing layer;
    wherein the fourth processing layer is configured to obtain computing power resource status information of computing power network element nodes and send the computing power resource status information to the second processing layer; and
    the second processing layer obtains the computing power configuration strategy at least according to the service request and the computing power resource status information.
  4. The system according to claim 3, wherein the second processing layer is further configured to:
    abstractly describe and represent the computing power resource status information to generate a computing power capability template; and
    generate a computing power service contract at least according to the computing power capability template and the service request.
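Purely as an illustration (not part of the claims), the following sketch shows one possible data model for the computing power capability template and service contract of claim 4: the template abstracts a node's raw resource status, and the contract binds a service request to the advertised capabilities. All field names and thresholds are hypothetical; the claims do not prescribe a concrete model.

def build_capability_template(resource_status):
    """Abstract a node's raw resource status into a coarse capability description."""
    return {
        "node_id": resource_status["node_id"],
        "cpu_class": "high" if resource_status["cpu_free_cores"] >= 32 else "standard",
        "gpu_available": resource_status["gpu_free"] > 0,
        "memory_gb": resource_status["memory_free_gb"],
    }

def build_service_contract(template, service_request):
    """Bind a service request to a node's advertised capabilities."""
    return {
        "service_id": service_request["service_id"],
        "node_id": template["node_id"],
        "granted_cpu_class": template["cpu_class"],
        "granted_gpu": template["gpu_available"] and service_request.get("needs_gpu", False),
        "granted_memory_gb": min(template["memory_gb"], service_request["memory_gb"]),
    }

status = {"node_id": "edge-1", "cpu_free_cores": 48, "gpu_free": 2, "memory_free_gb": 128}
request = {"service_id": "svc-001", "needs_gpu": True, "memory_gb": 16}
print(build_service_contract(build_capability_template(status), request))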
  5. The system according to claim 4, wherein the second processing layer is further configured to:
    send the computing power capability template and/or the computing power service contract to the corresponding computing power network element node.
  6. The system according to claim 3, wherein the second processing layer is further configured to:
    perform performance monitoring on the computing power resource status information of the computing power network element nodes, and send at least one of computing power resource performance, computing power charging management information, and computing power resource fault information to the corresponding computing power network element node.
  7. The system according to claim 3, wherein the computing power resource status information comprises at least one of the following:
    a service ID;
    information of a central processing unit (CPU);
    a number of service connections;
    information of a memory;
    information of a graphics processing unit (GPU); and
    information of a hard disk.
  8. The system according to claim 3, wherein the fourth processing layer sending the computing power resource status information to the second processing layer comprises:
    the second processing layer sending a computing power measurement request message to the fourth processing layer, and receiving a computing power status information response message fed back by the fourth processing layer, the computing power status information response message carrying the computing power resource status information of the computing power network element node.
  9. The system according to claim 3, wherein the fourth processing layer sending the computing power resource status information to the second processing layer comprises:
    the fourth processing layer reporting the computing power resource status information of the computing power network element node to the second processing layer periodically or aperiodically.
  10. The system according to claim 8, wherein the computing power measurement request message comprises at least one of the following:
    a service ID;
    information of a central processing unit (CPU);
    a number of service connections;
    information of a memory;
    information of a graphics processing unit (GPU); and
    information of a hard disk.
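For illustration only (not part of the claims), the sketch below models the measurement request/response exchange of claims 8 to 10 as plain message objects carrying the fields listed in claim 10. The message classes and transport are hypothetical; the claims only state that the request may be carried in OAM telemetry information (claim 11).

from dataclasses import dataclass
from typing import Optional

@dataclass
class ComputingPowerMeasurementRequest:
    service_id: str
    want_cpu: bool = True
    want_connections: bool = True
    want_memory: bool = True
    want_gpu: bool = True
    want_disk: bool = True

@dataclass
class ComputingPowerStatusResponse:
    service_id: str
    cpu_info: Optional[dict] = None
    service_connections: Optional[int] = None
    memory_info: Optional[dict] = None
    gpu_info: Optional[dict] = None
    disk_info: Optional[dict] = None

class FourthProcessingLayer:
    """Answers measurement requests with the node's current resource status."""
    def __init__(self, node_status):
        self.node_status = node_status
    def measure(self, req):
        s = self.node_status
        return ComputingPowerStatusResponse(
            service_id=req.service_id,
            cpu_info=s["cpu"] if req.want_cpu else None,
            service_connections=s["connections"] if req.want_connections else None,
            memory_info=s["memory"] if req.want_memory else None,
            gpu_info=s["gpu"] if req.want_gpu else None,
            disk_info=s["disk"] if req.want_disk else None,
        )

layer4 = FourthProcessingLayer({
    "cpu": {"free_cores": 12}, "connections": 340,
    "memory": {"free_gb": 64}, "gpu": {"free": 1}, "disk": {"free_gb": 900},
})
print(layer4.measure(ComputingPowerMeasurementRequest(service_id="svc-001")))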
  11. The system according to claim 8, wherein the second processing layer carries the computing power measurement request message in telemetry information of operation and maintenance management (OAM).
  12. A computing power processing method, applied to a network system for computing power processing, the method comprising:
    obtaining a service request of a service;
    obtaining a computing power configuration strategy by mapping at least according to the service request; and
    selecting a corresponding network path at least according to the computing power configuration strategy, and scheduling the service to a corresponding computing power network element node for processing.
  13. The method according to claim 12, further comprising:
    obtaining network resource information; and
    selecting the corresponding network path at least according to the computing power configuration strategy and the network resource information, and scheduling the service to the corresponding computing power network element node for processing.
  14. The method according to claim 12 or 13, further comprising:
    obtaining computing power resource status information of computing power network element nodes; and
    obtaining the computing power configuration strategy at least according to the service request and the computing power resource status information.
  15. The method according to claim 14, further comprising:
    abstractly describing and representing the computing power resource status information to generate a computing power capability template; and
    generating a computing power service contract at least according to the computing power capability template and the service request.
  16. The method according to claim 14, further comprising:
    sending the computing power capability template and/or the computing power service contract to the corresponding computing power network element node.
  17. The method according to claim 14, further comprising:
    performing performance monitoring on the computing power resource status information of the computing power network element nodes, and sending at least one of computing power resource performance, computing power charging management information, and computing power resource fault information to the corresponding computing power network element node.
  18. The method according to claim 14, wherein the computing power resource status information comprises at least one of the following:
    a service ID;
    information of a central processing unit (CPU);
    a number of service connections;
    information of a memory;
    information of a graphics processing unit (GPU); and
    information of a hard disk.
  19. A network system for computing power processing, comprising a memory, a processor, and a program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the computing power processing method according to any one of claims 12-18.
  20. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the computing power processing method according to any one of claims 12-18.
PCT/CN2021/082304 2020-03-27 2021-03-23 Computing power processing network system and computing power processing method WO2021190482A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010232182.0A CN113448721A (en) 2020-03-27 2020-03-27 Network system for computing power processing and computing power processing method
CN202010232182.0 2020-03-27

Publications (1)

Publication Number Publication Date
WO2021190482A1 true WO2021190482A1 (en) 2021-09-30

Family

ID=77808217

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/082304 WO2021190482A1 (en) 2020-03-27 2021-03-23 Computing power processing network system and computing power processing method

Country Status (2)

Country Link
CN (1) CN113448721A (en)
WO (1) WO2021190482A1 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114157667B (en) * 2021-10-28 2023-06-06 山东浪潮科学研究院有限公司 Gateway-device-oriented computing power network service system and method
CN114090244B (en) * 2021-11-16 2024-03-19 中国联合网络通信集团有限公司 Service arrangement method, device, system and storage medium
CN114095964B (en) * 2021-11-19 2023-05-26 中国联合网络通信集团有限公司 Fault recovery method and device and computer readable storage medium
CN113867973B (en) * 2021-12-06 2022-02-25 腾讯科技(深圳)有限公司 Resource allocation method and device
CN114756340A (en) * 2022-03-17 2022-07-15 中国联合网络通信集团有限公司 Computing power scheduling system, method, device and storage medium
CN114697400A (en) * 2022-04-13 2022-07-01 中国电信股份有限公司 Service scheduling method, system and VTEP
CN117081764A (en) * 2022-05-10 2023-11-17 中国移动通信有限公司研究院 Communication method, device, related equipment and storage medium
CN117203944A (en) * 2022-05-26 2023-12-08 亚信科技(中国)有限公司 Resource scheduling method of computing power network
CN115086230B (en) * 2022-06-15 2023-06-30 中国联合网络通信集团有限公司 Method, device, equipment and storage medium for determining computing network route
CN115396514B (en) * 2022-08-18 2023-05-26 中国联合网络通信集团有限公司 Resource allocation method, device and storage medium
CN117667327A (en) * 2022-08-29 2024-03-08 华为技术有限公司 Job scheduling method, scheduler and related equipment
CN115499859B (en) * 2022-11-16 2023-03-31 中国移动通信有限公司研究院 NWDAF-based method for managing and deciding computing resources
CN115883268A (en) * 2022-11-21 2023-03-31 中国联合网络通信集团有限公司 Industrial production computing power network service charging method, platform, equipment and medium
CN115883660A (en) * 2022-11-21 2023-03-31 中国联合网络通信集团有限公司 Industrial production computing power network service method, platform, equipment and medium
CN115550370B (en) * 2022-12-01 2023-03-31 浩鲸云计算科技股份有限公司 Computing power resource optimal scheduling allocation method based on multi-factor strategy
CN117440046A (en) * 2023-03-21 2024-01-23 北京神州泰岳软件股份有限公司 Data processing method and device for power computing network
CN116302580B (en) * 2023-05-25 2023-09-15 南方电网数字电网研究院有限公司 Method and device for scheduling calculation force resources of nano relay
CN116402318B (en) * 2023-06-07 2023-12-01 北京智芯微电子科技有限公司 Multi-stage computing power resource distribution method and device for power distribution network and network architecture
CN118034942B (en) * 2024-04-12 2024-06-25 深圳市捷易科技有限公司 Cluster computing management method, device, equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150095917A1 (en) * 2013-09-27 2015-04-02 International Business Machines Corporation Distributed uima cluster computing (ducc) facility
CN105321346A (en) * 2015-09-18 2016-02-10 成都融创智谷科技有限公司 Method for utilizing cloud computing basic resource pool to control urban intelligent traffic
CN109167835A (en) * 2018-09-13 2019-01-08 重庆邮电大学 A kind of physics resource scheduling method and system based on kubernetes
CN110213363A (en) * 2019-05-30 2019-09-06 华南理工大学 Cloud resource dynamic allocation system and method based on software defined network

Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113656187B (en) * 2021-10-19 2021-12-28 中通服建设有限公司 Public security big data computing power service system based on 5G
CN113656187A (en) * 2021-10-19 2021-11-16 中通服建设有限公司 Public security big data computing power service system based on 5G
CN114040479A (en) * 2021-10-29 2022-02-11 中国联合网络通信集团有限公司 Calculation force node selection method and device and computer readable storage medium
CN114070854B (en) * 2021-11-26 2023-06-27 中国联合网络通信集团有限公司 Method, system, equipment and medium for sensing and routing calculation force in calculation force network
CN114070854A (en) * 2021-11-26 2022-02-18 中国联合网络通信集团有限公司 Computing power perception and routing method, system, equipment and medium in computing power network
CN114745317A (en) * 2022-02-09 2022-07-12 北京邮电大学 Computing task scheduling method facing computing power network and related equipment
CN114745317B (en) * 2022-02-09 2023-02-07 北京邮电大学 Computing task scheduling method facing computing power network and related equipment
WO2023169374A1 (en) * 2022-03-07 2023-09-14 中国移动通信有限公司研究院 Routing method and system, and node
CN114615180A (en) * 2022-03-09 2022-06-10 阿里巴巴达摩院(杭州)科技有限公司 Calculation force network system, calculation force calling method and device
CN114579318A (en) * 2022-05-06 2022-06-03 北京智芯微电子科技有限公司 Resource coordination method, device and equipment for edge computing
CN115118647A (en) * 2022-05-20 2022-09-27 北京邮电大学 System and method for perceiving and announcing computing power information in computing power network
CN115118647B (en) * 2022-05-20 2024-02-09 北京邮电大学 System and method for sensing and advertising calculation force information in calculation force network
CN114979278A (en) * 2022-05-24 2022-08-30 深圳点宽网络科技有限公司 Calculation power scheduling method, device and system based on block chain and electronic equipment
WO2023226545A1 (en) * 2022-05-25 2023-11-30 北京沃东天骏信息技术有限公司 Computing power distribution method, computing power service method, computing power testing method, and system and storage medium
CN115086225B (en) * 2022-05-27 2023-12-05 量子科技长三角产业创新中心 Method and monitoring device for determining optimal path of calculation and storage of power network
CN115086225A (en) * 2022-05-27 2022-09-20 量子科技长三角产业创新中心 Calculation and storage optimal path determination method and monitoring device for computational power network
CN115002127A (en) * 2022-06-09 2022-09-02 方图智能(深圳)科技集团股份有限公司 Distributed audio system
CN115086720B (en) * 2022-06-14 2023-06-09 烽火通信科技股份有限公司 Network path calculation method and device for live broadcast service
CN115086720A (en) * 2022-06-14 2022-09-20 烽火通信科技股份有限公司 Network path calculation method and device for live broadcast service
WO2024001227A1 (en) * 2022-06-28 2024-01-04 中兴通讯股份有限公司 Computing power routing searching method and apparatus, computing power network node and storage medium
CN115113821A (en) * 2022-07-07 2022-09-27 北京算讯科技有限公司 5G big data computing power service system based on quantum encryption
WO2024036470A1 (en) * 2022-08-16 2024-02-22 Ming Zhongxing Computing power network system
WO2024041572A1 (en) * 2022-08-24 2024-02-29 中国电信股份有限公司 Service processing method and apparatus, device, medium and program product
CN115297014A (en) * 2022-09-29 2022-11-04 浪潮通信信息系统有限公司 Zero-trust computing network operating system, management method, electronic device and storage medium
CN115421930B (en) * 2022-11-07 2023-03-24 山东海量信息技术研究院 Task processing method, system, device, equipment and computer readable storage medium
CN115421930A (en) * 2022-11-07 2022-12-02 山东海量信息技术研究院 Task processing method, system, device, equipment and computer readable storage medium
CN115509644A (en) * 2022-11-21 2022-12-23 北京邮电大学 Calculation force unloading method and device, electronic equipment and storage medium
CN116170324B (en) * 2022-11-30 2024-06-11 杭州东方通信软件技术有限公司 Visual view generation method and device suitable for computing power network
CN116170324A (en) * 2022-11-30 2023-05-26 杭州东方通信软件技术有限公司 Visual view generation method and device suitable for computing power network
CN115562843A (en) * 2022-12-06 2023-01-03 苏州浪潮智能科技有限公司 Container cluster computational power scheduling method and related device
WO2024119763A1 (en) * 2022-12-06 2024-06-13 苏州元脑智能科技有限公司 Computing power scheduling method for container cluster, and related apparatus
CN116016537B (en) * 2022-12-30 2024-03-01 中国联合网络通信集团有限公司 Method and device for optimizing selection of computing power network resources
CN116016537A (en) * 2022-12-30 2023-04-25 中国联合网络通信集团有限公司 Method and device for optimizing selection of computing power network resources
CN116467071B (en) * 2023-03-22 2024-06-07 北京神州泰岳软件股份有限公司 Method and device for sensing computing power in computing power network, storage medium and electronic equipment
CN116467071A (en) * 2023-03-22 2023-07-21 北京神州泰岳软件股份有限公司 Method and device for sensing computing power in computing power network, storage medium and electronic equipment
CN116684418B (en) * 2023-08-03 2023-11-10 北京神州泰岳软件股份有限公司 Calculation power arrangement scheduling method, calculation power network and device based on calculation power service gateway
CN116684418A (en) * 2023-08-03 2023-09-01 北京神州泰岳软件股份有限公司 Calculation power arrangement scheduling method, calculation power network and device based on calculation power service gateway
CN116962409A (en) * 2023-08-16 2023-10-27 北京信息科技大学 Industrial Internet network architecture based on intelligent fusion of general sense calculation and virtual controller
WO2024119887A1 (en) * 2023-08-18 2024-06-13 Lenovo (Beijing) Limited Policy and charging control for computing power network
CN116760885B (en) * 2023-08-23 2023-10-17 亚信科技(中国)有限公司 Method, device, equipment, medium and program product for managing computing power network business
CN116760885A (en) * 2023-08-23 2023-09-15 亚信科技(中国)有限公司 Method, device, equipment, medium and program product for managing computing power network business
CN116909758A (en) * 2023-09-13 2023-10-20 中移(苏州)软件技术有限公司 Processing method and device of calculation task and electronic equipment
CN116909758B (en) * 2023-09-13 2024-01-26 中移(苏州)软件技术有限公司 Processing method and device of calculation task and electronic equipment
CN117331700A (en) * 2023-10-24 2024-01-02 广州一玛网络科技有限公司 Computing power network resource scheduling system and method
CN117331700B (en) * 2023-10-24 2024-04-19 广州一玛网络科技有限公司 Computing power network resource scheduling system and method
CN117216758B (en) * 2023-11-08 2024-02-23 新华三网络信息安全软件有限公司 Application security detection system and method
CN117216758A (en) * 2023-11-08 2023-12-12 新华三网络信息安全软件有限公司 Application security detection system and method
CN117614992A (en) * 2023-12-21 2024-02-27 天津建设发展集团股份公司 Edge decision method and system for engineering remote monitoring
CN117648175B (en) * 2024-01-30 2024-04-12 之江实验室 Service execution method and device based on dynamic algorithm selection and electronic equipment
CN117648175A (en) * 2024-01-30 2024-03-05 之江实验室 Service execution method and device based on dynamic algorithm selection and electronic equipment

Also Published As

Publication number Publication date
CN113448721A (en) 2021-09-28

Similar Documents

Publication Publication Date Title
WO2021190482A1 (en) Computing power processing network system and computing power processing method
CN112799789B (en) Node cluster management method, device, equipment and storage medium
Cheng et al. FogFlow: Easy programming of IoT services over cloud and edges for smart cities
CN107087019B (en) Task scheduling method and device based on end cloud cooperative computing architecture
Verma et al. An efficient data replication and load balancing technique for fog computing environment
WO2023284830A1 (en) Management and scheduling method and apparatus, node, and storage medium
Ferreira et al. QoS-as-a-Service in the Local Cloud
CN114095577A (en) Resource request method and device, calculation network element node and calculation application equipment
WO2013104217A1 (en) Cloud infrastructure based management system and method for performing maintenance and deployment for application system
WO2022184094A1 (en) Network system for processing hash power, and service processing method and hash power network element node
EP4270204A1 (en) Multi-cloud interface adaptation method and system based on micro-service, and storage medium
Antonini et al. Fog computing architectures: A reference for practitioners
Raeisi-Varzaneh et al. Resource scheduling in edge computing: Architecture, taxonomy, open issues and future research directions
Borsatti et al. Enabling industrial IoT as a service with multi-access edge computing
Tian et al. An overview of compute first networking
Ali et al. Resource management techniques for cloud-based IoT environment
US20230137879A1 (en) In-flight incremental processing
Bumgardner et al. Cresco: A distributed agent-based edge computing framework
Apat et al. Service placement in fog computing environment
CN116684418B (en) Calculation power arrangement scheduling method, calculation power network and device based on calculation power service gateway
Al-Kasassbeh et al. Analysis of mobile agents in network fault management
WO2023186002A1 (en) Resource scheduling method, apparatus and device
Hung et al. A new technique for optimizing resource allocation and data distribution in mobile cloud computing
Nguyen et al. Software-defined virtual sensors for provisioning iot services on demand
Latif et al. Characterizing the architectures and brokering protocols for enabling clouds interconnection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21774882

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21774882

Country of ref document: EP

Kind code of ref document: A1