WO2022184094A1 - Network system for computing power processing, service processing method, and computing power network element node - Google Patents

Network system for computing power processing, service processing method, and computing power network element node

Info

Publication number
WO2022184094A1
WO2022184094A1 (PCT/CN2022/078802)
Authority
WO
WIPO (PCT)
Prior art keywords
information
computing power
resource
computing
service
Prior art date
Application number
PCT/CN2022/078802
Other languages
English (en)
French (fr)
Inventor
姚惠娟
孙滔
付月霞
Original Assignee
中国移动通信有限公司研究院
中国移动通信集团有限公司
Application filed by 中国移动通信有限公司研究院 and 中国移动通信集团有限公司
Publication of WO2022184094A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 40/00 Communication routing or communication path finding
    • H04W 40/02 Communication route or path selection, e.g. power-based or shortest path routing
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/302 Route determination based on requested QoS
    • H04W 72/00 Local resource management
    • H04W 72/04 Wireless resource allocation

Definitions

  • the present application relates to the field of data communication networks, and in particular, to a computing power processing network system, a service processing method, and a computing power network element node.
  • the computing resources in the network are integrated into all corners of the network, so that each network node can become a resource provider and a user's request can be satisfied by calling the nearest node's resources, no longer being limited to a specific node, thereby avoiding wasted connections and network scheduling resources.
  • the traditional network only provides a pipeline for data communication, which is based on connections and subject to a fixed network addressing mechanism, and is therefore often unable to meet higher and more demanding Quality of Experience (QoE) requirements.
  • the traditional client-server model is deconstructed: the application on the server side is decomposed into functional components that are deployed on a cloud platform and scheduled uniformly by an API gateway, so that they can be invoked on demand.
  • the business logic in the server is transferred to the client side; the client only needs to care about the computing function itself, and does not need to care about computing resources such as servers, virtual machines, and containers, so as to realize the service function.
  • the new-generation network architecture oriented to the future network needs to coordinately consider the unified coordinated scheduling of multi-dimensional resources such as users, networks, and computing power, so that massive applications can call computing resources in different places on demand and in real time.
  • the purpose of the technical solution of the present application is to provide a network system for computing power processing, a service processing method, and a computing power network element node, which are used to realize the unified coordinated scheduling of multi-dimensional resources such as users, networks, and computing power, so as to ensure that the network can schedule computing resources at different locations in real time.
  • the present application provides a network system for computing power processing, including:
  • the first processing layer is used for acquiring resource perception information, and determining computing power routing information of the service request according to the resource perception information.
  • the resource perception information includes at least one of the following:
  • Computing power service topology information
  • the network system further includes:
  • the second processing layer is configured to generate computing power resource topology information and/or computing power service topology information, and send the computing power resource topology information and/or the computing power service topology information to the first processing layer.
  • the network system, wherein the network system further includes: a third processing layer, a fourth processing layer, and a fifth processing layer;
  • the third processing layer is used to provide status information of the initial deployment of computing power services;
  • the fourth processing layer is used to provide computing power resources, and measure computing power and services according to the received computing power templates, service measurement parameters and measurement strategies;
  • the fifth processing layer is used to provide network connection using network infrastructure.
  • the first processing layer acquires the resource-aware information in at least one of the following ways:
  • the network resource awareness information of the resource awareness information is acquired through the fifth processing layer.
  • the network resource perception information includes at least one of bandwidth, delay, and delay jitter.
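As an illustrative sketch only (not part of the application), the network resource perception information above could be carried in a structure like the following; the class and field names are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NetworkResourceInfo:
    """Illustrative container for network resource perception information."""
    bandwidth_mbps: Optional[float] = None  # available link bandwidth
    delay_ms: Optional[float] = None        # measured path delay
    jitter_ms: Optional[float] = None       # delay jitter

    def is_complete(self) -> bool:
        # the text requires "at least one of" these; here we check all three
        return None not in (self.bandwidth_mbps, self.delay_ms, self.jitter_ms)

info = NetworkResourceInfo(bandwidth_mbps=100.0, delay_ms=5.0)
print(info.is_complete())  # False: jitter has not been measured yet
```

The optional fields reflect that any subset of the three quantities may be perceived at a given time.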
  • the second processing layer is further configured to send a multi-dimensional resource awareness request to each of the third processing layer, the fourth processing layer, and the fifth processing layer, so that the third, fourth, and fifth processing layers respectively perform multi-dimensional resource awareness according to the received multi-dimensional resource awareness request and send resource awareness information to the first processing layer.
  • the second processing layer is further configured to deliver, to the third processing layer, the fourth processing layer, and the fifth processing layer, computing power perception metric configuration information for performing computing power measurement.
  • the computing power perception metric configuration information includes at least one of perception parameters, measurement parameters, measurement strategies, and transmission frequencies of multi-dimensional resource perception information.
  • the second processing layer is further configured to have at least one of the following functions:
  • the computing power resources are abstractly described and represented through computing power modeling to form node computing power information
  • the first processing layer includes a first sublayer and a second sublayer, wherein:
  • the first sublayer is used to perform at least one of computing network service announcement, computing network awareness scheduling, computing network topology discovery and computing network routing generation;
  • the second sublayer is used to perform at least one of computing network routing and forwarding, link computing network monitoring, computing network routing identification, and computing network routing addressing.
  • the embodiment of the present application further provides a service processing method, applied to a first computing power network element node, the method including:
  • the computing power routing information of the service request is determined according to the resource perception information.
  • the resource perception information includes at least one of the following:
  • Computing power service topology information
  • the method further includes:
  • the method further includes:
  • the method further includes: acquiring the service request through at least one of data packet carrying, control plane carrying, and management plane carrying.
  • An embodiment of the present application further provides a service processing method, applied to a second computing power network element node, including:
  • the resource perception information includes at least one of the following:
  • Computing power service topology information
  • the method further includes:
  • the method further includes:
  • the second computing power network element node generates the computing power resource topology information and/or the computing power service topology information according to the registration information of the computing power node.
  • the registration information includes at least one of a computing power identifier, computing power initialization configuration information, service deployment information, and function deployment information.
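A minimal sketch of how the second computing power network element node might assemble topology information from registration information, as described above; all names are illustrative assumptions, not from the application:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RegistrationInfo:
    # Mirrors the registration information items listed above;
    # field names are illustrative assumptions.
    power_id: str                                        # computing power identifier
    init_config: Dict[str, str] = field(default_factory=dict)
    service_deployment: Dict[str, str] = field(default_factory=dict)
    function_deployment: Dict[str, str] = field(default_factory=dict)

def build_resource_topology(registrations: List[RegistrationInfo]) -> Dict[str, dict]:
    """Assemble a minimal computing power resource topology keyed by identifier."""
    return {r.power_id: {"config": r.init_config, "services": r.service_deployment}
            for r in registrations}

topo = build_resource_topology([RegistrationInfo("node-a"), RegistrationInfo("node-b")])
print(sorted(topo))  # ['node-a', 'node-b']
```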
  • the method further includes:
  • the perceptual measurement configuration information includes at least one of computing power measurement parameters, measurement method information, and reporting policy information.
  • the computing power measurement information includes at least one of measurement results, location information, computing power resource information, and computing power service information.
  • the embodiment of the present application further provides a computing power network element node, the computing power network element node is a first computing power network element node, and includes a processor, wherein:
  • the processor is configured to, when receiving the service request, determine the computing power routing information of the service request according to the resource perception information.
  • the embodiment of the present application further provides a computing power network element node, the computing power network element node is a second computing power network element node, and includes a transceiver, wherein:
  • the transceiver is configured to send resource perception information to the first computing power network element node.
  • the embodiment of the present application further provides a service processing apparatus, which is applied to the first computing power network element node, wherein the apparatus includes:
  • the processing module is configured to, when a service request is received, determine the computing power routing information of the service request according to the resource perception information.
  • An embodiment of the present application further provides a service processing apparatus, which is applied to a second computing power network element node, wherein the apparatus includes:
  • An information sending module configured to send resource perception information to the first computing power network element node.
  • Embodiments of the present application further provide a network system for computing power processing, including: a processor, a memory, and a program stored on the memory and executable on the processor, where the program, when executed by the processor, implements the steps of any one of the above service processing methods.
  • An embodiment of the present application further provides a readable storage medium, wherein a program is stored on the readable storage medium, and when the program is executed by a processor, the steps in any one of the above service processing methods are implemented.
  • the first processing layer comprehensively considers user requirements, network resource status, and computing resource status, and schedules service applications to appropriate routing nodes according to the resource perception information, so that the network can schedule computing resources at different locations on demand and in real time, realizing the unified coordinated scheduling of multi-dimensional resources such as users, networks, and computing power.
  • FIG. 1 is a schematic structural diagram of a network system according to an embodiment of the application.
  • FIG. 2 is a schematic flowchart of a service processing method according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of a system architecture using the method described in the embodiment of the present application.
  • FIG. 4 is a schematic flowchart of one implementation of the method described in the embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a service processing method according to another embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a computing power network element node according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a computing power network element node according to another embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a service processing apparatus according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a service processing apparatus according to another embodiment of the present application.
  • the network system for computing power processing can also be called computing power-aware network, computing power network, computing-network integrated network or a new type of computing network fusion network.
  • the computing power processing network system includes:
  • the first processing layer (also known as the computing power routing layer),
  • the second processing layer (also known as the network management layer),
  • the third processing layer (also known as the computing power service layer),
  • the fourth processing layer (also referred to as the computing power resource layer), and,
  • the fifth processing layer (may also be referred to as the network resource layer).
  • Computing power network element node, which in this application refers to a network device with computing power.
  • the computing power network element node may further include a computing power routing node and a computing power node (the computing power node is also sometimes referred to as a computing node).
  • the computing power routing node which is located at the first processing layer of the computing power processing network system, is a network device that transmits the announcement of computing power resource information in the computing power processing network system.
  • Computing power nodes, located in the fourth processing layer and/or the fifth processing layer, refer to devices with computing power, which are used to provide computing power resources and are equivalent to the devices processing computing tasks in the network system for computing power processing, such as server equipment and all-in-one machines in a data center.
  • the computing power node in the embodiment of the present application may also be a computing power network element device, which is a network transmission device of the fifth processing layer, such as a router; the computing power network element device can also provide computing resources and computing services.
  • Computing resource status refers to the computing capability status and deployment location of computing power nodes deployed in the computing power processing network system.
  • the computing resource status can be indicated by the computing resource parameters.
  • the parameters of computing resources specifically include one or more of the number of service connections, CPU/GPU computing power, deployment form (physical, virtual), deployment location (such as the corresponding IP address), storage capacity, storage form and other parameters.
  • the computing power resource status can also be the computing power abstracted based on the computing power resources, which is used to reflect the currently available computing power, distribution location, and deployment form of each computing power node in the computing power processing network system.
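The computing resource parameters and their abstraction into available computing power, as described in the two bullets above, could be sketched like this; field names, units, and the aggregation rule are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ComputeResourceStatus:
    # Parameters named in the text; field names and units are illustrative.
    service_connections: int
    cpu_tflops: float
    gpu_tflops: float
    deployment_form: str   # "physical" or "virtual"
    location: str          # e.g. the node's IP address
    storage_gb: float

    def abstract_capacity(self) -> float:
        """Abstract the node's computing power into one comparable figure
        (a deliberately naive metric for illustration)."""
        return self.cpu_tflops + self.gpu_tflops

node = ComputeResourceStatus(12, 1.5, 8.0, "physical", "10.0.0.2", 512.0)
print(node.abstract_capacity())  # 9.5
```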
  • Network transmission resources refer to the network resources that transmit information in the network system for computing power processing, which can include various forwarding devices (such as routers and switches), transmission links, and transmission capabilities (such as bandwidth, delay, and delay jitter).
  • the embodiment of the present application provides a network system for computing power processing, as shown in FIG. 1 .
  • the network system for computing power processing described in the embodiments of the present application includes a first processing layer (also referred to as a computing power routing layer), in which:
  • the first processing layer is used for acquiring resource perception information, and determining computing power routing information of the service request according to the resource perception information.
  • the resource perception information includes at least one of the following:
  • Computing power service topology information
  • the demand awareness information, application awareness information, computing power resource awareness information, and network resource awareness information in the resource perception information are resource information that needs to be obtained through multi-dimensional perception; that is, they include at least the dynamic resource information used to determine the computing power routing information. Of course, they are not limited to dynamic resource information and may also include static resource information.
  • the resource perception information can reflect multi-dimensional resources covering at least two of user requirements, network resources, computing power, services, and storage.
  • the computing power resource topology information and the computing power service topology information are static resource information used for determining computing power routing information.
  • the first processing layer comprehensively considers user requirements, network resource status, and computing resource status, and schedules service applications to appropriate routing nodes according to the resource perception information, so that the network can schedule computing resources at different locations on demand and in real time, realizing unified coordinated scheduling for users and networks.
  • the dynamic resource information, including demand awareness information, application awareness information, computing resource awareness information, and network resource awareness information obtained through multi-dimensional resource perception measurement, together with the pre-obtained computing power resource topology information and/or computing power service topology information, is used to schedule service applications to appropriate routing nodes, so as to ensure that the network can schedule computing resources at different locations on demand and in real time to achieve optimal resource utilization.
  • the first processing layer can obtain the computing resource topology information and/or computing service topology information of the local area through a plurality of computing power routing nodes, forming a distributed Service Architecture.
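The scheduling decision described above, combining perceived computing power with perceived network state to pick a routing node, can be sketched as follows; the feasibility-then-lowest-delay policy is an assumption for illustration, not the application's algorithm:

```python
from typing import List, Optional, Tuple

Candidate = Tuple[str, float, float]  # (node_id, available_tflops, delay_ms)

def select_route(candidates: List[Candidate],
                 demand_tflops: float, max_delay_ms: float) -> Optional[str]:
    """Pick a routing node whose computing power meets the demand and whose
    network delay meets the bound; among feasible nodes, prefer lowest delay.
    The candidate tuples stand in for the fused static topology information
    and dynamic perception information described in the text."""
    feasible = [c for c in candidates
                if c[1] >= demand_tflops and c[2] <= max_delay_ms]
    if not feasible:
        return None
    return min(feasible, key=lambda c: c[2])[0]

nodes = [("edge-1", 4.0, 3.0), ("edge-2", 16.0, 8.0), ("core-1", 64.0, 25.0)]
print(select_route(nodes, demand_tflops=10.0, max_delay_ms=20.0))  # edge-2
```

Here "edge-1" is rejected for insufficient computing power and "core-1" for excessive delay, leaving "edge-2".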
  • the network system further includes:
  • the second processing layer (also referred to as the computing network management layer) is used to manage computing resources, network resources, and services, generate computing power resource topology information and/or computing power service topology information, and send the computing power resource topology information and/or computing power service topology information to the first processing layer.
  • the second processing layer can provide global computing resource topology information and/or computing service topology information to the first processing layer, so as to realize unified coordinated scheduling of multi-dimensional resources such as network, storage, and computing power.
  • the second processing layer completes the management of computing resources, network resources and services.
  • the second processing layer can complete the operation and maintenance of computing power resources, Operation Administration and Maintenance (OAM), security management, computing power operation, and computing power service orchestration, and generate the computing power resource topology information and/or computing power service topology information.
  • the first processing layer obtains the computing power resource topology information and/or computing power service topology information, and comprehensively considers user needs, network resource status, and computing resource status, so as to schedule service applications to appropriate routing nodes according to the resource perception information and the acquired computing power resource topology information and/or computing power service topology information, ensuring that the network can schedule computing resources at different locations on demand and in real time to achieve optimal resource utilization.
  • the network system further includes: a third processing layer, a fourth processing layer, and a fifth processing layer;
  • the third processing layer (computing service layer) is used to provide status information of the initial deployment of the computing service
  • the fourth processing layer (computing resource layer) is used to provide computing resources, and to measure computing power and services according to the received computing power template, service measurement parameters and measurement strategies;
  • the fourth processing layer is used to provide heterogeneous computing resources, that is, to provide computing resources of various types of devices;
  • the fifth processing layer (network resource layer) is used to provide network connection by using network infrastructure.
  • the first processing layer obtains the resource-aware information in at least one of the following ways:
  • the network resource awareness information of the resource awareness information is acquired through the fifth processing layer (network resource layer).
  • the network resource perception information includes at least one of bandwidth, delay, and delay jitter.
  • the network system is logically divided into five functional modules: the first processing layer, the second processing layer, the third processing layer, the fourth processing layer, and the fifth processing layer, in order to realize the perception, interconnection, and cooperative scheduling of ubiquitous computing and services. Specifically:
  • the third processing layer (computing service layer): It is also used to provide status information of the initial deployment of the computing service.
  • the computing power service layer supports application deconstruction into atomized functional components, which are uniformly scheduled by API Gateway.
  • the computing power service layer is deployed on the computing power resource layer to carry various services and applications of ubiquitous computing, and can pass the user's service-level agreement (SLA) request, including parameters such as the computing power request, to the network.
  • the computing power service layer can also receive data from end users, and can implement services such as service decomposition and service scheduling through the API gateway.
  • the second processing layer (computing network management layer): completes computing power operation and computing power service orchestration, and completes the management of computing power resources and network resources, including the perception, measurement, and OAM management of computing power resources, computing network operation, and the management of the computing power routing layer and the network resource layer.
  • the computing network management layer first abstractly describes and represents computing power resources through computing power modeling to form node computing power information, shielding differences in the underlying hardware devices; the computing power information can then be transmitted to the corresponding computing power network element nodes; in addition, performance monitoring and management of computing power resources and network resources are also required, and computing power operation and network operation are implemented.
  • the fourth processing layer provides heterogeneous computing resources, and measures computing power and services according to the received computing power templates, service measurement parameters and measurement strategies.
  • the fourth processing layer utilizes the existing computing infrastructure to provide computing resources.
  • the computing infrastructure includes combinations of computing capabilities ranging from single-core CPUs to multi-core CPUs to CPU+GPU+FPGA; in order to meet the diversified computing requirements of the edge computing field and to face different applications, it provides functions such as computing power models, computing power APIs, and computing network resource identification on top of the physical computing resources.
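A hedged sketch of the "computing power model" idea: abstracting heterogeneous units (CPU cores, GPU, FPGA) into a single node computing power figure. The conversion weight is invented for illustration and is not from the application:

```python
def model_node_power(cpu_cores: int, gpu_tflops: float, fpga_tflops: float,
                     cpu_tflops_per_core: float = 0.05) -> float:
    """Fold heterogeneous compute units into one TFLOPS-like figure.
    The per-core CPU weight is an assumed illustrative constant."""
    return cpu_cores * cpu_tflops_per_core + gpu_tflops + fpga_tflops

# e.g. a 32-core CPU node with a 10 TFLOPS GPU and a 4 TFLOPS FPGA
print(model_node_power(32, 10.0, 4.0))
```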
  • the first processing layer (computing power routing layer): the first processing layer is the core of the network system for computing power processing described in the embodiment of the application; it is used for the discovery of computing network resources based on their abstraction and, by comprehensively considering network conditions and the status of computing resources, flexibly schedules services to different computing resource nodes on demand.
  • the first processing layer includes a first sublayer (control plane) and a second sublayer (forwarding plane).
  • the first sublayer is used to perform at least one of computing network service announcement, computing network awareness scheduling, computing network topology discovery and computing network routing generation;
  • the second sublayer is used to perform at least one of computing network routing and forwarding, link computing network monitoring, computing network routing identification, and computing network routing addressing.
  • the fifth processing layer utilizes the existing network infrastructure to provide ubiquitous network connections for every corner of the network.
  • the network infrastructure includes the access network, the metropolitan area network, the backbone network, etc.
  • the above five processing layers can be used to realize the perception, control and scheduling of computing network resources.
  • the fourth processing layer (computing resource layer) and the fifth processing layer (network resource layer) constitute the infrastructure layer of the network system; the second processing layer (computing network management layer) and the first processing layer (computing power routing layer) constitute the two core functional modules of the computing power perception function system, realizing multi-dimensional resource perception, and users and applications access the network through the computing power routing layer.
  • in addition to generating computing power resource topology information and/or computing power service topology information, the second processing layer (computing network management layer) needs to realize the configuration and management of the methods for sensing services, networks, and computing resources.
  • the second processing layer is used to:
  • send a resource awareness request to the third processing layer, the fourth processing layer, and the fifth processing layer, so that each of them performs multi-dimensional resource awareness according to the received resource awareness request and sends resource awareness information to the first processing layer; the resource awareness request may include a resource awareness configuration, and the third, fourth, and fifth processing layers can then perform multi-dimensional resource awareness according to the resource awareness configuration in the resource awareness request.
  • the resource-aware request may not include the resource-aware configuration, and the resource-aware configuration may be pre-configured to the third processing layer, the fourth processing layer, and the fifth processing layer, respectively. In this way, according to the received resource sensing request, the third processing layer, the fourth processing layer and the fifth processing layer can respond respectively, and perform multi-dimensional resource sensing according to the pre-obtained resource sensing configuration;
  • the computing power perception metric configuration information includes at least one of a perception parameter, a measurement parameter, a measurement strategy, and a transmission frequency of the multi-dimensional resource perception information.
  • the resource-aware configurations delivered by the second processing layer to the third processing layer, the fourth processing layer, and the fifth processing layer may be the same or different.
  • the second processing layer delivers the resource awareness configuration to the third processing layer, the fourth processing layer, and the fifth processing layer respectively, so that each of them performs perception and measurement according to the received resource awareness configuration.
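The perception metric configuration information, with its four items (perception parameters, measurement parameters, measurement strategy, transmission frequency) and the possibility of same or different configurations per layer, could be sketched as follows; all names and values are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PerceptionMetricConfig:
    # The four items listed in the text; names are illustrative assumptions.
    perception_params: tuple   # what to perceive
    measurement_params: tuple  # what to measure
    strategy: str              # measurement strategy, e.g. "periodic" / "on-change"
    report_interval_s: float   # transmission frequency of perception information

# The management layer may deliver the same or different configurations
# to the service, resource, and network layers:
configs = {
    "service_layer":  PerceptionMetricConfig(("deploy_state",), ("latency",), "on-change", 0.0),
    "resource_layer": PerceptionMetricConfig(("cpu", "gpu"), ("load",), "periodic", 5.0),
    "network_layer":  PerceptionMetricConfig(("link",), ("bw", "delay", "jitter"), "periodic", 1.0),
}
print(len(set(configs.values())))  # 3: all three configurations differ
```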
  • the third processing layer, the fourth processing layer, and the fifth processing layer respectively perform computing power measurement and report the measured computing power resource and/or service information to the computing network management layer, so that the computing network management layer generates a computing power resource topology and/or computing power service topology according to the received computing power resource and/or service information.
  • the second processing layer configures, for the third processing layer, the fourth processing layer, and the fifth processing layer, the perception configuration parameters used for sensing multi-dimensional resources, so that the second processing layer can adaptively direct the third, fourth, and fifth processing layers to perform sensing and measurement according to the perception parameters, measurement parameters, measurement strategies, and transmission frequency of the multi-dimensional resource perception information.
  • the second processing layer forms the initial topology information of computing power, services, and the network according to at least one of the measurement information measured and reported by the third processing layer, the measurement information measured and reported by the fourth processing layer, and the measurement information measured and reported by the fifth processing layer; in an optional embodiment of the present application, the initial topology information of computing power, services, and the network may also be updated according to at least one of the above measurement information.
  • the measured and reported measurement information includes computing resources and/or service information.
  • the third processing layer, the fourth processing layer, and the fifth processing layer report measurement information when it has changed, and do not need to report when it has not changed.
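The change-triggered reporting behavior just described (report only when the measurement information has changed) can be sketched minimally; the class and its interface are illustrative assumptions:

```python
class ChangeTriggeredReporter:
    """Reports measurement information only when it differs from the value
    last reported, as described above; a minimal illustrative sketch."""
    def __init__(self):
        self._last = None
        self.sent = []  # stands in for messages sent to the management layer

    def measure_and_report(self, measurement: dict) -> None:
        if measurement != self._last:  # report only on change
            self.sent.append(measurement)
            self._last = measurement

r = ChangeTriggeredReporter()
for m in [{"cpu": 0.4}, {"cpu": 0.4}, {"cpu": 0.7}]:
    r.measure_and_report(m)
print(len(r.sent))  # 2: the initial value and one change
```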
  • the first processing layer provides distributed control of the multi-dimensionally perceived resource status and the transmission of perception information, including the transmission of multi-dimensional resource information such as services, applications, and network resources, and is responsible for synchronizing the real-time status of multi-dimensional resources to the respective ingress computing power routing nodes through the computing power routing nodes.
  • the computing power routing node realizes the coordinated scheduling of computing power and the network according to the obtained computing power demand perception of users or services and the perception of multi-dimensional resources on the network side.
  • the network layer or the computing power routing layer has multi-dimensional resource perception capabilities, including but not limited to: demand perception capability, application perception capability, computing power resource perception capability, and network resource perception capability; that is, it obtains the corresponding multi-dimensional resource perception information,
  • including but not limited to: demand perception information, application perception information, computing power resource perception information, and network resource perception information.
  • in addition, computing power resource topology information and/or computing power service topology information, which belong to static resource information, can also be obtained.
  • demand perception capability: obtain the user's diverse demands such as computing power, network, and security through the data plane link layer, the IP layer, or the management plane (user service contracts);
  • application perception capability: obtain the state of service deployment and initialization from the computing power service layer (third processing layer) or the scheduling center of a data center through an API interface;
  • computing power resource perception capability: obtain computing power resource capabilities directly from the computing power resource layer (fourth processing layer) through technologies such as probes, or obtain the current computing power resource situation directly from a data center management platform or edge computing management platform, such as the current number of connections and CPU and GPU load;
  • network resource perception capability: obtain the quality status of network links through in-band OAM, out-of-band OAM, IFit, and the like, including but not limited to bandwidth, delay, and jitter.
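As a concrete illustration of the four perception capabilities above, the sketch below models one snapshot of the multi-dimensional resource perception information a routing node might hold. All class names, field names, and units are assumptions introduced for illustration; they are not defined by the application.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NetworkPerception:
    # Link quality obtained via in-band OAM, out-of-band OAM, or IFit
    bandwidth_mbps: float
    delay_ms: float
    jitter_ms: float

@dataclass
class ComputePerception:
    # Obtained from the compute resource layer or a DC/edge management platform
    connections: int
    cpu_load: float   # fraction in [0, 1]
    gpu_load: float   # fraction in [0, 1]

@dataclass
class ResourcePerceptionInfo:
    demand: dict = field(default_factory=dict)       # user demands (compute, network, security)
    application: dict = field(default_factory=dict)  # service deployment/initialization state
    compute: Optional[ComputePerception] = None
    network: Optional[NetworkPerception] = None

info = ResourcePerceptionInfo(
    demand={"min_bandwidth_mbps": 100, "max_delay_ms": 10},
    compute=ComputePerception(connections=42, cpu_load=0.35, gpu_load=0.10),
    network=NetworkPerception(bandwidth_mbps=950.0, delay_ms=3.2, jitter_ms=0.4),
)
```

The point of the grouping is only that the four perception sources feed one combined view that routing decisions can consume.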
  • the computing power resource layer completes the abstraction, modeling, and measurement of computing power and notifies the computing power routing layer and the computing power management layer.
  • the computing power management layer completes operation and maintenance of computing power resources, OAM management, and security management.
  • the computing power routing nodes comprehensively consider user demands, network resource conditions, and computing resource conditions, ensuring that the network can schedule computing resources at different locations on demand and in real time, so as to improve network and computing resource utilization and further enhance user experience.
  • Another aspect of the embodiments of the present application further provides a service processing method, which is applied to the first computing power network element node. As shown in FIG. 2 , the method includes:
  • S210: upon receiving a service request, determine computing power routing information for the service request according to resource perception information.
  • the resource perception information includes at least one of the following:
  • demand perception information; application perception information; computing power resource perception information; network resource perception information; computing power resource topology information; computing power service topology information.
  • the first computing power network element node may determine the computing power routing information for the service request according to the resource perception information obtained through multi-dimensional resource perception measurement and the pre-obtained computing power resource topology information and/or computing power service topology information.
  • the method further includes:
  • the first computing power network element node may be, but is not limited to, a computing power routing node of the computing power routing layer.
  • the second computing power network element node may be, but is not limited to, a computing network management center of the computing network management layer.
  • a system using the service processing method includes a computing network management center, multiple computing power routing nodes, and multiple computing power nodes.
  • the computing network management center may be located at the computing network management layer
  • the computing power routing node may be located at the computing power routing layer
  • the computing power node may be located at the computing power resource layer and/or the network resource layer.
  • the computing network management center is connected to each computing power routing node and to each computing power node.
  • the computing power nodes include a first computing power node MEC1, a second computing power node MEC2, a third computing power node MEC3, and so on.
  • the computing power routing nodes include a first computing power routing node PE1, a second computing power routing node PE2, a third computing power routing node PE3, and so on, where the user terminal UE accesses the network system through the first computing power routing node PE1.
  • the process of using the service processing method includes steps:
  • the computing network management center sends a resource perception request for multi-dimensional resource perception to each computing power node.
  • the resource perception request includes a resource perception configuration, so that each computing power node performs computing power measurement and then reports the measurement results to the computing network management center according to the sensing parameters, measurement parameters, measurement strategies, and sending frequency configured in the resource perception configuration,
  • enabling the computing network management center to generate computing power resource topology information and/or computing power service topology information from the received computing power measurement results, i.e., the computing power resource and service information.
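The step above folds per-node measurement reports into the two topologies. A minimal sketch of that fold, with invented field names and report shapes, might look like:

```python
# Fold per-node reports into a resource topology (node -> latest state)
# and a service topology (service -> set of nodes deploying it).
# Report fields are assumptions for illustration.

def build_topologies(reports):
    resource_topology = {}   # node id -> latest resource state
    service_topology = {}    # service name -> set of node ids deploying it
    for r in reports:
        resource_topology[r["node"]] = {
            "cpu_load": r["cpu_load"],
            "gpu_load": r["gpu_load"],
        }
        for svc in r["services"]:
            service_topology.setdefault(svc, set()).add(r["node"])
    return resource_topology, service_topology

reports = [
    {"node": "MEC1", "cpu_load": 0.3, "gpu_load": 0.1, "services": ["render"]},
    {"node": "MEC2", "cpu_load": 0.6, "gpu_load": 0.0, "services": ["render", "ocr"]},
]
res_topo, svc_topo = build_topologies(reports)
```

The service topology is what lets an ingress router know which candidate nodes to probe for a requested service; the resource topology carries the last-reported state of each node.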
  • the computing network management center sends computing power resource topology information and/or computing power service topology information to the first computing power routing node PE1 accessed by the user, that is, the ingress router.
  • the user sends a user service request to the first computing power routing node PE1, where the user service request may be sent in at least one of the following ways: carried in a data packet (e.g., using an extension field of the IP header), carried on the control plane (e.g., via link layer access PPOE or IPOM protocol extensions), and carried in management plane data (e.g., a network management service contract).
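The text says a service request may ride in an extension field of the IP header, but does not specify a layout. Below is one purely hypothetical TLV-style encoding of a compute demand, to make the "carried in a data packet" option concrete; the type code and field layout are invented for illustration.

```python
import struct

REQ_TYPE_COMPUTE = 0x01  # hypothetical TLV type for a compute demand

def encode_request(min_cpu_cores, max_delay_ms):
    """Pack a compute demand as type(1B) | length(1B) | value."""
    value = struct.pack("!HH", min_cpu_cores, max_delay_ms)
    return struct.pack("!BB", REQ_TYPE_COMPUTE, len(value)) + value

def decode_request(blob):
    """Inverse of encode_request; validates type and length."""
    t, length = struct.unpack_from("!BB", blob, 0)
    if t != REQ_TYPE_COMPUTE or length != 4:
        raise ValueError("unexpected TLV")
    cores, delay = struct.unpack_from("!HH", blob, 2)
    return {"min_cpu_cores": cores, "max_delay_ms": delay}

blob = encode_request(4, 20)
decoded = decode_request(blob)
```

A TLV shape is chosen only because it extends easily: further demand dimensions (bandwidth, security level) could be appended as additional TLVs without breaking older parsers.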
  • based on the received computing power resource topology information and/or computing power service topology information, the first computing power routing node PE1 sends request information to the multiple computing power nodes where the requested user service is deployed (e.g., including the first computing power node MEC1 and the second computing power node MEC2), where the request information carries OAM information supporting computing power and network measurement, and the states of computing power and network resources are measured at the different computing power nodes deploying the requested service.
  • the computing power nodes corresponding to the requested service perform resource perception and measurement according to the multi-dimensional resource perception template, and return the measurement information to the first computing power routing node PE1 through the corresponding computing power routing nodes;
  • according to the returned measurement information, e.g., including the real-time states of computing power resources, network resources, and service status, and taking the user service request into account, the first computing power routing node PE1 selects a suitable computing power node and an optimal path for the service requested by the user.
  • the present application further provides the service processing method according to another embodiment, which is applied to the second computing power network element node. As shown in FIG. 5 , the method includes:
  • S510: send resource perception information to the first computing power network element node.
  • the resource perception information includes at least one of the following:
  • demand perception information; application perception information; computing power resource perception information; network resource perception information; computing power resource topology information; computing power service topology information.
  • the first computing power network element node may be, but is not limited to, a computing power routing node of the computing power routing layer.
  • the second computing power network element node may be, but is not limited to, a computing network management center of the computing network management layer.
  • the method further includes:
  • the method further includes:
  • the second computing power network element node generates the computing power resource topology information and/or the computing power service topology information according to the registration information of the computing power node.
  • the registration information includes at least one of a computing power identifier, computing power initialization configuration information, service deployment information, feature deployment information, and function deployment information.
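The registration fields listed above can be pictured as a keyed record held by the management-side node. The sketch below is a minimal, hypothetical shape; the key names mirror the list but are not prescribed by the application.

```python
REQUIRED = ("computing_power_id",)  # assume the identifier is always present

def register_node(registry, record):
    """Store a node registration keyed by its computing power identifier."""
    for key in REQUIRED:
        if key not in record:
            raise ValueError(f"missing {key}")
    registry[record["computing_power_id"]] = record
    return registry

registry = {}
register_node(registry, {
    "computing_power_id": "MEC1",
    "init_config": {"cpu_cores": 16},     # computing power initialization configuration
    "service_deployment": ["render"],     # services deployed on the node
    "feature_deployment": [],             # feature-level deployment information
    "function_deployment": [],            # function-level deployment information
})
```

From a set of such records the second network element node can derive both topologies, since each record names the node and the services it deploys.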
  • the method further includes:
  • the computing power measurement information includes at least one of measurement results, location information, computing power resource information, and computing power service information.
  • the computing network management center sends a multi-dimensional resource perception template and perception configuration parameters to each computing power node, so that each computing power node performs computing power measurement according to the perception configuration parameters sent by the computing network management center, and then
  • reports the measurement results to the computing network management center according to the sensing parameters, measurement parameters, measurement strategies, and sending frequency configured by the perception configuration parameters, enabling the computing network management center to generate computing power resource topology information and/or computing power service topology information from the received computing power measurement results, i.e., the computing power resource and service information;
  • and each computing power node performs multi-dimensional resource perception according to the multi-dimensional resource perception template and sends the resource perception information to the corresponding computing power routing node.
  • with the service processing method described in the embodiments of the present application, through coordinated perception across the management, control, and data planes, perception and measurement covering user demands, application services, network resources, and computing power resources are realized, so as to comprehensively consider user demands, network resource conditions, and computing resource conditions; in this way, according to the resource perception information and the obtained computing power resource topology information and/or computing power service topology information, service applications are scheduled to appropriate routing nodes, ensuring that the network can schedule computing resources at different locations on demand and in real time, achieving optimal resource utilization and effectively reducing the signaling overhead of network resource states in the network.
  • the embodiment of the present application also provides a computing power network element node, the computing power network element node is the first computing power network element node, as shown in FIG. 6 , including a processor 610 and a transceiver 620, wherein:
  • the processor 610 is configured to, when receiving the service request, determine the computing power routing information of the service request according to the resource perception information.
  • the resource perception information includes at least one of the following:
  • demand perception information; application perception information; computing power resource perception information; network resource perception information; computing power resource topology information; computing power service topology information.
  • the transceiver 620 is configured to: obtain the computing power resource topology information and/or the computing power service topology information sent by the second computing power network element node.
  • the transceiver 620 is further configured to: obtain the service request in at least one of the following ways: carried in data packets, carried on the control plane, and carried in management plane data.
  • An embodiment of the present application further provides a computing power network element node, where the computing power network element node is a second computing power network element node, as shown in FIG. 7 , including a transceiver 710 and a processor 720, wherein:
  • the transceiver 710 is configured to send resource awareness information to the first computing power network element node.
  • the resource perception information includes at least one of the following:
  • demand perception information; application perception information; computing power resource perception information; network resource perception information; computing power resource topology information; computing power service topology information.
  • the processor 720 is configured to:
  • the transceiver 710 is further used for:
  • the processor 720 generates the computing power resource topology information and/or the computing power service topology information according to the registration information of the computing power node.
  • the registration information includes at least one of a computing power identifier, computing power initialization configuration information, service deployment information, feature deployment information, and function deployment information.
  • the transceiver 710 is further used for:
  • the sensing measurement configuration information includes at least one of computing power measurement parameters, measurement method information, and reporting policy information.
  • the computing power measurement information includes at least one of measurement results, location information, computing power resource information, and computing power service information.
  • the embodiment of the present application further provides a service processing apparatus, which is applied to the first computing power network element node.
  • the apparatus includes:
  • the processing module 810 is configured to determine the computing power routing information of the service request according to the resource perception information when the service request is received.
  • the resource perception information includes at least one of the following:
  • demand perception information; application perception information; computing power resource perception information; network resource perception information; computing power resource topology information; computing power service topology information.
  • in an optional embodiment of the service processing apparatus, the apparatus further includes:
  • the obtaining module 820 is configured to obtain the computing power resource topology information and/or the computing power service topology information sent by the second computing power network element node.
  • the processing module 810 is further configured to:
  • the obtaining module 820 is further configured to obtain the service request by at least one of data packet carrying, control plane carrying and management plane data.
  • An embodiment of the present application further provides a service processing apparatus, which is applied to a second computing power network element node.
  • the apparatus includes:
  • the information sending module 910 is configured to send resource awareness information to the first computing power network element node.
  • the resource perception information includes at least one of the following:
  • demand perception information; application perception information; computing power resource perception information; network resource perception information; computing power resource topology information; computing power service topology information.
  • in an optional embodiment of the service processing apparatus, the apparatus further includes:
  • a generating module 920 configured to generate the computing power resource topology information and/or the computing power service topology information.
  • the information sending module 910 is further configured to:
  • the generating module 920 generates the computing power resource topology information and/or the computing power service topology information according to the registration information of the computing power node.
  • the registration information includes at least one of a computing power identifier, computing power initialization configuration information, service deployment information, feature deployment information, and function deployment information.
  • the information sending module 910 is further configured to:
  • the sensing measurement configuration information includes at least one of computing power measurement parameters, measurement method information, and reporting policy information.
  • the computing power measurement information includes at least one of measurement results, location information, computing power resource information, and computing power service information.
  • Embodiments of the present application further provide a network system for computing power processing, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, where the processor, when executing the program, implements the service processing method described above.
  • the disclosed method and apparatus may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical functional division; in actual implementations there may be other ways of division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may be physically included separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional units.
  • the above-mentioned integrated units implemented in the form of software functional units can be stored in a computer-readable storage medium.
  • the above-mentioned software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, a network device, or the like) to execute part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

The present application provides a network system for computing power processing, a service processing method, and a computing power network element node. The network system includes a first processing layer configured to obtain resource perception information and determine, according to the resource perception information, computing power routing information for a service request. This approach achieves unified, coordinated scheduling of multi-dimensional resources such as users, network, and computing power, ensuring that the network can schedule computing resources at different locations on demand and in real time.

Description

Network system for computing power processing, service processing method, and computing power network element node
Cross-reference to related applications
This application is based on and claims priority to Chinese patent application No. 202110230865.7, filed on March 2, 2021, the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the field of data communication networks, and in particular to a network system for computing power processing, a service processing method, and a computing power network element node.
Background
Under the general trend of cloud computing and edge computing, computing power of many different scales will be distributed at varying distances from users, providing all kinds of personalized services through global networks: from tens of billions of intelligent terminals, to around a billion home gateways worldwide, to the thousands of compute-capable edge clouds that future edge computing will bring to each city, and the dozens of large cloud data centers (DCs) in each country. Massive ubiquitous computing power thus accesses the Internet from everywhere, forming a development trend of deep convergence of computing and networking.
With computing resources integrated into every corner of the network, every network node can become a resource provider, and a user's request can be satisfied by invoking the nearest node's resources rather than being limited to one specific node, avoiding wasted connections and network scheduling resources. Traditional networks, however, only provide pipes for data communication: they are connection-based and constrained by fixed network addressing mechanisms, and often fail to meet ever higher and stricter Quality of Experience (QoE) requirements. In addition, with the development of microservices, the traditional client-server model is being decomposed: server-side applications are broken into functional components deployed on cloud platforms, scheduled uniformly by an API gateway, and instantiated dynamically on demand; the business logic in the server moves to the client side, and the client only needs to care about the computing function itself rather than servers, virtual machines, containers, or other computing resources in order to realize service functions.
A new-generation architecture for future networks needs to jointly consider unified, coordinated scheduling of multi-dimensional resources such as users, network, and computing power, so that massive applications can invoke computing resources in different places on demand and in real time. At present, traditional network architectures can hardly achieve such unified coordinated scheduling of user, network, and computing power resources, and thus cannot guarantee that the network schedules computing resources at different locations on demand and in real time.
Summary
The purpose of the technical solutions of the present application is to provide a network system for computing power processing, a service processing method, and a computing power network element node, so as to achieve unified, coordinated scheduling of multi-dimensional resources such as users, network, and computing power, ensuring that the network can schedule computing resources at different locations on demand and in real time.
The present application provides a network system for computing power processing, including:
a first processing layer, configured to obtain resource perception information and determine, according to the resource perception information, computing power routing information for a service request.
In an optional embodiment of the present application, in the network system, the resource perception information includes at least one of the following:
demand perception information;
application perception information;
computing power resource perception information;
network resource perception information;
computing power resource topology information;
computing power service topology information.
In an optional embodiment of the present application, the network system further includes:
a second processing layer, configured to generate computing power resource topology information and/or computing power service topology information and send the computing power resource topology information and/or the computing power service topology information to the first processing layer.
In an optional embodiment of the present application, the network system further includes a third processing layer, a fourth processing layer, and a fifth processing layer, wherein:
the third processing layer is configured to provide state information about the initial deployment of computing power services;
the fourth processing layer is configured to provide computing power resources and to measure computing power and services according to received computing power templates, service measurement parameters, and measurement strategies;
the fifth processing layer is configured to provide network connectivity using the network infrastructure.
In an optional embodiment of the present application, the first processing layer obtains the resource perception information in at least one of the following ways:
obtaining the demand perception information of the resource perception information through at least one of the data plane link layer, the IP layer, and the management plane;
obtaining the application perception information of the resource perception information through the third processing layer and/or an API interface;
obtaining the computing power resource perception information of the resource perception information through at least one of the fourth processing layer, a data center management platform, and an edge computing management platform;
obtaining the network resource perception information of the resource perception information through the fifth processing layer.
In an optional embodiment of the present application, the network resource perception information includes at least one of bandwidth, delay, and delay jitter.
In an optional embodiment of the present application, the second processing layer is further configured to send resource perception requests to the third processing layer, the fourth processing layer, and the fifth processing layer respectively, so that each of them performs multi-dimensional resource perception according to the received request and sends resource perception information to the first processing layer.
In an optional embodiment of the present application, the second processing layer is further configured to configure the computing power perception metric configuration information used when the third, fourth, and fifth processing layers each perform computing power measurement.
In an optional embodiment of the present application, the computing power perception metric configuration information includes at least one of sensing parameters, measurement parameters, measurement strategies, and the sending frequency of multi-dimensional resource perception information.
In an optional embodiment of the present application, the second processing layer further has at least one of the following functions:
completing computing power operation and computing power service orchestration;
abstractly describing and representing computing power resources through computing power modeling, forming node computing power information;
sending the node computing power information to computing power network element nodes.
In an optional embodiment of the present application, the first processing layer includes a first sublayer and a second sublayer, wherein:
the first sublayer is configured to perform at least one of computing network service announcement, computing network aware scheduling, computing network topology discovery, and computing network route generation;
the second sublayer is configured to perform at least one of computing network route forwarding, link computing network monitoring, computing network route identification, and computing network route addressing.
An embodiment of the present application further provides a service processing method, applied to a first computing power network element node, the method including:
upon receiving a service request, determining computing power routing information for the service request according to resource perception information.
In an optional embodiment of the present application, in the service processing method, the resource perception information includes at least one of the following:
demand perception information;
application perception information;
computing power resource perception information;
network resource perception information;
computing power resource topology information;
computing power service topology information.
In an optional embodiment of the present application, the method further includes:
obtaining the computing power resource topology information and/or the computing power service topology information sent by a second computing power network element node.
In an optional embodiment of the present application, the method further includes:
upon receiving a service request, performing measurement, according to carried OAM information, at multiple computing power nodes where the service requested by the service request is deployed, to obtain the resource perception information.
In an optional embodiment of the present application, the method further includes: obtaining the service request in at least one of the following ways: carried in data packets, carried on the control plane, and carried in management plane data.
In an optional embodiment of the present application, a service processing method is applied to a second computing power network element node, the method including:
sending resource perception information to a first computing power network element node.
In an optional embodiment of the present application, the resource perception information includes at least one of the following:
demand perception information;
application perception information;
computing power resource perception information;
network resource perception information;
computing power resource topology information;
computing power service topology information.
In an optional embodiment of the present application, the method further includes:
generating the computing power resource topology information and/or the computing power service topology information.
In an optional embodiment of the present application, the method further includes:
sending a resource perception request and computing power perception metric configuration information to each of multiple computing power nodes;
obtaining registration information of the multiple computing power nodes;
wherein the second computing power network element node generates the computing power resource topology information and/or the computing power service topology information according to the registration information of the computing power nodes.
In an optional embodiment of the present application, the registration information includes at least one of a computing power identifier, computing power initialization configuration information, service deployment information, feature deployment information, and function deployment information.
In an optional embodiment of the present application, the method further includes:
sending perception measurement configuration information for computing power and services to each of the multiple computing power nodes;
obtaining the computing power measurement information reported by the computing power nodes after they perform measurement according to the perception measurement configuration information.
In an optional embodiment of the present application, the perception measurement configuration information includes at least one of computing power measurement parameters, measurement method information, and reporting policy information.
In an optional embodiment of the present application, the computing power measurement information includes at least one of measurement results, location information, computing power resource information, and computing power service information.
An embodiment of the present application further provides a computing power network element node, which is a first computing power network element node and includes a processor, wherein:
the processor is configured to, upon receiving a service request, determine computing power routing information for the service request according to resource perception information.
An embodiment of the present application further provides a computing power network element node, which is a second computing power network element node and includes a transceiver, wherein:
the transceiver is configured to send resource perception information to a first computing power network element node.
An embodiment of the present application further provides a service processing apparatus, applied to a first computing power network element node, the apparatus including:
a processing module, configured to, upon receiving a service request, determine computing power routing information for the service request according to resource perception information.
An embodiment of the present application further provides a service processing apparatus, applied to a second computing power network element node, the apparatus including:
an information sending module, configured to send resource perception information to a first computing power network element node.
An embodiment of the present application further provides a network system for computing power processing, including a processor, a memory, and a program stored on the memory and runnable on the processor, where the program, when executed by the processor, implements the service processing method described in any of the above.
An embodiment of the present application further provides a readable storage medium storing a program, where the program, when executed by a processor, implements the steps of the service processing method described in any of the above.
At least one of the above technical solutions of the present application has the following beneficial effects:
With the network system of these embodiments, the first processing layer comprehensively considers user demands, network resource conditions, and computing resource conditions, and schedules service applications to appropriate routing nodes according to the resource perception information, thereby achieving unified, coordinated scheduling of multi-dimensional resources such as users, network, and computing power and ensuring that the network can schedule computing resources at different locations on demand and in real time.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its principles.
Figure 1 is a schematic structural diagram of the network system according to an embodiment of the present application;
Figure 2 is a schematic flowchart of the service processing method according to one embodiment of the present application;
Figure 3 is a schematic diagram of a system architecture using the method of an embodiment of the present application;
Figure 4 is a schematic flowchart of one implementation using the method of an embodiment of the present application;
Figure 5 is a schematic flowchart of the service processing method according to another embodiment of the present application;
Figure 6 is a schematic structural diagram of the computing power network element node according to one embodiment of the present application;
Figure 7 is a schematic structural diagram of the computing power network element node according to another embodiment of the present application;
Figure 8 is a schematic structural diagram of the service processing apparatus according to one embodiment of the present application;
Figure 9 is a schematic structural diagram of the service processing apparatus according to another embodiment of the present application.
Detailed description
Exemplary embodiments of the present application are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present application, it should be understood that the present application can be implemented in various forms and should not be limited by the embodiments set forth herein; rather, these embodiments are provided so that the present application will be understood more thoroughly and its scope conveyed fully to those skilled in the art.
The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish similar objects and do not necessarily describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described. Moreover, the terms "include" and "have", and any variations thereof, are intended to cover non-exclusive inclusion: for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such process, method, product, or device. In the specification and claims, "and/or" means at least one of the connected objects.
The following description provides examples and does not limit the scope, applicability, or configuration set forth in the claims. Changes may be made in the function and arrangement of the elements discussed without departing from the spirit and scope of the present disclosure. Various examples may appropriately omit, substitute, or add various procedures or components. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
To help better understand the solutions of the embodiments of the present application, the related concepts involved are explained below.
1) Network system for computing power processing, which may also be called a computing-power-aware network, computing power network, integrated computing-network, or a new type of network converging computing and networking. The network system for computing power processing includes:
a first processing layer (also called the computing power routing layer),
a second processing layer (also called the computing network management layer),
a third processing layer (also called the computing power service layer),
a fourth processing layer (also called the computing power resource layer), and
a fifth processing layer (also called the network resource layer).
2) Computing power network element node, which herein refers to a network device with computing power. Computing power network element nodes may further include computing power routing nodes and computing power nodes (computing power nodes are sometimes also called computing nodes).
3) Computing power routing node, located at the first processing layer of the network system, a network device that announces and transmits computing power resource information within the network system for computing power processing.
4) Computing power node, located at the fourth and/or fifth processing layer, a device with computing capability that provides computing power resources, i.e., a device that processes computing tasks in the network system, such as a server or an all-in-one machine in a data center. In addition, a computing power node in the embodiments of the present application may also be a computing power network element device, i.e., a network transmission device of the fifth processing layer such as a router, which can at the same time provide computing power resources and computing power services.
5) Computing power resource state, which refers to information such as the computing capability state and deployment location of the computing power nodes deployed in the network system; it can be indicated by computing power resource parameters, including one or more of the number of service connections, CPU/GPU computing power, deployment form (physical or virtual), deployment location (e.g., the corresponding IP address), storage capacity, and storage form. The computing power resource state may also be a computing capability abstracted from computing power resources, reflecting the currently available computing capability, distribution location, and deployment form of each computing power node in the network system.
6) Network transmission resources, which refer to the network resources of the network system used for transmitting information, and may include various forwarding devices (such as routers and switches), transmission links, and transmission capabilities (such as bandwidth, delay, and delay jitter).
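The computing power resource state parameters listed in item 5) can be pictured as a per-node record from which an "available computing capability" abstraction is derived. The sketch below is illustrative only; field names and units (TFLOPS, GB) are assumptions, not prescribed by the application.

```python
# One node's computing power resource state, mirroring the parameters in 5):
# service connections, CPU/GPU capability, deployment form, location, storage.
node_state = {
    "node": "DC-server-7",
    "service_connections": 128,
    "cpu_tflops": 2.0,
    "gpu_tflops": 40.0,
    "deployment_form": "virtual",   # physical or virtual
    "location": "198.51.100.7",     # e.g. an IP address
    "storage_gb": 512,
}

def available_compute(state, cpu_used_tflops, gpu_used_tflops):
    """Derive the abstracted currently-available computing capability."""
    return {
        "cpu_tflops": max(0.0, state["cpu_tflops"] - cpu_used_tflops),
        "gpu_tflops": max(0.0, state["gpu_tflops"] - gpu_used_tflops),
    }

avail = available_compute(node_state, 0.5, 10.0)
```

The derived view is what the routing layer needs for scheduling; the raw parameters stay with the management layer for operation and maintenance.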
To achieve unified, coordinated scheduling of multi-dimensional resources such as users, network, and computing power, and to ensure that the network can schedule computing resources at different locations on demand and in real time, an embodiment of the present application provides a network system for computing power processing. As shown in Figure 1, the network system of this embodiment includes a first processing layer (also called the computing power routing layer), wherein:
the first processing layer is configured to obtain resource perception information and determine, according to the resource perception information, computing power routing information for a service request.
It should be noted that in the present application, the resource perception information includes at least one of the following:
demand perception information;
application perception information;
computing power resource perception information;
network resource perception information;
computing power resource topology information;
computing power service topology information.
Among these, the demand perception information, application perception information, computing power resource perception information, and network resource perception information are resource information obtained through multi-dimensional sensing; that is, they include at least the dynamic resource information used to determine computing power routing information. Of course, they are not limited to dynamic resource information and may also include static resource information. Such resource perception information can reflect multi-dimensional resources covering at least two of user demand, network resources, computing power, services, and storage.
In addition, the computing power resource topology information and computing power service topology information are static resource information used to determine computing power routing information. With the network system of this embodiment, the first processing layer comprehensively considers user demands, network resource conditions, and computing resource conditions, and schedules service applications to appropriate routing nodes according to the resource perception information, ensuring that the network can schedule computing resources at different locations on demand and in real time.
In an optional embodiment of the present application, according to the dynamic resource information obtained through multi-dimensional resource perception measurement, including demand perception information, application perception information, computing power resource perception information, and network resource perception information, together with pre-obtained computing power resource topology information and/or computing power service topology information, service applications are scheduled to appropriate routing nodes, ensuring on-demand, real-time scheduling of computing resources at different locations and optimal resource utilization.
In one optional implementation, the first processing layer may obtain local-area computing power resource topology information and/or computing power service topology information through multiple computing power routing nodes, forming a distributed service architecture.
In another implementation, the network system further includes:
a second processing layer (also called the computing network management layer), configured to manage computing power resources, network resources, and services, generate computing power resource topology information and/or computing power service topology information, and send the topology information to the first processing layer.
With this implementation, the second processing layer can provide the first processing layer with global computing power resource topology information and/or computing power service topology information, achieving unified coordinated scheduling of multi-dimensional resources such as network, storage, and computing power.
Specifically, in the network system of this implementation, the second processing layer (computing network management layer) manages computing power resources, network resources, and services; in an optional embodiment, it can perform functions such as operation and maintenance of computing power resources, Operation Administration and Maintenance (OAM), security management, computing power operation, and computing power service orchestration, and generate computing power resource topology information and/or computing power service topology information. The first processing layer (computing power routing layer) obtains the topology information and comprehensively considers user demands, network resource conditions, and computing resource conditions; based on the resource perception information and the obtained computing power resource topology information and/or computing power service topology information, it schedules service applications to appropriate routing nodes, ensuring on-demand, real-time scheduling of computing resources at different locations and optimal resource utilization.
In an optional embodiment of the present application, the network system further includes a third processing layer, a fourth processing layer, and a fifth processing layer, wherein:
the third processing layer (computing power service layer) is configured to provide state information about the initial deployment of computing power services;
the fourth processing layer (computing power resource layer) is configured to provide computing power resources and to measure computing power and services according to received computing power templates, service measurement parameters, and measurement strategies; in an optional embodiment, the fourth processing layer provides heterogeneous computing power resources, i.e., computing power resources of multiple types of devices;
the fifth processing layer (network resource layer) is configured to provide network connectivity using the network infrastructure.
In an optional embodiment of the present application, the first processing layer obtains the resource perception information in at least one of the following ways:
obtaining demand perception information through at least one of the data plane link layer, the IP layer, and the management plane (user service contracts);
obtaining application perception information through the third processing layer (computing power service layer) and/or an Application Programming Interface (API);
obtaining computing power resource perception information through at least one of the fourth processing layer (computing power resource layer), a data center management platform, and an edge computing management platform;
obtaining network resource perception information through the fifth processing layer (network resource layer).
In an optional embodiment, the network resource perception information includes at least one of bandwidth, delay, and delay jitter.
In the network system of this embodiment, the system is logically divided into five functional modules — the first through fifth processing layers — to achieve perception, interconnection, and coordinated scheduling of ubiquitous computing and services. Specifically:
Third processing layer (computing power service layer): also used to provide state information about the initial deployment of computing power services. Specifically, based on a distributed microservice architecture, the computing power service layer supports decomposing applications into atomic functional components scheduled uniformly by an API gateway. It is deployed on top of the computing power resource layer to carry all kinds of services and applications of ubiquitous computing, and can pass user Service-Level Agreement (SLA) requests, including parameters such as computing power requests, to the computing power routing layer. It can also receive data from end users and implement functions such as service decomposition and service scheduling through the API gateway.
Second processing layer (computing network management layer): completes computing power operation and computing power service orchestration, and manages computing power and network resources, including perception, measurement, and OAM management of computing power resources; it implements computing network operation for end users and management of the computing power routing layer and the network resource layer. Facing heterogeneous computing resources, the computing network management layer first abstractly describes and represents computing power resources through computing power modeling, forming node computing power information and shielding differences among underlying hardware devices; the computing power information can be delivered to the corresponding computing power network element nodes through computing power announcements. It also performs performance monitoring and management of computing power and network resources, and realizes computing power operation and network operation.
Fourth processing layer (computing power resource layer): provides heterogeneous computing power resources, and measures computing power and services according to received computing power templates, service measurement parameters, and measurement strategies. Specifically, the fourth processing layer uses existing computing infrastructure to provide computing power resources, ranging from single-core CPUs to multi-core CPUs and to combinations such as CPU+GPU+FPGA; to meet the diverse computing demands of the edge computing field, it provides, on top of physical computing resources and for different applications, functions such as computing power models, computing power APIs, and computing network resource identifiers.
First processing layer (computing power routing layer): the core of the network system for computing power processing of the embodiments of the present application, configured to flexibly schedule services on demand to different computing resource nodes based on discovery of abstracted computing network resources, comprehensively considering network conditions and computing resource conditions.
In an optional embodiment, the first processing layer includes a first sublayer (control plane) and a second sublayer (forwarding plane), wherein:
the first sublayer is configured to perform at least one of computing network service announcement, computing network aware scheduling, computing network topology discovery, and computing network route generation;
the second sublayer is configured to perform at least one of computing network route forwarding, link computing network monitoring, computing network route identification, and computing network route addressing.
Fifth processing layer (network resource layer): uses existing network infrastructure to provide ubiquitous network connectivity for every corner of the network; in an optional embodiment, the network infrastructure includes access networks, metropolitan area networks, backbone networks, and the like.
Based on the network system for computing power processing described in the above embodiments, with these five processing layers, perception, control, and scheduling of computing network resources can be achieved. The fourth processing layer (computing power resource layer) and the fifth processing layer (network resource layer) constitute the infrastructure layer of the network system, while the second processing layer (computing network management layer) and the first processing layer (computing power routing layer) constitute the two core functional modules through which the computing-power-aware functional architecture realizes multi-dimensional resource perception; users and applications access the network through the computing power routing layer.
In the embodiments of the present application, the second processing layer (computing network management layer) needs to implement configuration and management of the methods for perceiving services, network, and computing power resources. Besides generating computing power resource topology information and/or computing power service topology information, the second processing layer is further configured to:
send resource perception requests to the third, fourth, and fifth processing layers respectively, so that each of them performs multi-dimensional resource perception according to the received resource perception request and sends resource perception information to the first processing layer. In one implementation, the resource perception request includes a resource perception configuration, according to which the third, fourth, and fifth processing layers perform multi-dimensional resource perception. In another implementation, the request may not include the configuration; the resource perception configuration may instead be pre-configured to the third, fourth, and fifth processing layers respectively, so that upon receiving the request each layer responds and performs multi-dimensional resource perception according to the pre-obtained configuration;
configure the computing power perception metric configuration information used when the third, fourth, and fifth processing layers each perform computing power measurement. In an optional embodiment, the computing power perception metric configuration information includes at least one of sensing parameters, measurement parameters, measurement strategies, and the sending frequency of multi-dimensional resource perception information.
In this implementation, the resource perception configurations delivered by the second processing layer to the third, fourth, and fifth processing layers may be the same or different; by delivering a resource perception configuration to each layer separately, each layer performs perception and measurement according to the configuration it received. According to the received computing power perception metric configuration information, the third, fourth, and fifth processing layers each perform computing power measurement and report the measured computing power resource and/or service information to the computing network management layer, which generates the computing power resource topology and/or computing power service topology accordingly.
On the other hand, by configuring the perception configuration parameters with which the third, fourth, and fifth processing layers each perform multi-dimensional resource perception, the second processing layer can adaptively subscribe to the sensing parameters, measurement parameters, measurement strategies, and sending frequency of multi-dimensional resource perception information with which those layers perform perception and measurement.
In an optional embodiment of the present application, the second processing layer forms the initial topology information of computing power, services, and network according to at least one of the measurement information measured and reported by the third processing layer, the fourth processing layer, and the fifth processing layer; in an optional embodiment, the formed initial topology information of computing power, services, and network may also be modified according to at least one of the above measurement information.
In an optional embodiment, the measured and reported measurement information includes computing power resources and/or service information.
In another implementation, in an optional embodiment of the present application, the third, fourth, and fifth processing layers report measurement information when it has changed, and need not report when it has not changed.
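The configurable metric parameters and the report-on-change behavior described above can be sketched together as follows. The configuration keys and the `ChangeReporter` class are invented names for illustration; the application only specifies the categories of parameters and the "report only when changed" policy.

```python
# Hypothetical perception metric configuration: the four categories named in
# the text (sensing params, measurement params, strategy, sending frequency).
config = {
    "sensing_params": ["cpu_load", "delay_ms"],
    "measurement_params": {"probe_count": 3},
    "measurement_strategy": "periodic",
    "report_interval_s": 5,   # sending frequency of perception information
}

class ChangeReporter:
    """Report a measurement only when it differs from the last reported one."""

    def __init__(self):
        self.last = None

    def maybe_report(self, measurement):
        if measurement == self.last:
            return None           # unchanged: no report needed
        self.last = measurement
        return measurement        # changed: report it

r = ChangeReporter()
first = r.maybe_report({"cpu_load": 0.4})
second = r.maybe_report({"cpu_load": 0.4})   # suppressed, unchanged
third = r.maybe_report({"cpu_load": 0.5})
```

Suppressing unchanged reports is one simple way to realize the signaling-overhead reduction the embodiments aim for, since steady-state nodes then stay silent between changes.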
In the embodiments of the present application, in an optional embodiment, the first processing layer provides distributed control of the resource states perceived across multiple dimensions and transmission of the perception information, including transmission of multi-dimensional resources such as services, applications, and network resources, and is responsible for synchronizing the real-time states of multi-dimensional resources to the respective ingress computing power routing nodes via the computing power routing nodes. Based on the obtained perception of user or service computing power demands and the network-side perception of multi-dimensional resources, the computing power routing nodes realize coordinated scheduling of computing power and the network. Specifically, the network layer or the computing power routing layer has multi-dimensional resource perception capabilities, including but not limited to demand perception capability, application perception capability, computing power resource perception capability, and network resource perception capability; that is, it obtains the corresponding multi-dimensional resource perception information, including but not limited to demand perception information, application perception information, computing power resource perception information, and network resource perception information. In addition, it can obtain computing power resource topology information and/or computing power service topology information, which belong to static resource information.
In the embodiments of the present application, specifically, demand perception capability: obtain the user's diverse demands on computing power, network, security, and the like through the data plane link layer, the IP layer, or the management plane (user service contracts);
application perception capability: obtain the state of service deployment and initialization from the computing power service layer (third processing layer) or the scheduling center of a data center through an API interface or the like;
computing power resource perception capability: obtain computing power resource capabilities directly from the computing power resource layer (fourth processing layer) through technologies such as probes, or obtain the current computing power resource situation directly from a data center management platform or edge computing management platform, such as the current number of connections and CPU and GPU load;
network resource perception capability: obtain the quality status of network links through in-band OAM, out-of-band OAM, IFit, and the like, including but not limited to bandwidth, delay, and jitter.
With the network system for computing power processing of the embodiments of the present application, based on ubiquitous network connectivity and highly distributed computing nodes, and on the foundation of computing power measurement and modeling, a brand-new computing-power-aware network infrastructure is built through automated service deployment, optimal routing, and load balancing. The computing power resource layer completes the abstraction, modeling, and measurement of computing power and notifies the computing power routing layer and the computing power management layer; the computing power management layer completes functions such as operation and maintenance of computing power resources, OAM management, and security management; in the computing power routing layer, the computing power routing nodes comprehensively consider user demands, network resource conditions, and computing resource conditions, ensuring that the network can schedule computing resources at different locations on demand and in real time, so as to improve network and computing resource utilization and further enhance user experience.
Another aspect of the embodiments of the present application further provides a service processing method, applied to a first computing power network element node. As shown in Figure 2, the method includes:
S210: upon receiving a service request, determine computing power routing information for the service request according to resource perception information.
In an optional embodiment, the resource perception information includes at least one of the following:
demand perception information;
application perception information;
computing power resource perception information;
network resource perception information;
computing power resource topology information;
computing power service topology information.
In the service processing method of the embodiments of the present application, the first computing power network element node can determine the computing power routing information for the service request according to the resource perception information obtained through multi-dimensional resource perception measurement and the pre-obtained computing power resource topology information and/or computing power service topology information.
In an optional embodiment, the method further includes:
obtaining the computing power resource topology information and/or the computing power service topology information sent by a second computing power network element node.
In an optional embodiment, the first computing power network element node may be, but is not limited to, a computing power routing node of the computing power routing layer, and the second computing power network element node may be, but is not limited to, a computing network management center of the computing network management layer.
Figure 3 is a schematic diagram of a system architecture using the service processing method of the embodiments of the present application. The system using the service processing method includes a computing network management center, multiple computing power routing nodes, and multiple computing power nodes. With reference to Figure 1, the computing network management center may be located at the computing network management layer, the computing power routing nodes at the computing power routing layer, and the computing power nodes at the computing power resource layer and/or the network resource layer. The computing network management center is connected to each computing power routing node and to each computing power node.
In an optional embodiment, by way of example, the computing power nodes include a first computing power node MEC1, a second computing power node MEC2, a third computing power node MEC3, and so on, and the computing power routing nodes include a first computing power routing node PE1, a second computing power routing node PE2, a third computing power routing node PE3, and so on, where the user terminal UE accesses the network system through the first computing power routing node PE1.
With the service processing method of this embodiment, as shown in Figures 2, 3, and 4, the process of using the method includes the following steps:
S410: the computing network management center sends a resource perception request for multi-dimensional resource perception to each computing power node; optionally, the resource perception request includes a resource perception configuration, so that each computing power node performs computing power measurement according to the configuration sent by the computing network management center and then reports the measurement results to the computing network management center according to the configured sensing parameters, measurement parameters, measurement strategies, and sending frequency, enabling the computing network management center to generate computing power resource topology information and/or computing power service topology information from the received computing power measurement results, i.e., the computing power resource and service information.
S420: the computing network management center sends the computing power resource topology information and/or computing power service topology information to the first computing power routing node PE1 accessed by the user, i.e., the ingress router.
S430: the user sends a user service request to the first computing power routing node PE1, where the user service request may be sent in at least one of the following ways: carried in data packets (e.g., using an extension field of the IP header), carried on the control plane (e.g., carried via link layer access PPOE or IPOM protocol extensions), and carried in management plane data (e.g., a network management service contract).
S440: based on the received computing power resource topology information and/or computing power service topology information, the first computing power routing node PE1 sends request information to the multiple computing power nodes where the service requested by the user service request is deployed (e.g., including the first computing power node MEC1 and the second computing power node MEC2), where the request information carries OAM information supporting computing power and network measurement, and the states of computing power and network resources are measured at the different computing power nodes deploying the requested service.
S450: the computing power nodes corresponding to the requested service perform resource perception and measurement according to the multi-dimensional resource perception template, and return the measurement information to the first computing power routing node PE1 through the corresponding computing power routing nodes.
S460: according to the returned measurement information, e.g., including the real-time states of computing power resources, network resources, and service status, and taking the user service request into account, the first computing power routing node PE1 selects a suitable computing power node and an optimal path for the service requested by the user.
本申请还提供另一实施例所述业务处理方法,应用于第二算力网元节点,如图5所示,所述方法包括:
S510,向第一算力网元节点发送资源感知信息。
在本申请一可选实施方式中,所述资源感知信息包括以下至少之一:
需求感知信息;
应用感知信息;
算力资源感知信息;
网络资源感知信息;
算力资源拓扑信息;
算力服务拓扑信息。
在本申请一可选实施方式中,第一算力网元节点可以为但不限于仅能够为算力路由层的算力路由节点,第二算力网元节点可以为算网管理层的算网管理中心,但不限于仅能够为算网管理层的算网管理中心。
在本申请一可选实施方式中,所述方法还包括:
生成所述算力资源拓扑信息和/或所述算力服务拓扑信息。
在本申请一可选实施方式中,所述的业务处理方法,其中,所述方法还包括:
向多个算力节点分别发送多维资源感知请求和算力感知度量配置信息;
获取多个所述算力节点的注册信息;
其中,所述第二算力网元节点根据所述算力节点的注册信息,生成所述算力资源拓扑信息和/或所述算力服务拓扑信息。
在本申请一可选实施方式中,所述的业务处理方法,其中,所述注册信息包括算力标识、算力初始化配置信息、服务部署信息、功能部署信息和函数部署信息中的至少之一。
In an optional implementation of the present application, the method further includes:

sending sensing measurement configuration information for computing power and services to each of the multiple computing power nodes;

obtaining computing power measurement information reported by the computing power nodes after they perform measurement according to the sensing measurement configuration information.

In an optional implementation of the present application, the computing power measurement information includes at least one of: measurement results, location information, computing power resource information, and computing power service information.

Specifically, the computing-network management center sends each computing power node a multi-dimensional resource sensing template and sensing configuration parameters for multi-dimensional resource sensing. Each computing power node performs computing power measurement according to the sensing configuration parameters sent by the computing-network management center, and then reports the measurement results to the computing-network management center according to the configured sensing parameters, measurement parameters, measurement policy, reporting frequency, and so on, so that the computing-network management center generates the computing power resource topology information and/or the computing power service topology information from the received computing power measurement results, i.e., the computing power resource and service information. In addition, each computing power node performs multi-dimensional resource sensing according to the multi-dimensional resource sensing template and sends resource sensing information to the corresponding computing power routing node.
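The node-side behavior described above (measure what the pushed configuration names, report at the configured frequency) can be sketched as follows. The parameter names and the sample-every-Nth reporting model are illustrative assumptions standing in for the application's sensing parameters, measurement policy, and sending frequency.

```python
# A computing power node applies the sensing measurement configuration
# pushed by the computing-network management center: it keeps only the
# configured parameters from each raw sample and emits a report every
# "report_every" samples.
def make_reports(samples, config):
    every = config["report_every"]
    reports = []
    for i, sample in enumerate(samples):
        if i % every == 0:  # reporting-frequency policy
            reports.append({k: sample[k] for k in config["params"] if k in sample})
    return reports

config = {"params": ["cpu_load", "gpu_load"], "report_every": 2}
samples = [
    {"cpu_load": 0.3, "gpu_load": 0.1, "mem": 0.5},
    {"cpu_load": 0.4, "gpu_load": 0.2, "mem": 0.6},
    {"cpu_load": 0.5, "gpu_load": 0.3, "mem": 0.7},
]
reports = make_reports(samples, config)
```

Fields not named in the configuration (here `mem`) are simply dropped, which is one way the configuration can bound the signaling overhead of status reporting.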
With the service processing method of the embodiments of the present application, coordinated sensing across the management, control, and data planes enables sensing and measurement of user demands, application services, network resources, and computing power resources, so that user demands, network resource status, and computing resource status are considered together. Based on this resource sensing information and the obtained computing power resource topology information and/or computing power service topology information, service applications are scheduled to suitable routing nodes, ensuring that the network can schedule computing resources at different locations on demand and in real time, achieving optimal resource utilization, and effectively reducing the signaling overhead of network resource status in the network.
An embodiment of the present application further provides a computing power network element node, which is a first computing power network element node. As shown in Fig. 6, it includes a processor 610 and a transceiver 620, wherein:

the processor 610 is configured to, upon receiving a service request, determine computing power routing information for the service request according to resource sensing information.

In an optional implementation of the present application, the resource sensing information includes at least one of the following:

demand sensing information;
application sensing information;
computing power resource sensing information;
network resource sensing information;
computing power resource topology information;
computing power service topology information.

In an optional implementation of the present application, the transceiver 620 is configured to:

obtain the computing power resource topology information and/or the computing power service topology information sent by a second computing power network element node.

In an optional implementation of the present application, the processor 610 is further configured to:

upon receiving a service request, perform measurement, according to the carried OAM information, on the multiple computing power nodes on which the service requested by the service request is deployed, to obtain the resource sensing information.

In an optional implementation of the present application, the transceiver 620 is further configured to obtain the service request in at least one of the following ways: carried in a data packet, carried on the control plane, or as management plane data.
An embodiment of the present application further provides a computing power network element node, which is a second computing power network element node. As shown in Fig. 7, it includes a transceiver 710 and a processor 720, wherein:

the transceiver 710 is configured to send resource sensing information to a first computing power network element node.

In an optional implementation of the present application, the resource sensing information includes at least one of the following:

demand sensing information;
application sensing information;
computing power resource sensing information;
network resource sensing information;
computing power resource topology information;
computing power service topology information.

In an optional implementation of the present application, the processor 720 is configured to:

generate the computing power resource topology information and/or the computing power service topology information.

In an optional implementation of the present application, the transceiver 710 is further configured to:

send a resource sensing request and computing power sensing metric configuration information to each of multiple computing power nodes;

obtain registration information of the multiple computing power nodes;

wherein the processor 720 generates the computing power resource topology information and/or the computing power service topology information according to the registration information of the computing power nodes.

In an optional implementation of the present application, the registration information includes at least one of: a computing power identifier, computing power initialization configuration information, service deployment information, capability deployment information, and function deployment information.

In an optional implementation of the present application, the transceiver 710 is further configured to:

send sensing measurement configuration information for computing power and services to each of the multiple computing power nodes;

obtain computing power measurement information reported by the computing power nodes after they perform measurement according to the sensing measurement configuration information.

In an optional implementation of the present application, the sensing measurement configuration information includes at least one of: computing power measurement parameters, measurement mode information, and reporting policy information.

In an optional implementation of the present application, the computing power measurement information includes at least one of: measurement results, location information, computing power resource information, and computing power service information.
An embodiment of the present application further provides a service processing apparatus, applied to a first computing power network element node. As shown in Fig. 8, the apparatus includes:

a processing module 810, configured to, upon receiving a service request, determine computing power routing information for the service request according to resource sensing information.

In an optional implementation of the present application, the resource sensing information includes at least one of the following:

demand sensing information;
application sensing information;
computing power resource sensing information;
network resource sensing information;
computing power resource topology information;
computing power service topology information.

In an optional implementation of the present application, the apparatus further includes:

an obtaining module 820, configured to obtain computing power resource topology information and/or computing power service topology information sent by a second computing power network element node.

In an optional implementation of the present application, the processing module 810 is further configured to:

upon receiving a service request, perform measurement, according to the carried OAM information, on the multiple computing power nodes on which the service requested by the service request is deployed, to obtain the resource sensing information.

In an optional implementation of the present application, the obtaining module 820 is further configured to obtain the service request in at least one of the following ways: carried in a data packet, carried on the control plane, or as management plane data.
An embodiment of the present application further provides a service processing apparatus, applied to a second computing power network element node. As shown in Fig. 9, the apparatus includes:

an information sending module 910, configured to send resource sensing information to a first computing power network element node.

In an optional implementation of the present application, the resource sensing information includes at least one of the following:

demand sensing information;
application sensing information;
computing power resource sensing information;
network resource sensing information;
computing power resource topology information;
computing power service topology information.

In an optional implementation of the present application, the apparatus further includes:

a generation module 920, configured to generate the computing power resource topology information and/or the computing power service topology information.

In an optional implementation of the present application, the information sending module 910 is further configured to:

send a resource sensing request and computing power sensing metric configuration information to each of multiple computing power nodes;

obtain registration information of the multiple computing power nodes;

wherein the generation module 920 generates the computing power resource topology information and/or the computing power service topology information according to the registration information of the computing power nodes.

In an optional implementation of the present application, the registration information includes at least one of: a computing power identifier, computing power initialization configuration information, service deployment information, capability deployment information, and function deployment information.

In an optional implementation of the present application, the information sending module 910 is further configured to:

send sensing measurement configuration information for computing power and services to each of the multiple computing power nodes;

obtain computing power measurement information reported by the computing power nodes after they perform measurement according to the sensing measurement configuration information.

In an optional implementation of the present application, the sensing measurement configuration information includes at least one of: computing power measurement parameters, measurement mode information, and reporting policy information.

In an optional implementation of the present application, the computing power measurement information includes at least one of: measurement results, location information, computing power resource information, and computing power service information.
An embodiment of the present application further provides a network system for computing power processing, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the program, the processes of the computing power processing method embodiments described above are implemented, with the same technical effects; to avoid repetition, details are not repeated here.

In addition, a specific embodiment of the present application further provides a readable storage medium on which a program is stored. When the program is executed by a processor, the processes of any of the service processing methods described above are implemented, with the same technical effects; to avoid repetition, details are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatuses may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.

An integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform some of the steps of the transceiving methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are preferred implementations of the present application. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles described in the present application, and such improvements and refinements shall also fall within the protection scope of the present application.

Claims (30)

  1. A network system for computing power processing, comprising:
    a first processing layer, configured to obtain resource sensing information and to determine computing power routing information for a service request according to the resource sensing information.
  2. The network system according to claim 1, wherein the resource sensing information comprises at least one of the following:
    demand sensing information;
    application sensing information;
    computing power resource sensing information;
    network resource sensing information;
    computing power resource topology information;
    computing power service topology information.
  3. The network system according to claim 2, wherein the network system further comprises:
    a second processing layer, configured to generate computing power resource topology information and/or computing power service topology information, and to send the computing power resource topology information and/or the computing power service topology information to the first processing layer.
  4. The network system according to claim 3, wherein the network system further comprises: a third processing layer, a fourth processing layer, and a fifth processing layer;
    wherein the third processing layer is configured to provide status information on the initial deployment of computing power services;
    the fourth processing layer is configured to provide computing power resources, and to perform computing power and service measurement according to the received computing power template, service measurement parameters, and measurement policy; and
    the fifth processing layer is configured to provide network connectivity using the network infrastructure.
  5. The network system according to claim 4, wherein the first processing layer obtains the resource sensing information in at least one of the following ways:
    obtaining the demand sensing information of the resource sensing information through at least one of the data-plane link layer, the IP layer, and the management plane;
    obtaining the application sensing information of the resource sensing information through the third processing layer and/or an API interface;
    obtaining the computing power resource sensing information of the resource sensing information through at least one of the fourth processing layer, a data center management platform, and an edge computing management platform;
    obtaining the network resource sensing information of the resource sensing information through the fifth processing layer.
  6. The network system according to claim 5, wherein the network resource sensing information comprises at least one of bandwidth, delay, and delay jitter.
  7. The network system according to claim 4, wherein the second processing layer is further configured to send resource sensing requests to the third processing layer, the fourth processing layer, and the fifth processing layer respectively, so that the third processing layer, the fourth processing layer, and the fifth processing layer each perform multi-dimensional resource sensing according to the received multi-dimensional resource sensing request and send resource sensing information to the first processing layer.
  8. The network system according to claim 7, wherein the second processing layer is further configured to configure the computing power sensing metric configuration information used by the third processing layer, the fourth processing layer, and the fifth processing layer respectively when performing computing power measurement.
  9. The network system according to claim 8, wherein the computing power sensing metric configuration information comprises at least one of sensing parameters, measurement parameters, a measurement policy, and a sending frequency of multi-dimensional resource sensing information.
  10. The network system according to claim 3, wherein the second processing layer is further configured to provide at least one of the following functions:
    performing computing power operation and computing power service orchestration;
    abstractly describing and representing computing power resources through computing power modeling to form node computing power information;
    sending the node computing power information to a computing power network element node.
  11. The network system according to claim 1, wherein the first processing layer comprises a first sublayer and a second sublayer, wherein:
    the first sublayer is configured to perform at least one of computing-network service announcement, computing-network-aware scheduling, computing-network topology discovery, and computing-network route generation;
    the second sublayer is configured to perform at least one of computing-network route forwarding, computing-network link monitoring, computing-network route identification, and computing-network route addressing.
  12. A service processing method, applied to a first computing power network element node, the method comprising:
    upon receiving a service request, determining computing power routing information for the service request according to resource sensing information.
  13. The service processing method according to claim 12, wherein the resource sensing information comprises at least one of the following:
    demand sensing information;
    application sensing information;
    computing power resource sensing information;
    network resource sensing information;
    computing power resource topology information;
    computing power service topology information.
  14. The service processing method according to claim 13, wherein the method further comprises:
    obtaining the computing power resource topology information and/or the computing power service topology information sent by a second computing power network element node.
  15. The service processing method according to claim 12, wherein the method further comprises:
    upon receiving a service request, performing measurement, according to the carried OAM information, on the multiple computing power nodes on which the service requested by the service request is deployed, to obtain the resource sensing information.
  16. The service processing method according to claim 12, wherein the method further comprises: obtaining the service request in at least one of the following ways: carried in a data packet, carried on the control plane, or as management plane data.
  17. A service processing method, applied to a second computing power network element node, the method comprising:
    sending resource sensing information to a first computing power network element node.
  18. The service processing method according to claim 17, wherein the resource sensing information comprises at least one of the following:
    demand sensing information;
    application sensing information;
    computing power resource sensing information;
    network resource sensing information;
    computing power resource topology information;
    computing power service topology information.
  19. The service processing method according to claim 18, wherein the method further comprises:
    generating the computing power resource topology information and/or the computing power service topology information.
  20. The service processing method according to claim 19, wherein the method further comprises:
    sending a resource sensing request and computing power sensing metric configuration information to each of multiple computing power nodes;
    obtaining registration information of the multiple computing power nodes;
    wherein the second computing power network element node generates the computing power resource topology information and/or the computing power service topology information according to the registration information of the computing power nodes.
  21. The service processing method according to claim 20, wherein the registration information comprises at least one of a computing power identifier, computing power initialization configuration information, service deployment information, capability deployment information, and function deployment information.
  22. The service processing method according to claim 20, wherein the method further comprises:
    sending sensing measurement configuration information for computing power and services to each of the multiple computing power nodes;
    obtaining computing power measurement information reported by the computing power nodes after they perform measurement according to the sensing measurement configuration information.
  23. The service processing method according to claim 22, wherein the sensing measurement configuration information comprises at least one of computing power measurement parameters, measurement mode information, and reporting policy information.
  24. The service processing method according to claim 22, wherein the computing power measurement information comprises at least one of measurement results, location information, computing power resource information, and computing power service information.
  25. A computing power network element node, the computing power network element node being a first computing power network element node and comprising a processor, wherein:
    the processor is configured to, upon receiving a service request, determine computing power routing information for the service request according to resource sensing information.
  26. A computing power network element node, the computing power network element node being a second computing power network element node and comprising a transceiver, wherein:
    the transceiver is configured to send resource sensing information to a first computing power network element node.
  27. A service processing apparatus, applied to a first computing power network element node, the apparatus comprising:
    a processing module, configured to, upon receiving a service request, determine computing power routing information for the service request according to resource sensing information.
  28. A service processing apparatus, applied to a second computing power network element node, the apparatus comprising:
    an information sending module, configured to send resource sensing information to a first computing power network element node.
  29. A network system for computing power processing, comprising a processor, a memory, and a program stored in the memory and executable on the processor, wherein when the program is executed by the processor, the service processing method according to any one of claims 12 to 16, or the service processing method according to any one of claims 17 to 24, is implemented.
  30. A readable storage medium having a program stored thereon, wherein when the program is executed by a processor, the steps of the service processing method according to any one of claims 12 to 16, or the steps of the service processing method according to any one of claims 17 to 24, are implemented.
PCT/CN2022/078802 2021-03-02 2022-03-02 Network system for computing power processing, service processing method, and computing power network element node WO2022184094A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110230865.7 2021-03-02
CN202110230865.7A CN115002862A (zh) 2021-03-02 2021-03-02 Network system for computing power processing, service processing method, and computing power network element node

Publications (1)

Publication Number Publication Date
WO2022184094A1 true WO2022184094A1 (zh) 2022-09-09


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115297014A (zh) * 2022-09-29 2022-11-04 浪潮通信信息系统有限公司 Zero-trust computing-network operating system, management method, electronic device, and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115580539A (zh) * 2022-10-13 2023-01-06 联通(广东)产业互联网有限公司 Computing-network-storage scheduling method, system, and device based on artificial intelligence scenarios
CN117440046A (zh) * 2023-03-21 2024-01-23 北京神州泰岳软件股份有限公司 A data processing method and apparatus for computing power networks
CN116708294B (zh) * 2023-08-08 2023-11-21 三峡科技有限责任公司 Method for implementing intelligent application sensing and packet forwarding based on the APN6 network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017100640A1 (en) * 2015-12-11 2017-06-15 Interdigital Patent Holdings, Inc. Method and apparatus for enabling third party edge clouds at the mobile edge
CN110198339A (zh) * 2019-04-17 2019-09-03 浙江大学 A QoE-aware edge computing task scheduling method
CN110891093A (zh) * 2019-12-09 2020-03-17 中国科学院计算机网络信息中心 Method and system for edge computing node selection in delay-sensitive networks
CN112260953A (zh) * 2020-10-21 2021-01-22 中电积至(海南)信息技术有限公司 A reinforcement-learning-based multi-channel data forwarding decision method


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115297014A (zh) * 2022-09-29 2022-11-04 浪潮通信信息系统有限公司 Zero-trust computing-network operating system, management method, electronic device, and storage medium
CN115297014B (zh) * 2022-09-29 2022-12-27 浪潮通信信息系统有限公司 Zero-trust computing-network operating system, management method, electronic device, and storage medium

Also Published As

Publication number Publication date
CN115002862A (zh) 2022-09-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22762557

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE