WO2023284830A1 - Management and scheduling method and apparatus, node, and storage medium

Management and scheduling method and apparatus, node, and storage medium

Info

Publication number
WO2023284830A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
node
computing
service
computing power
Prior art date
Application number
PCT/CN2022/105717
Other languages
English (en)
Chinese (zh)
Inventor
姚惠娟
付月霞
陆璐
孙滔
Original Assignee
中国移动通信有限公司研究院
中国移动通信集团有限公司
Application filed by 中国移动通信有限公司研究院 and 中国移动通信集团有限公司
Publication of WO2023284830A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50Network service management, e.g. ensuring proper service fulfilment according to agreements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/51Discovery or management thereof, e.g. service location protocol [SLP] or web services

Definitions

  • the present application relates to the field of data communication, and in particular to a management and scheduling method, device, node and storage medium.
  • each network node can become a resource provider, and user requests can be satisfied by invoking the resources of the nearest node rather than being limited to a specific node, thereby avoiding wasted connections and network scheduling resources.
  • traditional networks only provide channels for data communication; they are connection-based and constrained by fixed network addressing mechanisms, and often cannot satisfy users' increasingly stringent quality of experience (QoE) requirements.
  • in the traditional network, applications follow a client-server model, in which the client requests services that are provided by the server.
  • the application on the server side is deconstructed into functional components and deployed on the cloud platform, and is uniformly scheduled by the application program interface (API) gateway (English can be expressed as Gateway), which can realize on-demand dynamic instantiation.
  • the business logic on the server side is transferred to the client side, and the client only needs to care about the computing function itself, and does not need to care about computing resources such as servers, virtual machines, containers, etc., thereby realizing Function as a Service (FaaS).
  • embodiments of the present application provide a management and scheduling method, device, node, and storage medium.
  • the embodiment of this application provides a management and scheduling method applied to the first node, including:
  • the service request of the first service is received, and the first service is scheduled.
  • the scheduling of the first service includes:
  • the scheduling policy is used for the second node to determine the forwarding path of the first service, so as to schedule the first service to a corresponding third node in the network for processing; the second node has at least a network control function, and the third node has at least a computing power sensing function and a forwarding function.
  • the scheduling of the first service includes:
  • the scheduling of the first service includes:
  • the method also includes:
  • the computing resources of the network are managed based on the acquired status information of the computing resources.
  • the method also includes:
  • the network resources of the network are managed based on the acquired network resource status information.
  • when managing the computing power resources and network resources of the network, the method includes: performing operation, administration and maintenance (OAM) on the computing power resources and network resources of the network.
  • the management of the computing service of the network includes at least one of the following:
  • the management of the computing power service image includes at least one of the following:
  • the management of computing power service instances includes at least one of the following:
  • the fourth node has at least a computing power function.
  • the management of the resources corresponding to the computing service includes at least one of the following:
  • the embodiment of the present application also provides a management and scheduling device, including:
  • the first management unit is configured to manage computing resources and network resources of the network
  • the second management unit is configured to manage the computing service of the network
  • the scheduling unit is configured to receive the service request of the first service, and schedule the first service.
  • the embodiment of the present application also provides a node, including: a processor and a communication interface; wherein,
  • the processor is configured to manage network computing resources and network resources; manage network computing services; and receive a service request for a first service through the communication interface, and schedule the first service.
  • the embodiment of the present application also provides a node, including: a processor and a memory configured to store a computer program that can run on the processor,
  • the processor is configured to execute the steps of any one of the above methods when running the computer program.
  • the embodiment of the present application also provides a storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of any one of the above methods are implemented.
  • the management and scheduling method, device, node, and storage medium provided by the embodiments of the present application manage network computing power resources and network resources; manage network computing power services; and receive a service request of the first service and schedule the first service.
  • the first node is introduced into the network architecture, the computing power resources, network resources and computing power services of the network are managed through the first node, and services are scheduled through the first node; in this way, unified management of the network's computing power resources, network resources and computing power services and flexible scheduling of services can be realized, so that the network architecture can meet the needs of computing and network convergence and evolution and enable reasonable distribution of services, thereby improving user experience.
  • Figure 1 is a schematic diagram of the development trend of computing and network deep integration in related technologies
  • FIG. 2 is a schematic diagram of the architecture of a computing power-aware network (CAN, Computing-aware Networking) according to an embodiment of the present application;
  • FIG. 3 is a schematic flow diagram of a management and scheduling method in an embodiment of the present application.
  • Fig. 4 is the architecture schematic diagram of the application embodiment CAN of the present application.
  • Fig. 5 is a schematic diagram of the network architecture of the application embodiment CAN
  • Fig. 6 is a schematic diagram of a computing network collaborative orchestration management in an application embodiment of the present application.
  • FIG. 7 is a schematic diagram of another computing-network collaborative orchestration management in the application embodiment of the present application.
  • Fig. 8 is a schematic diagram of the third computing-network collaborative orchestration management of the application embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a management and scheduling device according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of the node structure of the embodiment of the present application.
  • Fig. 11 is a schematic structural diagram of the management and scheduling system of the embodiment of the present application.
  • the new-generation network architecture design for the future network needs to consider the needs of network and computing convergence and evolution, and realize global optimization of the network in the ubiquitous connection and computing power architecture, flexible scheduling of computing power, and reasonable distribution of services.
  • the embodiment of the present application provides a CAN architecture, as shown in Figure 2; the CAN architecture includes: a first processing layer, a second processing layer, a third processing layer, a fourth processing layer and a fifth processing layer; wherein the first processing layer is configured to carry various services and applications of ubiquitous computing; the second processing layer is configured to comprehensively consider the status of network resources and computing power resources and flexibly schedule services to corresponding nodes on demand; the third processing layer is configured to support functions such as computing power registration, computing power operation and computing power notification; the fourth processing layer is configured to use various computing infrastructure to provide computing power resources; and the fifth processing layer is configured to utilize various network infrastructures to provide ubiquitous network connections for every corner of the network.
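  • For orientation only, the five processing layers described above can be summarized in a small data model. The sketch below is illustrative, not part of the embodiment; the layer names follow the alternative names given later in this description and the responsibility strings paraphrase this paragraph.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ProcessingLayer:
    """One logical layer of the CAN architecture (illustrative model only)."""
    name: str
    responsibility: str


# Paraphrased from the architecture description of this embodiment.
CAN_LAYERS = [
    ProcessingLayer("first (computing power service) layer",
                    "carries ubiquitous-computing services and applications"),
    ProcessingLayer("second (computing power routing) layer",
                    "schedules services to nodes using network and computing power status"),
    ProcessingLayer("third (computing network orchestration) layer",
                    "computing power registration, operation and notification"),
    ProcessingLayer("fourth (computing power resource) layer",
                    "provides computing power via computing infrastructure"),
    ProcessingLayer("fifth (network resource) layer",
                    "provides ubiquitous network connections via network infrastructure"),
]

if __name__ == "__main__":
    for layer in CAN_LAYERS:
        print(f"{layer.name}: {layer.responsibility}")
```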
  • the CAN architecture in the embodiment of this application can interconnect dynamically distributed computing resources based on ubiquitous network connections; through unified collaborative scheduling of multi-dimensional resources such as network, storage and computing power, massive applications can invoke computing resources in different places on demand and in real time, realizing global optimization of connections and computing power in the network and providing a consistent user experience.
  • CAN can also be called a future data communication network, a computing network, a computing power network, a computing power endogenous network, a computing-network integration network, etc.; as long as the functions of the network are realized, the name of the network is not limited in the embodiment of this application.
  • processing layer in the embodiment of the present application is a virtual layer structure divided according to logical functions.
  • the above-mentioned processing layers can be deployed on one device or on multiple devices; if deployed on one device, each processing layer can transmit information through internal interfaces; if deployed on multiple devices, each processing layer can realize information transmission through signaling interaction.
  • the first processing layer can also be called the computing power application layer or the computing power service layer; the second processing layer can also be called the routing layer or the computing power routing layer; the third processing layer can also be called the computing network management and orchestration layer, the computing network orchestration layer, the computing network orchestration management layer, the computing power platform layer, the computing power management platform layer or the computing power management layer, etc.; the fourth processing layer can also be called the computing power resource layer, etc.; the fifth processing layer may also be called the network resource layer; the embodiment of the present application does not limit the names of the processing layers, as long as the functions of each processing layer can be realized.
  • a node is introduced into the CAN architecture, through which the computing power resources, network resources and computing power services of the network are managed and services are scheduled; that is, through this node, collaborative orchestration and management of the computing network can be realized, unified management of the computing power resources, network resources and computing power services of the network can be realized, and flexible scheduling of services can be realized, so that the network architecture can meet the needs of computing and network convergence and evolution and enable reasonable distribution of services, which in turn can improve user experience.
  • the embodiment of the present application provides a management and scheduling method, which is applied to the first node, as shown in FIG. 3, the method includes:
  • Step 301 Manage network computing resources and network resources
  • Step 302 Manage the computing service of the network
  • Step 303 Receive the service request of the first service, and schedule the first service.
  • step 301, step 302 and step 303 are executed in no particular order.
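  • A minimal sketch of how a first node might expose the three operations of Figure 3 (steps 301 to 303, executed in no particular order). The class and method names below are hypothetical and only mirror the wording of the method; the internal data structures are assumptions.

```python
class FirstNode:
    """Illustrative skeleton of the first node (computing network orchestration center)."""

    def __init__(self):
        self.compute_topology = {}   # static, global computing power resource topology
        self.network_topology = {}   # static, global network resource topology
        self.service_topology = {}   # static, global computing power service topology

    def manage_resources(self, compute_status, network_status):
        """Step 301: manage the computing power resources and network resources of the network."""
        self.compute_topology.update(compute_status)
        self.network_topology.update(network_status)

    def manage_services(self, service_status):
        """Step 302: manage the computing power services of the network."""
        self.service_topology.update(service_status)

    def schedule(self, service_request):
        """Step 303: receive the service request of the first service and schedule it."""
        # A real implementation would combine the service requirements with the
        # computing power and network status to produce a scheduling policy.
        return {"service": service_request, "policy": "placeholder"}


node = FirstNode()
node.manage_resources({"router-7": {"cpu": 0.4}}, {"link-1": {"delay_ms": 3.0}})
print(node.schedule({"name": "first-service"}))
```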
  • the first node may also be called a computing network orchestration center, a computing power orchestration management center, a computing power scheduling and scheduling center, a computing network scheduling center, or a computing network unified scheduling center.
  • the name of the first node is not limited, as long as the function of the first node can be realized.
  • the network refers to a future data communication network that is deeply integrated with computing and network, which may be called CAN, or a computing power network, etc.
  • the computing resources may include computing resources of nodes with computing capabilities in the network, and the nodes may be understood as network devices.
  • the computing resources may include the computing resources of processors such as a single-core central processing unit (CPU) or a multi-core CPU in a network device; for another example, the computing resources may include a combination of the computing resources of at least two processors such as a CPU, a graphics processing unit (GPU) and a field-programmable gate array (FPGA).
  • the computing resources may also include storage resources (expressed as Storage in English) of nodes with computing capabilities in the network.
  • the computing power resources may also include storage resources such as random access memory (RAM) or read-only memory (ROM) in the network device; for another example, they may include a combination of the storage resources of at least two types of memory, such as RAM and ROM, in the network device.
  • the nodes with computing capabilities in the network may include a third node and a fourth node; the third node may at least have a computing power sensing function and a forwarding function, such as a router with a computing power sensing function; the fourth node may at least have a computing power function, such as a data center (DC) server.
  • nodes with computing capabilities in the network may also be referred to as computing power network element nodes, etc.
  • the embodiment of the present application does not limit the names of such nodes, as long as they have computing capabilities.
  • the third node may also be called a computing power routing node, etc.
  • the embodiment of the present application does not limit the name of the third node, as long as the function of the third node can be realized.
  • the fourth node may also be called a computing power node, etc.
  • the embodiment of the present application does not limit the name of the fourth node, as long as the function of the fourth node can be realized.
  • the network resources may include network resources of networks such as access networks, metropolitan area networks, and backbone networks, such as bandwidth, delay, and jitter.
  • the network needs to be controlled, therefore, the network may further include a second node having a network control function.
  • the second node may also be called a network controller or a computing network controller, etc.
  • the embodiment of the present application does not limit the name of the second node, as long as the functions of the second node can be realized.
  • the first node needs to have a computing power awareness function, so that when the first node manages the computing power resources of the network, it can perceive (that is, obtain) the computing power resource status of each node with computing capability, and manage the computing power resources of the network according to the acquired computing power resource status.
  • the method may also include:
  • the management of network computing resources may include:
  • the computing resources of the network are managed based on the acquired status information of the computing resources.
  • the first node may acquire the status information of computing power resources of the third node and the fourth node of the network.
  • the computing resource status information of the third node may include at least one of the following:
  • An identifier corresponding to the third node (such as a service identification number (ID));
  • Computing resource information of the third node, such as status information of processors including the CPU, GPU and FPGA;
  • Storage resource information of the third node, such as status information of storage including memory and hard disk.
  • the first node may acquire the computing power resource status information of the third node in the network in any of the following three ways:
  • Mode 1: the first node notifies (that is, announces to) all third nodes in the network, and each third node directly reports its computing power resource status information to the first node;
  • Mode 2: the first node notifies all third nodes in the network, each third node reports its computing power resource status information to the second node or the first processing layer, and the second node or the first processing layer reports the received computing power resource status information to the first node;
  • Mode 3: the first node notifies the second node or the first processing layer to report the computing power resource status information of the third node; the second node or the first processing layer obtains the computing power resource status information from the third node and sends the acquired computing power resource status information of the third node to the first node.
  • the computing power resource status information of the third node may be generated based on a specified template, that is, the first node issues to the third node a template for reporting computing power resource status, and the third node reports its computing power resource status information based on the received template.
  • the first node can shield (that is, ignore) the differences of the underlying hardware devices, so as to realize efficient management of the computing resources of the network.
  • the acquiring the status information of the computing resources of the network may include:
  • the first template is used to abstractly describe and represent the state of the computing resources of the third node, so that the first node can realize efficient management of heterogeneous computing resources.
  • heterogeneous computing power resources can be understood as heterogeneous computing resources and/or storage resources, meaning that two network devices with computing capability differ at the hardware level; for example, the models of processing hardware such as the CPU, GPU, bus interface chip (BIC) and digital signal processor (DSP) and/or storage hardware such as RAM and ROM of one network device differ from those of another network device.
  • when the first node sends the first template to the third node, it may send a unified first template to all third nodes, send a unified first template to third nodes of a specific type (for example, at least two third nodes supporting the same type of service), or send a specific first template to a specific third node; in other words, the at least one first template received by each third node may be the same or different.
  • the first node may directly send the first template to the third node, or send the first template to the third node through the second node or the first processing layer.
  • correspondingly, the computing power resource status information reported by the third node based on the at least one first template may be sent directly to the first node, or sent to the first node through the second node or the first processing layer.
  • when the first node sends at least one first template to the third node, it may also indicate the frequency at which the third node reports computing power resource status information; the specific method of determining the frequency can be set according to requirements, which is not limited in this embodiment of the present application.
  • the indication information of the frequency may be included in the first template, or the first node may separately send the indication information of the frequency to the third node.
  • any one of the above three modes can be selected according to requirements to enable the first node to obtain the computing power resource status information of the third node, or other methods can be adopted according to requirements; the embodiment of the present application does not limit the specific method by which the first node obtains the computing power resource status information of the third node, as long as the first node can obtain the computing power resource status information of the third node.
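  • As an illustration of the template mechanism described above, the sketch below shows one possible shape of a "first template" and of a status report derived from it, including an optional reporting-frequency indication carried in the template. All field names and the encoding are assumptions; the embodiment does not specify a concrete format.

```python
# Hypothetical "first template" for reporting computing power resource status
# (field names are illustrative; the embodiment does not fix an encoding).
FIRST_TEMPLATE = {
    "fields": ["node_id", "cpu", "gpu", "fpga", "memory", "disk"],
    "report_interval_s": 30,   # optional frequency indication carried in the template
}


def build_status_report(template, raw_status):
    """Third node side: fill in only the fields requested by the template."""
    return {field: raw_status.get(field) for field in template["fields"]}


# Example: a third node (computing power routing node) reporting its status.
raw = {"node_id": "router-7", "cpu": 0.42, "gpu": None,
       "fpga": None, "memory": 0.63, "disk": 0.18, "uptime_s": 86400}
print(build_status_report(FIRST_TEMPLATE, raw))
# uptime_s is dropped because the template did not request it
```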
  • the computing resource status information of the fourth node may include at least one of the following:
  • An identifier corresponding to the fourth node (such as a service ID);
  • Computing resource information of the fourth node, such as status information of processors including the CPU, GPU and FPGA;
  • Storage resource information of the fourth node, such as status information of storage including memory and hard disk.
  • the first node may acquire the computing resource status information of the fourth node in the network in any of the following three ways:
  • Mode 1: the first node notifies all fourth nodes in the network, and each fourth node directly reports its computing power resource status information to the first node;
  • Mode 2: the first node notifies all fourth nodes in the network, each fourth node reports its computing power resource status information to the second node or the first processing layer, and the second node or the first processing layer reports the received computing power resource status information to the first node;
  • Mode 3: the first node notifies the second node or the first processing layer to report the computing power resource status information of the fourth node; the second node or the first processing layer obtains the computing power resource status information from the fourth node and sends the acquired computing power resource status information of the fourth node to the first node.
  • the computing power resource status information of the fourth node may be generated based on a specified template, that is, the first node issues to the fourth node a template for reporting computing power resource status, and the fourth node reports its computing power resource status information based on the received template.
  • the first node can shield (that is, ignore) the differences of the underlying hardware devices, so as to realize efficient management of the computing resources of the network.
  • the acquiring the status information of the computing resources of the network may include:
  • the second template is used to abstractly describe and represent the state of the computing resources of the fourth node, so that the first node can realize efficient management of heterogeneous computing resources.
  • when the first node sends the second template to the fourth node, it may send a unified second template to all fourth nodes, send a unified second template to fourth nodes of a specific type (for example, at least two fourth nodes supporting the same type of service), or send a specific second template to a specific fourth node; in other words, the at least one second template received by each fourth node may be the same or different.
  • when the first node issues a template for reporting computing power resource status information to the nodes with computing capability in the network, it does not need to distinguish the node types; that is, the templates delivered by the first node to the third node and the fourth node may be the same or different.
  • the first node may directly send the second template to the fourth node, or send the second template to the fourth node through the second node or the first processing layer.
  • correspondingly, the computing power resource status information reported by the fourth node based on the at least one second template may be sent directly to the first node, or sent to the first node through the second node or the first processing layer.
  • when the first node sends at least one second template to the fourth node, it may also indicate the frequency at which the fourth node reports computing power resource status information; the specific method of determining the frequency can be set according to requirements, which is not limited in this embodiment of the present application.
  • the indication information of the frequency may be included in the second template, or the first node may separately send the indication information of the frequency to the fourth node.
  • any one of the above three modes can be selected according to requirements to enable the first node to obtain the computing power resource status information of the fourth node, or other methods can be adopted according to requirements; the embodiment of the present application does not limit the specific method by which the first node obtains the computing power resource status information of the fourth node, as long as the first node can obtain the computing power resource status information of the fourth node.
  • the computing power resource status information changes in real time and is dynamic information; therefore, in order to further improve the efficiency of managing the computing power resources of the network, the first node may generate static, global computing power resource topology information of the network based on the acquired computing power resource status information of the third node and/or the fourth node, and manage the computing power resources of the network using the generated computing power resource topology information.
  • the first node may update the computing power resource topology information to ensure the timeliness of the computing power resource topology information.
  • the method may also include:
  • the management of the computing resources of the network may include:
  • the updated computing resource topology information is used to manage the computing resources of the network.
  • the first node manages the computing power resources of the network, which may specifically include registering, updating and deregistering the third node and/or the fourth node; for example, after the third node and/or the fourth node goes online, it may send its computing power resource status information to the first node, and the first node registers the third node and/or the fourth node based on the received computing power resource status information.
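  • A minimal sketch, under assumed data shapes, of how the first node could fold dynamic status reports into static, global computing power resource topology information and register or deregister nodes; none of these structures are prescribed by the embodiment.

```python
from typing import Dict


class ComputeTopology:
    """Illustrative global view of computing power resources held by the first node."""

    def __init__(self):
        self.nodes: Dict[str, dict] = {}   # node_id -> latest status snapshot

    def register(self, node_id: str, status: dict) -> None:
        """First report after a third/fourth node goes online: register the node."""
        self.nodes[node_id] = status

    def update(self, node_id: str, status: dict) -> None:
        """Subsequent reports keep the topology information up to date."""
        self.nodes.setdefault(node_id, {}).update(status)

    def deregister(self, node_id: str) -> None:
        """Remove a node that has gone offline."""
        self.nodes.pop(node_id, None)


topology = ComputeTopology()
topology.register("dc-server-1", {"cpu": 0.10, "memory": 0.25})
topology.update("dc-server-1", {"cpu": 0.55})
print(topology.nodes)
```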
  • the first node also needs to perceive the state of the network, so as to manage the network resources of the network.
  • the method may also include:
  • the managing network resources of the network may include:
  • the network resources of the network are managed based on the acquired network resource state information.
  • the third node may also have a network awareness function, that is, the first node may obtain network resource status information of the third node, and the network resource status information may include status information such as network bandwidth, delay and delay jitter.
  • the first node can obtain the network resource status information of the third node in any of the following three ways:
  • Mode 1 the first node notifies all third nodes in the network, and each third node directly reports network resource status information to the first node;
  • Mode 2 the first node notifies all third nodes in the network, each third node reports the network resource status information to the second node, and the second node reports the received network resource status information to the first node;
  • Mode 3 The first node notifies the second node to report the network resource status information of the third node, the second node obtains the network resource status information from the third node, and sends the acquired network resource status information of the third node to to the first node.
  • the network resource status information of the third node may be generated based on a specified template, that is, the first node issues to the third node a template for reporting network resource status, and the third node reports its network resource status information based on the received template.
  • when the first node obtains the network resource status information, it can shield (that is, ignore) the hardware differences between network infrastructures, so as to realize efficient management of the network resources of the network.
  • the acquiring the network resource state information of the network may include:
  • the third template is used to abstractly describe and represent the state of the network resources perceived by the third node, so that the first node can realize efficient management of heterogeneous network resources.
  • heterogeneous network resources can be understood as differences between two network infrastructures at the hardware level.
  • when the first node sends the third template to the third node, it may send a unified third template to all third nodes, send a unified third template to third nodes of a specific type (for example, at least two third nodes supporting the same type of service), or send a specific third template to a specific third node; in other words, the at least one third template received by each third node may be the same or different.
  • the first node may directly send the third template to the third node, or send the third template to the third node through the second node; correspondingly, the network resource status information reported by the third node based on the at least one third template may be sent directly to the first node, or sent to the first node through the second node.
  • when the first node sends at least one third template to the third node, it may also indicate the frequency at which the third node reports network resource status information; the specific method of determining the frequency may be set according to requirements, which is not limited in this embodiment of the present application.
  • the indication information of the frequency may be included in the third template, or the first node may separately send the indication information of the frequency to the third node.
  • the frequency at which the third node reports network resource status information and the frequency at which the third node reports computing power resource status information may be the same or different, which is not limited in this embodiment of the present application.
  • any one of the above three methods can be selected to enable the first node to obtain network resource status information according to requirements, or other methods can be used to enable the first node to obtain network resource status information.
  • the embodiment does not limit the specific manner in which the first node obtains the network resource state information, as long as the first node can obtain the network resource state information of the network.
  • the network resource status information changes in real time and is dynamic information; therefore, in order to further improve the efficiency of managing the network resources of the network, the first node may generate static, global network resource topology information according to the acquired network resource status information, and use the generated network resource topology information to manage the network resources of the network.
  • the first node may update the network resource topology information, so as to ensure the timeliness of the network resource topology information.
  • the method may also include:
  • the managing the network resources of the network may include:
  • the network resources of the network are managed by using the updated network resource topology information.
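  • By analogy with the computing power case, the sketch below shows one assumed form of per-link network resource status (bandwidth, delay, delay jitter) and how the first node might fold such reports into static, global network resource topology information; the field names and link identifiers are illustrative only.

```python
# Illustrative per-link network resource status as a third node might report it.
reports = [
    {"node_id": "router-7", "link": "router-7->router-9",
     "bandwidth_mbps": 800, "delay_ms": 4.2, "jitter_ms": 0.3},
    {"node_id": "router-9", "link": "router-9->dc-server-1",
     "bandwidth_mbps": 950, "delay_ms": 1.1, "jitter_ms": 0.1},
]


def build_network_topology(status_reports):
    """First node side: keep the latest status for every reported link."""
    topology = {}
    for report in status_reports:
        topology[report["link"]] = {
            "bandwidth_mbps": report["bandwidth_mbps"],
            "delay_ms": report["delay_ms"],
            "jitter_ms": report["jitter_ms"],
        }
    return topology


print(build_network_topology(reports))
```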
  • when the first node manages the computing power resources and network resources of the network, it also needs to maintain and operate the computing power resources and network resources.
  • the method when managing the computing resources and network resources of the network, the method may further include:
  • the first node may obtain computing power service status information of the network, and manage computing power service of the network based on the obtained computing power service status information.
  • the computing power service may also be called application service, or directly referred to as service; it can be understood as a description of the application on the server side.
  • the computing service may include virtual reality (VR), augmented reality (AR), vehicle wireless communication technology (V2X), Internet of Things (IoT) and/or application programs (APP).
  • the type of computing power service can be set according to requirements, which is not limited in this embodiment of the present application.
  • the first node may specifically obtain the computing power service status information of the fourth node; the computing power service status information may include information related to the computing power service image, information related to the resources corresponding to the computing power service, information about the instantiation of the computing power service, etc.
  • the manner in which the first node obtains the computing power service status information of the fourth node can be set according to requirements; for example, the first node can obtain the computing power service status information of the fourth node through the first processing layer; for another example, the first node may obtain the computing power service status information of the fourth node through the second node.
  • the computing power service status information of the fourth node may be generated based on a specified template, that is, the first node issues to the fourth node a template for reporting computing power service status, and the fourth node reports its computing power service status information based on the received template.
  • when the first node obtains the computing power service status information, it can shield (that is, ignore) the differences of the underlying hardware devices, so as to realize efficient management of the computing power services of the network.
  • the first node may send at least one fourth template to the fourth node, and receive computing power service status information reported by the fourth node based on the at least one fourth template.
  • the fourth template is used to abstractly describe and represent the computing service status of the fourth node.
  • when the first node issues the fourth template to the fourth node, it may issue a unified fourth template to all fourth nodes, issue a unified fourth template to fourth nodes of a specific type (for example, at least two fourth nodes supporting the same type of service), or issue a specific fourth template to a specific fourth node; in other words, the at least one fourth template received by each fourth node may be the same or different.
  • the first node may directly send the fourth template to the fourth node, send the fourth template to the fourth node through the second node, or send the fourth template to the fourth node through the first processing layer; correspondingly, the computing power service status information reported by the fourth node based on the at least one fourth template may be sent directly to the first node, sent to the first node through the second node, or sent to the first node through the first processing layer.
  • when the first node sends at least one fourth template to the fourth node, it may also indicate the frequency at which the fourth node reports computing power service status information; the specific method of determining the frequency can be set according to requirements, which is not limited in this embodiment of the present application.
  • the indication information of the frequency may be included in the fourth template, or the first node may separately send the indication information of the frequency to the fourth node.
  • the frequency at which the fourth node reports computing power service status information and the frequency at which the fourth node reports computing power resource status information may be the same or different, which is not limited in this embodiment of the present application.
  • the computing power service status information changes in real time and is dynamic information; therefore, in order to further improve the efficiency of managing the computing power services of the network, the first node can generate static, global computing power service topology information based on the acquired computing power service status information, and use the generated computing power service topology information to manage the computing power services of the network. Moreover, when acquiring new computing power service status information, the first node may update the computing power service topology information to ensure its timeliness.
  • the first node can update the computing power service topology information of the network based on the obtained computing power service status information; correspondingly, the first node can use the updated computing power service topology information , to manage the computing service of the network.
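  • The sketch below illustrates, with assumed field names, the three kinds of computing power service status information mentioned above (image-related, resource-related and instantiation-related) and how the first node might merge such a report into its service topology information; the embodiment does not define a concrete format.

```python
# Hypothetical computing power service status report from a fourth node.
service_status = {
    "service_id": "vr-render",
    "image": {"name": "vr-render", "version": "1.4"},                # image-related info
    "resources": {"cpu_cores": 8, "gpu": 1, "memory_gb": 16},        # corresponding resources
    "instances": [{"node_id": "dc-server-1", "state": "running"}],   # instantiation info
}


def merge_into_service_topology(topology, status):
    """First node side: fold a status report into the service topology information."""
    entry = topology.setdefault(status["service_id"], {})
    entry["image"] = status["image"]
    entry["resources"] = status["resources"]
    entry["instances"] = status["instances"]
    return topology


service_topology = {}
merge_into_service_topology(service_topology, service_status)
print(service_topology)
```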
  • the management of the computing service of the network may include at least one of the following:
  • the first node may directly implement the management of the computing service, or may implement the management of the computing service through the first processing layer, which is not limited in this embodiment of the present application.
  • the management of computing power services can be realized based on information communication between nodes and information communication between nodes and the processing layer.
  • the management of the computing service of the network may also include other management related to the lifecycle of the computing service, which is not limited in this embodiment of the present application.
  • the management of the computing power service image may include at least one of the following:
  • the managing of computing power service instances may include at least one of the following:
  • the management of instances of computing power services can be understood as orchestrating computing power services.
  • the notification to the second node to establish the connection between nodes and the connection between nodes and the terminal can be understood as the end-to-end realization of the computing power service to the terminal, that is, the establishment of the connection between the fourth node and the terminal.
  • a service level agreement (SLA, Service Level Agreement) of the corresponding level can be provided according to the quality requirements of the computing power service.
  • the computing power service can be dispatched to at least one fourth node based on a preset strategy or a preset artificial intelligence (AI) algorithm (such as a machine learning model trained using historical data in advance).
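  • As one example of the "preset strategy" mentioned above (a trained AI model could equally be used), the sketch below scores candidate fourth nodes by free computing capacity and delay; the scoring weights and field names are assumptions, not part of the embodiment.

```python
def dispatch_service(candidates, delay_to_node, cpu_weight=0.7, delay_weight=0.3):
    """Pick the fourth node with the best combined score (illustrative preset strategy)."""
    def score(node):
        free_cpu = 1.0 - node["cpu"]                    # higher free capacity is better
        delay = delay_to_node.get(node["node_id"], 50)  # lower delay is better
        return cpu_weight * free_cpu - delay_weight * (delay / 100.0)

    return max(candidates, key=score)


fourth_nodes = [
    {"node_id": "dc-server-1", "cpu": 0.55},
    {"node_id": "dc-server-2", "cpu": 0.20},
]
delays = {"dc-server-1": 5.0, "dc-server-2": 18.0}
print(dispatch_service(fourth_nodes, delays)["node_id"])  # -> dc-server-2
```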
  • the updating of instances of computing power services may include adding or deleting instances of computing power services.
  • the termination of the computing service may include terminating the instance of the computing service.
  • the management of the resources corresponding to the computing power service may include at least one of the following:
  • the resources may include computing power resources of the network.
  • the service request can be understood as the user's demand for the first service, that is, the requirements that need to be satisfied when implementing the first service, such as bandwidth requirements, latency requirements, quality of service (QoS), etc.
  • the first service may be supported by at least one computing power service.
  • the first node needs to comprehensively consider service requirements, computing power resource status information, and network resource status information to generate a coordinated scheduling policy to schedule the first service.
  • the first node may schedule the first service in any of the following three ways:
  • Mode 1: the first node performs scheduling on the management plane, that is, the first node directly generates a scheduling policy for the first service and sends the scheduling policy to the second node, and the second node schedules the first service according to the scheduling policy.
  • the scheduling of the first service may include:
  • the scheduling policy is used for the second node to determine the forwarding path of the first service, so as to schedule the first service to a corresponding third node in the network for processing.
  • Mode 2: the first node performs scheduling on the control plane, that is, the first node sends the computing power resource information of the network to the second node, and the second node generates a scheduling policy for the first service and schedules the first service according to the generated scheduling policy.
  • the scheduling of the first service may include:
  • sending the computing power resource information of the network to the second node, where the sent computing power resource information is used by the second node to generate a scheduling policy for the first service based on at least the computing power resource information and network resource information;
  • the scheduling policy is used for the second node to determine a forwarding path of the first service, so as to schedule the first service to a corresponding third node in the network for processing.
  • the computing power resource information of the network may be used to reflect the global static computing power resources of the network, such as the computing power resource topology information.
  • since the second node has a network control function, the second node can determine the network resource status of the network in real time; in other words, after receiving the computing power resource information, the second node may directly generate the scheduling policy based on the computing power resource information and its own network resource information.
  • Mode 3: the first node performs scheduling on the data plane, that is, the first node sends the computing power resource information and network resource information of the network to the third node, and the third node generates a scheduling policy for the first service and schedules the first service according to the generated scheduling policy.
  • the scheduling of the first service may include:
  • the scheduling policy is used for the third node to determine a forwarding path of the first service, so as to schedule the first service to a corresponding third node in the network for processing.
  • the computing power resource information of the network can be used to reflect the global static computing power resources of the network, such as the topology information of the computing power resources; the network resource information of the network can be used to reflect the network Global static network resources, such as topology information of the network resources.
  • any one of the above three modes can be selected to schedule the first service according to requirements, or other methods can be used to schedule the first service according to requirements; the embodiment of the present application does not limit the specific manner of scheduling the first service, as long as the first service can be scheduled to the corresponding third node for processing.
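  • A minimal sketch of Mode 1 (management-plane scheduling): the first node combines the service request with its computing power and network views to produce a scheduling policy, and the second node turns that policy into a forwarding path. Everything below is illustrative; the embodiment does not define the policy or path format.

```python
def generate_scheduling_policy(service_request, compute_topology, network_topology):
    """First node (management plane): pick a target node that satisfies the request."""
    for node_id, status in compute_topology.items():
        link = network_topology.get(node_id, {})
        if ((1.0 - status.get("cpu", 1.0)) >= service_request["min_free_cpu"]
                and link.get("delay_ms", float("inf")) <= service_request["max_delay_ms"]):
            return {"service": service_request["name"], "target_node": node_id}
    return None


def determine_forwarding_path(policy):
    """Second node (network control function): derive a forwarding path from the policy."""
    if policy is None:
        return None
    return ["ingress-router", policy["target_node"]]


request = {"name": "first-service", "min_free_cpu": 0.3, "max_delay_ms": 10}
compute = {"router-7": {"cpu": 0.4}, "router-9": {"cpu": 0.9}}
network = {"router-7": {"delay_ms": 4.2}, "router-9": {"delay_ms": 2.0}}
policy = generate_scheduling_policy(request, compute, network)
print(policy, determine_forwarding_path(policy))
```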
  • the management and scheduling method provided by the embodiment of the present application manages the computing power resources and network resources of the network; manages the computing power services of the network; and receives a service request of a first service and schedules the first service.
  • the first node is introduced into the network architecture, the computing power resources, network resources and computing power services of the network are managed through the first node, and services are scheduled through the first node; in this way, unified management of the network's computing power resources, network resources and computing power services and flexible scheduling of services can be realized, so that the network architecture can meet the needs of computing and network convergence and evolution and enable reasonable distribution of services, thereby improving user experience.
  • the CAN architecture can be logically divided into five functional modules: the computing power service layer, the computing network management and orchestration layer, the computing power resource layer, the computing power routing layer and the network resource layer; it can support computing-network collaborative orchestration and management and realize unified operation and maintenance management of computing power resources and network resources.
  • the computing power service layer is configured to carry various services and applications of ubiquitous computing and supports a distributed micro-service architecture, that is, it supports decomposing applications into atomic functional components, forming algorithm libraries that are uniformly scheduled by the API Gateway, and realizing functions such as service decomposition and service scheduling.
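  • The sketch below illustrates, in a purely schematic way, the decomposition of an application into atomic functional components that a gateway dispatches uniformly; the route table, routes and function names are hypothetical and are not defined by the embodiment.

```python
# Hypothetical atomic functional components of one application (FaaS style).
def transcode(frame):
    return f"transcoded({frame})"


def detect_objects(frame):
    return f"objects({frame})"


# Algorithm library / route table scheduled uniformly by an API gateway.
ALGORITHM_LIBRARY = {
    "/video/transcode": transcode,
    "/video/detect": detect_objects,
}


def gateway_dispatch(route, payload):
    """Uniform scheduling entry point: look up the component and invoke it on demand."""
    component = ALGORITHM_LIBRARY[route]
    return component(payload)


print(gateway_dispatch("/video/detect", "frame-42"))
```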
  • the computing power routing layer includes a control plane and a forwarding plane; the computing power routing layer is configured to discover abstracted computing network resources (that is, the computing power resource status information generated based on the first template or the second template and the network resource status information generated based on the third template), comprehensively consider the network status and computing power status, and flexibly schedule services (such as the above-mentioned first service) to different computing resource nodes (that is, the above-mentioned third node and/or fourth node).
  • the computing network management and orchestration layer is configured to support the registration, update, deregistration and other management of computing power nodes (that is, the above-mentioned fourth node), network nodes (that is, the above-mentioned third node) and service information (that is, the above-mentioned computing power service) of the entire network; that is, to support the registration of the computing power resource layer, the network resource layer and the computing power service layer with the computing power scheduling and orchestration center (also called the computing network orchestration management center or computing network orchestration and scheduling center), so as to generate computing power, service and network topology information.
  • the computing power resource layer is configured to use computing infrastructure to provide computing power resources; in order to meet the diverse computing needs of the edge computing field, it provides, for different applications, functions such as computing power models, computing power APIs and computing network resource identification on the basis of physical computing resources.
  • the computing infrastructure can include a combination of various computing capabilities ranging from single-core CPU, multi-core CPU to CPU+GPU+FPGA.
  • the network resource layer is configured to use network infrastructure to provide ubiquitous network connections for every corner of the network; wherein, the network infrastructure may include an access network, a metropolitan area network, and a backbone network.
  • the computing power resource layer and the network resource layer are the infrastructure layers of the CAN architecture; the computing network management and orchestration layer and the computing power routing layer are the two core functional modules of the CAN architecture; users and applications access CAN through the computing power routing layer, and realize perception, control and scheduling of computing power resources and network resources through the computing network management and orchestration layer.
  • the computing network management and orchestration layer may include subfunctional modules such as the computing network orchestration management center, computing power resource management center, and network management center.
  • the computing network orchestration management center is configured to complete unified management of computing power resources and network resources, including perception, measurement and OAM management of computing power resources and network resources.
  • the computing network orchestration management center can perceive computing power resources and network resources, construct global computing power topology information (that is, the above computing power resource topology information), global network topology information (that is, the above network resource topology information) and global service topology information (that is, the above computing power service topology information), and, based on the constructed global topology information, realize the unified operation of computing power resources and network resources.
  • the computing network orchestration management center can also be configured to generate, based on the received computing power information (that is, the above-mentioned computing power resource status information), network information (that is, the above-mentioned network resource status information) and service information (that is, the above-mentioned computing power service status information), a coordinated scheduling policy for computing power resources and network resources according to the service requirements (that is, the above-mentioned service request).
  • the computing power resource management center is first configured to abstractly describe and represent heterogeneous computing resources through computing power modeling to form node computing power information (that is, information based on the above-mentioned first template or second template), so as to shield the differences of the underlying hardware devices; the computing power information can be transmitted to the corresponding network node (such as the CAN routing node shown in Figure 5, whose function is equivalent to that of the third node above) through computing power notification. Secondly, it can also be configured to perform OAM operations on computing power resources and network resources, and realize computing power operation and network operation.
  • the computing power resource management center needs to receive the configuration and management of the computing network orchestration management center, and report the computing power status (that is, the above computing power resource status information) to the computing network orchestration management center.
  • the network management center is configured to implement management and operation and maintenance of current network resources.
  • the network management center needs to receive the configuration and management of the computing network orchestration management center, and report the network resource status (that is, the above-mentioned network resource status information) to the computing network orchestration management center.
  • the computing network orchestration management center can also be configured to support the management of registration, update and cancellation of computing power nodes, network nodes and service information of the entire network. For example, after a computing power node goes online, it can notify its computing power enabling information to the computing network orchestration management center (that is, send the above computing power resource status information to the computing network orchestration management center for the first time), and the computing power enabling information can include resource information such as the computing power node identifier or computing power resource identifier, device type, chip type, storage and computing.
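  • As an illustration of the registration step described above, the sketch below shows a possible "computing power enabling information" message sent when a computing power node goes online and how the orchestration management center might record it; the message fields follow the examples listed in this paragraph, but the encoding and class names are assumptions.

```python
# Hypothetical computing power enabling information announced when a node goes online.
enabling_info = {
    "node_id": "dc-server-3",          # computing power node / resource identifier
    "device_type": "server",
    "chip_type": "CPU+GPU",
    "storage_gb": 2048,
    "compute": {"cpu_cores": 64, "gpu": 4},
}


class OrchestrationManagementCenter:
    """Illustrative registry kept by the computing network orchestration management center."""

    def __init__(self):
        self.registry = {}

    def register_node(self, info):
        """Record a newly online computing power node (registration)."""
        self.registry[info["node_id"]] = info

    def deregister_node(self, node_id):
        """Remove a node that has been cancelled or gone offline."""
        self.registry.pop(node_id, None)


center = OrchestrationManagementCenter()
center.register_node(enabling_info)
print(sorted(center.registry))
```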
  • the computing network orchestration management center can also be configured to realize the configuration and management of the method of perception of services, network resources and computing power resources, mainly including but not limited to:
  • the computing network orchestration management center can adaptively specify the parameters that need to be collected, sensed or measured, the frequency of feedback (that is, the frequency of reporting information), etc.
  • the computing network management and orchestration layer can update the computing power topology information, service topology information and network topology information, and then, based on the updated computing power topology information, service topology information and network topology information, the collaborative orchestration of network resources and computing power can be performed and the automatic adaptation of services can be realized.
  • Manage the computing power service image, including adding, version updating, deleting, and so on.
  • Support the computing power service orchestration function: that is, for the sake of computing power service experience, intelligently orchestrate and dispatch computing power services to suitable computing power nodes (a simple sketch follows this list), which may include but is not limited to:
  • The computing power service management function: that is, the computing power service layer.
  • The computing power service layer realizes, according to the computing power service quality requirements, the instantiation of the computing power service on one or more computing power nodes, as well as service update, flexible scaling, service termination, and so on.
  • Notify the computing power routing nodes, such as software-defined network (SDN) controllers.
  • SDN: software-defined network.
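  • The sketch below illustrates one possible way the computing power service layer could pick a suitable computing power node and instantiate a service image on it; the requirement fields, scoring rule and function names are assumptions made for illustration only.

```python
# Illustrative sketch of dispatching a computing power service to a
# suitable node; the requirement fields and selection rule are assumptions.
from typing import List, Optional


def pick_node(nodes: List[dict], required_cores: int, required_mem_gb: float) -> Optional[dict]:
    """Return the candidate node with the most spare capacity that still
    satisfies the computing power service quality requirements."""
    feasible = [
        n for n in nodes
        if n["cpu_cores_available"] >= required_cores
        and n["memory_gb_available"] >= required_mem_gb
    ]
    if not feasible:
        return None
    return max(feasible, key=lambda n: (n["cpu_cores_available"], n["memory_gb_available"]))


def instantiate_service(image: str, node: dict) -> dict:
    """Stand-in for instantiating the service image on the chosen node."""
    return {"image": image, "node_id": node["node_id"], "state": "running"}


nodes = [
    {"node_id": "node-1", "cpu_cores_available": 4, "memory_gb_available": 8.0},
    {"node_id": "node-2", "cpu_cores_available": 16, "memory_gb_available": 32.0},
]
target = pick_node(nodes, required_cores=8, required_mem_gb=16.0)
if target is not None:
    print(instantiate_service("face-recognition:v1", target))
```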
  • The computing power service mainly describes the application from the server side. The computing network management and orchestration layer can directly manage and orchestrate the computing power service, or, through the perception of computing power node status, network status and computing power service status, generate a computing power service scheduling strategy and send it to the computing power service layer, which then manages and orchestrates computing power services according to the received strategy; in other words, the decomposition, execution and scheduling of computing power services can be performed either at the computing power service layer or at the computing network management and orchestration layer.
  • The computing network management and orchestration layer can choose to execute specific scheduling on the management plane (that is, the computing network orchestration management center, corresponding to the above-mentioned first node), the control plane (that is, the control plane of the computing power routing layer, corresponding to the above-mentioned second node) or the data plane (that is, the computing power routing node, corresponding to the above-mentioned third node).
  • The computing network collaborative scheduling is performed on the management plane; that is, the "network management module" of the computing network orchestration management center notifies the "computing power + network orchestrator" (also called the computing network orchestrator) of the network information (that is, the above-mentioned network resource status information), and the computing network orchestration management center performs unified computing network collaborative scheduling, generates a scheduling strategy, and sends the scheduling strategy to the network controller (also called the computing network controller).
  • The network controller further generates a path forwarding table according to the scheduling strategy. In this way, by enhancing the interface configuration between the "computing power orchestrator" and the network controller (that is, the network controller sends network information to the computing power orchestrator), the computing power orchestrator perceives network resource information and the efficiency of executing collaborative scheduling strategies is improved.
  • The network controller is configured to collect network information and report it to the computing network orchestrator, and to receive the network orchestration policy (that is, the scheduling policy) from the computing network orchestrator.
  • The computing network orchestrator is configured to collect computing power information (that is, the above-mentioned computing power resource status information), receive the network information from the network controller, perform joint orchestration of computing power resources and network resources, and generate an orchestration strategy; it is further configured to send the orchestration strategy to the network controller. It can be understood that the computing network orchestrator is responsible for service scheduling.
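  • A hypothetical sketch of the management-plane flow described above: the computing network orchestrator fuses computing power information with network information reported by the network controller, generates a scheduling policy, and the network controller turns that policy into a path forwarding table. All class and method names are assumptions made for this example.

```python
# Sketch of management-plane computing network collaborative scheduling.
# The orchestrator fuses computing power and network information into a
# scheduling policy; the controller turns it into forwarding entries.
class NetworkController:
    def collect_network_info(self) -> dict:
        # In a real deployment this would come from the network itself.
        return {"links": {("R1", "node-2"): {"delay_ms": 3}}}

    def install_policy(self, policy: dict) -> list:
        """Generate a path forwarding table from the scheduling policy."""
        return [{"service": policy["service"],
                 "next_hop": policy["target_node"],
                 "path": policy.get("path", [])}]


class ComputingNetworkOrchestrator:
    def __init__(self, controller: NetworkController) -> None:
        self.controller = controller
        self.computing_info = {"node-2": {"cpu_cores_available": 16}}

    def schedule(self, service: str) -> dict:
        network_info = self.controller.collect_network_info()
        # Joint orchestration of computing power and network resources
        # (reduced here to picking the only known node).
        target = next(iter(self.computing_info))
        return {"service": service, "target_node": target,
                "path": ["R1", target], "network_info_used": bool(network_info)}


controller = NetworkController()
orchestrator = ComputingNetworkOrchestrator(controller)
policy = orchestrator.schedule("video-analytics")
print(controller.install_policy(policy))
```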
  • The computing network collaborative scheduling is performed on the control plane; that is, the "computing power orchestrator" of the computing network orchestration management center notifies the computing network controller of the computing power information through the "network management module", and the computing network controller performs coordinated scheduling of unified computing power resources and network resources, generates scheduling policies, and further generates path forwarding tables according to the scheduling policies. In this way, the interface configuration between the "computing power orchestrator" and the network controller is enhanced, and the network controller is enhanced so that it can perceive computing power resource information, improving the efficiency of executing collaborative scheduling strategies.
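  • For comparison, a minimal sketch of this control-plane variant, in which the computing network controller is given computing power information and combines it with its own network view to produce a forwarding entry; the tie-breaking rule and data shapes are illustrative assumptions.

```python
# Sketch of control-plane collaborative scheduling at the computing
# network controller; interfaces and scoring are assumptions.
def control_plane_schedule(computing_info: dict, link_delay_ms: dict, service: str) -> dict:
    # Prefer nodes with more spare cores, break ties by lower network delay.
    best = min(
        (n for n, c in computing_info.items() if c["cpu_cores_available"] > 0),
        key=lambda n: (-computing_info[n]["cpu_cores_available"],
                       link_delay_ms.get(n, float("inf"))),
    )
    return {"service": service, "next_hop": best}


forwarding_entry = control_plane_schedule(
    {"node-1": {"cpu_cores_available": 4}, "node-2": {"cpu_cores_available": 16}},
    {"node-1": 2.0, "node-2": 5.0},
    "image-render",
)
print(forwarding_entry)
```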
  • The computing network collaborative scheduling is performed on the data plane; that is, the "computing network unified orchestrator" of the computing network orchestration management center performs collaborative scheduling of the network and computing power: the static computing power topology information and network topology information constructed by the computing network management and orchestration layer are delivered to the data plane, and the data plane realizes the generation and execution of collaborative scheduling strategies. In other words, the computing network management and orchestration layer delivers the computing power topology information and network topology information to the data plane, and the data plane implements distributed service scheduling.
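  • A simple sketch of data-plane (distributed) scheduling at a computing power routing node, assuming the computing power topology and network topology have already been delivered by the computing network management and orchestration layer; the cost weighting is an illustrative assumption rather than a prescribed formula.

```python
# Sketch of data-plane scheduling at a computing power (CAN) routing node:
# the node combines delivered computing power topology and network topology
# to pick a service instance per request. The weighting is an assumption.
computing_topology = {"node-1": {"load": 0.8}, "node-2": {"load": 0.3}}
network_topology = {"node-1": {"hops": 1}, "node-2": {"hops": 3}}


def route_request(service_nodes: list, alpha: float = 0.5) -> str:
    """Weighted combination of compute load and network distance."""
    def cost(node: str) -> float:
        return (alpha * computing_topology[node]["load"]
                + (1 - alpha) * network_topology[node]["hops"] / 10)
    return min(service_nodes, key=cost)


print(route_request(["node-1", "node-2"]))
```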
  • each module of the Computing Network Orchestration Management Center is divided according to logical functions.
  • each function of the Computing Network Orchestration Management Center can be divided into different modules according to requirements.
  • In this way, unified control and management of computing power resources and network resources can be realized, achieving a globally optimal configuration of computing power resources and network resources.
  • The management plane: that is, the computing network orchestration management center.
  • The control plane: that is, the computing network controller.
  • The data plane: that is, the computing power routing node.
  • the embodiment of the present application also provides a management and scheduling device, as shown in Figure 9, the device includes:
  • the first management unit 901 is configured to manage computing resources and network resources of the network
  • the second management unit 902 is configured to manage computing power services of the network
  • the scheduling unit 903 is configured to receive the service request of the first service, and schedule the first service.
  • The scheduling unit 903 is further configured to generate a scheduling policy for the first service and send the scheduling policy to a second node of the network.
  • The scheduling policy is used for the second node to determine the forwarding path of the first service, so as to schedule the first service to a corresponding third node in the network for processing; the second node has at least a network control function, and the third node has at least a computing power sensing function and a forwarding function.
  • The scheduling unit 903 is further configured to send computing power resource information of the network to a second node of the network, the sent computing power resource information being used for the second node to generate a scheduling policy for the first service based on at least the computing power resource information and network resource information; the scheduling policy is used for the second node to determine the forwarding path of the first service, so as to schedule the first service to a corresponding third node in the network for processing; the second node has at least a network control function and a computing power information sensing function, and the third node has at least a computing power sensing function and a forwarding function.
  • The scheduling unit 903 is further configured to send computing power resource information and network resource information of the network to at least one third node of the network, the sent computing power resource information and network resource information being used by the third node to generate a scheduling strategy for the first service based on at least the computing power resource information and the network resource information; the scheduling strategy is used for the third node to determine the forwarding path of the first service, so as to dispatch the first service to a corresponding third node in the network for processing; the third node has at least a computing power sensing function and a forwarding function.
  • the device further includes an acquiring unit configured to acquire status information of computing power resources of the network;
  • the third node has at least a computing power sensing function and a forwarding function;
  • the first management unit 901 is further configured to manage the computing resources of the network based on the acquired status information of the computing resources.
  • the device further includes an updating unit configured to update the computing power resource topology information of the network based on the obtained computing power resource state information.
  • the obtaining unit is further configured to obtain network resource state information of the network
  • the first management unit 901 is further configured to manage the network resources of the network based on the acquired network resource state information.
  • the updating unit is further configured to update the network resource topology information of the network based on the acquired network resource status information.
  • When the first management unit 901 manages the computing power resources and network resources of the network, it is further configured to:
  • the second management unit 902 is further configured to perform one of the following operations:
  • the second management unit 902 is further configured to perform one of the following operations:
  • the second management unit 902 is further configured to perform one of the following operations:
  • the fourth node has at least a computing power function.
  • the second management unit 902 is further configured to perform one of the following operations:
  • the first management unit 901, the second management unit 902, the scheduling unit 903, the acquiring unit and the updating unit may be implemented by a processor in the management and scheduling device.
  • When the management and scheduling device provided by the above-mentioned embodiment schedules services, the division into the above-mentioned program modules is used only for illustration. In actual applications, the above-mentioned processing can be assigned to different program modules as needed; that is, the internal structure of the device can be divided into different program modules to complete all or part of the processing described above.
  • the management and scheduling device and the management and scheduling method embodiments provided in the above embodiments belong to the same concept, and the specific implementation process thereof is detailed in the method embodiments, and will not be repeated here.
  • the embodiment of the present application also provides a node, as shown in FIG. 10 , the node 1000 includes:
  • A communication interface 1001, capable of information exchange with other nodes;
  • the processor 1002 is connected to the communication interface 1001 to implement information interaction with other nodes, and is configured to execute the methods provided by one or more of the above technical solutions when running a computer program;
  • the memory 1003 stores computer programs that can run on the processor 1002 .
  • the processor 1002 is configured to:
  • the service request of the first service is received, and the first service is scheduled.
  • The processor 1002 is further configured to generate a scheduling policy for the first service and send the scheduling policy to a second node of the network.
  • The scheduling policy is used for the second node to determine the forwarding path of the first service, so as to schedule the first service to a corresponding third node in the network for processing; the second node has at least a network control function, and the third node has at least a computing power sensing function and a forwarding function.
  • The processor 1002 is further configured to send computing power resource information of the network to a second node of the network, the sent computing power resource information being used for the second node to generate a scheduling policy for the first service based on at least the computing power resource information and network resource information; the scheduling policy is used for the second node to determine the forwarding path of the first service, so as to schedule the first service to a corresponding third node in the network for processing; the second node has at least a network control function and a computing power information sensing function, and the third node has at least a computing power sensing function and a forwarding function.
  • The processor 1002 is further configured to send computing power resource information and network resource information of the network to at least one third node of the network, the sent computing power resource information and network resource information being used by the third node to generate a scheduling strategy for the first service based on at least the computing power resource information and the network resource information; the scheduling strategy is used for the third node to determine the forwarding path of the first service, so as to dispatch the first service to a corresponding third node in the network for processing; the third node has at least a computing power sensing function and a forwarding function.
  • the processor 1002 is further configured to:
  • the third node has at least a computing power sensing function and a forwarding function
  • the computing resources of the network are managed based on the acquired status information of the computing resources.
  • the processor 1002 is further configured to update the computing power resource topology information of the network based on the obtained computing power resource state information.
  • the processor 1002 is further configured to:
  • the network resources of the network are managed based on the acquired network resource state information.
  • the processor 1002 is further configured to update the network resource topology information of the network based on the acquired network resource state information.
  • When the processor 1002 manages the computing power resources and network resources of the network, it is further configured to:
  • processor 1002 is further configured to perform one of the following operations:
  • processor 1002 is further configured to perform one of the following operations:
  • processor 1002 is further configured to perform one of the following operations:
  • the fourth node has at least a computing power function.
  • processor 1002 is further configured to perform one of the following operations:
  • bus system 1004 is used to realize connection and communication between these components.
  • the bus system 1004 also includes a power bus, a control bus and a status signal bus.
  • the various buses are labeled as bus system 1004 in FIG. 10 for clarity of illustration.
  • the memory 1003 in the embodiment of the present application is used to store various types of data to support the operation of the node 1000 .
  • Examples of such data include: any computer program for operating on node 1000 .
  • the methods disclosed in the foregoing embodiments of the present application may be applied to the processor 1002 or implemented by the processor 1002 .
  • the processor 1002 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above-mentioned method may be completed by an integrated logic circuit of hardware in the processor 1002 or instructions in the form of software.
  • the aforementioned processor 1002 may be a general-purpose processor, DSP, or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like.
  • the processor 1002 may implement or execute various methods, steps, and logic block diagrams disclosed in the embodiments of the present application.
  • a general purpose processor may be a microprocessor or any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium, and the storage medium is located in the memory 1003, and the processor 1002 reads the information in the memory 1003, and completes the steps of the aforementioned method in combination with its hardware.
  • The node 1000 may be implemented by one or more Application Specific Integrated Circuits (ASIC), DSPs, Programmable Logic Devices (PLD), Complex Programmable Logic Devices (CPLD), FPGAs, general-purpose processors, controllers, microcontrollers (MCU), microprocessors, or other electronic components, for performing the aforementioned method.
  • ASIC: Application Specific Integrated Circuit
  • DSP: Digital Signal Processor
  • PLD: Programmable Logic Device
  • CPLD: Complex Programmable Logic Device
  • FPGA: Field Programmable Gate Array
  • MCU: Micro Controller Unit
  • the memory 1003 in this embodiment of the present application may be a volatile memory or a nonvolatile memory, and may also include both volatile and nonvolatile memories.
  • The non-volatile memory can be a ROM, a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory can be a disk memory or a tape memory.
  • Volatile memory can be RAM, which acts as external cache memory.
  • SRAM: Static Random Access Memory
  • SSRAM: Synchronous Static Random Access Memory
  • DRAM: Dynamic Random Access Memory
  • SDRAM: Synchronous Dynamic Random Access Memory
  • DDR SDRAM: Double Data Rate Synchronous Dynamic Random Access Memory
  • ESDRAM: Enhanced Synchronous Dynamic Random Access Memory
  • SLDRAM: SyncLink Dynamic Random Access Memory
  • DRRAM: Direct Rambus Random Access Memory
  • the memories described in the embodiments of the present application are intended to include but not limited to these and any other suitable types of memories.
  • The embodiment of the present application also provides a management and scheduling system; as shown in Figure 11, the system includes: a first node 1101, a second node 1102, a third node 1103 and a fourth node 1104.
  • The embodiment of the present application also provides a storage medium, that is, a computer storage medium, specifically a computer-readable storage medium, for example including the memory 1003 storing a computer program, where the above-mentioned computer program can be executed by the processor 1002 of the node 1000 to complete the steps described in the foregoing method.
  • the computer-readable storage medium can be memories such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disc, or CD-ROM.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present application discloses a management and scheduling method and apparatus, a node, and a storage medium. The method comprises: managing, by a first node, a computing power resource and a network resource of a network; managing a computing power service of the network; and receiving a service request of a first service and scheduling the first service.
PCT/CN2022/105717 2021-07-14 2022-07-14 Procédé et appareil de gestion et de planification, nœud et support de stockage WO2023284830A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110797438.7A CN115622904A (zh) 2021-07-14 2021-07-14 管理和调度方法、装置、节点及存储介质
CN202110797438.7 2021-07-14

Publications (1)

Publication Number Publication Date
WO2023284830A1 true WO2023284830A1 (fr) 2023-01-19

Family

ID=84856202

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/105717 WO2023284830A1 (fr) 2021-07-14 2022-07-14 Procédé et appareil de gestion et de planification, nœud et support de stockage

Country Status (2)

Country Link
CN (1) CN115622904A (fr)
WO (1) WO2023284830A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115412609B (zh) * 2022-08-16 2023-07-28 中国联合网络通信集团有限公司 一种业务处理方法、装置、服务器及存储介质
CN116501501A (zh) * 2023-06-21 2023-07-28 亚信科技(中国)有限公司 算力资源管理和编排方法、装置、电子设备及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170078216A1 (en) * 2013-10-17 2017-03-16 Ciena Corporation Method and apparatus for provisioning virtual network functions from a network service provider
CN112003660A (zh) * 2020-07-17 2020-11-27 北京大学深圳研究生院 一种网内资源的量纲测量方法、算力调度方法及存储介质
CN111953526A (zh) * 2020-07-24 2020-11-17 新华三大数据技术有限公司 一种分层算力网络编排方法、装置及存储介质
CN114095577A (zh) * 2020-07-31 2022-02-25 中国移动通信有限公司研究院 资源请求方法、装置、算力网元节点及算力应用设备
CN113079218A (zh) * 2021-04-09 2021-07-06 网络通信与安全紫金山实验室 一种面向服务的算力网络系统、工作方法及存储介质

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115955383A (zh) * 2023-03-14 2023-04-11 中国电子科技集团公司第五十四研究所 一种宽带低时延高精度的混合算力的信号协同处理系统
CN115955383B (zh) * 2023-03-14 2023-05-16 中国电子科技集团公司第五十四研究所 一种宽带低时延高精度的混合算力的信号协同处理系统
CN116436800A (zh) * 2023-06-13 2023-07-14 新华三技术有限公司 一种资源调度方法及装置
CN116436800B (zh) * 2023-06-13 2023-09-19 新华三技术有限公司 一种资源调度方法及装置
CN116684418A (zh) * 2023-08-03 2023-09-01 北京神州泰岳软件股份有限公司 基于算力服务网关的算力编排调度方法、算力网络及装置
CN116684418B (zh) * 2023-08-03 2023-11-10 北京神州泰岳软件股份有限公司 基于算力服务网关的算力编排调度方法、算力网络及装置
CN117933529A (zh) * 2023-12-20 2024-04-26 中国信息通信研究院 一种多资源感知的算网大脑能力评价方法

Also Published As

Publication number Publication date
CN115622904A (zh) 2023-01-17

Similar Documents

Publication Publication Date Title
WO2023284830A1 (fr) Procédé et appareil de gestion et de planification, nœud et support de stockage
WO2021190482A1 (fr) Système de réseau de traitement de puissance de calcul et procédé de traitement de puissance de calcul
US11051183B2 (en) Service provision steps using slices and associated definitions
Cheng et al. FogFlow: Easy programming of IoT services over cloud and edges for smart cities
Bauer et al. IoT reference architecture
CN109743893A (zh) 用于网络切片的方法和设备
WO2022184094A1 (fr) Système de réseau pour traiter une puissance de hachage, ainsi que procédé de traitement de service et nœud d'élément de réseau de puissance de hachage
Antonini et al. Fog computing architectures: A reference for practitioners
JP2015512091A (ja) クラウドコンピューティング環境におけるプロセスの調整
WO2018090191A1 (fr) Procédé de gestion, unité et système de gestion destinés à une fonction de réseau
Lei et al. Computing power network: an interworking architecture of computing and network based on IP extension
US20230042205A1 (en) Customer activation on edge computing environment
Iorio et al. Computing without borders: The way towards liquid computing
Bolettieri et al. Towards end-to-end application slicing in multi-access edge computing systems: Architecture discussion and proof-of-concept
Alaya et al. Towards semantic data interoperability in oneM2M standard
Camelo et al. Daemon: A network intelligence plane for 6g networks
CN116684418B (zh) 基于算力服务网关的算力编排调度方法、算力网络及装置
Shah Multi-agent cognitive architecture-enabled IoT applications of mobile edge computing
WO2023186002A1 (fr) Procédé, appareil et dispositif de planification de ressources
CN109951370A (zh) 多大数据中心分层互联互通方法及装置
Nguyen et al. Software-defined virtual sensors for provisioning iot services on demand
Rabah et al. A service oriented broker-based approach for dynamic resource discovery in virtual networks
Latif et al. Characterizing the architectures and brokering protocols for enabling clouds interconnection
Rocha et al. CNS-AOM: design, implementation and integration of an architecture for orchestration and management of cloud-network slices
WO2023003686A1 (fr) Registre d'application d'informatique en périphérie multi-accès (mec) dans une fédération mec

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22841463

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE