CN115002681A - Computing power sensing network and using method and storage medium thereof - Google Patents

Computing power sensing network and using method and storage medium thereof

Info

Publication number
CN115002681A
Authority
CN
China
Prior art keywords
network
computing
calculation
layer
resource
Prior art date
Legal status
Pending
Application number
CN202110231080.1A
Other languages
Chinese (zh)
Inventor
魏华
姚惠娟
付月霞
刘鹏
杜宗鹏
郑韶雯
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Communications Ltd Research Institute
Priority to CN202110231080.1A
Publication of CN115002681A

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/06 - Selective distribution of broadcast services, e.g. multimedia broadcast multicast service [MBMS]; Services to user groups; One-way selective calling services
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 - Techniques for rebalancing the load in a distributed system
    • G06F 9/5088 - Techniques for rebalancing the load in a distributed system involving task migration
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 76/00 - Connection management
    • H04W 76/10 - Connection setup
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 - Reducing energy consumption in communication networks
    • Y02D 30/70 - Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention discloses a computing power aware network, a method of using it, and a storage medium. The computing power aware network comprises: a computing power service layer for carrying various services and applications; a computing network management layer for receiving reported computing resource information and network resource information and managing the computing resources and network resources; a computing power resource layer for providing computing resources using a computing infrastructure; and a computing power routing layer for scheduling services required by the terminal to different computing power nodes as needed. By adopting the invention, a consistent experience can be provided for users; moreover, the scheme no longer relies on a unified charging server, and can provide fair competition opportunities for different edge server operators.

Description

Computing power sensing network and using method and storage medium thereof
Technical Field
The invention relates to the technical field of wireless communication, and in particular to a computing power aware network, a method of using it, and a storage medium.
Background
Under the development trend of cloud computing and edge computing, computing power of many different scales will be distributed at varying distances from users in the future society, providing various personalized services through the global network. From billions of intelligent terminals, to billions of home gateways around the world, to the thousands of edge clouds with computing power that future MEC (Mobile Edge Computing) technology will bring to each city, and the tens of large-scale cloud DCs (Data Centers) in each country, a huge amount of ubiquitous computing power is taking shape, forming the development trend of deep convergence of computing and network. Computing resources are integrated into every corner of the network, so that every network node can become a resource provider; a user request can then be satisfied by invoking the nearest node resources rather than being limited to a specific node, thereby avoiding waste of connection and network scheduling resources. However, the conventional network only provides a pipe for data communication: it is connection-based, subject to a fixed network addressing mechanism, and often cannot meet ever higher and more stringent QoE (Quality of Experience) requirements. Therefore, a new-generation network architecture oriented to the future network needs to jointly consider the requirements of network-computing convergence evolution, realize global optimization of the network under a ubiquitous connection and computing architecture, flexibly schedule computing resources and reasonably distribute services.
IaaS (Infrastructure as a Service) refers to a service model in which IT (Information Technology) infrastructure is provided as a service over a network and charged according to the user's actual usage or occupancy of resources. In this service model, ordinary users do not build hardware facilities such as data centers themselves, but instead obtain computer infrastructure services, including servers, storage and networking, from an IaaS provider over the Internet on a rental basis. In existing IaaS services, a resource operation management center (cloud computing center) or a charging server generally performs centralized management, allocation and charging of the resources that belong to it directly.
The defect of the prior art is that, under a network architecture of network-computing convergence, effective allocation of computing resources cannot be achieved.
Disclosure of Invention
The invention provides a computing power aware network, a method of using it, and a storage medium, which are used to solve the problem that computing power resources cannot be effectively allocated under a network architecture of network-computing convergence.
The invention provides the following technical scheme:
a computational-force-aware network, comprising: computing power service layer, computing network management layer, computing power resource layer, computing power routing layer and network resource layer, wherein:
the computing power service layer is used for bearing various services and applications;
the computation network management layer is used for receiving the reported computation resource information and network resource information and managing the computation resources and the network resources;
a computing resource layer for providing computing resources using a computing infrastructure;
and the computing routing layer is used for scheduling the service required by the terminal to different computing nodes according to the requirements based on the computing resources and the network resources.
In implementation, the computing power service layer is based on a distributed microservice architecture.
In implementation, the computing power service layer is further used for supporting unified scheduling by the API gateway after the application is deconstructed into atomized functional components.
In implementation, the computing power service layer is further used for realizing service decomposition and service scheduling through the API gateway after receiving data from the end user.
In implementation, the computing network management layer is further configured to charge the terminal according to its use of computing power.
In implementation, the computational network management layer is further used for performing abstract description and representation on computational resources through computational modeling, forming node computational power information and shielding the differences of underlying hardware devices.
In an implementation, the computational network management layer is further configured to communicate the node computational power information to the network node via computational power advertisements.
In the implementation, the computational network management layer is further used for performing performance monitoring and management on computational resources and network resources so as to realize computational power operation and network operation.
In implementation, the computing power resource layer is further configured to provide, for different applications and on the basis of the physical computing resources, one or a combination of the following functions to meet the diverse computing requirements of the edge computing field: a computing power model, a computing power API, and computing power resource identification.
In implementation, the computation routing layer further schedules the service required by the terminal to different computation nodes according to needs based on the computation resources provided by the computation resource layer and the network resources provided by the network resource layer.
In an implementation, the computational routing layer is further configured to include one or a combination of the following functions:
the method comprises the steps of computation network routing identification, computation network routing forwarding, computation power routing addressing and computation network service announcement.
In implementation, the network further comprises:
and the network resource layer is used for providing network connection for each communication device in the network by utilizing the network infrastructure.
In an implementation, the network resource layer is further configured to provide network connectivity using a network infrastructure comprising one or a combination of:
access network, metropolitan area network, backbone network.
A method of using a computing power aware network, comprising:
after receiving the computing resource request, the computing network management layer broadcasts the computing resource request to computing nodes of the computing resource layer;
receiving computing resource information and charging information returned by computing nodes, wherein the computing nodes meet the requirements in computing resource requests;
and the calculation power routing layer establishes connection between the terminal and the calculation power node after determining the calculation power node according to the calculation power resource information and the calculation power resource request returned by each calculation power node, so as to schedule the calculation service of the terminal to the calculation power node for execution.
In an implementation, the method further comprises the following steps:
receiving charging information returned by the computing power nodes;
the calculation force routing layer establishes connection between the terminal and the calculation force node after determining the calculation force node according to the calculation force resource information, the charging information and the calculation force resource request returned by each calculation force node, so as to schedule the calculation service of the terminal to the calculation force node for execution;
and the computational network management layer charges the terminal according to the computational power resource information and the charging information returned by the computational power nodes in the computational power resource layer.
In the implementation, the computing network management layer receives computing resource requests through a resource operation center; and/or the terminal is charged through the resource operation center.
In implementation, the computing resource request comprises a node selection strategy;
the calculation force routing layer establishes connection between the terminal and the calculation force node after the calculation force node is determined by the entry route according to the calculation force resource information and the charging information returned by each calculation force node and a node selection strategy in the calculation force resource request, so as to schedule the calculation service of the terminal to the calculation force node for execution;
or,
the calculation resource request comprises the size of the calculation resource and the maximum acceptable time delay;
the computing power routing layer, within the maximum acceptable time delay, forwards to the terminal the computing power resource information and charging information returned by each computing power node, so that the information of the computing power nodes meeting the requirement reaches the terminal; after the terminal determines a computing power node, a connection between the terminal and the computing power node is established, so as to schedule the computing service of the terminal to the computing power node for execution.
In implementation, the computing resource request received by the computing network management layer is sent to the computing network management layer after being received by an ingress route; and/or,
the computing resources required by the terminal are scheduled to different computing power nodes as needed through the ingress route.
A computer-readable storage medium storing a computer program for executing the above method of using the computing power aware network.
The invention has the following beneficial effects:
in the technical scheme provided by the embodiments of the invention, because the computing resources are provided to users based on the same computing power aware network, a consistent experience can be provided for users;
furthermore, before the computing resources are allocated, the computing power nodes directly provide all information, including charging information, to the end user, and the user can independently choose among multiple computing power nodes according to his own needs; the charging of a computing power node is thus determined by a direct agreement with the user, without relying on a unified charging server, which can provide fair competition opportunities for different edge server operators.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of a computing power aware network according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method of using the computing power aware network according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a workflow of computing power resource operation in the first embodiment of the present invention;
FIG. 4 is a schematic diagram of a workflow of computing power resource operation in the second embodiment of the present invention.
Detailed Description
In the course of making the invention, the inventors noticed that:
in an existing computing resource operation mode (IaaS service), a resource operation management center (cloud computing center) or a charging server generally directly manages, allocates and charges own resources. However, the computing nodes in the network may be subordinate to different owners, and the charging modes thereof are also greatly different, so that the effective allocation of computing resources is difficult to realize on the whole. Because the edge computing node is usually limited in resources and can only provide services for limited users, an operation management center or a charging server needs to monitor the load of the node in real time so as to determine whether to continuously allocate the resources to other users in the following process, and the real-time monitoring cost for the node load is continuously increased along with the increase of the number of computing nodes and the uncertainty of network delay. Therefore, future edge computing operations need to consider how to fairly and effectively distribute computing resources of different owners and can achieve effective control and charging of computing node loads.
Based on this, the embodiment of the present invention provides a new network architecture scheme for network computing convergence for a future data communication network architecture for computing network convergence, and further provides a scheme for computing resource operation based on the new network architecture.
The following describes embodiments of the present invention with reference to the drawings.
Fig. 1 is a schematic diagram of a computing power aware network; as shown in the figure, the network comprises: a computing power service layer, a computing network management layer, a computing power resource layer, a computing power routing layer and a network resource layer, wherein:
the computing power service layer is used for carrying various services and applications;
the computing network management layer is used for receiving reported computing resource information and network resource information and managing the computing resources and the network resources;
the computing power resource layer is used for providing computing resources using a computing infrastructure;
and the computing power routing layer is used for scheduling the services required by the terminal to different computing power nodes as needed, based on the computing resources and the network resources.
In implementation, the computing network management layer is further configured to charge the terminal according to its use of computing power.
In implementation, the network may further comprise:
and the network resource layer is used for providing network connection for each communication device in the network by utilizing the network infrastructure.
Specifically, facing the development trend of computing-network convergence, this scheme proposes a new network architecture, the CAN (Computing-Aware Network). The computing power aware network is a novel network architecture for the deep convergence of computing and the network: based on ubiquitous network connections and highly distributed computing nodes, and through automatic deployment of services, optimal routing and load balancing, it constructs a brand-new network infrastructure that can perceive computing power, so that the network can schedule computing resources at different locations in real time as needed, improving the utilization of network and computing resources and further improving the user experience.
It was predicted that more than 50 billion terminals and devices would be networked by 2020, with more than 50% of the data needing to be analyzed, processed and stored at the network edge. The interconnection of everything brings demands for massive data, computing, connection, storage and processing; processing this data requires a large amount of computing power, and the limitations of the traditional terminal-plus-data-center processing model are becoming increasingly apparent. With the acceleration of 5G network construction, the diffusion of computing power from the cloud and terminals toward the network edge is an inevitable trend.
The computing power aware network is oriented to the computing power requirements of the future intelligent society. It emphasizes the deep integration of computing and networks; it can satisfy efficient computing power demands across cloud, edge and terminal, and can also achieve high throughput, agile connection and on-demand matching of data with computing power. At present it is mainly promoted by operators.
The digital transformation of industry brings a deep fusion of the physical world and the digital world, which requires ubiquitous computing and connection. Meanwhile, with the development of cloud computing and edge computing, computing power of different scales will be distributed at varying distances from users in the future society, and computing power will gradually move from the center to the edge, helping to provide an extreme service experience for vertical-industry services. In addition, with the trend toward lightweight computing carriers and the gradual deconstruction of applications into services and functions, more flexible scheduling by the network is required; in the future, the network needs to perceive, interconnect and coordinate ubiquitous computing power and services. To meet these future network requirements, operators and equipment vendors have proposed a novel computing power network technology for converged computing-network development, and have carried out research on computing power networks and the construction of an industrial ecosystem.
The computing power aware network interconnects dynamically distributed computing resources on the basis of ubiquitous network connections. Through unified management and coordinated scheduling of multi-dimensional resources such as network, storage and computing power, it enables massive applications to invoke computing resources in different places in real time and on demand, thereby realizing global optimization of connection and computing power in the network and providing a consistent user experience.
The computing power aware network in the embodiments of the invention is a network that enables massive applications to invoke computing resources in different places in real time as needed and realizes global optimization of connection and computing power in the network. It should be noted, however, that this kind of network is still evolving; therefore, whatever name is used in other places and scenarios, any network architecture with such capabilities to which the technical solution provided in the embodiments of the present invention applies is the computing power aware network referred to in the embodiments of the present invention.
For this novel network architecture of computing-network convergence, the computing power aware network, in order to realize perception, interconnection and coordinated scheduling of ubiquitous computing and services, the architecture can be logically divided into five functional modules, namely a computing power service layer, a computing network management layer, a computing power resource layer, a computing power routing layer and a network resource layer, as shown in Fig. 1, wherein the computing power routing layer comprises a control plane and a forwarding plane.
The computing power resource layer completes the abstraction, modeling, control and management of the network's ubiquitous computing power resources and informs the computing power routing layer through a computing power notification module; the computing power routing layer comprehensively considers user requirements, computing resource conditions and the state of the underlying network resources, and schedules service applications to appropriate nodes, so as to achieve optimal resource utilization and guarantee an extreme user experience.
The specific embodiments are described below layer by layer. Again, it should be noted that other names may be used in other places and scenarios; the names in the embodiments of the present invention are only for convenience of description and carry no technical meaning beyond the name itself. For example, the computing power routing layer refers only to this logical layer and does not imply any special 'routing' meaning; likewise, in other environments, a layer with the same function, namely scheduling the services required by the terminal to different computing power nodes as needed based on the computing resources and network resources, is the computing power routing layer in the sense of the embodiments of the present invention, regardless of its name.
1. The computing power service layer:
in implementation, the computing power service layer is based on a distributed microservice architecture.
A microservice architecture (Microservice Architecture) is an architectural concept that aims to decouple a solution by decomposing its functionality into discrete services. It can be seen as applying the SOLID principles (SRP: the Single Responsibility Principle; OCP: the Open/Closed Principle; LSP: the Liskov Substitution Principle; ISP: the Interface Segregation Principle; DIP: the Dependency Inversion Principle) at the architecture level rather than at the class level. The main effect of the microservice architecture is to decompose functionality into discrete services, thereby reducing the coupling of the system and providing more flexible service support.
In implementation, the computing power service layer is further used for supporting unified scheduling by the API gateway after the application is deconstructed into atomized functional components.
In a specific implementation, the computation service layer is further configured to implement service decomposition and service scheduling through the API gateway after receiving data of the end user.
Specifically, based on a distributed microservice architecture, the computing power service layer supports deconstructing an application into atomized functional components, which are uniformly scheduled by an API Gateway (API: Application Programming Interface).
The computing power service layer is deployed on top of the computing power resource layer and carries the various services and applications of ubiquitous computing. Parameters of the user's request for the service SLA (Service Level Agreement), such as the computing power request, can be passed to the computing power routing layer. In addition, the computing power service layer can also receive data from end users and realize functions such as service decomposition and service scheduling through the API gateway.
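As an illustration of the service decomposition and service scheduling just described, the following minimal Python sketch shows how an API gateway at the computing power service layer might register an application as a chain of atomized functional components and schedule them uniformly. All names here (AtomicComponent, ApiGateway, the example components) are hypothetical and are not taken from the patent; the sketch only assumes that an application can be registered as an ordered list of components.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class AtomicComponent:
    """One atomized functional component of an application (hypothetical model)."""
    name: str
    handler: Callable[[dict], dict]   # processes a payload and returns a result

class ApiGateway:
    """Minimal sketch of unified scheduling of atomic components by an API gateway."""
    def __init__(self) -> None:
        self._apps: Dict[str, List[AtomicComponent]] = {}

    def register_application(self, app: str, components: List[AtomicComponent]) -> None:
        # Service decomposition: the application is stored as an ordered component chain.
        self._apps[app] = components

    def handle(self, app: str, payload: dict) -> dict:
        # Service scheduling: run each atomic component in turn on the payload.
        result = payload
        for component in self._apps[app]:
            result = component.handler(result)
        return result

# Usage example with two hypothetical components of a video-analysis application.
gateway = ApiGateway()
gateway.register_application("video-analysis", [
    AtomicComponent("decode", lambda p: {**p, "frames": 30}),
    AtomicComponent("detect", lambda p: {**p, "objects": p["frames"] // 10}),
])
print(gateway.handle("video-analysis", {"stream": "cam-01"}))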
2. The computing network management layer:
in implementation, the computational network management layer is further used for performing abstract description and representation on computational power resources through computational power modeling, forming node computational power information and shielding the difference of the underlying hardware equipment.
In a specific implementation, the computational network management layer is further configured to transmit the node computational power information to the network node through the computational power advertisement.
In the implementation, the computational network management layer is further used for performing performance monitoring and management on computational resources and network resources so as to realize computational power operation and network operation.
Specifically, the computing network management layer completes computing power operation and computing power service orchestration, and completes the management of computing power resources and network resources, including the perception, measurement and OAM (Operation, Administration and Maintenance) management of computing power resources; it also realizes computing-network operation for end users and management of the computing power routing layer and the network resource layer.
In the face of heterogeneous computing resources, a computing network management layer firstly carries out abstract description and representation on the computing resources through computing power modeling to form node computing power information and shield the difference of bottom hardware equipment; the calculation force information can be transmitted to the corresponding network node through the calculation force announcement;
in addition, the performance monitoring and management can be carried out on the computing resources and the network resources, and computing operation and network operation are realized.
3. The computing power resource layer:
in implementation, the computing resource layer is further configured to provide, for different applications, one or a combination of the following functions for satisfying the diverse computing requirements in the edge computing field on the basis of the physical computing resources: the calculation force model, the calculation force API and the calculation force resource identification.
Specifically, the computing power resource layer provides computing resources using the existing computing infrastructure, which comprises combinations of multiple computing capabilities, ranging from single-core CPUs and multi-core CPUs to combinations of CPU (Central Processing Unit) + GPU (Graphics Processing Unit) + FPGA (Field-Programmable Gate Array), and so on;
in order to meet the diversified computing requirements of the edge computing field, and oriented to different applications, functions such as a computing power model, a computing power API and computing power resource identification are provided on the basis of the physical computing resources.
4. The computing power routing layer:
in implementation, the computing power routing layer further schedules the service required by the terminal to different computing power nodes according to needs based on the computing power resource provided by the computing power resource layer and the network resource provided by the network resource layer.
In an implementation, the computational power routing layer is further configured to include one or a combination of the following functions:
the method comprises the steps of computation network routing identification, computation network routing forwarding, computation power routing addressing and computation network service announcement.
Specifically, the computation routing layer may include a control plane and a forwarding plane;
and based on the abstracted computational network resource discovery, comprehensively considering the network condition and the computational resource condition, flexibly scheduling the service to different computational power nodes according to the needs, wherein a computational power routing layer is the core of the computational power sensing network.
The specific functions mainly comprise computation network routing identification, computation network routing forwarding, computation routing addressing, computation network service announcement and the like.
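A minimal sketch of the scheduling decision made by the computing power routing layer is given below. The scoring rule, which combines path delay with a small reward for spare computing power, is an assumption introduced purely for illustration; the patent only requires that the layer jointly consider network conditions and computing resource conditions when choosing a computing power node.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Candidate:
    """State the routing layer has learned about one computing power node."""
    node_id: str
    path_delay_ms: float   # network condition toward this node
    free_compute: float    # advertised spare computing power (arbitrary units)

def pick_node(candidates: List[Candidate], required_compute: float) -> Optional[str]:
    # Keep only nodes with enough spare computing power, then prefer low delay
    # and, as a tie-breaker, lightly reward spare capacity (assumed weights).
    feasible = [c for c in candidates if c.free_compute >= required_compute]
    if not feasible:
        return None
    best = min(feasible, key=lambda c: c.path_delay_ms - 0.1 * c.free_compute)
    return best.node_id

# Example: a nearby but busy node loses to a slightly farther, idle node.
table = [Candidate("edge-1", 5.0, 2.0), Candidate("edge-2", 8.0, 40.0)]
print(pick_node(table, required_compute=10.0))   # -> "edge-2"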
5. The network resource layer:
in an implementation, the network resource layer is further configured to provide network connectivity for a network infrastructure comprising one or a combination of the following:
access network, metropolitan area network, backbone network.
In particular, existing network infrastructure, including access networks, metropolitan networks, and backbone networks, may be utilized to provide ubiquitous network connectivity to various corners of the network.
The computing power resource layer and the network resource layer are the infrastructure layers of the computing power aware network; the computing network management layer and the computing power routing layer are the two core functional modules that realize the computing power aware function system; users and applications access the network through the computing power routing layer.
Based on the five functional modules defined above, the computing power aware network architecture realizes the perception, control and scheduling of computing-network resources.
Based on the above computing power aware network, the embodiments of the present invention further provide a method of using the computing power aware network, which is described below.
Fig. 2 is a schematic flow chart of the method of using the computing power aware network, which may include:
step 201, after receiving a computing power resource request, the computing network management layer broadcasts the computing power resource request to the computing power nodes of the computing power resource layer;
step 202, computing resource information and charging information returned by computing nodes are received, wherein the computing nodes meet the requirements in computing resource requests;
step 203, the computation routing layer establishes the connection between the terminal and the computation node according to the computation resource information and the computation resource request returned by each computation node after determining the computation node, so as to schedule the computation service of the terminal to the computation node for execution.
In the implementation, the method can further comprise the following steps:
receiving charging information returned by the computing power nodes;
the computing power routing layer establishes connection between the terminal and the computing power node according to the computing power resource information, the charging information and the computing power resource request returned by each computing power node after determining the computing power node, so as to schedule the computing service of the terminal to the computing power node for execution;
and the computational network management layer charges the terminal according to the computational power resource information and the charging information returned by the computational power nodes in the computational power resource layer.
In step 202, the computing resource information and charging information returned by the computing power node are received, together with other required node information, such as the address and port of the computing power node and, if required, the node's public key.
In step 203, the computing power routing layer (which may be the routing node that received the computing power resource request) selects a computing power node according to the computing resource information and charging information returned by each computing power node and the node selection policy in the computing resource request; after determining the computing power node, it establishes a connection between the terminal and the computing power node, so as to schedule the subsequent computing services of the terminal to the computing power node for execution.
The computing resource information refers to the size of the computing resource which can be provided by the computing node, and the computing resource is a physical resource.
The routing node may specifically be: the routing node that receives the computing resource request, or the routing node that first receives the computing resource request, may also be an entry route to which a terminal that initiates the computing resource request belongs.
In the implementation, the computing network management layer receives computing resource requests through a resource operation center; and/or the terminal is charged through the resource operation center.
In implementation, the computing resource request received by the computing network management layer is sent to the computing network management layer after being received by an ingress route; and/or,
the computing resources required by the terminal are scheduled to different computing power nodes as needed through the ingress route.
In specific implementation, at least two modes can be included:
the computing resource request comprises a node selection strategy;
the calculation force routing layer establishes connection between the terminal and the calculation force node after the calculation force node is determined by the entry route according to the calculation force resource information and the charging information returned by each calculation force node and a node selection strategy in the calculation force resource request, so as to schedule the calculation service of the terminal to the calculation force node for execution;
or,
the calculation resource request comprises the size of the calculation resource and the maximum acceptable time delay;
the computing power routing layer, within the maximum acceptable time delay, forwards to the terminal the computing power resource information and charging information returned by each computing power node, so that the information of the computing power nodes meeting the requirement reaches the terminal; after the terminal determines a computing power node, a connection between the terminal and the computing power node is established, so as to schedule the computing service of the terminal to the computing power node for execution. The message fields involved in both modes are sketched below.
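To make the two modes easier to compare, the following sketch models the fields mentioned above as simple data structures: a computing power resource request carrying either a node selection policy (mode one) or a resource size plus maximum acceptable delay (mode two), and a node response carrying resource, charging and addressing information. The field names and types are assumptions introduced only for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ComputingResourceRequest:
    """Request from the terminal; exactly one of the two modes is used."""
    compute_size: float                            # required computing resources
    node_selection_policy: Optional[str] = None    # mode 1: e.g. "lowest_rate"
    max_delay_ms: Optional[float] = None           # mode 2: maximum acceptable delay

@dataclass
class NodeResponse:
    """Reply from a computing power node that can satisfy the request."""
    node_address: str
    node_port: int
    available_compute: float
    rate: float                        # charging information (price per unit, assumed)
    public_key: Optional[str] = None   # only if required

# Mode 1: the ingress route will choose the node on the user's behalf.
req1 = ComputingResourceRequest(compute_size=4.0, node_selection_policy="lowest_rate")
# Mode 2: the terminal will choose among nodes that meet the delay bound.
req2 = ComputingResourceRequest(compute_size=4.0, max_delay_ms=20.0)
print(req1, req2, sep="\n")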
The above two modes are described below by way of examples.
This scheme provides a computing power resource operation workflow in the context of converged computing-network development. The computing power resources come from edge servers and other facilities that can provide Infrastructure as a Service (IaaS) to nearby users, collectively referred to as computing power nodes in these examples.
The assumed premise is as follows: the computing power node information is registered with the resource operation center, including but not limited to the computing power node address, node resource information (computing power, bandwidth) and the like.
Node computing power information: CPU clock frequency, number of CPU cores, GPU clock frequency, GPU (video) memory, etc. (a sketch of such a registration record is given below).
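A sketch of the registration record assumed by this premise is shown below. The field names and the in-memory registry are illustrative assumptions; the patent only lists the node address, the node resource information (computing power, bandwidth) and the node computing power fields above.

from dataclasses import dataclass

@dataclass
class NodeRegistration:
    """Information a computing power node registers with the resource operation center."""
    node_address: str
    bandwidth_mbps: float
    cpu_ghz: float        # CPU clock frequency
    cpu_cores: int        # number of CPU cores
    gpu_ghz: float        # GPU clock frequency
    gpu_memory_gb: float  # video memory

registry = {}

def register(node: NodeRegistration) -> None:
    # The resource operation center keeps the latest record per node address.
    registry[node.node_address] = node

register(NodeRegistration("10.0.0.5", bandwidth_mbps=1000, cpu_ghz=2.4,
                          cpu_cores=16, gpu_ghz=1.5, gpu_memory_gb=24))
print(registry)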
Embodiment one:
In this embodiment, the ingress route completes the selection of the computing power node. In this embodiment:
the computing resource request comprises a node selection strategy;
and the calculation force routing layer establishes connection between the terminal and the calculation force node after the entry route determines the calculation force node according to the calculation force resource information and the charging information returned by each calculation force node and a node selection strategy in the calculation force resource request, so as to schedule the calculation service of the terminal to the calculation force node for execution.
Fig. 3 is a schematic workflow diagram of computing power resource operation in the first embodiment; as shown in the figure, the workflow may include:
1: when a user/service needs to use computing resources, a computing power resource request can be sent to the resource operation center, indicating the size of the required computing resources and a node selection policy (such as lowest rate, lowest delay, or lowest rate under a certain delay threshold); the resource request first arrives at the ingress route.
2: the entry route forwards the computing resource request information to an operation center of a computing network management layer, and records a node selection strategy of a user.
3: and the operation center broadcasts the computing resource request to the registered computing nodes of the computing resource layer.
4: after receiving the broadcast resource request information, if it can satisfy the request, the computing power node measures the average usage load of its local computing resources over a preceding period of time, calculates a usage rate (price) and returns it to the user.
5: the ingress route receives the rate information and obtains the delay of the communication process, then selects the most suitable computing power node according to the recorded user selection policy and establishes a computing power resource usage agreement with it to confirm the rate (a sketch of this selection logic is given after the workflow).
6: the ingress route returns the selected node information to the user.
7: the computing power node reports the established computing power resource usage agreement to the operation center for recording.
8: the user can query the agreement record at the operation center at any time and complete payment to the operator according to the agreement.
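The node selection performed by the ingress route in step 5 can be sketched as follows. The three policies correspond to those listed in step 1 (lowest rate, lowest delay, lowest rate under a delay threshold); the function and field names and the threshold handling are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NodeOffer:
    """Rate returned by a node in step 4 plus the delay measured by the ingress route."""
    node_id: str
    rate: float        # usage rate (price) computed from the node's recent load
    delay_ms: float    # communication delay observed toward the node

def select_node(offers: List[NodeOffer], policy: str,
                delay_threshold_ms: float = 0.0) -> Optional[NodeOffer]:
    if not offers:
        return None
    if policy == "lowest_rate":
        return min(offers, key=lambda o: o.rate)
    if policy == "lowest_delay":
        return min(offers, key=lambda o: o.delay_ms)
    if policy == "lowest_rate_under_delay":
        within = [o for o in offers if o.delay_ms <= delay_threshold_ms]
        return min(within, key=lambda o: o.rate) if within else None
    raise ValueError(f"unknown policy: {policy}")

offers = [NodeOffer("edge-1", rate=0.8, delay_ms=5),
          NodeOffer("edge-2", rate=0.5, delay_ms=30)]
print(select_node(offers, "lowest_rate_under_delay", delay_threshold_ms=10))  # edge-1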
Embodiment two:
In this embodiment, the user completes the selection of the computing power node. In this embodiment:
the calculation resource request comprises the size of the calculation resource and the maximum acceptable time delay;
the computing power routing layer, within the maximum acceptable time delay, forwards to the terminal the computing power resource information and charging information returned by each computing power node, so that the information of the computing power nodes meeting the requirement reaches the terminal; after the terminal determines a computing power node, a connection between the terminal and the computing power node is established, so as to schedule the computing service of the terminal to the computing power node for execution.
Fig. 4 is a schematic workflow diagram of computing power resource operation in the second embodiment; as shown in the figure, the workflow may include:
1: when a user/service needs to use computing resources, a computing resource request can be sent to a resource operation center, and the size of the needed computing resources and the maximum acceptable time delay are indicated; the resource request will arrive at the ingress route first.
2: and the entry route forwards the computing resource request information to an operation center of a computing network management layer and records the maximum acceptable time delay of the user.
3: and the operation center broadcasts the computing resource request to the registered computing nodes of the computing resource layer.
4: after receiving the broadcast resource request information, if it can satisfy the request, the computing power node measures the average usage load of its local computing resources over a preceding period of time, calculates a usage rate (price) and returns it to the user.
5: and the entry route receives the rate information and acquires the time delay in the communication process, then filters the information of the computing node according to the recorded maximum acceptable time delay, and returns the information meeting the requirements to the user.
6: the user autonomously selects the required computing power node according to his own service requirements, the returned rate information and the link delay, and establishes a computing power resource usage agreement with the computing power node to confirm the rate (a sketch of this filtering and selection flow is given after the workflow).
7: the computing power node reports the established computing power resource usage agreement to the operation center for recording.
8: the user can query the agreement record at the operation center at any time and pay the operator according to the agreement.
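Steps 5 and 6 of this embodiment can be sketched as two small functions: the ingress route filters the candidate nodes by the recorded maximum acceptable delay, and the user then chooses among the remaining candidates. The user's choice rule shown here (the cheapest surviving node) is only one possible preference and is an assumption, as are the field names.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NodeOffer:
    node_id: str
    rate: float      # price returned by the node in step 4
    delay_ms: float  # link delay measured by the ingress route

def filter_by_delay(offers: List[NodeOffer], max_delay_ms: float) -> List[NodeOffer]:
    # Step 5: the ingress route keeps only nodes within the maximum acceptable delay.
    return [o for o in offers if o.delay_ms <= max_delay_ms]

def user_select(offers: List[NodeOffer]) -> Optional[NodeOffer]:
    # Step 6: the user chooses autonomously; here, simply the cheapest candidate.
    return min(offers, key=lambda o: o.rate) if offers else None

offers = [NodeOffer("edge-1", 0.8, 5), NodeOffer("edge-2", 0.5, 30), NodeOffer("edge-3", 0.6, 9)]
chosen = user_select(filter_by_delay(offers, max_delay_ms=10))
print(chosen)   # NodeOffer(node_id='edge-3', rate=0.6, delay_ms=9)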
An example of a dynamic rate calculation method for a computing power node is as follows:
The computing power node calculates the user's rate (cost) according to its local load:
cost = α * X * Y * Load
where X represents the base price per unit of computing power resource of the computing power node; Y represents the amount of computing resources requested by the user; α is a discount parameter set by the owner of the computing power node; Load is a function of the node load, which increases sharply when the node load is too high, so as to reduce users' willingness to select the node; conversely, when the node load is low, the lower cost can attract more users. Load may be calculated as follows:
[The formula for Load appears in the original only as an image; it is a function of Load_Curr and Load_Best.]
wherein Load_Curr represents the average load of the node over the previous 5 minutes, and Load_Best indicates the highest load at which the system operates under optimal working conditions.
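A runnable sketch of the dynamic rate calculation follows. The cost formula matches the one given above (cost = α * X * Y * Load); the concrete Load function, however, is not legible in this text, so the exponential form used here, which grows rapidly as Load_Curr approaches Load_Best, is only an assumption consistent with the surrounding description.

import math

def load_factor(load_curr: float, load_best: float) -> float:
    # Assumed form of the Load function: about 1 at low utilization and growing
    # steeply as the 5-minute average load approaches the optimal maximum load.
    utilization = load_curr / load_best
    return math.exp(3.0 * utilization)   # the exponent 3.0 is an arbitrary choice

def dynamic_cost(alpha: float, unit_price: float, requested_amount: float,
                 load_curr: float, load_best: float) -> float:
    # cost = alpha * X * Y * Load, as in the formula above.
    return alpha * unit_price * requested_amount * load_factor(load_curr, load_best)

# A lightly loaded node quotes a low price; the same node under heavy load quotes more.
print(dynamic_cost(alpha=0.9, unit_price=0.02, requested_amount=100, load_curr=10, load_best=100))
print(dynamic_cost(alpha=0.9, unit_price=0.02, requested_amount=100, load_curr=90, load_best=100))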
Based on the same inventive concept, the embodiment of the present invention further provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for executing the above using method.
Specific implementations can be found in the implementation of the method of using a computational power aware network.
In summary, the embodiments of the present invention provide a computing power aware network supporting a deeply converged computing-network architecture and a method of using it, wherein the computing power aware network comprises: a computing power service layer, a computing network management layer, a computing power resource layer, a computing power routing layer and a network resource layer. Specifically:
The computing power service layer: completes functions such as service decomposition and service scheduling, and can send the service request of a service or application to the computing network management layer; the computing power service layer is deployed above the computing power resource layer;
a computation network management layer: awareness, measurement and OAM management of computational power and network resources needs to be done. In the face of heterogeneous computing resources, a computing network management layer firstly performs unified description and representation on the computing power resources through computing power modeling to form node computing power information; in addition, it is also necessary to monitor the performance of the computing resources and notify the performance, failure, etc. of the computing resources to the corresponding network nodes.
Computation routing layer: and based on the abstracted computing resource discovery, comprehensively considering the network condition and the computing resource condition, and flexibly scheduling the service to different computing power nodes according to the requirement. The specific functions mainly comprise calculation force route identification, calculation force route control, calculation force state network notification, calculation force route addressing, calculation force route forwarding and the like; and the calculation routing layer is used for connecting the calculation resource layer and the network resource layer.
Computing power resource layer: and in the face of the heterogeneous computing resources which are deployed in a network ubiquitous manner, the system is responsible for collecting computing power resource information and reporting the information to a computing network management layer.
The method of use comprises: each computing power node calculating its own usage rate; the user establishing an agreement directly with the computing power node; and the message flows among the user, the ingress route, the computing power node and the operation center.
Compared with the prior art, the technical effect of the scheme is as follows:
1. A new computing power aware network architecture is provided, in which the network and computing are highly coordinated: ubiquitous computing is interconnected on the basis of ubiquitous connection, efficient cloud-edge-network coordination is achieved, computing resource utilization efficiency is improved, and a consistent experience is provided for users.
2. The calculation node charging information is determined by direct agreement with the user, does not depend on a uniform charging server, and can provide fair competition opportunities for different edge server operators.
3. The server dynamically adjusts its base rate according to its own load: the rate is reduced when the load is low and increased otherwise, thereby maximizing the operational value of the edge server.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (19)

1. A computing power aware network, comprising: a computing power service layer, a computing network management layer, a computing power resource layer, a computing power routing layer and a network resource layer, wherein:
the computing power service layer is used for carrying various services and applications;
the computing network management layer is used for receiving reported computing resource information and network resource information and managing the computing resources and the network resources;
the computing power resource layer is used for providing computing resources using a computing infrastructure;
and the computing power routing layer is used for scheduling the services required by the terminal to different computing power nodes as needed, based on the computing resources and the network resources.
2. The network of claim 1, wherein the computing services layer is based on a distributed microservice architecture.
3. The network of claim 1, wherein the computing power service layer is further configured to support unified scheduling by an Application Programming Interface (API) gateway after the application is deconstructed into atomic functional components.
4. The network of claim 3, wherein the computational services layer is further configured to perform service decomposition and service scheduling via the API gateway after receiving end-user data.
5. The network of claim 1, wherein the computing network management layer is further configured to charge the terminal according to its use of computing power.
6. The network of claim 1, wherein the computational network management layer is further configured to abstract and represent computational power resources through computational power modeling, form node computational power information, and mask underlying hardware device differences.
7. The network of claim 6, wherein the computing network management layer is further configured to communicate node computing power information to the network node via a computing power advertisement.
8. The network of claim 1, wherein the computing management layer is further configured to monitor and manage performance of computing resources and network resources for computing and network operations.
9. The network of claim 1, wherein the computing power resource layer is further configured to provide, for different applications and on the basis of the physical computing resources, one or a combination of the following functions to meet the diverse computing requirements of the edge computing field: a computing power model, a computing power API, and computing power resource identification.
10. The network of claim 1, wherein the computing power routing layer further schedules the services required by the terminal to different computing power nodes as needed, based on the computing power resources provided by the computing power resource layer and the network resources provided by the network resource layer.
11. The network of claim 1, wherein the computational routing layer is further configured to include one or a combination of the following functions:
the method comprises the steps of computation network routing identification, computation network routing forwarding, computation power routing addressing and computation network service announcement.
12. The network of claim 1, further comprising:
and the network resource layer is used for providing network connection for each communication device in the network by utilizing the network infrastructure.
13. The network of claim 12, wherein the network resource layer is further configured to provide network connectivity using a network infrastructure comprising one or a combination of:
access network, metropolitan area network, backbone network.
14. A method of using the computing power aware network of any of claims 1 to 13, comprising:
after receiving the computing resource request, the computing network management layer broadcasts the computing resource request to computing nodes of the computing resource layer;
receiving computing resource information returned by computing nodes, wherein the computing nodes meet the requirements in the computing resource request;
and the calculation power routing layer establishes connection between the terminal and the calculation power node after determining the calculation power node according to the calculation power resource information and the calculation power resource request returned by each calculation power node, so as to schedule the calculation service of the terminal to the calculation power node for execution.
15. The method of claim 14, further comprising:
receiving charging information returned by the computing power nodes;
the computing power routing layer, after determining a computing power node according to the computing power resource information and the charging information returned by each computing power node and the computing power resource request, establishes a connection between the terminal and the computing power node, so as to schedule the computing service of the terminal to the computing power node for execution; and
the computing network management layer charges the terminal according to the computing power resource information and the charging information returned by the computing power nodes in the computing power resource layer.
16. The method of claim 15, wherein the computing network management layer receives the computing power resource request through a resource operation center; and/or charges the terminal through the resource operation center.
17. The method of claim 15, wherein the computing power resource request comprises a node selection policy;
the computing power routing layer, after an ingress routing node determines a computing power node according to the computing power resource information and the charging information returned by each computing power node and the node selection policy in the computing power resource request, establishes a connection between the terminal and the computing power node, so as to schedule the computing service of the terminal to the computing power node for execution;
or,
the computing power resource request comprises the size of the required computing resources and a maximum acceptable delay;
the computing power routing layer transmits to the terminal, within the maximum acceptable delay, the computing power resource information and the charging information returned by each computing power node that meets the requirements; and after the terminal determines a computing power node, a connection between the terminal and the computing power node is established, so as to schedule the computing service of the terminal to the computing power node for execution.
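The selection variants of claims 15 to 17 can be approximated by filtering candidate nodes against the maximum acceptable delay carried in the request and then ranking the survivors with charging information. The delay estimates, prices, and the cheapest-first policy below are assumptions used only for illustration.

```python
from __future__ import annotations

# Sketch only of the selection variants in claims 15-17: candidates are filtered by the
# maximum acceptable delay carried in the request, then ranked using charging
# information. The delay estimates, prices, and ranking rule are assumptions.

def filter_by_delay(replies: list[dict], max_delay_ms: float) -> list[dict]:
    """Keep only computing power nodes whose estimated delay stays within the bound."""
    return [n for n in replies if n["delay_ms"] <= max_delay_ms]

def select_cheapest(replies: list[dict]) -> dict:
    """Node selection policy: cheapest qualifying node by advertised price."""
    return min(replies, key=lambda n: n["price_per_hour"])


if __name__ == "__main__":
    replies = [
        {"node_id": "edge-a", "delay_ms": 8.0, "price_per_hour": 2.5},
        {"node_id": "metro-b", "delay_ms": 25.0, "price_per_hour": 1.2},
        {"node_id": "cloud-c", "delay_ms": 60.0, "price_per_hour": 0.6},
    ]
    eligible = filter_by_delay(replies, max_delay_ms=30.0)
    print(select_cheapest(eligible)["node_id"])  # metro-b: cheapest within the delay bound
```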
18. The method of claim 14, wherein the computing power resource request received by the computing network management layer is forwarded to the computing network management layer by an ingress routing node after the ingress routing node receives the computing power resource request; and/or,
the computing services required by the terminal are scheduled to different computing power nodes as needed through the ingress routing node.
19. A computer-readable storage medium, characterized in that it stores a computer program for performing the method of any of claims 14 to 18.
CN202110231080.1A 2021-03-02 2021-03-02 Computing power sensing network and using method and storage medium thereof Pending CN115002681A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110231080.1A CN115002681A (en) 2021-03-02 2021-03-02 Computing power sensing network and using method and storage medium thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110231080.1A CN115002681A (en) 2021-03-02 2021-03-02 Computing power sensing network and using method and storage medium thereof

Publications (1)

Publication Number Publication Date
CN115002681A true CN115002681A (en) 2022-09-02

Family

ID=83018796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110231080.1A Pending CN115002681A (en) 2021-03-02 2021-03-02 Computing power sensing network and using method and storage medium thereof

Country Status (1)

Country Link
CN (1) CN115002681A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111953526A (en) * 2020-07-24 2020-11-17 新华三大数据技术有限公司 Hierarchical computational power network arrangement method, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YU QINGLIN: "From Edge Computing to Computing Power Network", Industrial Science and Technology Innovation (《产业科技创新》), vol. 2, no. 3, 25 January 2020 (2020-01-25), pages 49-51 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115328663A (en) * 2022-10-10 2022-11-11 亚信科技(中国)有限公司 Method, device, equipment and storage medium for scheduling resources based on PaaS platform
CN115328663B (en) * 2022-10-10 2023-01-03 亚信科技(中国)有限公司 Method, device, equipment and storage medium for scheduling resources based on PaaS platform
CN115883660A (en) * 2022-11-21 2023-03-31 中国联合网络通信集团有限公司 Industrial production computing power network service method, platform, equipment and medium
CN115562879A (en) * 2022-12-06 2023-01-03 北京邮电大学 Computing power sensing method, computing power sensing device, electronic device and storage medium
CN115623602A (en) * 2022-12-19 2023-01-17 浪潮通信信息系统有限公司 Resource reselection method and device

Similar Documents

Publication Publication Date Title
CN115002681A (en) Computing power sensing network and using method and storage medium thereof
Martinez et al. Design, resource management, and evaluation of fog computing systems: A survey
CN113448721A (en) Network system for computing power processing and computing power processing method
Aburukba et al. Scheduling Internet of Things requests to minimize latency in hybrid Fog–Cloud​ computing
Nan et al. Optimal resource allocation for multimedia cloud based on queuing model
CN102902536B (en) A kind of Internet of Things computer system
CN112134802A (en) Edge computing power resource scheduling method and system based on terminal triggering
CN103053146B (en) Data migration method and device
CN110784779B (en) Data acquisition method of electricity consumption information acquisition system
CN103825964A (en) SLS (Service Level Specification) scheduling device and SLS scheduling method based on cloud computing PaaS (platform-as-a-service) platform
WO2023284830A1 (en) Management and scheduling method and apparatus, node, and storage medium
CN113342478A (en) Resource management method, device, network system and storage medium
CN103561078A (en) Telecom operation system and service implementation method
WO2022001941A1 (en) Network element management method, network management system, independent computing node, computer device, and storage medium
CN102801812A (en) Novel cloud service component management system and method in loose network environment
Bolettieri et al. Application-aware resource allocation and data management for MEC-assisted IoT service providers
CN110430068A (en) A kind of Feature Engineering method of combination and device
CN111381957B (en) Service instance refined scheduling method and system for distributed platform
CN115002862A (en) Network system for computing power processing, service processing method and computing power network element node
CN114615096B (en) Event-driven architecture-based telecommunication charging method, system and related equipment
Phung et al. onevfc—a vehicular fog computation platform for artificial intelligence in internet of vehicles
Pakhrudin et al. A review on orchestration distributed systems for IoT smart services in fog computing
CN116684418B (en) Calculation power arrangement scheduling method, calculation power network and device based on calculation power service gateway
Siar et al. Offloading coalition formation for scheduling scientific workflow ensembles in fog environments
CN112486666A (en) Model-driven reference architecture method and platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination