CN116016550A - Service credible providing system and method for decentralized network resources - Google Patents


Info

Publication number
CN116016550A
CN116016550A (application CN202211628439.XA)
Authority
CN
China
Prior art keywords
service
resource
micro
node
providing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202211628439.XA
Other languages
Chinese (zh)
Inventor
孟慧平
李文萃
高峰
金翼
郭少勇
党芳芳
秦龙
齐芫苑
谢波
邵苏杰
徐思雅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Beijing University of Posts and Telecommunications
State Grid Henan Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Henan Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Beijing University of Posts and Telecommunications
State Grid Henan Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Henan Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Beijing University of Posts and Telecommunications, State Grid Henan Electric Power Co Ltd, Information and Telecommunication Branch of State Grid Henan Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202211628439.XA priority Critical patent/CN116016550A/en
Publication of CN116016550A publication Critical patent/CN116016550A/en
Withdrawn legal-status Critical Current


Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a system and method for the trusted provision of services over decentralized network resources, addressing the shortcomings of existing patented technologies. The system and method use blockchain technology to store resource information and service behavior data, and design a micro-service deployment model oriented to multidimensional resources on top of a blockchain-based trusted service provision platform. The model takes node resource limits as constraints and optimal deployment cost and service time as objectives, takes conditions such as node failure into account, monitors node resource states in real time, and supports dynamic micro-service migration. Service provision and dynamic adjustment of decentralized network resources are carried out according to a multi-objective optimization model, achieving better resource allocation while guaranteeing information security.

Description

Service credible providing system and method for decentralized network resources
Technical Field
The invention relates to the field of resource allocation services, and in particular to a system and method for the trusted provision of services over decentralized network resources.
Background
The Internet of Things (IoT), a network in which everything is interconnected, links up scattered resources and provides intelligent services by extending and expanding the Internet. In recent years the IoT has developed rapidly and has been widely applied in fields such as smart cities, smart homes, and intelligent transportation. IoT intelligent services can allocate and flexibly schedule computing, storage, and communication resources among cloud, edge, and end devices according to service requirements, instantiating computing tasks through function orchestration to achieve service provision. However, in the process of resource sharing, the heterogeneous cloud-network environment causes trust problems among the multiple parties sharing resources, and the heterogeneity of the resources also poses challenges for resource scheduling. In orchestrating intelligent service functions, how to improve the utilization of network resources, optimize network delay and cost, and dynamically guarantee service quality has become a problem to be solved.
Blockchains and micro-services have become the natural choice. The blockchain, as a distributed ledger technology, has characteristics such as decentralization, transparency, and tamper resistance; it creates a new pattern of interconnection and, by putting the whole life-cycle information of network resources on-chain, can realize trusted information sharing of decentralized network resources, break down data barriers, and achieve linked integration of resources. Micro-services, meanwhile, are small autonomous units of executable code: functionality can be decomposed into finer-grained service modules, each running in a separate process and independently deployable through an automated deployment mechanism. To support intelligent applications, intelligent computing tasks can be provided as micro-services. Deep reinforcement learning (DRL) can improve accuracy and reduce repetitive modeling by learning about the network and its tasks from previous experience and making optimal decisions. Because DRL offers self-learning and online learning, it can be used to realize the automatic allocation and dynamic adjustment of heterogeneous resources during micro-service deployment.
To understand the state of the art, existing papers and patents were searched, compared, and analyzed, and the following prior technical schemes were screened out:
Prior art scheme 1, the patent numbered CN110166567A, "Block chain-based Internet of things resource sharing method and system", discloses a blockchain-based IoT resource sharing method and system: a two-layer blockchain structure of sub-chains and a global chain is established and combined with a low-overhead consensus algorithm over a master-slave multi-chain structure, realizing reliable sharing of decentralized network energy resources and maximizing network resource utilization. However, this technology focuses on sharing the underlying network resources; it does not consider the process by which decentralized network resources support service provision, omits the integration and adjustment of heterogeneous resources, and lacks dynamic adaptability in the resource sharing process.
Prior art scheme 2, the patent numbered CN114221967A, "Resource sharing platform and resource sharing method based on a blockchain network", discloses a blockchain-network-based resource sharing platform comprising a service processing layer and a blockchain network layer. A user shares computing-power resources according to preset service processing logic, generates computing-power resource sharing behavior data, and uploads the data to the blockchain network so that the network stores it. For a sharing request initiated against a resource displayed on the platform, the resource provider sends the access address of the resource to the resource consumer, and the consumer completes the computing-power sharing task via that address. The resources involved in that invention are only computing-power resources, and the resource sharing method is too simple, making it difficult to integrate scattered network resources and provide trusted services.
Prior art scheme 3, the patent numbered CN110851531A, "Cooperative edge computing method, blockchain, and cooperative edge computing system", describes how multiple edge computing nodes can cooperate and share resources with one another to achieve higher data processing capability at the network edge. Blockchain technology is applied to edge computing: the edge nodes are also authentication nodes of the blockchain, participate in the consensus process, and keep a full blockchain backup. That invention uses the blockchain for the cooperative edge computing service, performs distributed behavior auditing through the edge nodes, and settles expenses according to the audit report, realizing a fine-grained mechanism that effectively constrains the behavior of cooperating edge nodes and improves the security and credibility of the cooperative edge computing service; however, the method does not consider business-oriented, on-demand allocation of resources.
The present invention thus provides a new solution to this problem.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention aims to provide a system and method for the trusted provision of services over decentralized network resources that remedies the shortcomings of the existing patented technologies described above.
The technical scheme is that the service credible providing system for the decentralized network resources comprises a physical resource layer, a resource credible management layer, an intelligent service providing layer and an application layer;
the physical resource layer consists of an Internet of things terminal, an edge resource, a core backbone network resource and a cloud resource, and provides computing resources, communication resources and storage resources for the service trusted providing system;
the resource credible management layer consists of a virtualized resource pool and a resource management module based on a block chain, registers resource information on the block chain and carries out on-chain management;
the intelligent service providing layer comprises a micro-service registration center, a micro-service orchestrator, and a micro-service monitor, wherein the micro-service orchestrator invokes a micro-service orchestration algorithm to design the micro-service deployment strategy, and the micro-service monitor monitors the state information of the resources in the cluster in real time and, following the multi-objective optimization model, adaptively adjusts the deployment nodes of the micro-services according to the resource preferences of the micro-services to be deployed;
the application layer comprises intelligent application of the Internet of things and directly provides intelligent service for users.
A method for the trusted provision of services over decentralized network resources, the method comprising the following steps:
S1, a resource providing node sends a registration request to the blockchain-based resource management module, the registration information contained in the request comprising its own ID, its position information, and the amount of resources it has available;
S2, upon receiving the registration request of the resource providing node, a smart contract is triggered to verify the node's registration information; after verification passes, confirmation of registration is returned, and the registration information of the resource providing node is written into the blockchain as a transaction;
S3, the user initiates a service request to the trusted service provision platform, the request comprising the user's identity information and the service requirements, which in turn comprise functional and non-functional requirements; the service request is registered in the micro-service registration center;
S4, the micro-service orchestrator runs the micro-service orchestration algorithm, matches the service request against the resource information, selects resource providing nodes, and obtains a resource scheduling decision;
S5, the micro-service registration center sends the service request to the blockchain resource management module, which sends a resource sharing request to the selected resource providing nodes according to the resource scheduling decision;
S6, the resource providing nodes supply the resources that support the micro-service deployment and provide the service to the user;
S7, the micro-service monitor monitors the resource surplus of the resource providing nodes in real time, adaptively adjusts the micro-service deployment, and carries out micro-service migration;
S8, after a resource providing node has provided its resources, if its resource information has changed, it uploads the updated resource information to the blockchain-based resource management module;
S9, after the service ends, the user evaluates the QoS; the user evaluation triggers a smart contract to allocate incentives, and the blockchain platform distributes incentives according to the contribution of each resource providing node.
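As a rough illustration of steps S1 and S2, the following Python sketch mimics the blockchain-based resource management module with an in-memory ledger; the class and field names (`ResourceLedger`, `ResourceNode`, the duplicate-ID check) are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ResourceNode:
    node_id: str
    location: str
    cpu: float      # available computing resources
    storage: float  # available storage resources

class ResourceLedger:
    """Minimal stand-in for the blockchain-based resource management module."""
    def __init__(self):
        self.chain: List[dict] = []           # registrations appended as "transactions"
        self.nodes: Dict[str, ResourceNode] = {}

    def register(self, node: ResourceNode) -> bool:
        # Smart-contract-style verification (S2): reject duplicate IDs and
        # nodes that advertise no usable resources.
        if node.node_id in self.nodes or (node.cpu <= 0 and node.storage <= 0):
            return False
        self.nodes[node.node_id] = node
        self.chain.append({"type": "register", "id": node.node_id,
                           "cpu": node.cpu, "storage": node.storage})
        return True

ledger = ResourceLedger()
assert ledger.register(ResourceNode("n1", "edge-1", cpu=8.0, storage=64.0))
assert not ledger.register(ResourceNode("n1", "edge-1", cpu=8.0, storage=64.0))  # duplicate rejected
```

In a real deployment the verification and the append to `chain` would be performed by the consortium-chain smart contract rather than a local object.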
Further, the service trusted providing platform in step S3 includes a virtualized resource pool, a blockchain-based resource management module, and a micro-service module.
Further, the resource providing nodes are divided into two kinds: one kind is server nodes $n_i$, which have computing resources and storage resources and carry out the deployment of micro-services; the other kind is switching nodes $n_j$, which only forward traffic. A server node $n_i$ has a computing resource amount denoted $r_{ic}$ and a storage capacity $r_{im}$; a switching node $n_j$ has its computing resource amount defined as $r_{jc} = 0$ and its storage capacity defined as $r_{jm} = 0$. The physical link between nodes $n_i$ and $n_j$ is denoted $l_{ij}$, the bandwidth of the link is limited to $B_{ij}$, and its transmission delay is $d_{ij}$.
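The node and link model above can be captured in a minimal sketch; the field names mirror the symbols $r_{ic}$, $r_{im}$, $B_{ij}$, $d_{ij}$, while the class layout itself is an assumption for illustration.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    r_c: float  # computing resource amount (r_ic; 0 for switching nodes)
    r_m: float  # storage capacity (r_im; 0 for switching nodes)

    @property
    def is_switch(self) -> bool:
        # Switching nodes only forward traffic: r_jc = r_jm = 0
        return self.r_c == 0 and self.r_m == 0

@dataclass
class Link:
    bandwidth: float  # bandwidth limit B_ij
    delay: float      # transmission delay d_ij

server = Node("n_i", r_c=16.0, r_m=128.0)
switch = Node("n_j", r_c=0.0, r_m=0.0)
link = Link(bandwidth=100.0, delay=2.5)
assert not server.is_switch and switch.is_switch
```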
Further, the specific steps by which the micro-service orchestrator in step S4 performs micro-service deployment to obtain the deployment policy are as follows:
F1, input micro-service $ms_i$ to the micro-service orchestrator, the input including the information about $ms_i$ and the resource providing node information;
F2, initialize the cost function for every environment state and action, initialize the parameters of the evaluation Q network and the target Q network, and empty the replay buffer;
F3, the agent interacts with the environment, randomly selects an initial state, and, based on that state, selects the action with the maximum Q value through the Q network;
F4, the agent executes the action in the environment, computes the instantaneous reward of the basic task, and puts the state, action, reward, and next state into the replay buffer;
F5, randomly sample experiences from the replay buffer, and feed the sampled instantaneous rewards and next states into the target network for target value calculation;
F6, feed the target value into the evaluation Q network to compute the error function, and update the Q network;
F7, obtain the micro-service deployment strategy.
Further, in step S4, the micro-service orchestrator matches resource providing nodes against the micro-service resource requirements to generate a micro-service deployment policy $S = \{s_1, s_2, \dots, s_{|A|}\}$, where $s_i$ includes the target node on which micro-service $ms_i$ is deployed and its resource quota. The micro-service monitor periodically monitors micro-service behavior, including end-to-end delay, throughput, the execution time of each micro-service $ms_i$, and the resource quota occupancy of each micro-service $ms_i$, and feeds this back to the micro-service orchestrator.
Further, the specific steps for establishing the multi-objective optimization model in step S7 are as follows:
D1, the micro-service orchestration and migration cost of the overall application may be expressed as

$Cost = \omega_d Cost_{deploy} + \omega_m Cost_{mg}$ (16)

where $\omega_d$ and $\omega_m$ are the weights of the micro-service deployment cost $Cost_{deploy}$ and the migration cost $Cost_{mg}$ respectively, and $\omega_d + \omega_m = 1$;
D2, the micro-service orchestration and migration delay of the overall application may be expressed as

$Delay = \omega_{dd} D_{deploy} + \omega_{dm} D_{mg}$ (17)

where $\omega_{dd}$ and $\omega_{dm}$ are the weights of the micro-service deployment delay $D_{deploy}$ and the migration delay $D_{mg}$ respectively, and $\omega_{dd} + \omega_{dm} = 1$;
the optimization objective of the multi-dimensional-resource-constrained dynamic micro-service deployment model is to minimize the micro-service orchestration cost and delay:

$\min\, Cost$ (18)
$\min\, Delay$ (19)

D3, load balancing of the whole application program is also considered, by minimizing the load-balancing metric $LB$ (the variance of node resource utilizations, computed in steps E1 to E3 below), as in formula (22):

$\min\, LB$ (22)

D4, the multi-objective optimization model is thus obtained:

$\min\, Cost$ (18)
$\min\, Delay$ (19)
$\min\, LB$ (22)
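A minimal sketch of the weighted objectives (16) and (17); the particular weight values are examples chosen for illustration, since the patent leaves them unspecified.

```python
def total_cost(cost_deploy, cost_mg, w_d=0.6, w_m=0.4):
    # Equation (16): Cost = w_d * Cost_deploy + w_m * Cost_mg, with w_d + w_m = 1
    assert abs(w_d + w_m - 1.0) < 1e-9
    return w_d * cost_deploy + w_m * cost_mg

def total_delay(d_deploy, d_mg, w_dd=0.5, w_dm=0.5):
    # Equation (17): Delay = w_dd * D_deploy + w_dm * D_mg, with w_dd + w_dm = 1
    assert abs(w_dd + w_dm - 1.0) < 1e-9
    return w_dd * d_deploy + w_dm * d_mg
```

A multi-objective solver would then trade off `total_cost`, `total_delay`, and the load-balancing metric against each other.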
Further, the specific steps for obtaining the micro-service deployment cost $Cost_{deploy}$ in step D1 are as follows:
G1, the resource usage cost when micro-service $ms_i$ is deployed on a physical node is shown in formula (1):

$Cost_{node} = \sum_i \left( c_i q_c + m_i q_m \right)$ (1)

where $c_i$ is the amount of computing resources provided for micro-service $ms_i$, $m_i$ the amount of storage resources provided for $ms_i$, $q_c$ the unit price of computing resources, and $q_m$ the unit price of storage resources;
G2, the link resource usage cost is shown in formula (2):

$Cost_{link} = \sum_{i,j} tr_{ij}\, q_l$ (2)

where $tr_{ij}$ is the data transmission rate required for the data interaction between micro-services $ms_i$ and $ms_j$, and $q_l$ the unit price of link resources;
G3, the overall deployment cost of the micro-services is obtained as formula (3):

$Cost_{deploy} = Cost_{node} + Cost_{link}$ (3).
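Formulas (1) to (3) can be sketched as plain Python functions; the list-of-pairs input format is an illustrative assumption.

```python
def node_cost(demands, q_c, q_m):
    # Equation (1): Cost_node = sum_i (c_i * q_c + m_i * q_m)
    # demands: list of (c_i, m_i) pairs for the deployed micro-services
    return sum(c * q_c + m * q_m for c, m in demands)

def link_cost(rates, q_l):
    # Equation (2): Cost_link = sum of tr_ij * q_l over interacting pairs
    return sum(tr * q_l for tr in rates)

def deploy_cost(demands, rates, q_c, q_m, q_l):
    # Equation (3): Cost_deploy = Cost_node + Cost_link
    return node_cost(demands, q_c, q_m) + link_cost(rates, q_l)
```

For example, one micro-service needing 2 units of compute and 4 of storage, exchanging data at rate 10, at unit prices (1.0, 0.5, 0.1), yields a deployment cost of 5.0.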
Further, the specific calculation process in step D3 is as follows:
E1, the computing resource occupancy of node $n_j$ is shown in the following formula, where $r_{jc}$ denotes the total amount of computing resources provided by node $n_j$:

$U_{jc} = \frac{\sum_i x_{ij} c_i}{r_{jc}}$

E2, analogously to step E1, the storage resource utilization is expressed as:

$U_{jm} = \frac{\sum_i x_{ij} m_i}{r_{jm}}$

E3, since load balancing aims to make the resource occupancy of different nodes similar, avoiding the situation where some nodes carry an excessive load, processing delay becomes too long, and overall service performance degrades, load balancing is represented by the variance of the resource utilization of the different nodes:

$LB = \frac{1}{|N|} \sum_j \left( U_j - \overline{U} \right)^2$

where $U_j$ denotes the resource utilization of node $n_j$ and $\overline{U}$ the mean utilization over all nodes.
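The utilization and variance computation of steps E1 to E3 can be sketched as follows; treating the per-node utilizations as a flat list is an assumption made for illustration.

```python
def utilization(allocated, total):
    # Per-node utilization, e.g. U_jc = (sum_i x_ij * c_i) / r_jc
    return allocated / total

def load_balance(utils):
    # Variance of resource utilization across nodes: lower means better balance
    mean = sum(utils) / len(utils)
    return sum((u - mean) ** 2 for u in utils) / len(utils)

balanced = load_balance([0.5, 0.5, 0.5])   # identical loads: variance is zero
skewed = load_balance([0.9, 0.1, 0.5])     # uneven loads: strictly positive variance
assert balanced == 0.0 and skewed > balanced
```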
the invention has the following beneficial effects:
First, a blockchain is introduced and the resource information and service behavior data are stored on-chain, ensuring that the data are tamper-proof and traceable throughout and realizing trusted resource sharing. Second, on the basis of the blockchain-based trusted service provision platform, a micro-service deployment model oriented to multidimensional resources is designed; the model takes node resource limits as constraints and optimal deployment cost and service time as objectives, takes conditions such as node failure into account, monitors node resource states in real time, and supports dynamic micro-service migration. Finally, the invention realizes service provision and dynamic adjustment of decentralized network resources according to the multi-objective optimization model, remedying the prior-art deficiencies identified above: ignoring the process by which decentralized network resources support service provision, neglecting the integration and adjustment of heterogeneous resources, lacking dynamic adaptability during resource sharing, failing to integrate scattered network resources into trusted services, and failing to allocate resources on demand for the business. A satisfactory effect is thereby achieved.
Drawings
FIG. 1 is a schematic diagram of a model of a service trusted providing system of the present invention;
fig. 2 is a schematic diagram of an example of the practical use of the present invention.
Detailed Description
The foregoing and other features, aspects and advantages of the present invention will become more apparent from the following detailed description of the embodiments, which proceeds with reference to the accompanying figures 1-2. The following embodiments are described in detail with reference to the drawings.
Exemplary embodiments of the present invention will be described below with reference to the accompanying drawings.
The service trusted providing system for decentralized network resources comprises a physical resource layer, a resource trusted management layer, an intelligent service providing layer and an application layer;
the physical resource layer consists of an Internet of things terminal, an edge resource, a core backbone network resource and a cloud resource, and provides computing resources, communication resources and storage resources for the service trusted providing system;
the resource credible management layer consists of a virtualized resource pool and a resource management module based on a block chain, registers resource information on the block chain and carries out on-chain management;
the intelligent service providing layer comprises a micro service registration center, a micro service orchestrator and a micro service monitor, wherein the micro service orchestrator invokes a micro service orchestration algorithm to design a micro service deployment strategy, and the micro service monitor monitors state information of resources in a cluster in real time and adaptively adjusts deployment nodes of the micro service according to the tendency of the resources of the micro service to be deployed;
the application layer comprises intelligent application of the Internet of things and directly provides intelligent service for users.
The virtualized resource pool decouples network elements from hardware resources in the physical network. The blockchain-based resource management module selects nodes such as terminals, gateways, edge servers, or cloud servers in the IoT network to be configured as full nodes of a consortium chain; other, less capable nodes serve as light nodes that only download block headers and transaction-related information and participate in the caching and credibility verification of resources and data on the consortium chain, but not in consensus. The module registers the resource information on the blockchain and manages it on-chain. The set of network resource management domains is defined as $G = G_1 \cup G_2 \cup \dots \cup G_{|G|} \cup L$, where $|G|$ is the number of resource management domains; each network resource management domain is an undirected graph $G_m = \{N_m, L_m\}$, where $N_m$ is the set of resource providing nodes in $G_m$, $L_m$ is the set of intra-domain links of $G_m$, and $L$ is the set of inter-domain links.
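A minimal sketch of one management domain $G_m = \{N_m, L_m\}$ as an undirected graph; using `frozenset` for undirected edges is an implementation choice for illustration, not something stated in the patent.

```python
class ManagementDomain:
    """Undirected graph G_m = {N_m, L_m} of a network resource management domain."""
    def __init__(self, name):
        self.name = name
        self.nodes = set()   # N_m: resource providing nodes
        self.links = set()   # L_m: intra-domain links, stored as unordered pairs

    def add_link(self, a, b):
        self.nodes.update((a, b))
        self.links.add(frozenset((a, b)))  # undirected: {a, b} == {b, a}

g1 = ManagementDomain("G1")
g1.add_link("n1", "n2")
g1.add_link("n2", "n1")   # same undirected edge, not duplicated
assert len(g1.links) == 1 and g1.nodes == {"n1", "n2"}
```

The full set $G$ would then be a collection of such domains plus a separate set of inter-domain links $L$.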
A method for the trusted provision of services over decentralized network resources, the method comprising the following steps:
S1, a resource providing node sends a registration request to the blockchain-based resource management module, the registration information contained in the request comprising its own ID, its position information, and the amount of resources it has available;
S2, upon receiving the registration request of the resource providing node, a smart contract is triggered to verify the node's registration information; after verification passes, confirmation of registration is returned, and the registration information of the resource providing node is written into the blockchain as a transaction;
S3, the user initiates a service request to the trusted service provision platform, the request comprising the user's identity information and the service requirements, which in turn comprise functional and non-functional requirements; the service request is registered in the micro-service registration center, and the trusted service provision platform comprises a virtualized resource pool, a blockchain-based resource management module, and a micro-service module;
S4, the micro-service orchestrator runs the micro-service orchestration algorithm, matches the service request against the resource information, selects resource providing nodes, and obtains a resource scheduling decision;
S5, the micro-service registration center sends the service request to the blockchain resource management module, which sends a resource sharing request to the selected resource providing nodes according to the resource scheduling decision;
S6, the resource providing nodes supply the resources that support the micro-service deployment and provide the service to the user;
S7, the micro-service monitor monitors the resource surplus of the resource providing nodes in real time, adaptively adjusts the micro-service deployment according to the multi-objective optimization model, and carries out micro-service migration;
S8, after a resource providing node has provided its resources, if its resource information has changed, it uploads the updated resource information to the blockchain-based resource management module;
S9, after the service ends, the user evaluates the QoS; the user evaluation triggers a smart contract to allocate incentives, and the blockchain platform distributes incentives according to the contribution of each resource providing node.
The resource providing nodes are divided into two categories: one category is server nodes $n_i$, which have computing resources and storage resources and carry out the deployment of micro-services; the other category is switching nodes $n_j$, which only forward traffic. A server node $n_i$ has a computing resource amount denoted $r_{ic}$ and a storage capacity $r_{im}$; a switching node $n_j$ has its computing resource amount defined as $r_{jc} = 0$ and its storage capacity defined as $r_{jm} = 0$. The physical link between nodes $n_i$ and $n_j$ is denoted $l_{ij}$, the bandwidth of the link is limited to $B_{ij}$, and its transmission delay is $d_{ij}$.
In step S3, when the user initiates a service request, the micro-service module divides the intelligent application program into a number of micro-services with specific dependency relationships, denoted $A = \{ms_1, ms_2, \dots, ms_{|A|}\}$, where $|A|$ is the total number of micro-services and $ms_i$ denotes a micro-service; each micro-service is encapsulated in exactly one container. For container $i$ and service node $j$, $x_{ij} = 1$ means that container $i$ is deployed on service node $j$, otherwise $x_{ij} = 0$; when the communication link $e_{ij}$ between containers is deployed on physical link $l_{ab}$, this is denoted $y_{ij,ab} = 1$, otherwise $y_{ij,ab} = 0$; $D_{ij}$ is the physical distance between the nodes on which micro-services $ms_i$ and $ms_j$ are deployed.
In step S4, the micro-service orchestrator matches resource providing nodes with suitable resources against the micro-service resource requirements to generate a micro-service deployment policy $S = \{s_1, s_2, \dots, s_{|A|}\}$, where $s_i$ includes the target node on which micro-service $ms_i$ is deployed and its resource quota. The micro-service monitor periodically monitors micro-service operation, including end-to-end delay, throughput, the execution time of each micro-service $ms_i$, and the resource quota occupancy of each micro-service $ms_i$, and feeds this back to the micro-service orchestrator. In the next time slot, the allocation of micro-service resources is adjusted according to the multi-objective optimization model, and whether to migrate a micro-service is decided according to the fed-back performance.
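A hedged sketch of the monitor's migration decision for the next time slot; the SLO threshold, utilization limit, and metric names are illustrative assumptions rather than values from the patent.

```python
def needs_migration(metrics, quota, delay_slo=100.0, util_limit=0.9):
    # Decide migration from the fed-back performance: either the end-to-end
    # delay breaches the (assumed) SLO, or the micro-service is pressing
    # against its resource quota.
    over_delay = metrics["e2e_delay_ms"] > delay_slo
    over_quota = metrics["cpu_used"] > util_limit * quota["cpu"]
    return over_delay or over_quota

s_i = {"node": "n3", "quota": {"cpu": 2.0}}              # one policy entry s_i
healthy = {"e2e_delay_ms": 40.0, "cpu_used": 1.0}
strained = {"e2e_delay_ms": 150.0, "cpu_used": 1.95}
assert not needs_migration(healthy, s_i["quota"])
assert needs_migration(strained, s_i["quota"])
```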
Step S4 uses the DDQN method, so the specific steps by which the step S4 micro-service orchestrator obtains the deployment strategy are as follows:
F1, input micro-service $ms_i$ to the micro-service orchestrator, the input including the information about $ms_i$ and the resource providing node information;
F2, initialize the cost function for every environment state and action, initialize the parameters of the evaluation Q network and the target Q network, and empty the replay buffer;
F3, the agent interacts with the environment, randomly selects an initial state, and, based on that state, selects the action with the maximum Q value through the Q network;
F4, the agent executes the action in the environment, computes the instantaneous reward of the basic task, and puts the state, action, reward, and next state into the replay buffer;
F5, randomly sample experiences from the replay buffer, and feed the sampled instantaneous rewards and next states into the target network for target value calculation, the experiences being the state, action, reward, and next state from step F4;
F6, feed the target value into the evaluation Q network to compute the error function, and update the Q network;
F7, obtain the micro-service deployment strategy.
The terms agent, environment, buffer, state, action, and reward above are all standard terms of the DDQN algorithm.
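The steps F1 to F7 can be sketched with a tiny tabular stand-in for DDQN on a toy chain environment: a dictionary replaces each Q network, and the toy MDP, learning rate, exploration rate, and target-sync period are all assumptions made for illustration; no real deployment environment is modeled.

```python
import random
from collections import deque

# Toy MDP: states 0..3, actions 0/1; action 1 moves toward state 3, which pays reward 1.
N_STATES, ACTIONS, GAMMA, ALPHA = 4, (0, 1), 0.9, 0.5

def step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, 1.0 if s2 == N_STATES - 1 else 0.0

q_eval = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}  # evaluation Q network (F2)
q_target = dict(q_eval)                                           # target Q network (F2)
buffer = deque(maxlen=500)                                        # replay buffer (F2)

random.seed(0)
for episode in range(200):
    s = random.randrange(N_STATES)                                # random initial state (F3)
    for _ in range(8):
        # epsilon-greedy action selection via the evaluation network (F3)
        a = random.choice(ACTIONS) if random.random() < 0.2 else \
            max(ACTIONS, key=lambda x: q_eval[(s, x)])
        s2, r = step(s, a)
        buffer.append((s, a, r, s2))                              # store transition (F4)
        batch = random.sample(buffer, min(16, len(buffer)))       # random sampling (F5)
        for bs, ba, br, bs2 in batch:
            # Double-DQN target: the evaluation net picks the action, the
            # target net scores it, decoupling selection from estimation.
            a_star = max(ACTIONS, key=lambda x: q_eval[(bs2, x)])
            y = br + GAMMA * q_target[(bs2, a_star)]              # target value (F5)
            q_eval[(bs, ba)] += ALPHA * (y - q_eval[(bs, ba)])    # update eval net (F6)
        s = s2
    if episode % 10 == 0:
        q_target = dict(q_eval)                                   # periodic target sync

# Greedy policy (F7): for every non-terminal state, move toward the reward.
policy = [max(ACTIONS, key=lambda x: q_eval[(s, x)]) for s in range(N_STATES - 1)]
```

In the patent's setting the state would encode node resource occupancy and the pending micro-service request, and each action would pick a deployment node and a resource quota; the toy chain merely shows the mechanics of the buffer, the two networks, and the decoupled update.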
The micro-service orchestrator in step S4 obtains the micro-service deployment strategy using DDQN, i.e. a DRL-driven micro-service orchestration method: maximizing the cumulative reward is modeled as a Markov decision process (MDP) $\{S, A, R, P\}$, where $S$ is the state space, $A$ the action space, $R$ the reward function, and $P$ the state transition probability. In the MDP, at each time slot $t$ the agent observes the state of the external environment and takes an action $a_t$; influenced by the action, the environment enters a new state and feeds back an instant reward $r_t$ to the agent; the environment state then becomes $s_{t+1}$, the agent takes another action $a_{t+1}$ and obtains feedback $r_{t+1}$, and the environment state transitions to $s_{t+2}$. The goal of the DRL agent is to maximize the cumulative reward. Each action taken by the agent includes selecting a node for the deployment of a micro-service and the resources that node provides to the micro-service; the new state and reward obtained are used to update the parameters of the Q network and make the neural network converge. The resulting state space, action space, and reward function are as follows:
State space S: node positions are selected and resources allocated for the containerized micro-services of the application according to the real-time resource surplus of each physical node; the state space is defined as S = {nRes, mRes}, where nRes represents the resource occupation of all nodes and mRes represents the resource requests of the containerized micro-services;
Action space A: a containerized micro-service instance is deployed on a physical node, so at deployment time both the physical node and the amount of resources allocated to the micro-service are selected; each action in the action space A is defined as a_i = {n_j, Res_i}, i ∈ {1, 2, ..., M}, j ∈ {1, 2, ..., N}, where Res_i represents the computing, communication and storage resources allocated to the containerized micro-service ms_i;
Reward function R: the learning process in deep reinforcement learning is driven by the reward function, and the agent maximizes its reward value through interaction with the environment; when the selected action satisfies the constraints, the instant reward function is defined as:
Rd_s(s_t, s_{t+1}, a_t) = C(s_t) − C(s_{t+1}) (24)
where C(s_t) = α·Cost(s_t) + β·Delay(s_t) represents the joint cost in state s_t. The final goal of agent learning is to maximize the cumulative reward, computed as:
R = Σ_{t=0..T−1} Rd_s(s_t, s_{t+1}, a_t) = C(s_0) − C(s_T) (25)
Since C(s_0) is a positive constant at t = 0, maximizing the cumulative reward amounts to minimizing the joint optimization objective C(s_T).
To avoid overestimation of the Q value, a Double-DQN (DDQN) model is introduced: action selection and value estimation are decoupled, two Q networks are trained simultaneously, and the smaller Q value is used for computing the error function, which avoids Q value overestimation. The specific steps of the DDQN-based micro-service dynamic adaptive orchestration algorithm are as follows:
G1, the agent obtains state data from the environment. The stochastic policy π(a|s; θ) gives the probability of the agent selecting action a in state s, where θ denotes the network weights. The state value function, i.e. the expected cumulative reward from a given state until one round of iteration completes, is computed by the temporal-difference method, updating the current state value from the next state value, where the state s is an element of the state space S:
V_π(s_t) = V_π(s_{t+1}) + r_t (26)
The action value function is defined as
Q_π(s_t, a_t) = E[ Σ_{k≥0} γ^k·r_{t+k} | s_t, a_t ] (27)
The evaluation Q network yields the Q values corresponding to the different actions in the current state s_t; using an ε-greedy policy, with probability 1 − ε the action a_t with the maximum Q value is selected according to equation (28):
a_t = argmax_a Q_π(s_t, a) (28)
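As an illustration, the ε-greedy selection of equation (28) can be sketched as follows (a minimal sketch; the function name and the list-based representation of Q values are assumptions for illustration, not part of the patented method):

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """With probability 1 - epsilon pick the action maximizing the Q value
    (eq. 28); otherwise explore an action uniformly at random."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                      # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit
```

With ε = 0 the policy is purely greedy; a larger ε increases exploration early in training.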
In the DDQN algorithm framework, besides approximating the current value function with a deep convolutional network, the micro-service orchestrator uses a separate second network to generate the target Q value according to equation (29), where Q(s, a|θ_e) denotes the value function of the evaluation Q network and Q(s, a|θ_g) denotes the output of the target Q network:
y_t = r_t + γ·Q(s_{t+1}, argmax_a Q(s_{t+1}, a|θ_e) | θ_g) (29)
The parameters θ_e of the evaluation Q network are updated in real time, and every N rounds of iteration they are synchronously copied to the target Q network. The network parameters are updated by minimizing the mean squared error between the evaluation Q value and the target Q value, with the error function expressed as:
L(θ_t) = E_{s,a,r,s'}[(y_t − Q(s, a|θ_t))²] (30)
Taking the partial derivative with respect to the parameter θ_t yields the gradient:
∇_{θ_t} L(θ_t) = E_{s,a,r,s'}[(y_t − Q(s, a|θ_t))·∇_{θ_t} Q(s, a|θ_t)] (31)
after the target value network is introduced, the target Q value is kept unchanged in a period of time, so that the correlation between the target Q value and the estimated Q value is reduced, and the stability of the algorithm is improved. Meanwhile, the DDQN trains two Q networks simultaneously, and the action selection and the value estimation are decoupled, so that the Q value overestimation error can be reduced.
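The decoupled target computation of equation (29) can be sketched with tabular Q values standing in for the two networks (an assumption made purely for illustration; the patent uses deep convolutional networks):

```python
def ddqn_target(q_eval, q_target, reward, next_state, gamma=0.9, done=False):
    """DDQN target of eq. (29): the evaluation network (theta_e) selects the
    best next action, the target network (theta_g) estimates its value,
    decoupling action selection from value estimation."""
    if done:                       # terminal transition: no bootstrap term
        return reward
    actions = range(len(q_eval[next_state]))
    a_star = max(actions, key=lambda a: q_eval[next_state][a])  # argmax by theta_e
    return reward + gamma * q_target[next_state][a_star]        # value by theta_g
```

The error function of equation (30) is then the squared difference between this target and the evaluation network's Q(s, a|θ_t), averaged over the sampled experiences.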
The optimization goal of the micro-service deployment in step S6 is to minimize the resource usage cost, and the overall deployment cost of the micro-services is one of the optimization targets. The specific steps by which step D1 obtains the micro-service deployment cost Cost_deploy are as follows:
G1, the resource usage cost when micro-service ms_i is deployed at a physical node is as shown in equation (1):
Cost_node = Σ_{i=1..M} (c_i·q_c + m_i·q_m) (1)
where c_i denotes the amount of computing resources provided for micro-service ms_i, m_i the amount of storage resources provided for ms_i, q_c the unit price of computing resources, and q_m the unit price of storage resources;
G2, because the micro-services have dependency relationships and perform frequent data interaction, the optimization target must also consider the link resource usage cost, shown in equation (2):
Cost_link = Σ_{i,j} tr_ij·q_l (2)
where tr_ij denotes the data transmission rate required for data interaction between micro-services ms_i and ms_j, and q_l denotes the unit price of link resources;
G3, the overall deployment cost of the micro-services is obtained as shown in equation (3):
Cost_deploy = Cost_node + Cost_link (3)
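Equations (1)-(3) amount to a simple weighted sum, which can be sketched as follows (the data layout — tuples of per-service allocations and a flat list of inter-service rates — is an assumption for illustration):

```python
def deployment_cost(services, link_rates, q_c, q_m, q_l):
    """Overall micro-service deployment cost, eqs. (1)-(3):
    node cost sums compute and storage allocations at their unit prices,
    link cost sums the inter-service data rates at the link unit price."""
    cost_node = sum(c_i * q_c + m_i * q_m for c_i, m_i in services)  # eq. (1)
    cost_link = sum(tr * q_l for tr in link_rates)                   # eq. (2)
    return cost_node + cost_link                                     # eq. (3)
```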
Because the application is split into multiple micro-services ms_i for processing, the overall delay of the application is computed in the following specific steps:
B1, compute the overall task processing delay:
D_e = Σ_{i=1..M} TA_i·κ / c_i (4)
where TA_i is the size of the task micro-service ms_i has to complete, κ is the amount of computing resources required to complete a unit task, τ_i is the task completion time threshold, and c_i is the amount of computing resources provided for micro-service ms_i; if c_i = 0 the task processing delay is infinite; a task can be divided into multiple unit tasks;
B2, compute the task transmission delay:
D_t = Σ_{i=1..M} (TA_i / tr + d_ab) (5)
where tr is the task transmission rate and d_ab is the link delay between nodes n_a and n_b;
B3, the overall delay of the application can then be expressed as:
D_deploy = D_e + D_t (6)
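Equations (4)-(6) can be sketched as follows (a minimal sketch; the helper names are illustrative, and c_i = 0 is handled as infinite delay per the text above):

```python
import math

def processing_delay(tasks, allocations, kappa):
    """Eq. (4): D_e = sum(TA_i * kappa / c_i); infinite when c_i = 0."""
    return sum(ta * kappa / c if c > 0 else math.inf
               for ta, c in zip(tasks, allocations))

def transmission_delay(tasks, tr, link_delays):
    """Eq. (5): D_t = sum(TA_i / tr + d_ab)."""
    return sum(ta / tr + d for ta, d in zip(tasks, link_delays))

def overall_delay(tasks, allocations, kappa, tr, link_delays):
    """Eq. (6): D_deploy = D_e + D_t."""
    return (processing_delay(tasks, allocations, kappa)
            + transmission_delay(tasks, tr, link_delays))
```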
To avoid service provision failing because a resource providing node is stopped, fails, is resource-limited or moves out of service range, stable micro-service migration must be implemented while keeping the overall migration delay and cost low. The migration problem is characterized with a time-slot model: within one time slot the state of each resource providing node, i.e. its resource amount, is kept constant, while from the current time slot to the next that state may change, i.e. nodes close, withdraw, or new nodes join, and a micro-service may migrate from its current deployment node to another resource providing node; one time slot is therefore regarded as one migration round. A binary variable z_{jj'} describes the migration state: if x_ij(t) = 1 and x_ij'(t+1) = 1 then z_{jj'} = 1, meaning micro-service ms_i migrates from server node j to server node j' at time slot t; otherwise z_{jj'} = 0, i.e. ms_i does not migrate. The specific steps for the migration cost of micro-service ms_i in step S7 are as follows:
C1, compute the task processing time of micro-service ms_i deployed at node n_j:
D_i^e(t) = TA_i·κ / c_ij(t) (7)
where c_ij(t) denotes the amount of computing resources that resource providing node n_j can provide for micro-service ms_i, and the task transmission delay is as shown in equation (8):
D_i^t(t) = TA_i / tr + d_ab(t) (8)
where d_ab(t) denotes the link delay between the resource providing nodes a and b on which the micro-service is deployed;
C2, at time slot t micro-service ms_i has completed only part of its task, and because of the node state or the movement of the user it is forced to migrate to another node to continue task processing in time slot t+1; the size and computation time of the unfinished task must then be calculated, the unfinished task size being given by equation (9):
TA_i^u(t) = min(TA_i, max(0, TA_i − (Δt − D_i^t(t))·c_ij(t)/κ)) (9)
where Δt denotes the length of one time slot. If the transmission delay is greater than or equal to one time slot, the unfinished task size equals TA_i, meaning the task has not yet started to execute within the slot; if the transmission delay is less than one time slot, part of the task has been executed; and if the task can be completed within one time slot, the unfinished task size is 0, i.e. no task needs to be migrated;
C3, according to step C2, the delays incurred when micro-service ms_i migrates include the migration downtime and the delay caused by re-executing the unfinished task, so the migration delay of ms_i can be expressed as
D_i^mg(t) = z_{jj'}(t, t+1)·(λ + δ_i) (10)
where λ denotes the migration downtime, counted only when z_{jj'}(t, t+1) = 1, i.e. when the micro-service actually migrates; let
δ_i = TA_i^u(t)·κ / c_ij'(t+1) − TA_i^u(t)·κ / c_ij(t) (11)
denote the difference between the time required to re-execute the unfinished task on the new node and the time required without re-execution on the original node; if δ_i ≤ 0, migration reduces the service delay, otherwise it increases it;
C4, the overall migration delay is then expressed as
D_mg = Σ_{i=1..M} z_{jj'}(t, t+1)·(λ + δ_i) (12)
C5, migrating micro-service ms_i consumes bandwidth resources for relocating ms_i, so the migration transport cost of ms_i can be expressed as:
Cost_tr = Σ_{i=1..M} z_{jj'}(t, t+1)·TA_i^u(t)·q_l (13)
Relocating micro-service ms_i also causes multi-dimensional resource re-allocation, i.e. of computing, communication and storage resources, so the additional resource usage cost caused by migration can be expressed as:
Cost_pl = Σ_{i=1..M} z_{jj'}(t, t+1)·(Δc_i·q_c + Δm_i·q_m) (14)
Let Δc_i = c_ij'(t+1) − c_ij(t) denote the difference between the amount of computing resources provided by the new resource providing node and by the original node; if Δc_i ≤ 0 the migration reduces resource consumption, otherwise it increases it, and the same holds for the storage resource difference Δm_i;
C6, because the micro-services ms_i have dependency relationships, the network must be rerouted after micro-service ms_i migrates, so the cost of network rerouting must be considered:
Cost_ln = Σ_{i,j} z_{jj'}(t, t+1)·tr_ij·q_l
C7, the overall migration cost of micro-service ms_i is expressed as:
Cost_mg = Cost_tr + Cost_pl + Cost_ln (15)
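The migration cost of equations (13)-(15) can be sketched as follows (the flat per-service lists and the pre-computed rerouting cost are assumptions for illustration; z_i is the binary migration indicator of the text):

```python
def migration_cost(migrated, unfinished, q_l, delta_c, delta_m, q_c, q_m,
                   cost_reroute):
    """Eqs. (13)-(15): transport cost for moving the unfinished task data,
    additional resource cost from re-allocation on the new node, plus the
    network rerouting cost; terms count only where z_i = 1."""
    cost_tr = sum(z * ta_u * q_l for z, ta_u in zip(migrated, unfinished))
    cost_pl = sum(z * (dc * q_c + dm * q_m)
                  for z, dc, dm in zip(migrated, delta_c, delta_m))
    return cost_tr + cost_pl + cost_reroute
```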
The physical meaning of minimizing the resource usage cost is to allocate as few resources to the micro-services as possible while still completing the tasks, minimizing resource waste and energy consumption. However, the amount of allocated resources affects the task processing and transmission delays, so a multi-objective optimization model over resource cost and overall delay must be built. The specific steps of building the multi-objective optimization model in step S7 are:
D1, the micro-service orchestration and migration cost of the overall application can be expressed as
Cost = ω_d·Cost_deploy + ω_m·Cost_mg (16)
where ω_d and ω_m denote the weights of the micro-service deployment cost Cost_deploy and the migration cost Cost_mg respectively, and ω_d + ω_m = 1; when no migration occurs, equations (12), (13) and (14) give Cost_mg = 0, i.e. the overall cost considers only the micro-service deployment cost;
D2, the micro-service orchestration and migration delay of the overall application can be expressed as
Delay = ω_dd·D_deploy + ω_dm·D_mg (17)
where ω_dd and ω_dm denote the weights of the micro-service deployment delay and the migration delay respectively, and ω_dd + ω_dm = 1. The optimization objective of the multi-dimensional-resource-constrained micro-service dynamic deployment model is to minimize the micro-service orchestration cost and delay:
min Cost (18)
min Delay (19)
D3, load balancing of the whole application is considered, as shown in equation (22):
B ≤ B_lim (22)
D4, the multi-objective optimization model is obtained:
min Cost (18)
min Delay (19)
s.t. Σ_{i=1..M} m_i·x_ij ≤ r_jm and Σ_{i=1..M} c_i·x_ij ≤ r_jc for every node n_j, the link bandwidth occupied on each physical link not exceeding that link's bandwidth, B ≤ B_lim, and Delay ≤ D_lim
That is, the goal of the multi-dimensional-resource-constrained micro-service dynamic deployment model is to minimize the resource usage cost and the delay while respecting the node resource limits: the sum of the memory capacity required by all micro-services on a node cannot exceed the node's available capacity; the total amount of computing resources allocated to micro-services on a node cannot exceed the total computing resources the node provides; and the link bandwidth occupied by micro-services deployed on the same physical link cannot exceed that physical link's bandwidth. Finally, to guarantee QoS, the load of the overall service deployment must not exceed the load balancing threshold B_lim and the delay must not exceed the delay threshold D_lim.
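A feasibility check over the constraints just listed can be sketched as follows (the argument layout and function name are assumptions for illustration):

```python
def feasible(mem_used, mem_cap, cpu_used, cpu_cap, bw_used, bw_cap,
             load_b, b_lim, delay, d_lim):
    """Check the constraints of the multi-objective model: per-node memory
    and computing limits, per-link bandwidth limits, the load-balancing
    threshold B_lim and the delay threshold D_lim."""
    nodes_ok = (all(m <= cap for m, cap in zip(mem_used, mem_cap))
                and all(c <= cap for c, cap in zip(cpu_used, cpu_cap)))
    links_ok = all(b <= cap for b, cap in zip(bw_used, bw_cap))
    return nodes_ok and links_ok and load_b <= b_lim and delay <= d_lim
```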
The specific calculation process in the step D3 is as follows:
E1, the computing resource occupancy of node n_j is computed as shown in the following formula, where r_jc denotes the total amount of computing resources provided by node n_j:
u_j^c = (Σ_{i=1..M} c_i·x_ij) / r_jc
E2, following step E1, the storage resource utilization is expressed as:
u_j^m = (Σ_{i=1..M} m_i·x_ij) / r_jm
E3, because load balancing aims to keep the resource occupancy of different nodes similar, avoiding the overly long processing delays and degraded overall service performance caused by overloaded nodes, load balancing is represented by the variance of the resource utilization across the nodes:
B = (1/N)·Σ_{j=1..N} (u_j − ū)² (23)
where u_j denotes the resource utilization of node n_j and ū the mean utilization over all nodes.
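The load-balance measure of equation (23) is a plain population variance, sketched below (the function name is illustrative):

```python
def load_balance(utilizations):
    """Eq. (23): variance of per-node resource utilization; a smaller
    value means a more even load across the nodes."""
    n = len(utilizations)
    mean = sum(utilizations) / n
    return sum((u - mean) ** 2 for u in utilizations) / n
```

This is equivalent to `statistics.pvariance` from the Python standard library.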
In specific use, take the smart grid and industrial scenarios as examples: resource usage differs between domains, and a single domain may be short of resources or have idle resources. But because of trust issues between different resource principals and the heterogeneity of the resources, resource sharing is difficult and service provision is not trusted. The platform constructed by this method can integrate and manage heterogeneous resources in the smart grid and industrial fields, using the blockchain as a supporting technology to guarantee trusted data sharing. Taking integrated energy services as an example, energy suppliers and customers can transact anonymously without revealing their private information. When a user initiates a service request, a smart contract is triggered, trusted resource information is provided for micro-service orchestration, the intelligent resource scheduling algorithm is invoked, the micro-services are deployed, and services such as energy trading and demand-side response are provided in a trusted manner.
The invention has the following beneficial effects:
Firstly, a blockchain is introduced and the resource information and service behavior data are stored on-chain, which guarantees that the data are tamper-proof and fully traceable and realizes trusted resource sharing. Secondly, on top of the blockchain-based trusted service provision platform, a micro-service deployment model oriented to multi-dimensional resources is designed; the model takes the node resource limits as constraints and optimal deployment cost and service time as targets, considers conditions such as node failure, monitors the node resource states in real time, and supports dynamic micro-service migration. Finally, the invention realizes service provision and dynamic adjustment of decentralized network resources according to the multi-objective optimization model, avoiding approaches that ignore the process supporting service provision from decentralized network resources, neglect the integration and adjustment of heterogeneous resources, and lack dynamic adaptability during resource sharing; it likewise avoids the difficulty of integrating decentralized network resources into trusted services and the failure to consider on-demand, service-oriented resource allocation, achieving a satisfactory effect.

Claims (9)

1. The service credible providing system of the decentralized network resource is characterized by comprising a physical resource layer, a resource credible management layer, an intelligent service providing layer and an application layer;
the physical resource layer consists of an Internet of things terminal, an edge resource, a core backbone network resource and a cloud resource, and provides computing resources, communication resources and storage resources for the service trusted providing system;
the resource credible management layer consists of a virtualized resource pool and a resource management module based on a block chain, registers resource information on the block chain and carries out on-chain management;
the intelligent service providing layer comprises a micro-service registration center, a micro-service orchestrator and a micro-service monitor, wherein the micro-service orchestrator invokes a micro-service orchestration algorithm to design the micro-service deployment strategy, and the micro-service monitor monitors the state information of the resources in the cluster in real time and adaptively adjusts the deployment nodes of the micro-services according to the multi-objective optimization model and the resource requirements of the micro-services to be deployed;
the application layer comprises intelligent application of the Internet of things and directly provides intelligent service for users.
2. A service trusted provision method for decentralized network resources, the service trusted provision method comprising the steps of:
s1, a resource providing node sends a registration request to a resource management module based on a blockchain, wherein registration information contained in the registration request comprises self ID, position information and self available resource amount information;
s2, triggering the intelligent contract to verify the registration information of the node after receiving the registration request of the resource providing node, returning confirmation registration information after the verification is passed, and writing the registration information of the resource providing node into the blockchain in a transaction mode;
s3, the user initiates a service request to the service trusted providing platform, wherein the service request comprises user identity information and service requirements, the service requirements comprise functional requirements and non-functional requirements, and the service request is registered in a micro-service registration center;
s4, the micro-service orchestrator runs the micro-service orchestration algorithm, matches the service request with the resource information, selects resource providing nodes, and obtains a resource scheduling decision;
s5, the micro-service registration center sends the service request to the block chain resource management module, and the block chain resource management module sends a resource sharing request to the selected resource providing node according to the resource scheduling decision;
s6, the resource providing node provides resource support micro-service deployment and provides service for users;
s7, the micro-service monitor monitors the resource surplus of the resource providing node in real time, adaptively adjusts micro-service deployment, and carries out micro-service migration;
s8, after the resource providing node provides the resource, if the resource information changes, uploading the updated resource information to a resource management module based on the block chain;
and S9, after the service ends, the user evaluates the QoS; the user evaluation triggers the incentive allocation smart contract, and the blockchain platform allocates incentives according to the contribution of the resource providing nodes.
3. The method of claim 2, wherein the service trusted provisioning platform in step S3 includes a virtualized resource pool, a blockchain-based resource management module, and a micro-service module.
4. The service trust providing method for decentralized network resources according to claim 2, wherein the resource providing nodes are divided into two types: one type is server nodes n_i^s, which have computing resources and storage resources and carry out the deployment of micro-services; the other type is switching nodes n_j^w, which forward traffic. A server node n_i^s has a computing resource amount denoted r_ic and a storage capacity r_im; a switching node n_j^w has its computing resource amount defined as r_jc = 0 and its storage capacity defined as r_jm = 0. The physical link between nodes n_i and n_j is denoted l_ij, the bandwidth of the link is limited to B_ij, and its transmission delay is d_ij.
5. The method for providing service trust of decentralized network resources according to claim 2, wherein the specific steps by which the micro-service orchestrator in step S4 deploys the micro-services and obtains the deployment policy are as follows:
F1, micro-service ms_i, including its related information and the resource providing node information, is input to the micro-service orchestrator;
F2, the cost functions corresponding to all environment states and actions are initialized, the parameters of the evaluation Q network and the target Q network are initialized, and the replay buffer is emptied;
F3, the agent interacts with the environment, randomly selects an initial state, and based on that state selects through the Q network the action with the maximum Q value;
F4, the agent executes the action in the environment, computes the instantaneous reward of the basic task, and stores the state, action, reward and next state in the replay buffer;
F5, experiences are randomly sampled from the replay buffer, and the sampled instantaneous rewards and next states are fed into the target network for target value calculation;
F6, the target value is fed into the evaluation Q network to compute the error function, and the update of the Q network begins;
F7, the micro-service deployment strategy is obtained.
6. The method for providing service trust of decentralized network resources according to claim 2, wherein the micro-service orchestrator in step S4 matches resource providing nodes according to the micro-service resource requirements to generate a micro-service deployment policy S = {s_1, s_2, ..., s_|A|}, where s_i includes the target node on which micro-service ms_i is deployed and its resource quota; the micro-service monitor periodically monitors the micro-service operation, including the end-to-end delay and throughput, the execution time of each micro-service ms_i, and the resource quota occupation, and feeds this back to the micro-service orchestrator.
7. The service trust providing method for decentralized network resources according to claim 2, wherein the specific steps of the multi-objective optimization model establishment in step S7 are:
D1, the micro-service orchestration and migration cost of the overall application can be expressed as
Cost = ω_d·Cost_deploy + ω_m·Cost_mg (16)
where ω_d and ω_m denote the weights of the micro-service deployment cost Cost_deploy and the migration cost Cost_mg respectively, and ω_d + ω_m = 1;
D2, the micro-service orchestration and migration delay of the overall application can be expressed as
Delay = ω_dd·D_deploy + ω_dm·D_mg (17)
where ω_dd and ω_dm denote the weights of the micro-service deployment delay D_deploy and the migration delay D_mg respectively, and ω_dd + ω_dm = 1;
And the optimization objective of the multi-dimensional resource constrained micro-service dynamic deployment model is to minimize the micro-service orchestration cost and time delay:
min Cost (18)
min Delay (19)
D3, load balancing of the whole application is considered, as shown in equation (22):
B ≤ B_lim (22)
D4, the multi-objective optimization model is obtained:
min Cost (18)
min Delay (19)
s.t. Σ_{i=1..M} m_i·x_ij ≤ r_jm and Σ_{i=1..M} c_i·x_ij ≤ r_jc for every node n_j, the link bandwidth occupied on each physical link not exceeding that link's bandwidth, B ≤ B_lim, and Delay ≤ D_lim
8. The method for providing service trustworthiness of decentralized network resources according to claim 2, wherein the specific steps by which step D1 obtains the micro-service deployment cost Cost_deploy are as follows:
G1, the resource usage cost when micro-service ms_i is deployed at a physical node is as shown in equation (1):
Cost_node = Σ_{i=1..M} (c_i·q_c + m_i·q_m) (1)
where c_i denotes the amount of computing resources provided for micro-service ms_i, m_i the amount of storage resources provided for ms_i, q_c the unit price of computing resources, and q_m the unit price of storage resources;
G2, the link resource usage cost is shown in equation (2):
Cost_link = Σ_{i,j} tr_ij·q_l (2)
where tr_ij denotes the data transmission rate required for data interaction between micro-services ms_i and ms_j, and q_l denotes the unit price of link resources;
G3, the overall deployment cost of the micro-services is obtained as shown in equation (3):
Cost_deploy = Cost_node + Cost_link (3).
9. The service trusted providing method of decentralized network resources according to claim 2, wherein the specific calculation process in step D3 is as follows:
E1, the computing resource occupancy of node n_j is computed as shown in the following formula, where r_jc denotes the total amount of computing resources provided by node n_j:
u_j^c = (Σ_{i=1..M} c_i·x_ij) / r_jc
E2, following step E1, the storage resource utilization is expressed as:
u_j^m = (Σ_{i=1..M} m_i·x_ij) / r_jm
E3, because load balancing aims to keep the resource occupancy of different nodes similar, avoiding the overly long processing delays and degraded overall service performance caused by overloaded nodes, load balancing is represented by the variance of the resource utilization across the nodes:
B = (1/N)·Σ_{j=1..N} (u_j − ū)²
CN202211628439.XA 2022-12-17 2022-12-17 Service credible providing system and method for decentralized network resources Withdrawn CN116016550A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211628439.XA CN116016550A (en) 2022-12-17 2022-12-17 Service credible providing system and method for decentralized network resources

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211628439.XA CN116016550A (en) 2022-12-17 2022-12-17 Service credible providing system and method for decentralized network resources

Publications (1)

Publication Number Publication Date
CN116016550A true CN116016550A (en) 2023-04-25

Family

ID=86036495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211628439.XA Withdrawn CN116016550A (en) 2022-12-17 2022-12-17 Service credible providing system and method for decentralized network resources

Country Status (1)

Country Link
CN (1) CN116016550A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116302448A (en) * 2023-05-12 2023-06-23 中国科学技术大学先进技术研究院 Task scheduling method and system
CN116302448B (en) * 2023-05-12 2023-08-11 中国科学技术大学先进技术研究院 Task scheduling method and system
CN117112242A (en) * 2023-10-24 2023-11-24 纬创软件(武汉)有限公司 Resource node allocation method and system in cloud computing system
CN117112242B (en) * 2023-10-24 2024-01-26 纬创软件(武汉)有限公司 Resource node allocation method and system in cloud computing system

Similar Documents

Publication Publication Date Title
Ye et al. A survey of self-organization mechanisms in multiagent systems
CN116016550A (en) Service credible providing system and method for decentralized network resources
Hammoud et al. On demand fog federations for horizontal federated learning in IoV
Moreno et al. Efficient decision-making under uncertainty for proactive self-adaptation
Khorsand et al. A self‐learning fuzzy approach for proactive resource provisioning in cloud environment
Cao Self-organizing agents for grid load balancing
US11640322B2 (en) Configuring nodes for distributed compute tasks
Kim et al. Multi-agent reinforcement learning-based resource management for end-to-end network slicing
Ayoubi et al. An autonomous IoT service placement methodology in fog computing
CN101873224A (en) Cloud computing load balancing method and equipment
CN110022230A (en) The parallel dispositions method of service chaining and device based on deeply study
Chen et al. Dynamic QoS optimization architecture for cloud-based DDDAS
Gu et al. Deep reinforcement learning based VNF management in geo-distributed edge computing
Caporuscio et al. Reinforcement learning techniques for decentralized self-adaptive service assembly
He et al. A-DDPG: Attention mechanism-based deep reinforcement learning for NFV
CN116126534A (en) Cloud resource dynamic expansion method and system
Carpio et al. Scaling migrations and replications of virtual network functions based on network traffic forecasting
Toumi et al. On using deep reinforcement learning for multi-domain SFC placement
Dalgkitsis et al. SCHE2MA: Scalable, energy-aware, multidomain orchestration for beyond-5G URLLC services
Robles-Enciso et al. A multi-layer guided reinforcement learning-based tasks offloading in edge computing
Zhang et al. An efficient and autonomous scheme for solving IoT service placement problem using the improved Archimedes optimization algorithm
Sebastio et al. A holistic approach for collaborative workload execution in volunteer clouds
Faraji-Mehmandar et al. A self-learning approach for proactive resource and service provisioning in fog environment
Šešum-Čavić et al. Chapter 8 self-organized load balancing through swarm intelligence
Yang et al. A self-adaptive method of task allocation in clustering-based MANETs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20230425