CN117407174A - Predictive maintenance task unloading and resource allocation method based on artificial bee colony algorithm - Google Patents

Predictive maintenance task unloading and resource allocation method based on artificial bee colony algorithm

Info

Publication number
CN117407174A
Authority
CN
China
Prior art keywords
resource allocation
bee
bees
scheme
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311564764.9A
Other languages
Chinese (zh)
Inventor
张博
王承昊
杨锟浩
赵名扬
赵巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou University
Original Assignee
Zhengzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou University filed Critical Zhengzhou University
Priority to CN202311564764.9A priority Critical patent/CN117407174A/en
Publication of CN117407174A publication Critical patent/CN117407174A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/12Computing arrangements based on biological models using genetic models
    • G06N3/126Evolutionary algorithms, e.g. genetic algorithms or genetic programming

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Physiology (AREA)
  • Genetics & Genomics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a predictive maintenance task offloading and resource allocation method based on an artificial bee colony algorithm, which comprises the following steps. Task offloading scheme based on an improved ABC algorithm: initialize the employed bee parameters; the employed bees perform a neighborhood search for food sources; calculate the roulette-wheel probability and select the next food source; the employed bees and onlooker bees exchange information, and whether scout bees appear is judged; if so, food sources are regenerated using the crossover and mutation of a genetic algorithm and the optimal food source is then stored, otherwise the optimal food source is stored directly; whether the number of food-source search rounds of the employed bees has reached the number of food sources is judged, ending if so and returning otherwise. The method further comprises a resource allocation method based on DDPG: randomly initialize the Critic network and the Actor network, and convert the bee-state data of the colony; generate a resource allocation scheme; update the Critic and Actor target networks; and judge whether the set number of training rounds has been reached, outputting the resource allocation scheme if so and returning otherwise.

Description

Predictive maintenance task unloading and resource allocation method based on artificial bee colony algorithm
Technical Field
The invention belongs to the field of edge computing, and particularly relates to a predictive maintenance task offloading and resource allocation method based on an artificial bee colony algorithm.
Background
In the industrial Internet of Things, industrial equipment often fails to operate normally due to damage from various causes. Equipment maintenance is an important component of industrial production; it has a large influence on cost and equipment reliability and, to a certain extent, directly determines a company's competitiveness in price, performance and quality. Any unscheduled machine stoppage can reduce or halt a company's core business, resulting in significant losses. The basic concept of predictive maintenance (PdM) is to monitor the health of a machine with the aid of sensed data in order to determine its possible future degradation or failure. PdM helps companies optimize their maintenance policies by performing maintenance activities only when truly necessary, rather than replacing equipment or components after they fail or while they still have useful life. Existing predictive maintenance research aims at improving the accuracy of PdM and building complete PdM methods; little research addresses the processing or distribution of predictive maintenance tasks.
Because PdM activities generate large amounts of data and have real-time requirements, neither purely local computing nor offloading to the cloud can fully meet the requirements of PdM. Edge computing can overcome the limited computing power of terminal devices, and compared with offloading computation to a remote cloud, it avoids the high latency of offloading certain tasks to the cloud. The key technology of mobile edge computing (MEC) is computation offloading: the computing tasks of mobile terminals are offloaded to the edge network, relieving the mobile devices' limitations in storage, computing performance, energy efficiency and other respects. There are many studies on computation offloading, mainly covering offloading decisions and offloading resource allocation. The offloading process can be affected by different factors, such as user habits, wireless channel interference, communication link quality and mobile device performance; the key to computation offloading is making a suitable offloading decision, which has been a research hotspot in recent years.
In the prior art, MEC models are established and different methods are adopted to optimize the offloading scheme, including convex optimization algorithms, machine learning methods and swarm intelligence methods. However, existing MEC models mainly consider a single edge node or single-task offloading, and even in multi-task scenarios they do not distinguish well between the performance requirements of different tasks. In practical environments the network is often a complex heterogeneous edge network, the user side employs a variety of access modes rather than a single one, and different types of tasks have different requirements on computing power, data transmission capability and latency. Therefore, the multi-task offloading problem in practical application scenarios needs deep study: a heterogeneous edge network model and a multi-task offloading model should be established, and a suitable optimization algorithm adopted to solve the multi-task offloading problem in the complex network model.
Disclosure of Invention
The invention aims to solve the problem of offloading multiple maintenance tasks to edge devices and scheduling their resources, and provides a predictive maintenance task offloading and resource allocation method based on an artificial bee colony algorithm. The specific scheme is as follows:
the first aspect of the invention provides a predictive maintenance task unloading and resource allocation method based on an artificial bee colony algorithm, which comprises the following steps:
the task offloading scheme based on the improved ABC algorithm:
Food source: a food source is expressed as FS = <O, Z, C>, where O represents the task offloading scheme, Z represents the resource allocation scheme, and C is the total cost of the PdM process under that task offloading and resource allocation scheme;
Step 1-1, input the number of food sources SN, the tasks T and the edge nodes K;
Step 1-2, initialize the employed bee parameters;
Step 1-3, the employed bees perform a neighborhood search for food sources;
Step 1-4, calculate the roulette-wheel probability and select the next food source;
Step 1-5, the employed bees exchange information with the onlooker bees, and whether scout bees appear is judged; if so, food sources are regenerated using the crossover and mutation of a genetic algorithm and the optimal food source is then stored, otherwise the optimal food source is stored directly; the optimal food source is taken as the offloading scheme FS_i.O for the subsequent resource allocation;
Step 1-6, judge whether the number of food-source search rounds of the employed bees has reached the number of food sources SN; end if so, otherwise return to step 1-3;
the resource allocation method based on DDPG comprises the following steps:
Step 2-1, input the available edge computing nodes K and the task set {T_i}; set the total computing resource F of each edge node; input the computation offloading scheme FS_i.O, where each offloading scheme is represented as <K_1, K_2, …, K_n>, denoting the edge nodes to which the tasks are offloaded;
Step 2-2, randomly initialize the Critic network Q(S(t), A(t); θ_Q) and the Actor network μ(S(t); θ_μ), with weights θ_Q and θ_μ;
Step 2-3, according to θ_Q → θ_Q′ and θ_μ → θ_μ′, copy θ_Q and θ_μ to the Critic target network and the Actor target network respectively, where θ_Q′ is the Critic target network weight and θ_μ′ is the Actor target network weight;
Step 2-4: initialize the bee colony FS with the offloading scheme FS_i.O output in step 1-5;
Step 2-5.1: randomly initialize the bee state transition data FS_0;
Step 2-5.2: receive the initial state S(1) = [R(1), B(1), FS_0];
Step 2-5.3: for each time t, perform action A(t), obtain the reward R(t) and the state S(t+1) of the next slot, store the tuple (S(t), A(t), R(t), S(t+1)), and generate a resource allocation scheme Z;
Step 2-5.4: update the current network weight θ_Q of the Critic network and the current network weight θ_μ of the Actor network, and update the Critic target network and the Actor target network with the soft-update rule θ_Q′ ← μθ_Q + (1 − μ)θ_Q′, θ_μ′ ← μθ_μ + (1 − μ)θ_μ′;
Step 2-6: judge whether the number of training rounds of the network has reached the set number of training rounds; if so, output the resource allocation scheme Z to step 1-2, otherwise return to step 2-2.
The second aspect of the invention provides a predictive maintenance task offloading and resource allocation system based on an artificial bee colony algorithm, characterized in that the system comprises a memory and a processor; the memory stores a computer program, and the processor calls the computer program to execute the steps of the predictive maintenance task offloading and resource allocation method based on the artificial bee colony algorithm described above.
A third aspect of the invention provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the predictive maintenance task offloading and resource allocation method based on the artificial bee colony algorithm described above.
Compared with the prior art, the invention has outstanding substantive features and represents notable progress, specifically:
1. The method combines a genetic algorithm with the artificial bee colony algorithm to solve the edge computing task offloading problem, using the mutation principle of the genetic algorithm to help the artificial bee colony algorithm escape local optima.
2. The method uses DDPG to complete the allocation of limited resources and feeds the computed result back to the task offloading algorithm; the two algorithms are nested within each other to obtain the optimal offloading scheme.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more clear, the following description of the technical solutions in the embodiments of the present invention will be given in detail, but the present invention is not limited to these embodiments:
example 1
The offloading model addressed by the task offloading scheme of the method is as follows:
there are multiple devices N denoted as n= { N 1 ,N 2 ,…N i …,N n The device generates predictive maintenance requirements in the running process, and a PdM task set t= { T is generated 1 ,T 2 ,…T i …,T n An edge server k= { K, denoted by K, is deployed around the device 1 ,K 2 ,…K i …,K n -a }; when performing PdM tasks, the tasks need to be offloaded to the edge base station when device N i Is not equal to all tasks T of i When all ends, the end of one PdM; one edge server is denoted as k= { R, F }, where R is the available bandwidth of the edge server and F is the aggregate computing resource of the edge server, which may be assigned to tasks on the edge server;
Each task can be offloaded to a nearby edge server, but each device N_i can only submit one task T_i at a time. Any task T_i is described by five entries, T_i = {D, c, λ, α, τ}, where D is the data size of task T_i; c is the processing density of T_i; λ is the equipment failure rate during task uploading; α is the offloading decision variable, i.e., the proportion of offloaded data to the total data of task T_i: when α = 0 the task is processed entirely locally, and when α = 1 the task is fully offloaded; in general, αD bits of data are offloaded to the edge server and (1 − α)D bits are processed locally; τ is the ratio of the data size of the processing result to the data size of T_i.
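For concreteness, the device, task and server model described above can be captured in a short data-structure sketch. This is an illustrative Python fragment rather than part of the filing; the field names (data_size, density, and so on) are assumptions chosen to mirror the five-entry task description T_i = {D, c, λ, α, τ} and the edge server K = {R, F}.

```python
from dataclasses import dataclass

@dataclass
class PdMTask:
    """One predictive-maintenance task T_i = {D, c, lambda, alpha, tau}."""
    data_size: float      # D: data size of the task (bits)
    density: float        # c: processing density (cycles per bit)
    failure_rate: float   # lambda: device failure rate while uploading
    alpha: float          # offloading ratio in [0, 1]; 0 = all local, 1 = fully offloaded
    tau: float            # ratio of result data size to task data size

@dataclass
class EdgeServer:
    """One edge server K = {R, F}."""
    bandwidth: float      # R: available bandwidth of the edge server
    compute: float        # F: total computing resource assignable to tasks
```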
Local computation time: the number of cores used for local computation is g_1, β denotes the parallelizable fraction, and the processing power allocated to each core of the local computation is f_l. The task of computing the (1 − α)D bits of data locally is divided into a serial part (1 − α)·D·c·(1 − β) and a parallelizable part (1 − α)·D·c·β, so the local computation time is expressed as:
t_l = (1 − α)·D·c·(1 − β)/f_l + (1 − α)·D·c·β/(g_1·f_l)
Edge computation time: in addition to the local computation time, the transmission delay of the αD bits of data to the edge must be obtained; the αD bits of data are offloaded and processed by the edge server. Let g_2 denote the number of cores allocated to the edge processing task and f_e the processing power per core of the edge server, with f_e >> f_l. The serial portion is α·D·c·(1 − β) and the parallelizable portion is α·D·c·β, so the edge computation time is expressed as:
t_e = α·D·c·(1 − β)/f_e + α·D·c·β/(g_2·f_e)
Offloading transmission delay: the data of task T_i is offloaded to the edge over the wireless communication link. Let r_1 denote the data transmission rate; the transmission delay of offloading the αD bits of data to the edge is expressed as:
t_up = α·D/r_1
The total time attributed to edge computation is:
t_off = t_e + t_up
Task return time: after task T_i is processed, the result is returned to the terminal device. Let r_2 be the data transmission rate during result return; similarly to the offloading transmission delay, the transmission delay of returning the τD-bit result is expressed as:
t_down = τ·D/r_2
Total delay: the delay of processing task T_i is a combination of the local computation time, the edge computation time, the offloading transmission delay and the result return transmission delay; since edge computation and local computation are performed simultaneously, the total delay is formulated as:
t_d = max{t_l, t_e} + t_up + t_down
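The delay model can be summarised in a minimal sketch. Note that the serial/parallel split of the local and edge computation times follows the reconstruction given above (the exact formulas in the original filing were rendered as images), so the expressions inside the function are assumptions consistent with the surrounding definitions rather than verbatim formulas.

```python
def total_delay(D, c, alpha, beta, f_l, g1, f_e, g2, r1, r2, tau):
    """Illustrative delay model; variable names follow the text above."""
    # Local computation of (1 - alpha) * D bits over g1 cores of power f_l each
    t_l = (1 - alpha) * D * c * (1 - beta) / f_l + (1 - alpha) * D * c * beta / (g1 * f_l)
    # Edge computation of alpha * D bits over g2 cores of power f_e each
    t_e = alpha * D * c * (1 - beta) / f_e + alpha * D * c * beta / (g2 * f_e)
    # Uplink transmission of the offloaded alpha * D bits
    t_up = alpha * D / r1
    # Downlink transmission of the tau * D-bit result
    t_down = tau * D / r2
    # Edge and local computation run simultaneously (formula t_d above)
    return max(t_l, t_e) + t_up + t_down
```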
Maintenance cost: the cost incurred when the equipment fails before maintenance activities are performed is represented by the corrective replacement cost C_c, and the corrective maintenance cost C involved in PdM is expressed in terms of C_c. Here ε represents the remaining time from the last PdM activity to the end of the planned production period P; a predictive maintenance cycle comprises the maintenance time and the delay time; M represents the number of PdM cycles in the planned production period P, with M = P/(t_d + t_p).
Equipment failure rate model: discrete random variable t k Representing the accumulated running time from the last implementation of predictive maintenance to the next implementation, the failure rate of the device after the kth predictive maintenance is represented as a piecewise continuous variable:
λ k+1 (t)=b k λ k (t+a k t k )
lambda is the failure rate of the k devices of the task at time t, k.epsilon.I, a k B as life-span reduction factor k Increasing the failure rate by a factor of 0<a k <1、b k >1, a step of; failure rate is in initial state lambda 1 The variation trend of (t) is obtained through test data; the change of the failure rate is carried out along with the period of predictive maintenance, the unloading weight of the task is changed, and the unloading sequence of the task is influenced by the task weight;
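The failure-rate recursion above can be illustrated with a short sketch; the particular form of the initial failure rate λ_1(t) used in the example is an assumption, since the filing only states that it is obtained from test data.

```python
def next_failure_rate(lambda_k, a_k, b_k, t_k):
    """Return lambda_{k+1}(t) = b_k * lambda_k(t + a_k * t_k) as a new callable.

    lambda_k : callable, failure rate after the k-th predictive maintenance
    a_k      : service-life reduction factor, 0 < a_k < 1
    b_k      : failure-rate increase factor, b_k > 1
    t_k      : accumulated running time since the last predictive maintenance
    """
    return lambda t: b_k * lambda_k(t + a_k * t_k)

# Example with an assumed linearly increasing initial failure rate fitted from test data
lambda_1 = lambda t: 0.01 + 0.001 * t
lambda_2 = next_failure_rate(lambda_1, a_k=0.8, b_k=1.2, t_k=100.0)
```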
Task computing resource allocation problem with delay soft constraints: given the set of devices N and the tasks T they produce, the optimization problem is to minimize the maintenance cost of all devices by adjusting the task offloading scheme O and the resource allocation scheme Z, subject to the following constraints:
(1) the total computation time of an offloaded task must be less than the time of computing the task entirely locally;
(2) the sum of the edge server computing resources allocated to the tasks T_i must not exceed the total computing power available at the edge server;
(3) the total data size of the n prediction tasks computed in each round must not exceed the total storage space of the edge server, and the edge server should buffer the resources required by a task while processing it.
In the ABC algorithm, the location of a food source represents a possible solution of the optimization problem, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. The number of employed bees or onlooker bees equals the number of solutions in the population. ABC generates a randomly distributed initial population P (g = 0) of SN solutions (food source positions), where SN is the population size. Each solution x_i (i = 1, 2, …, SN) is an E-dimensional vector, where E is the number of optimization parameters. After initialization, the population of positions (solutions) undergoes repeated cycles (b = 1, 2, …, B_max) of the search processes of the employed bees, onlooker bees and scout bees. When searching for a new food source, an employed bee or onlooker bee modifies the position (solution) held in its memory and tests the nectar amount (fitness value) of the new source (solution).
A genetic algorithm uses a genetic representation based on chromosome-like data structures consisting of genes, together with a fitness function; it initializes a population of solutions and then relies on bio-inspired operators (such as mutation, crossover and selection) to improve it.
The task offloading scheme based on the improved ABC algorithm in the method of the invention draws on the idea of the genetic algorithm and modifies the search process of the scout bees in the artificial bee colony algorithm:
food source: the food source is expressed as fs= < O, Z, C >, where O represents the task offloading scheme, Z represents the resource allocation scheme, C is the total cost under the task offloading and resource allocation scheme for the PdM process; the total set of FS is denoted FSA with the aim of finding a suitable offloading and allocation scheme on the edge server for task T, so that the total cost C is minimized.
Employed bee stage: the employed bees search for food sources in their surroundings and record them. To obtain a new food source from an old one, the artificial bee colony algorithm uses the following expression:
v_ij = x_ij + φ_ij·(x_ij − x_kj) (1)
Onlooker bee stage: an onlooker bee chooses a food source depending on the probability value p_i associated with that food source, calculated from the following expression:
p_i = fit_i / Σ_{n=1..SN} fit_n (2)
where fit_i is the fitness value of solution i evaluated by its employed bee, which is proportional to the nectar amount of the food source at position i, and SN is the number of food sources, equal to the number of employed bees (BN). In this way the employed bees exchange information with the onlooker bees.
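Expressions (1) and (2) can be sketched as follows. The continuous, real-valued encoding of a food source used here is an illustrative assumption; in the method each food source FS additionally carries an offloading scheme O and an allocation scheme Z.

```python
import random

def neighborhood_search(x_i, x_k):
    """Expression (1): v_ij = x_ij + phi_ij * (x_ij - x_kj), with phi_ij ~ U(-1, 1)."""
    return [x_ij + random.uniform(-1.0, 1.0) * (x_ij - x_kj)
            for x_ij, x_kj in zip(x_i, x_k)]

def selection_probabilities(fitness):
    """Expression (2): p_i = fit_i / sum_n fit_n, used by the onlooker bees."""
    total = sum(fitness)
    return [f / total for f in fitness]

def roulette_select(solutions, fitness):
    """Pick one food source with probability proportional to its fitness."""
    return random.choices(solutions, weights=selection_probabilities(fitness), k=1)[0]
```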
Scout bee stage: in ABC, when a food source fails to yield a better solution within a predetermined period, part of the colony is converted into scout bees that randomly search for new food sources. To avoid converging too quickly and falling into a local optimum, the crossover and mutation operations of the genetic algorithm are drawn upon so that better solutions are sought in unknown directions; the food source is regenerated using the mutation process of the genetic algorithm, allowing the solution space to be explored more thoroughly.
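A minimal sketch of how a stalled food source could be regenerated with genetic-algorithm crossover and mutation, as described for the scout bee stage; the single-point crossover, per-gene mutation rate and value bounds used here are assumptions about the particular operators, which the filing does not fix.

```python
import random

def regenerate_food_source(stalled, other, mutation_rate=0.1, bounds=(0.0, 1.0)):
    """Replace a stalled food source by crossing it with another source and mutating."""
    # Single-point crossover between the stalled source and a randomly chosen one
    point = random.randrange(1, len(stalled))
    child = stalled[:point] + other[point:]
    # Per-gene mutation: resample within bounds with probability mutation_rate
    lo, hi = bounds
    return [random.uniform(lo, hi) if random.random() < mutation_rate else gene
            for gene in child]
```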
As shown in fig. 1, the task offloading scheme based on the improved ABC algorithm is as follows:
Step 1-1, input the number of food sources SN, the tasks T and the edge nodes K;
Step 1-2, initialize the employed bee parameters;
Step 1-3, the employed bees perform a neighborhood search for food sources;
Step 1-4, calculate the roulette-wheel probability and select the next food source;
Step 1-5, the employed bees exchange information with the onlooker bees, and whether scout bees appear is judged; if so, food sources are regenerated using the crossover and mutation of a genetic algorithm and the optimal food source is then stored, otherwise the optimal food source is stored directly; the optimal food source is taken as the offloading scheme FS_i.O for the subsequent resource allocation;
Step 1-6, judge whether the number of food-source search rounds of the employed bees has reached the number of food sources SN; end if so, otherwise return to step 1-3 (an illustrative loop skeleton is sketched below).
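Steps 1-1 to 1-6 can be pictured as the following loop skeleton, reusing the neighborhood_search and regenerate_food_source helpers sketched above. The fitness function (for example, the negative of the total cost C), the stall-counter threshold and the fixed round count are assumptions made for this sketch, and the onlooker-bee roulette selection of expression (2) is omitted for brevity.

```python
import random

def improved_abc(init_solution, fitness, num_sources, stall_limit=10, rounds=100):
    """Skeleton of the improved ABC loop (steps 1-1 to 1-6); illustrative only.

    init_solution : callable returning a random candidate offloading scheme
    fitness       : callable scoring a candidate, e.g. the negative total cost C
    """
    sources = [init_solution() for _ in range(num_sources)]
    stalls = [0] * num_sources
    best = max(sources, key=fitness)
    for _ in range(rounds):                       # step 1-6 simplified to a fixed round count
        for i in range(num_sources):
            k = random.randrange(num_sources)
            # Employed-bee neighbourhood search, expression (1)
            candidate = neighborhood_search(sources[i], sources[k])
            if fitness(candidate) > fitness(sources[i]):
                sources[i], stalls[i] = candidate, 0
            else:
                stalls[i] += 1
            # Scout-bee stage: regenerate a stalled source with GA crossover and mutation
            if stalls[i] > stall_limit:
                sources[i] = regenerate_food_source(sources[i], sources[k])
                stalls[i] = 0
        best = max(sources + [best], key=fitness)
    return best
```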
Algorithm 1: task offloading algorithm based on the improved ABC algorithm
Input: food sources FSA, tasks T, edge nodes K;
Output: task offloading scheme O and resource allocation scheme Z;
while the number of food-source search rounds has not reached SN:
    invoke Algorithm 2;
    invoke Algorithm 3;
    invoke Algorithm 4;
end while
FS = FS_best
Algorithm 2: employed bee algorithm
Input: food sources FSA, tasks T, edge nodes K;
Output: updated food source FS;
for FS_i ∈ FSA:
    randomly select a food source FS_j, FS_j ∈ FSA;
    update the new resource allocation FS_j.Z of FS_j using Algorithm 5;
    obtain a food source FS_m according to expression (1);
    update the new resource allocation FS_m.Z of FS_m using the DDPG-based resource allocation method;
    FS = FS_m;
end for
Algorithm 3: onlooker bee algorithm
Input: food sources FSA, tasks T, edge nodes K;
Output: updated food source FS;
select FS_j, FS_j ∈ FSA, according to the probability of expression (2);
obtain a food source FS_m according to expression (1);
judge whether a scout bee appears, and update the new food source FS_m using the scout bee algorithm;
compare FS_m and FS_j and update the optimal food source FS_best
Algorithm 4: scout bee algorithm
Input: food sources FSA, maintenance tasks T, edge nodes K;
Output: updated food source FS;
The method introduces DDPG to address the problem of Q-value overestimation and thereby obtain the optimal resource allocation scheme in a multi-user, multi-edge-server scenario.
Deep reinforcement learning is generally used for problems that require continuous interaction with the surrounding environment to obtain learning rewards and ultimately achieve the optimization objective. The tuple (S, A, Rw) is used to represent the states, actions and rewards of a Markov decision process;
State: the state space needs to contain all the information in the environment and reflect the environmental changes in each slot. The state space of the system is therefore defined as S(t) = {R(t), B(t)}, where R(t) represents the available computing resources of each edge server and B(t) represents the available migration bandwidth of each connection between edge servers;
Action: the action space is defined as A(t) = {r_i(t), b_i(t)}, where r_i(t) represents the computing resources required by each maintenance task and b_i(t) represents the migration bandwidth to be occupied by the task of each mobile user;
Reward: the reward in the system is the total number of tasks successfully processed in each step (a toy environment along these lines is sketched below).
Algorithm 5: resource allocation method based on DDPG
Step 2-1, input the available edge computing nodes K and the task set {T_i}; set the total computing resource F of each edge node; input the computation offloading scheme FS_i.O, where each offloading scheme is represented as <K_1, K_2, …, K_n>, denoting the edge nodes to which the tasks are offloaded;
Step 2-2, randomly initialize the Critic network Q(S(t), A(t); θ_Q) and the Actor network μ(S(t); θ_μ), with weights θ_Q and θ_μ;
Step 2-3, according to θ_Q → θ_Q′ and θ_μ → θ_μ′, copy θ_Q and θ_μ to the Critic target network and the Actor target network respectively, where θ_Q′ is the Critic target network weight and θ_μ′ is the Actor target network weight;
Step 2-4: initialize the bee colony FS with the offloading scheme FS_i.O output in step 1-5;
Step 2-5.1: randomly initialize the bee state transition data FS_0;
Step 2-5.2: receive the initial state S(1) = [R(1), B(1), FS_0];
Step 2-5.3: for each time t, perform action A(t), obtain the reward R(t) and the state S(t+1) of the next slot, store the tuple (S(t), A(t), R(t), S(t+1)), and generate a resource allocation scheme Z;
Step 2-5.4: update the current network weight θ_Q of the Critic network and the current network weight θ_μ of the Actor network, and update the Critic target network and the Actor target network with the soft-update rule θ_Q′ ← μθ_Q + (1 − μ)θ_Q′, θ_μ′ ← μθ_μ + (1 − μ)θ_μ′ (see the sketch following this algorithm);
Step 2-6: judge whether the number of training rounds of the network has reached the set number of training rounds; if so, output the resource allocation scheme Z to step 1-2, otherwise return to step 2-2.
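Steps 2-2, 2-3 and 2-5.4 correspond to a standard DDPG network setup and soft target update. The sketch below uses PyTorch, a single hidden layer per network, a Sigmoid output for the Actor and example dimensions; these are implementation assumptions made for illustration, not details fixed by the filing. The soft_update function implements θ′ ← μθ + (1 − μ)θ′ as written in step 2-5.4.

```python
import torch
import torch.nn as nn

def soft_update(target_net, current_net, mu=0.005):
    """Step 2-5.4: theta' <- mu * theta + (1 - mu) * theta' for every parameter."""
    for tgt, cur in zip(target_net.parameters(), current_net.parameters()):
        tgt.data.copy_(mu * cur.data + (1.0 - mu) * tgt.data)

def make_actor(state_dim, action_dim, hidden=128):
    """Actor mu(S(t); theta_mu): maps a state to a resource-allocation action."""
    return nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, action_dim), nn.Sigmoid())

def make_critic(state_dim, action_dim, hidden=128):
    """Critic Q(S(t), A(t); theta_Q): scores a concatenated state-action pair."""
    return nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, 1))

# Step 2-2: randomly initialise the current networks; step 2-3: copy weights to the targets.
state_dim, action_dim = 8, 4   # assumed dimensions for illustration
actor, critic = make_actor(state_dim, action_dim), make_critic(state_dim, action_dim)
actor_target, critic_target = make_actor(state_dim, action_dim), make_critic(state_dim, action_dim)
actor_target.load_state_dict(actor.state_dict())
critic_target.load_state_dict(critic.state_dict())

# After each gradient update of the current networks, softly update the targets (step 2-5.4).
soft_update(actor_target, actor)
soft_update(critic_target, critic)
```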
Example 2
The present embodiment provides a system for unloading and resource allocation of predictive maintenance tasks based on an artificial bee colony algorithm, which includes a memory and a processor, wherein the memory stores a computer program, and the processor calls the computer program to execute the steps of the method for unloading and resource allocation of predictive maintenance tasks based on an artificial bee colony algorithm as described in embodiment 1.
Example 3
The present embodiment provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the predictive maintenance task offloading and resource allocation method based on artificial bee colony algorithm as described in embodiment 1.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more non-transitory computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flowchart and/or block of the flowchart illustrations and/or block diagrams, and combinations of flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention and not to limit it. Although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art will appreciate that modifications may be made to the specific embodiments, or equivalents substituted for some of the technical features, without departing from the spirit of the invention; such modifications and substitutions are intended to fall within the scope of the claims.

Claims (3)

1. A predictive maintenance task offloading and resource allocation method based on an artificial bee colony algorithm, characterized by comprising the following steps:
the task offloading scheme based on the improved ABC algorithm:
food source: a food source is expressed as FS = <O, Z, C>, where O represents the task offloading scheme, Z represents the resource allocation scheme, and C is the total cost of the PdM process under that task offloading and resource allocation scheme;
step 1-1, input the number of food sources SN, the tasks T and the edge nodes K;
step 1-2, initialize the employed bee parameters;
step 1-3, the employed bees perform a neighborhood search for food sources;
step 1-4, calculate the roulette-wheel probability and select the next food source;
step 1-5, the employed bees exchange information with the onlooker bees, and whether scout bees appear is judged; if so, food sources are regenerated using the crossover and mutation of a genetic algorithm and the optimal food source is then stored, otherwise the optimal food source is stored directly; the optimal food source is taken as the offloading scheme FS_i.O for the subsequent resource allocation;
step 1-6, judge whether the number of food-source search rounds of the employed bees has reached the number of food sources SN; end if so, otherwise return to step 1-3;
the resource allocation method based on DDPG comprises the following steps:
step 2-1, input the available edge computing nodes K and the task set {T_i}; set the total computing resource F of each edge node; input the computation offloading scheme FS_i.O, where each offloading scheme is represented as <K_1, K_2, …, K_n>, denoting the edge nodes to which the tasks are offloaded;
step 2-2, randomly initialize the Critic network Q(S(t), A(t); θ_Q) and the Actor network μ(S(t); θ_μ), with weights θ_Q and θ_μ;
step 2-3, according to θ_Q → θ_Q′ and θ_μ → θ_μ′, copy θ_Q and θ_μ to the Critic target network and the Actor target network respectively, where θ_Q′ is the Critic target network weight and θ_μ′ is the Actor target network weight;
step 2-4: initialize the bee colony FS with the offloading scheme FS_i.O output in step 1-5;
step 2-5.1: randomly initialize the bee state transition data FS_0;
step 2-5.2: receive the initial state S(1) = [R(1), B(1), FS_0];
step 2-5.3: for each time t, perform action A(t), obtain the reward R(t) and the state S(t+1) of the next slot, store the tuple (S(t), A(t), R(t), S(t+1)), and generate a resource allocation scheme Z;
step 2-5.4: update the current network weight θ_Q of the Critic network and the current network weight θ_μ of the Actor network, and update the Critic target network and the Actor target network with the soft-update rule θ_Q′ ← μθ_Q + (1 − μ)θ_Q′, θ_μ′ ← μθ_μ + (1 − μ)θ_μ′;
step 2-6: judge whether the number of training rounds of the network has reached the set number of training rounds; if so, output the resource allocation scheme Z to step 1-2, otherwise return to step 2-2.
2. A predictive maintenance task offloading and resource allocation system based on an artificial bee colony algorithm, characterized by comprising a memory and a processor; the memory stores a computer program, and the processor calls the computer program to perform the steps of the artificial bee colony algorithm based predictive maintenance task offloading and resource allocation method of claim 1.
3. A non-transitory computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the artificial bee colony algorithm based predictive maintenance task offloading and resource allocation method of claim 1.
CN202311564764.9A 2023-11-22 2023-11-22 Predictive maintenance task unloading and resource allocation method based on artificial bee colony algorithm Pending CN117407174A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311564764.9A CN117407174A (en) 2023-11-22 2023-11-22 Predictive maintenance task unloading and resource allocation method based on artificial bee colony algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311564764.9A CN117407174A (en) 2023-11-22 2023-11-22 Predictive maintenance task unloading and resource allocation method based on artificial bee colony algorithm

Publications (1)

Publication Number Publication Date
CN117407174A true CN117407174A (en) 2024-01-16

Family

ID=89492642

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311564764.9A Pending CN117407174A (en) 2023-11-22 2023-11-22 Predictive maintenance task unloading and resource allocation method based on artificial bee colony algorithm

Country Status (1)

Country Link
CN (1) CN117407174A (en)


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination