CN114237889A - Fog computing resource scheduling method based on improved particle swarm algorithm and neural network - Google Patents


Info

Publication number: CN114237889A
Application number: CN202111552520.XA
Authority: CN (China)
Prior art keywords: fog, task, particle swarm, neural network, node
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 丁绪星, 姜香樊, 王冲, 许蓉, 邹孝龙
Current Assignee: Anhui Normal University
Original Assignee: Anhui Normal University
Application filed by Anhui Normal University

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083: Techniques for rebalancing the load in a distributed system
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology

Abstract

The invention discloses a fog computing resource scheduling method based on an improved particle swarm algorithm and a neural network, which comprises the following steps: an intelligent terminal node transmits tasks to a fog center node; the fog center node generates a scheduling scheme according to a task scheduler, the scheduling scheme being an available allocation scheme solved by the improved particle swarm algorithm; the fog center node uses the data in the available allocation scheme to train and test a BP neural network, obtaining a trained neural network; the trained neural network then serves as the scheduler and processes the tasks transmitted to the fog center node to obtain an optimal solution for task execution; finally, the fog center node distributes different tasks to different fog computing nodes for computation and caching according to the optimal solution. With the neural network as scheduler, the invention can quickly and effectively offload tasks to the fog nodes; it performs well and is easy to implement.

Description

Fog computing resource scheduling method based on improved particle swarm algorithm and neural network
Technical Field
The invention relates to the technical field of fog computing resource scheduling and task allocation in the intelligent manufacturing industry, in particular to a fog computing resource scheduling method based on an improved particle swarm algorithm and a neural network.
Background
The combination of smart manufacturing and the internet of things has become a trend toward digitization, networking and intelligence, which greatly facilitates industrial production. At the same time it brings an explosive increase in data; cloud computing can process this data well to a certain extent. However, because of the geographic distance between cloud computing data centers and the intelligent factory, processing a large amount of factory data in real time can cause time delay and even network congestion. In 2012, Cisco proposed fog computing, which supports geographically distributed, delay-sensitive internet-of-things applications with quality-of-service (QoS) requirements, and can provide low-delay communication and more context awareness. Fog computing is therefore a powerful tool for processing exploding data and responding in real time in the intelligent manufacturing industry, but it still faces some challenges there.
In the prior art, if different tasks are randomly distributed to different fog computing nodes for processing, a fog node with weak computing power may be given a big-data task and remain in an occupied state, in the worst case exhausting its energy until it dies; or a fog node with strong computing power may process a small-data task, receive no new task after finishing it, and remain in idle standby, which causes waste. Therefore, how to let the fog nodes efficiently and quickly process tasks transmitted from a large number of terminal devices, and realize the double optimization of fog computing delay and load, has become one of the key problems to be solved.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide a fog computing resource scheduling method based on an improved particle swarm algorithm and a neural network, to solve the prior-art problems that a fog node with weak computing power may process a big-data task, which can seriously exhaust the node's energy until it dies; or that a fog node with strong computing power processes a small-data task, receives no new task after finishing it, and remains in idle standby, which causes waste.
In order to achieve the above and other related objects, the present invention provides a fog computing resource scheduling method based on an improved particle swarm algorithm and a neural network, comprising:
S1, an intelligent terminal node transmits tasks to a fog center node;
S2, the fog center node generates a scheduling scheme according to a task scheduler, the scheduling scheme being an available allocation scheme solved by the improved particle swarm algorithm;
S3, the fog center node uses data in the available allocation scheme to train and test a BP neural network, obtaining a trained neural network;
S4, the trained neural network serves as the scheduler and processes the tasks transmitted to the fog center node, so as to obtain an optimal solution for task execution;
S5, the fog center node distributes different tasks to different fog computing nodes for computation and caching according to the optimal solution.
In an embodiment of the present invention, the generation of the scheduling scheme by the fog center node in step S2, i.e. solving an available allocation scheme with the improved particle swarm algorithm, includes:
S21, initializing the particle swarm: each individual in the population is a feasible solution, and each individual corresponds to an N×M usage matrix S, where S_ij = 0 means that task j does not run on fog computing node i, and S_ij = 1 means that task j runs on fog computing node i;
S22, iteratively updating the particle swarm: each particle carries an individual optimum P_best and the swarm carries a global optimum G_best; during the iterative update, a roulette-factor algorithm and a simulated bird-foraging algorithm are added to the particle swarm, so that all particles of the swarm tend toward the global optimum;
S23, the particle swarm is iteratively updated until it reaches the maximum number of iterations, at which point updating stops.
In an embodiment of the present invention, adding the roulette-factor algorithm to the particle swarm in step S22 includes:
when the particle swarm is updated iteratively, a roulette factor θ_ij related to the computing capacity and the storage capacity of fog computing node i is added. The roulette factor θ_ij comprises an execution force θf_ij of computing node i for executing task j and a penalty term θp_ij, where the execution force θf_ij is given by:

[equation image in original: θf_ij expressed in terms of C_ij and K_ij]

where C_ij represents the computing power of fog computing node i when executing task j, and K_ij represents the storage capacity of fog computing node i; the penalty term θp_ij is given by:

[equation image in original: θp_ij expressed in terms of marknumber_ij]

where n is the total number of fog computing nodes in the intelligent factory, and marknumber_ij counts the usage of node i while processing task j.
The operator θ_ij of fog node i processing task j is then obtained:

[equation image in original: θ_ij combining θf_ij and θp_ij with weights r_1, r_2]

where r_1 and r_2 are weight parameters that adjust the roulette factor.
In an embodiment of the present invention, adding the simulated bird-foraging algorithm to the particle swarm in step S22 includes:
after each iterative update of the particle swarm, every particle moves to a new position according to the update formulas

v_id(t+1) = w·v_id(t) + c_1·r_1·(Pbest_id − x_id(t)) + c_2·r_2·(Gbest_id − x_id(t)),

x_id(t+1) = x_id(t) + v_id(t+1).

Since the position of a particle is a usage matrix, a Sigmoid function is added to limit the updated position to a 0/1 usage matrix:

S(v_id(t+1)) = 1 / (1 + e^(−v_id(t+1))),

x_id(t+1) = 1 if rand < S(v_id(t+1)), and 0 otherwise,

where Pbest_id is the optimal position of the single particle, Gbest_id is the optimal position of the particle swarm, v_id(t+1) is the update distance, x_id(t) is the position before the update, x_id(t+1) is the updated position, w is the inertia weight that adjusts the particle flight, c_1 and c_2 are the accelerations, and r_1, r_2 are constants.
In an embodiment of the present invention, the iterative updating of the particle swarm in step S22 uses an objective function:
F_object = min{γE + δT}, with γ + δ = 1,
where E represents the total energy consumed by the fog nodes when the tasks are completed according to the allocation scheme, T represents the time required by the fog nodes to complete the tasks according to the allocation scheme, and γ and δ represent the preference between energy consumption and time delay in the plant; when the two are equal, energy consumption and time delay are equally important.
The invention also provides a fog computing resource scheduling system based on the improved particle swarm algorithm and the neural network, comprising:
an intelligent terminal node unit, used to transmit tasks to the fog center node;
a fog center node unit, used to generate a scheduling scheme according to a task scheduler, the scheduling scheme being an available allocation scheme solved by the improved particle swarm algorithm, and to use the data in the available allocation scheme to train and test a BP neural network, obtaining a trained neural network;
a processing unit, used to process the tasks transmitted to the fog center node with the trained neural network as scheduler, so as to obtain an optimal solution for task execution;
the fog center node unit is further used to distribute different tasks to different fog computing nodes for computation and caching according to the optimal solution.
The invention also provides electronic equipment which comprises a processor and a memory, wherein the memory stores program instructions, and the processor runs the program instructions to realize the fog computing resource scheduling method based on the improved particle swarm algorithm and the neural network.
As described above, the fog computing resource scheduling method based on the improved particle swarm algorithm and the neural network has the following beneficial effects:
Based on the improved discrete particle swarm algorithm and the neural network algorithm, and aiming at the low optimization precision and slow convergence of traditional resource scheduling, the invention draws on the efficiency and good completion quality of the particle swarm algorithm: the neural network is trained and tested with the data of the allocation schemes and then used as the scheduler, so tasks can be offloaded to the fog nodes quickly and effectively; the method performs well and is easy to implement.
The invention can enable the fog node to efficiently and quickly process tasks transmitted from a large number of terminal devices, and realizes double optimization of fog calculation time delay and load.
Drawings
Fig. 1 is a work flow chart of a fog computing resource scheduling method based on an improved particle swarm algorithm and a neural network according to an embodiment of the present application.
Fig. 2 is a fog layer task allocation model used in the fog computing resource scheduling method according to the embodiment of the present application.
Fig. 3 is an effect diagram of a discrete particle swarm algorithm improved by the fog computing resource scheduling method according to the embodiment of the present application.
Fig. 4 is a comparison between the weighted sum of energy consumption and time delay of the IDPSO algorithm provided in the embodiment of the present application and ACO and RP.
Fig. 5 is a comparison between the task completion rate of the IDPSO algorithm provided in the embodiment of the present application and the ACO and RP.
Fig. 6 is a comparison between the neural network prediction value and the IDPSO output provided in the embodiment of the present application.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the drawings only show the components related to the present invention rather than the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
Referring to fig. 1, fig. 1 is a flowchart of the fog computing resource scheduling method based on an improved particle swarm algorithm and a neural network according to an embodiment of the present disclosure.
The method is used for resource scheduling and task allocation in intelligent-factory fog computing: tasks transmitted from different terminal nodes are allocated to different fog computing nodes for computation, the fog computing nodes provide computing, communication and caching functions for the task nodes, and N fog computing nodes are arranged in the bus-type assembly-line intelligent workshop. The method specifically comprises the following steps:
Step S1: the intelligent terminal node transmits the tasks to the fog center node.
Step S2: the fog center node generates a scheduling scheme according to the task scheduler; the scheduling scheme is an available allocation scheme solved by the improved particle swarm algorithm.
Step S3: the fog center node uses the data in the available allocation scheme to train and test the BP neural network, obtaining a trained neural network.
Step S4: the trained neural network serves as the scheduler and processes the tasks transmitted to the fog center node, so as to obtain an optimal solution for task execution.
Step S5: the fog center node distributes different tasks to different fog computing nodes for computation and caching according to the optimal solution.
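The five steps above can be sketched as a minimal pipeline. This is an illustration only: the IDPSO solver is stubbed out with a random assignment, and all names (`dispatch`, `generate_schedule_idpso`) are assumptions, not from the patent.

```python
import random

def generate_schedule_idpso(tasks, nodes):
    """Stand-in for step S2: the real method solves an allocation
    scheme with the improved particle swarm algorithm; here we
    simply assign each task to a random fog node."""
    return {t: random.choice(nodes) for t in tasks}

def dispatch(tasks, nodes):
    """S1-S5 sketch: tasks arrive at the fog centre node, a scheme
    is generated, and tasks are distributed to fog computing nodes."""
    scheme = generate_schedule_idpso(tasks, nodes)   # S2
    assignment = {}
    for task, node in scheme.items():                # S5
        assignment.setdefault(node, []).append(task)
    return assignment

plan = dispatch(["task1", "task2", "task3"], ["fog1", "fog2"])
```

In the patent, the stubbed step is replaced first by the IDPSO solver and later by the trained neural network acting as scheduler.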
The generation of the scheduling scheme by the fog center node in step S2, i.e. solving an available allocation scheme with the improved particle swarm algorithm, includes:
S21, initializing the particle swarm: each individual in the population is a feasible solution, and each individual corresponds to an N×M usage matrix S, where S_ij = 0 means that task j does not run on fog computing node i, and S_ij = 1 means that task j runs on fog computing node i.
S22, iteratively updating the particle swarm: each particle carries an individual optimum P_best and the swarm carries a global optimum G_best; during the iterative update, a roulette-factor algorithm and a simulated bird-foraging algorithm are added to the particle swarm, so that all particles of the swarm tend toward the global optimum.
S23, the particle swarm is iteratively updated until it reaches the maximum number of iterations, at which point updating stops.
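The initialization in S21 can be sketched as follows, assuming (as the text suggests) that a feasible particle is an N×M 0/1 usage matrix in which each task column is assigned to exactly one fog node; the helper names are illustrative.

```python
import random

def init_particle(n_nodes, m_tasks):
    """One particle: an N x M usage matrix S with S[i][j] = 1
    iff task j runs on fog computing node i (one node per task)."""
    S = [[0] * m_tasks for _ in range(n_nodes)]
    for j in range(m_tasks):
        S[random.randrange(n_nodes)][j] = 1
    return S

def init_swarm(pop_size, n_nodes, m_tasks):
    """S21: every individual in the population is a feasible solution."""
    return [init_particle(n_nodes, m_tasks) for _ in range(pop_size)]

swarm = init_swarm(pop_size=20, n_nodes=6, m_tasks=10)
```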
Adding the roulette-factor algorithm to the particle swarm in step S22 includes:
when the particle swarm is updated iteratively, a roulette factor θ_ij related to the computing capacity and the storage capacity of fog computing node i is added. The roulette factor θ_ij comprises an execution force θf_ij of computing node i for executing task j and a penalty term θp_ij, where the execution force θf_ij is given by:

[equation image in original: θf_ij expressed in terms of C_ij and K_ij]

where C_ij represents the computing power of fog computing node i when executing task j; the computing power of a fog computing node is constant, so C_ij is numerically equal to C_i. K_ij represents the storage capacity of fog computing node i; after the fog computing node executes the task, the task j with data volume D_j is cached, so K_ij = αD_j. The penalty term θp_ij is given by:

[equation image in original: θp_ij expressed in terms of marknumber_ij]

where n is the total number of fog computing nodes in the intelligent factory, and marknumber_ij counts the usage of node i while processing task j.
The operator θ_ij of fog node i processing task j is then obtained:

[equation image in original: θ_ij combining θf_ij and θp_ij with weights r_1, r_2]

where r_1 and r_2 are weight parameters that adjust the roulette factor; the larger the operator θ_ij, the more easily the fog node is selected.
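The exact formulas for θf_ij, θp_ij and θ_ij appear only as images in the source, but the selection step they feed is standard roulette-wheel sampling. The sketch below assumes precomputed operator values and only illustrates the stated property that a larger θ_ij makes a node more likely to be chosen.

```python
import random

def roulette_pick(thetas):
    """Roulette-wheel selection: node i is chosen with probability
    proportional to its operator theta_ij (larger theta => more likely)."""
    total = sum(thetas)
    r = random.uniform(0, total)
    acc = 0.0
    for i, th in enumerate(thetas):
        acc += th
        if r <= acc:
            return i
    return len(thetas) - 1          # guard against float rounding

# Hypothetical operator values for 4 fog nodes processing one task:
theta = [0.1, 0.4, 0.3, 0.2]
picks = [roulette_pick(theta) for _ in range(5000)]
```

With these values, node 1 (θ = 0.4) should be drawn far more often than node 0 (θ = 0.1).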
The step S22 of adding the simulated bird-foraging algorithm to the particle swarm includes:
after each iterative update of the particle swarm, every particle moves to a new position according to the update formulas

v_id(t+1) = w·v_id(t) + c_1·r_1·(Pbest_id − x_id(t)) + c_2·r_2·(Gbest_id − x_id(t)),

x_id(t+1) = x_id(t) + v_id(t+1).

Since the position of a particle is a usage matrix, a Sigmoid function is added to limit the updated position to a 0/1 usage matrix:

S(v_id(t+1)) = 1 / (1 + e^(−v_id(t+1))),

x_id(t+1) = 1 if rand < S(v_id(t+1)), and 0 otherwise,

where Pbest_id is the optimal position of the single particle, Gbest_id is the optimal position of the particle swarm, v_id(t+1) is the update distance, x_id(t) is the position before the update, x_id(t+1) is the updated position, w is the inertia weight that adjusts the particle flight, c_1 and c_2 are the accelerations, and r_1, r_2 are constants.
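A minimal sketch of one such update for a flattened 0/1 position vector, assuming the standard binary-PSO form of the velocity update and Sigmoid mapping described above; the parameter values are illustrative, not taken from the patent.

```python
import math
import random

def bpso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One binary-PSO update: velocity update followed by a
    Sigmoid-mapped 0/1 position, per dimension d."""
    new_x, new_v = [], []
    for xd, vd, pb, gb in zip(x, v, pbest, gbest):
        r1, r2 = random.random(), random.random()
        vd = w * vd + c1 * r1 * (pb - xd) + c2 * r2 * (gb - xd)
        s = 1.0 / (1.0 + math.exp(-vd))   # Sigmoid keeps the position in {0, 1}
        new_x.append(1 if random.random() < s else 0)
        new_v.append(vd)
    return new_x, new_v

x, v = [0, 1, 0, 1], [0.0, 0.0, 0.0, 0.0]
x2, v2 = bpso_step(x, v, pbest=[1, 1, 0, 0], gbest=[1, 0, 0, 1])
```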
The iterative updating of the particle swarm in step S22 uses an objective function:
F_object = min{γE + δT}, with γ + δ = 1,
where E represents the total energy consumed by the fog nodes when the tasks are completed according to the allocation scheme, T represents the time required by the fog nodes to complete the tasks according to the allocation scheme, and γ and δ represent the preference between energy consumption and time delay in the plant; when the two are equal, energy consumption and time delay are equally important.
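The objective is simple to evaluate once E and T are known for a candidate allocation. A sketch with hypothetical (E, T) values, showing how candidate schemes would be ranked by their weighted cost:

```python
def objective(E, T, gamma=0.5, delta=0.5):
    """F_object = gamma*E + delta*T, with gamma + delta = 1."""
    assert abs(gamma + delta - 1.0) < 1e-9
    return gamma * E + delta * T

# Three hypothetical allocation schemes with (energy, delay) pairs;
# the swarm keeps the scheme with the smallest weighted cost.
candidates = [(10.0, 4.0), (6.0, 9.0), (8.0, 5.0)]
best = min(objective(E, T) for E, T in candidates)   # 6.5 for (8.0, 5.0)
```

Skewing (γ, δ) toward γ favors energy-saving allocations; skewing toward δ favors low-delay ones.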
The fog computing resource scheduling method based on the improved particle swarm algorithm and the neural network is applied to the intelligent manufacturing industry: at the fog center node, tasks from different production processes and terminal devices are allocated to fog computing nodes with different computing and storage capacities for computation. During the iterative search of the particle swarm for the optimal solution, a roulette factor influenced by the computing and storage capacity of the fog nodes is added, so that the discrete particle swarm algorithm can solve for the usage matrix of the fog nodes. To avoid the long running time of the improved particle swarm algorithm, 75% of the data it generates is used as the training set of a neural network and 25% as the test set, so that the method can obtain a resource allocation scheme with the shortest time delay and energy consumption in a short time. Without loss of generality, N fog computing nodes fog_1, fog_2, …, fog_i, …, fog_n with different computing and storage capacities are arranged on the bus-type assembly line of an intelligent factory to process M tasks task_1, task_2, …, task_j, …, task_m transmitted from intelligent terminal devices at different geographic positions. The fog center node is connected with the other fog nodes to form the fog layer; it runs a neural network according to the information of the fog nodes, and that neural network is generated by training on data obtained with the improved IDPSO algorithm. Compared with the traditional random-selection algorithm and the bionics-based ACO algorithm, the improved DPSO algorithm can find a solution with a lower weighted value of energy consumption and time delay within fewer iterations.
Training and testing of the neural network: in a bus-type dynamic intelligent workshop, every process except the first needs parameters from the previous process, so the tasks have priorities following the machining order of the workpieces. When several workpieces are machined at the same time, several tasks of the same priority are generated and conflict with one another, and a neural network is trained to resolve the conflicts within the same priority. The task number and the priority are used as input labels, and the corresponding fog node is the output, represented by binary coding; if task i runs on the first node, the output is marked as [1, 0, 0, 0, 0, 0]^T. The accuracy of the neural network is verified using 75% of the data as training data and 25% as test data.
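The binary coding of the output can be sketched as a one-hot vector over the fog nodes; the helper names are illustrative, and the example reproduces the [1, 0, 0, 0, 0, 0]^T label for a task running on the first node.

```python
def encode_node(node_index, n_nodes):
    """Binary-coded output label: a task on node k maps to the
    unit column vector with a 1 at position k."""
    return [1 if i == node_index else 0 for i in range(n_nodes)]

def decode_node(output):
    """Recover the fog node from a (possibly soft) network output
    by taking the index of the largest entry."""
    return max(range(len(output)), key=lambda i: output[i])

label = encode_node(0, 6)   # task runs on the first of 6 fog nodes
```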
The settings of the scene and the settings of the parameters are analyzed in detail as follows:
referring to fig. 2, fig. 2 is a diagram illustrating a fog layer task allocation model used in a method for scheduling fog computing resources according to an embodiment of the present disclosure. According to the reference given by the OpenFog architecture, the mist layer model adopts a combination of centralized and distributed modes. A fog center node with a task scheduling algorithm of a fog cluster receives tasks from terminal equipment, and then distributes different tasks to different fog computing nodes according to the algorithm for processing. Each mist network node is interconnected with different mist computing nodes and mist center nodes to provide a usable node list for the mist center nodes when using a task allocation algorithm, and to enable timely discovery of node exhaustion and failure.
Resource allocation and task scheduling model: in the model, tasks task_1, task_2, …, task_j, …, task_m from the edge devices are transmitted through a channel to the fog center node, and the fog center node then allocates the tasks to fog computing nodes fog_1, fog_2, …, fog_i, …, fog_n at different geographic positions. While the j-th task is transmitted to the i-th fog node, the time delay caused by the transmission distance and the corresponding energy consumption are:

[equation images in original: transmission delay and transmission energy consumption of task j to node i]

The computing energy consumption of fog computing node i processing task j is expressed as:

[equation image in original: computing energy consumption of node i on task j]

After the fog computing node has processed the task, a fraction β (0 < β < 1) of the task is transmitted back to the terminal device as the processed signal to control the production process, and the summary information is uploaded to the cloud; this small communication overhead is almost negligible. In this process the transmission delay and energy consumption are:

[equation images in original: feedback and upload delay and energy consumption]

Therefore, over the whole process in which task j is allocated to fog computing node i for computation, the obtained summary information is uploaded to the cloud, and the control information is fed back to the terminal device, the total time delay and total energy consumption are:

[equation images in original: total delay T_ij and total energy E_ij of task j on node i]

Thus the energy consumption matrix E_total and the delay matrix T_total of the m tasks computed on the n fog nodes can be expressed as:

E_total = (E_ij)_{n×m},  T_total = (T_ij)_{n×m}.
Considering task completion, the reasons why a fog computing task cannot be completed are analyzed. First, on a normal assembly line, if the total time delay of completing task j exceeds T_max, the task is considered incomplete; second, if the machine fails while the task is transmitted, the task will not complete, which is simulated with a Poisson distribution. The invention uses a membership matrix S to describe the completion of tasks on the different fog nodes: s_ij = 1 when task j completes on fog node i, and s_ij = 0 when it cannot be completed, which gives the membership matrix:

S = (s_ij)_{n×m},  s_ij ∈ {0, 1}.
The fog nodes of the fog group work simultaneously, and one fog node can only process one task at a time, because one task can only be transmitted to one fog node for computation at a time. There is therefore the constraint:

Σ_{i=1}^{n} s_ij = 1 for every task j.
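Under the reading that each task must land on exactly one fog node, the constraint on the membership matrix can be checked as below. This is a sketch; the precise constraint appears only as an image in the source, so the column-sum form is an assumption drawn from the surrounding text.

```python
def valid_membership(S):
    """Check the assignment constraint: every task (column of S)
    is handled by exactly one fog node (row of S)."""
    n, m = len(S), len(S[0])
    return all(sum(S[i][j] for i in range(n)) == 1 for j in range(m))

# 2 fog nodes, 3 tasks: a valid matrix and an invalid one.
S_ok  = [[1, 0, 0],
         [0, 1, 1]]
S_bad = [[1, 1, 0],
         [0, 1, 0]]   # task 1 double-assigned, task 2 unassigned
```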
From the membership matrix of task completion, the energy consumption matrix E_matrix and the delay matrix T_matrix at task completion are obtained:

E_matrix = S·E_total,  T_matrix = S·T_total,

and the total energy consumption E and total time delay T of task completion follow:

[equation image in original: E and T aggregated from E_matrix and T_matrix]

Task allocation aims to minimize energy consumption and delay, but in practice the allocation with the lowest energy consumption does not necessarily have the lowest delay; therefore the preference information A_s = (γ, δ) of a task is introduced, and the objective function is established as: F_object = min{γE + δT}, with γ + δ = 1.
If different tasks are randomly allocated to different fog computing nodes for processing, a fog node with weak computing power may process a big-data task and stay occupied until, in the worst case, its energy is exhausted and it dies; or a fog node with strong computing power processes a small-data task, receives no new task after finishing it, and stays idle on standby, which causes waste. To solve these problems, the invention provides an improved discrete particle swarm algorithm: a roulette algorithm is introduced when the particle swarm is updated, adding the particle swarm node operator θ_ij to the iteration process.
Referring to fig. 3, fig. 3 shows the effect of the discrete particle swarm algorithm improved by the fog computing resource scheduling method according to the embodiment of the present application. The figure shows that from generation 15 to generation 200 the particle swarm moves toward a lower weighted value of time delay and energy consumption, which proves that the improved discrete particle swarm algorithm with the added roulette factor can solve for the usage matrix of the fog nodes and obtain the optimal allocation scheme under the current number of iterations.
Referring to fig. 4, fig. 4 compares the weighted sum of energy consumption and time delay of the IDPSO algorithm provided in the embodiment of the present application with ACO and RP. To prove the usability of the IDPSO algorithm for resource allocation and task scheduling and realize the double optimization of time delay and load, the objective function is set to the weighted values of time delay and energy consumption, and the task amount is set to 100, 200, 300, 400 and 500. Fig. 4 shows that as the amount of data increases, the energy-consumption improvement brought by the improved particle swarm algorithm becomes more obvious; at a task amount of 300, the weighted time delay and energy consumption is reduced by 5.7% compared with the ACO algorithm and by 50.3% compared with the RA algorithm, and the average optimum is reduced by 5.18% compared with the ACO algorithm and by 44.44% compared with the RA algorithm. The improved algorithm is thus superior to the traditional optimization algorithm in the optimization process, and its energy consumption and time delay are better than those of the bionics-based ant colony algorithm.
Referring to fig. 5, fig. 5 compares the task completion rate of the IDPSO algorithm according to the embodiment of the present application with ACO and RP. The invention uses the task completion rate to measure how well an algorithm completes tasks. One of the two factors affecting completion is machine failure obeying a Poisson distribution; the other is that a task cannot be completed once it exceeds its maximum tolerated time of 80 ms. The failure rate of each fog computing node is assumed to be 2%. With the task amount set to 100, 300, 500, 700 and 900, the task completion rate of both the improved particle swarm algorithm and the ant colony algorithm gradually increases as the task amount grows; the completion rate of the proposed IDPSO algorithm stabilizes at 84%, while the ant colony algorithm removes failed machines from the search range every time it searches a path, so its completion rate is slightly higher than IDPSO when the number of tasks rises to 900.
The work of the invention on the neural network is divided into two parts: training the neural network with data obtained from the IDPSO algorithm, and testing the trained neural network.
Training the neural network: the invention adopts a BP neural network to generate the task scheduler; through forward propagation of the signal and backward propagation of the error, the final output value is driven as close to the expected value as possible.
Construction of the neural network: the neural network in the invention has two inputs, whose attributes are the task priority and the task size, and two hidden layers of 10 nodes each, using a Sigmoid function and a linear activation function respectively. The activation function of the output layer is a linear function. The neural network serves as the scheduler: when task j arrives, it is assigned to the corresponding fog computing node i for processing, so the output is the usage column of task j (reconstructed here from the surrounding description; the original presents the formula as an image):

output_j = [S_1j, S_2j, …, S_10j]^T, with S_ij ∈ {0, 1}

The output is thus a 10-dimensional column vector, i.e. there are 10 outputs. The learning rate is set to 0.005, the maximum number of training iterations to 200, and 50 × 30 sets of data are used to train the neural network.
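A minimal sketch of the scheduler network described above, written in plain NumPy: two inputs (task priority and task size), two hidden layers of 10 nodes using a Sigmoid and a linear activation respectively, and a 10-way linear output. The layer sizes and activations follow the text; the weight initialization and the input scaling are assumptions, and the backpropagation training loop is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2 inputs -> 10 hidden (Sigmoid) -> 10 hidden (linear) -> 10 outputs (linear)
W1, b1 = rng.normal(0.0, 0.5, (2, 10)), np.zeros(10)
W2, b2 = rng.normal(0.0, 0.5, (10, 10)), np.zeros(10)
W3, b3 = rng.normal(0.0, 0.5, (10, 10)), np.zeros(10)

def forward(x):
    h1 = sigmoid(x @ W1 + b1)   # first hidden layer: Sigmoid activation
    h2 = h1 @ W2 + b2           # second hidden layer: linear activation
    return h2 @ W3 + b3         # linear output: one score per fog node

x = np.array([0.7, 0.3])        # (task priority, task size), placeholder scaling
y = forward(x)                  # 10-dimensional output vector
```

In use, the 10 output scores would be discretized into the usage column of task j, e.g. by activating the node with the largest score.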
Testing the neural network: 50 tasks generated on the production line are selected; the task priority and data size serve as inputs, and the fog node usage matrix of task j is output. The objective function F_object is used to evaluate the quality of the output, and the test uses 50 × 10 sets of data.
Referring to fig. 6, fig. 6 compares the neural network's predictions with the IDPSO output provided in the embodiment of the present application. The weighted delay-energy value produced by the neural network approximates the output of the IDPSO algorithm with 97% accuracy; meanwhile, the running time drops from 21.58 s for IDPSO to 2.89 s for the neural network, an 86.6% improvement in running speed. Each task is transmitted to the fog center node, and the scheduler produced by the neural network assigns a fog node to a task within 5.78 ms; this on the one hand resolves conflicts between tasks of the same priority and shortens the running time, and on the other hand inherits the dual optimization of delay and energy consumption achieved by IDPSO. The method runs fast, performs well and is easy to implement.
Based on the same principle as the fog computing resource scheduling method based on the improved particle swarm algorithm and the neural network, the invention also provides a fog computing resource scheduling system based on the improved particle swarm algorithm and the neural network, comprising:
The intelligent terminal node unit is used for transmitting the task to the fog center node;
a fog center node unit, used for generating a scheduling scheme according to a task scheduler, the scheduling scheme being an available allocation scheme solved by the improved particle swarm algorithm, and for using the data in the available allocation scheme to train and test a BP neural network to obtain a trained neural network;
the processing unit is used for processing the task transmitted to the fog center node by taking the trained neural network as a scheduler so as to obtain an optimal solution for task execution;
and the fog center node unit is also used for distributing different tasks to different fog computing nodes for computing and caching according to the optimal solution.
The invention further provides an electronic device comprising a processor and a memory, wherein the memory stores program instructions and the processor runs the program instructions to realize the fog computing resource scheduling method based on the improved particle swarm algorithm and the neural network. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The memory may include a Random Access Memory (RAM) and may also include a Non-Volatile Memory, such as at least one disk memory. The processor and the memory may be integrated into one or more independent circuits or pieces of hardware, such as an Application Specific Integrated Circuit (ASIC). It should be noted that the computer program in the memory may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device) to perform all or part of the steps of the method according to the embodiments of the present invention.
In summary, aiming at the low optimizing precision and slow convergence speed of traditional resource scheduling, the invention draws on the efficiency and effectiveness of the particle swarm algorithm through the improved discrete particle swarm algorithm and the neural network algorithm; the neural network is trained and tested with the data of the allocation schemes and used as a scheduler, so that tasks can be offloaded to fog nodes quickly and effectively, with excellent performance and easy implementation.
The foregoing embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present invention.

Claims (7)

1. The fog computing resource scheduling method based on the improved particle swarm algorithm and the neural network is characterized by comprising the following steps of:
s1, the intelligent terminal node transmits the task to the fog center node;
s2, the fog center node generates a scheduling scheme according to a task scheduler, wherein the scheduling scheme is an available allocation scheme solved by the improved particle swarm algorithm;
s3, the fog center node uses data in the available allocation scheme for training and testing a BP neural network to obtain a trained neural network;
s4, taking the trained neural network as a scheduler, and processing the task transmitted to the fog center node to obtain an optimal solution for task execution;
and S5, the fog center node distributes different tasks to different fog computing nodes for computation and caching according to the optimal solution.
2. The fog computing resource scheduling method based on the improved particle swarm algorithm and the neural network of claim 1, wherein the generating, by the fog center node in step S2, of a scheduling scheme according to a task scheduler, the scheduling scheme being an available allocation scheme solved by the improved particle swarm algorithm, comprises:
s21, initializing the particle swarm: each individual in the population is a feasible solution, and each individual corresponds to an N × M usage matrix S, where S_ij = 0 means that task j does not run on fog computing node i, and S_ij = 1 means that task j runs and is computed on fog computing node i;
s22, the particle swarm update iteration process: each particle carries an individual optimum value P_best, and the particle swarm carries a global optimum value G_best; when the particle swarm is iteratively updated, a roulette factor algorithm and a simulated bird foraging algorithm are added to the particle swarm, so that all particles of the swarm tend to change toward the global optimum value;
and S23, continuously iterating and updating the particle swarm until the maximum number of iterations is reached, whereupon the update stops.
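The initialization of step S21 can be sketched as follows; this is an illustrative reconstruction, not the patent's own code, and the swarm size and matrix dimensions are placeholder values:

```python
import numpy as np

def init_swarm(num_particles, n_nodes, m_tasks, seed=0):
    """Each particle is an n_nodes x m_tasks 0/1 usage matrix S in which
    S[i, j] = 1 iff task j is assigned to fog computing node i."""
    rng = np.random.default_rng(seed)
    swarm = []
    for _ in range(num_particles):
        S = np.zeros((n_nodes, m_tasks), dtype=int)
        chosen = rng.integers(0, n_nodes, size=m_tasks)  # one node per task
        S[chosen, np.arange(m_tasks)] = 1
        swarm.append(S)
    return swarm

swarm = init_swarm(num_particles=30, n_nodes=10, m_tasks=50)
```

Each matrix is a feasible solution in the sense of the claim: every task column carries exactly one active node.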
3. The fog computing resource scheduling method based on the improved particle swarm algorithm and the neural network of claim 2, wherein adding the roulette factor algorithm to the particle swarm in step S22 comprises:
when the particle swarm is iteratively updated, a roulette factor θ_ij related to the computing capacity and the storage capacity of fog computing node i is added; the roulette factor θ_ij comprises an execution capacity θf_ij of computing node i when executing task j and a penalty term θp_ij, where the formula of the execution capacity θf_ij is:
[formula for θf_ij, given as an image in the original]
wherein C_ij represents the computing capacity of fog computing node i when executing task j, and K_ij represents the storage capacity of fog computing node i; the formula of the penalty term θp_ij is:
[formula for θp_ij, given as an image in the original]
wherein n is the total number of fog computing nodes in the smart factory, and marknumber_ij records the usage of computing node i during the processing of task j;
the operator θ_ij for fog node i processing task j is then obtained:
[formula for θ_ij, given as an image in the original]
wherein r_1 and r_2 are weighting parameters that adjust the roulette factor.
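The exact formulas for θf_ij, θp_ij and θ_ij appear only as images in the original, so the sketch below shows just the generic roulette-wheel selection step that such node operators would feed into, with made-up operator values:

```python
import random

def roulette_pick(theta, seed=None):
    """Select a fog node index with probability proportional to its
    operator value theta[i] (roulette-wheel selection)."""
    rng = random.Random(seed)
    total = sum(theta)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for i, t in enumerate(theta):
        acc += t
        if r <= acc:
            return i
    return len(theta) - 1  # numerical safety fallback

node = roulette_pick([0.1, 0.5, 0.4], seed=42)
```

A node with a larger θ_ij — higher execution capacity, lower penalty — is proportionally more likely to be chosen for the task.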
4. The fog computing resource scheduling method based on the improved particle swarm algorithm and the neural network of claim 3, wherein adding the simulated bird foraging algorithm to the particle swarm in step S22 comprises:
after each iterative update of the particle swarm, each particle moves to a new position (the original presents the formulas as images; shown here is the standard binary particle swarm form consistent with the definitions that follow):
v_id(t+1) = w · v_id(t) + c_1 · r_1 · (Pbest_id − x_id(t)) + c_2 · r_2 · (Gbest_id − x_id(t))
x_id(t+1) = x_id(t) + v_id(t+1)
since the position of a particle is a usage matrix, a Sigmoid function is added to limit the updated position to the {0, 1} usage matrix:
s(v_id(t+1)) = 1 / (1 + e^(−v_id(t+1)))
x_id(t+1) = 1 if rand() < s(v_id(t+1)), and 0 otherwise,
wherein Pbest_id is the optimal position of the individual particle, Gbest_id is the optimal position of the particle swarm, v_id(t+1) is the update distance, x_id(t) is the position before the update and x_id(t+1) the updated position, w is the inertia weight adjusting the particle flight, c_1 and c_2 are acceleration coefficients, and r_1 and r_2 are random factors.
5. The fog computing resource scheduling method based on the improved particle swarm algorithm and the neural network of claim 4, wherein the iterative updating of the particle swarm in step S22 includes an objective function, the objective function being:
F_object = min{γE + δT}, with γ + δ = 1,
wherein E represents the total energy consumed by the fog nodes when completing the tasks according to the allocation scheme, T represents the time required by the fog nodes to complete the tasks according to the allocation scheme, and γ and δ represent the plant's preference for energy consumption and delay; when the two are equal, energy consumption and delay are equally important in the plant.
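The objective of this claim is a convex combination of energy and delay; a small sketch with placeholder (E, T) values, keeping the minimum over candidate allocation schemes:

```python
def f_object(E, T, gamma=0.5, delta=0.5):
    """Weighted energy/delay objective: F = gamma*E + delta*T with
    gamma + delta = 1 (equal weights: equal plant importance)."""
    assert abs(gamma + delta - 1.0) < 1e-9, "weights must sum to 1"
    return gamma * E + delta * T

# Candidate allocation schemes as placeholder (energy, time) pairs;
# the scheduler keeps the scheme with the minimum weighted value.
candidates = [(120.0, 40.0), (100.0, 60.0), (150.0, 30.0)]
best = min(f_object(E, T) for E, T in candidates)
```

Shifting γ toward 1 would make the scheduler favor energy-frugal schemes; shifting δ toward 1 would favor low-delay schemes.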
6. A fog computing resource scheduling system based on the improved particle swarm algorithm and the neural network, characterized by comprising:
the intelligent terminal node unit is used for transmitting the task to the fog center node;
a fog center node unit, used for generating a scheduling scheme according to a task scheduler, the scheduling scheme being an available allocation scheme solved by the improved particle swarm algorithm, and for using the data in the available allocation scheme to train and test a BP neural network to obtain a trained neural network;
the processing unit is used for processing the task transmitted to the fog center node by taking the trained neural network as a scheduler so as to obtain an optimal solution for task execution;
and the fog center node unit is also used for distributing different tasks to different fog computing nodes for computing and caching according to the optimal solution.
7. An electronic device comprising a processor and a memory, the memory storing program instructions, characterized in that the processor runs the program instructions to implement the fog computing resource scheduling method based on the improved particle swarm algorithm and the neural network according to any one of claims 1 to 5.
CN202111552520.XA 2021-12-17 2021-12-17 Fog computing resource scheduling method based on improved particle swarm algorithm and neural network Pending CN114237889A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111552520.XA CN114237889A (en) 2021-12-17 2021-12-17 Fog computing resource scheduling method based on improved particle swarm algorithm and neural network

Publications (1)

Publication Number Publication Date
CN114237889A true CN114237889A (en) 2022-03-25

Family

ID=80758028


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115002799A (en) * 2022-04-25 2022-09-02 燕山大学 Task unloading and resource allocation method for industrial hybrid network
CN115002799B (en) * 2022-04-25 2024-04-12 燕山大学 Task unloading and resource allocation method for industrial hybrid network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination