CN111124662B - Fog computing load balancing method and system - Google Patents

Fog computing load balancing method and system

Info

Publication number
CN111124662B
CN111124662B CN201911081892.1A CN201911081892A CN111124662B CN 111124662 B CN111124662 B CN 111124662B CN 201911081892 A CN201911081892 A CN 201911081892A CN 111124662 B CN111124662 B CN 111124662B
Authority
CN
China
Prior art keywords
fog
task
layer
service request
load balancing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911081892.1A
Other languages
Chinese (zh)
Other versions
CN111124662A (en)
Inventor
林福宏
刘培
周成成
陆月明
许海涛
安建伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology Beijing USTB
Beijing University of Posts and Telecommunications
Original Assignee
University of Science and Technology Beijing USTB
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology Beijing USTB, Beijing University of Posts and Telecommunications filed Critical University of Science and Technology Beijing USTB
Priority to CN201911081892.1A
Publication of CN111124662A
Application granted
Publication of CN111124662B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention provides a fog computing load balancing method and system that can reduce service delay and energy consumption. The method comprises the following steps: a load balancing layer receives a service request sent by an end user; task allocation is then performed in a centralized manner according to the task amount of the received service request and the remaining resources and processing capacity of the fog nodes in the fog layer, and if the fog nodes cannot provide service for the service request, the fog layer is bypassed and the service request is uploaded directly to the cloud layer for processing. The invention relates to the technical field of distributed computing.

Description

Fog computing load balancing method and system
Technical Field
The invention relates to the technical field of distributed computing, in particular to a method and a system for load balancing of fog computing.
Background
Fog computing was proposed by Cisco in 2012 and was initially defined as a highly virtualized platform that provides computing, storage and network services between terminals and traditional cloud computing data centers, and that is not limited to the network edge. Fog computing is not intended to replace cloud computing but to complement it, and a cloud computing data center remains an indispensable part of a fog computing architecture. Compared with cloud computing, fog computing adopts a distributed computing architecture: fog nodes are closer to the network edge and are large in number, so their computing, storage and network service capabilities can be fully exploited to provide a wide range of application services to end users while reducing latency.
While fog computing can provide lower-latency services to end users, the processing power of an individual fog node is limited, so fog nodes can become overloaded or even go down when there are too many user requests in an area. The current fog-node load balancing strategy is distributed: a load balancing module is deployed on each fog node and the overall task scheduling is decided by negotiation among the fog nodes. This strategy places high demands on communication between fog nodes and incurs large delay and high energy consumption.
Disclosure of Invention
The invention aims to provide a fog computing load balancing method and system that solve the problems of large time delay and high energy consumption caused by the distributed fog-node load balancing strategies of the prior art.
To solve the foregoing technical problem, an embodiment of the present invention provides a fog computing load balancing method, including:
a load balancing layer receives a service request sent by a terminal user;
and performing task distribution in a centralized manner according to the task quantity of the received service request and the residual resources and processing capacity of the fog nodes in the fog layer, and if the fog nodes cannot provide service for the service request, bypassing the fog layer and directly uploading the service request to the cloud layer for processing.
Further, the load balancing layer is composed of load balancing clusters;
the task allocation is carried out in a centralized manner according to the task quantity of the received service request and the residual resources and processing capacity of the fog nodes in the fog layer, if the fog nodes cannot provide service for the service request, the fog layer is bypassed, and the task is directly uploaded to the cloud layer for processing, wherein the task allocation comprises the following steps:
the load balancing cluster evaluates the service request by using a resource residual capacity classification evaluation algorithm, determines resources required by a task for completing the service request, and classifies the task to obtain a task category of the service request; wherein the task categories include: a data acquisition task and a calculation task;
and performing task allocation in a centralized manner according to the determined resources required for completing the task, the task category and the remaining resources of the fog nodes, wherein if the fog nodes cannot provide service for the service request, the fog layer is bypassed and the service request is uploaded directly to the cloud layer for processing.
Further, the resources include: CPU, memory, hard disk storage space and network bandwidth utilization rate.
Further, the task allocation is performed in a centralized manner according to the determined resources and task types required for completing the task and the residual resources and processing capacity of the fog nodes in the fog layer, if the fog nodes cannot provide service for the service request, the fog layer is bypassed, and the step of directly uploading the service request to the cloud layer for processing includes:
if the task type of the service request is a data acquisition task, the load balancing cluster judges whether corresponding data is cached on the fog nodes in the area or not by utilizing real-time communication with the fog nodes;
if the corresponding data cache exists, judging whether the fog node with the corresponding data cache has enough residual resources by using a resource residual capacity classification evaluation algorithm;
if enough remaining resources exist, a shortest path algorithm is used to find the fog node with the shortest network path, the data acquisition task is assigned to that fog node, the fog node responds to the user request, and the load balancing layer receives the fog node's response and forwards it to the end user.
Further, the step of performing task allocation according to the determined resources and task types required for completing the task and the residual resources and processing capacity of the fog nodes in the fog layer in a centralized manner, wherein if the fog nodes cannot provide service for the service request, the fog nodes bypass the fog layer, and the step of directly uploading the service request to the cloud layer for processing further comprises the following steps:
and if the corresponding data cache is not available on the fog node or the corresponding fog node does not have enough residual resources, the load balancing cluster directly distributes the data acquisition task to the cloud layer, receives the cloud layer response and forwards the cloud layer response to the terminal user.
Further, the task allocation is performed in a centralized manner according to the determined resources and task types required for completing the task and the residual resources and processing capacity of the fog nodes in the fog layer, if the fog nodes cannot provide service for the service request, the fog layer is bypassed, and the step of directly uploading the service request to the cloud layer for processing includes:
if the task type of the service request is a calculation task, the load balancing cluster judges whether the fog node has enough residual resources by using a resource residual capacity classification evaluation algorithm;
if the remaining resources are sufficient, an optimal task allocation strategy is determined by using a task allocation optimization model and the tasks are allocated accordingly; the task allocation optimization model takes as its first constraint that the total task amount over all fog nodes receiving tasks equals the total task amount of the service request, takes as its second constraint that the amount of resources required to process tasks on a fog node cannot exceed that node's remaining resources, and takes the sum of the processing delay and the energy consumption of the fog nodes as the objective function;
and after the fog node completes the calculation task, receiving the calculation result of the fog node and forwarding the calculation result to the terminal user.
Further, the task allocation optimization model is as follows:
[Objective function and constraints given as equation images in the original: the objective minimizes the sum of fog-node processing delay and energy consumption, subject to Σ_{i∈N} x_i = X and S_ij ≤ L_ij]
wherein i denotes a fog node receiving tasks; N denotes the set of fog nodes receiving tasks; x_i denotes the task amount migrated to fog node i; δ_i is the weight of fog node i; v_i is the service rate of fog node i; a_i, b_i and c_i are coefficient parameters; X is the total task amount of the service request; j denotes the resource type, namely CPU, memory, hard disk storage space or network bandwidth utilization; S_ij denotes the demand for resource j migrated to fog node i; and L_ij is the remaining amount of resource j on fog node i.
Further, the step of performing task allocation in a centralized manner according to the determined resources and task categories required for completing the tasks and the remaining resources and processing capacity of the fog nodes in the fog layer, wherein if the fog nodes cannot provide services for the service requests, the fog nodes bypass the fog layer and are directly uploaded to the cloud layer for processing further comprises the steps of:
and if the residual resources are not enough, the load balancing cluster distributes the computing task to the cloud layer, receives the cloud layer computing result and forwards the cloud layer computing result to the terminal user.
Further, before the task allocation is performed in a centralized manner according to the task amount of the received service request and the remaining resources and processing capacity of the fog nodes in the fog layer (bypassing the fog layer and uploading the service request directly to the cloud layer for processing if the fog nodes cannot provide service for it), the method further includes:
the received service request is input to a queuing system in the load balancing layer.
An embodiment of the present invention further provides a load balancing system for fog computing, including: the system comprises a load balancing layer, an end user layer which communicates with the load balancing layer, a fog layer and a cloud layer;
the terminal user layer is used for sending a service request;
and the load balancing layer is used for receiving the service request, performing task distribution in a centralized manner according to the task amount of the received service request and the residual resources and processing capacity of the fog nodes in the fog layer, and if the fog nodes cannot provide service for the service request, bypassing the fog layer and directly uploading the service request to the cloud layer for processing.
The technical scheme of the invention has the following beneficial effects:
in this scheme, a load balancing layer receives the service request sent by an end user and performs task allocation in a centralized manner according to the task amount of the received service request and the remaining resources and processing capacity of the fog nodes in the fog layer; if the fog nodes cannot provide service for the service request, the fog layer is bypassed and the service request is uploaded directly to the cloud layer for processing. Because the load balancing layer allocates tasks centrally according to the fog nodes' remaining resources and processing capacity, the task allocation process is optimized and service delay and energy consumption can be reduced.
Drawings
Fig. 1 is a schematic flow chart of a load balancing method for fog computing according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a load balancing framework for fog computing according to an embodiment of the present invention;
fig. 3 is a detailed flowchart of a fog computing load balancing method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a fog computing load balancing system according to an embodiment of the present invention.
Detailed Description
To make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
The invention provides a fog computing load balancing method and system, aiming at the problems of large time delay and high energy consumption caused by existing distributed fog-node load balancing strategies.
Example one
As shown in fig. 1, a method for load balancing in fog computing according to an embodiment of the present invention includes:
s101, a load balancing layer receives a service request sent by a terminal user;
and S102, performing task allocation in a centralized manner according to the task amount of the received service request and the remaining resources and processing capacity of the fog nodes in the fog layer, and if the fog nodes cannot provide service for the service request, bypassing the fog layer and uploading the service request directly to the cloud layer for processing.
According to the fog computing load balancing method, a load balancing layer receives a service request sent by a terminal user; and performing task distribution in a centralized manner according to the task quantity of the received service request and the residual resources and processing capacity of the fog nodes in the fog layer, and if the fog nodes cannot provide service for the service request, bypassing the fog layer and directly uploading the service request to the cloud layer for processing. Therefore, according to the residual resources and the processing capacity of the fog nodes, the load balancing layer is used for distributing the tasks in a centralized mode, the task distribution process is optimized, and service delay and energy consumption can be reduced.
To implement the fog computing load balancing method provided by this embodiment, the classic fog computing load balancing architecture is improved. The classic architecture has three layers which are, from lowest to highest: the end user layer, the fog layer and the cloud layer. In this embodiment, a load balancing layer is added between the end user layer and the fog layer, and the load balancing layer is composed of load balancing clusters (for example, clusters of high-performance servers).
As shown in fig. 2, the fog computing load balancing architecture provided in the present embodiment has four layers, from bottom to top: an end user layer, a load balancing layer, a fog layer and a cloud layer; wherein:
the end user layer comprises: various terminal devices, such as mobile phones, computers, embedded devices, and the like. A terminal user sends out a service request through various terminal devices;
the load balancing layer is composed of load balancing clusters (e.g., a cluster of powerful servers) between the end user layer and the fog layer.
The fog layer is composed of the various fog nodes in an area. Unlike high-performance servers, individual fog nodes have relatively weak processing capacity, but they are large in number and closer to the user side, so they can provide lower-latency service;
the cloud layer is provided with a cloud computing data center, so that abundant data resources and computing resources can be provided, and when the fog nodes cannot meet the requirements, the cloud layer can provide support.
On the four-layer fog computing load balancing framework provided by this embodiment, a new fog computing load balancing method is proposed, which mainly comprises the following steps:
step one, a load balancing cluster receives a service request sent by a terminal user.
In this embodiment, the load balancing cluster communicates with the end users in the area in real time, and a user request is received by the load balancing cluster instead of by a fog node.
In this embodiment, the received service request task may also be input to a queuing system of the load balancing cluster, so that the requirement of high concurrency may be satisfied.
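A minimal sketch of this queuing step, assuming a simple FIFO buffer (the patent does not specify the queuing discipline); ServiceRequest, receive and next_request are hypothetical names.

import queue
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    user_id: str
    payload: dict                 # task description sent by the end user

request_queue: "queue.Queue[ServiceRequest]" = queue.Queue()

def receive(request: ServiceRequest) -> None:
    """Called for every request arriving at the load balancing layer."""
    request_queue.put(request)    # buffered so bursts of concurrent users are absorbed

def next_request() -> ServiceRequest:
    """The scheduler dequeues requests for evaluation and dispatch."""
    return request_queue.get()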
And step two, the load balancing cluster evaluates the user request task, estimates resources required by the task and divides the user request task into two types: data acquisition tasks and computing tasks.
In this embodiment, the load balancing cluster evaluates the service request by using the resource residual capacity classification evaluation (RCE) algorithm, determines the resources required by the task that completes the service request, and classifies the task to obtain its task category; wherein the task categories include: data acquisition tasks and computing tasks.
In this embodiment, the resources include: CPU, memory, hard disk storage space and network bandwidth utilization rate.
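As an illustration of step two, the sketch below estimates the required resources and classifies a request. The patent does not give the internal rules of the RCE algorithm, so the classification criterion and the payload field names here are assumptions.

from dataclasses import dataclass
from enum import Enum, auto

class TaskType(Enum):
    DATA_ACQUISITION = auto()
    COMPUTATION = auto()

@dataclass
class ResourceDemand:
    cpu: float                # required CPU share
    memory_mb: float          # required memory
    disk_mb: float            # required hard disk storage space
    bandwidth_mbps: float     # required network bandwidth

def evaluate_request(payload: dict) -> tuple:
    """Estimate required resources and classify the request (illustrative only)."""
    task_type = (TaskType.DATA_ACQUISITION
                 if payload.get("kind") == "fetch"     # assumed field name
                 else TaskType.COMPUTATION)
    demand = ResourceDemand(
        cpu=payload.get("cpu", 0.1),
        memory_mb=payload.get("memory_mb", 64.0),
        disk_mb=payload.get("disk_mb", 0.0),
        bandwidth_mbps=payload.get("bandwidth_mbps", 1.0),
    )
    return task_type, demand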
And step three, performing task allocation in a centralized manner according to the determined resources required for completing the task, the task category and the remaining resources of the fog nodes; if the fog nodes cannot provide service for the service request, the fog layer is bypassed and the request is uploaded directly to the cloud layer for processing.
In this embodiment, if the task category of the service request is a data acquisition task, the load balancing cluster first uses its real-time communication with the fog nodes to determine whether the corresponding data is cached on a fog node in the area; if a corresponding data cache exists, it uses the resource residual capacity classification evaluation algorithm to determine whether the fog node holding the cache has enough remaining resources; if there are enough remaining resources (i.e. both conditions are met), a shortest path algorithm is used to find the fog node with the shortest network path, the data acquisition task is assigned to that node, the node responds to the user request, and the load balancing layer receives the response and forwards it to the end user;
if there is no corresponding data cache on the fog nodes, or the corresponding fog node does not have enough remaining resources (i.e. the two conditions are not both met), the load balancing cluster distributes the data acquisition task directly to the cloud layer (for example, a cloud computing data center), receives the cloud layer's response and forwards it to the end user. The load balancing cluster thus communicates directly with the cloud layer, bypassing the fog layer, which greatly reduces the transmission delay and energy consumption that the fog layer would otherwise add.
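A sketch of this data-acquisition branch: among fog nodes that cache the requested data and have enough remaining resources, pick the one with the shortest network path using Dijkstra's algorithm, and fall back to the cloud layer otherwise. The patent names only "a shortest path algorithm"; the graph representation and the has_cache/has_resources inputs are assumptions made for the example.

import heapq

def dijkstra(graph: dict, source: str) -> dict:
    """Shortest path lengths from `source`; graph is {node: {neighbour: weight}}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def dispatch_data_task(graph: dict, lb_node: str, candidates: list,
                       has_cache: dict, has_resources: dict) -> str:
    """Pick the eligible fog node with the shortest path, else go to the cloud."""
    dist = dijkstra(graph, lb_node)
    eligible = [n for n in candidates if has_cache[n] and has_resources[n]]
    if not eligible:
        return "cloud"                        # bypass the fog layer entirely
    return min(eligible, key=lambda n: dist.get(n, float("inf")))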
In this embodiment, if the task category of the service request is a computing task, the load balancing cluster uses the resource residual capacity classification evaluation algorithm to determine whether the fog nodes have enough remaining resources. If the remaining resources are sufficient, an optimal task allocation strategy is determined by using a task allocation optimization model and the tasks are allocated accordingly; the model takes as its first constraint that the total task amount over all fog nodes receiving tasks equals the total task amount of the service request, takes as its second constraint that the amount of resources required to process tasks on a fog node cannot exceed that node's remaining resources, and takes the sum of the processing delay and the energy consumption of the fog nodes as the objective function. After the fog nodes complete the computing task, the load balancing layer receives their computation results and forwards them to the end user;
if the remaining resources are insufficient, the load balancing cluster distributes the computing task to the cloud layer, receives the cloud layer's computation result and forwards it to the end user. The load balancing cluster thus communicates directly with the cloud layer, bypassing the fog layer, which greatly reduces the transmission delay and energy consumption that the fog layer would otherwise add.
In this embodiment, the task allocation optimization model is:
[Objective function and constraints given as equation images in the original: the objective minimizes the sum of fog-node processing delay and energy consumption, subject to Σ_{i∈N} x_i = X and S_ij ≤ L_ij]
wherein i denotes a fog node receiving tasks; N denotes the set of fog nodes receiving tasks; x_i denotes the task amount migrated to fog node i; δ_i is the weight of fog node i; v_i is the service rate of fog node i; a_i, b_i and c_i are coefficient parameters; X is the total task amount of the service request; j denotes the resource type, namely CPU, memory, hard disk storage space or network bandwidth utilization; S_ij denotes the demand for resource j migrated to fog node i; and L_ij is the remaining amount of resource j on fog node i.
In the present embodiment, the objective function (shown as an equation image in the original) minimizes the sum of the processing delay and the energy consumption of the fog nodes;
constraint one (Σ_{i∈N} x_i = X) requires that the sum of the task amounts on all fog nodes receiving tasks equals the total task amount;
constraint two (S_ij ≤ L_ij) requires that the amount of each resource needed to process tasks on a fog node not exceed that node's remaining amount of the resource, the resources being CPU, memory, hard disk storage space and network bandwidth utilization.
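For reference, the model can be written in LaTeX as below. The two constraints follow directly from the text; the exact algebraic form of the delay and energy terms in the objective appears only as an equation image in the original, so the form shown here (a service-rate-based delay term weighted against a quadratic energy term built from the coefficients a_i, b_i, c_i) is an assumption, not the patented formula.

\min_{\{x_i\}} \; \sum_{i \in N} \Big[ \delta_i \,\frac{x_i}{v_i}
    \;+\; (1-\delta_i)\,\big(a_i x_i^{2} + b_i x_i + c_i\big) \Big]
\quad \text{s.t.} \quad
\sum_{i \in N} x_i = X, \qquad
S_{ij} \le L_{ij} \;\; \forall i \in N,\ \forall j \in \{\text{CPU},\,\text{memory},\,\text{disk},\,\text{bandwidth}\}.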
In this embodiment, the load balancing layer communicates in real time with the end user layer, the fog layer and the cloud layer, and performs task allocation in a centralized manner according to the task amount of the received service requests and the remaining resources and processing capacity of the fog nodes in the fog layer; if the fog nodes cannot provide service for a request, it bypasses the fog layer and is uploaded directly to the cloud layer for processing. This effectively reduces service delay and energy consumption and improves resource utilization.
For a better understanding of this embodiment, as shown in fig. 3, four end users (user 1, user 2, user 3 and user 4) are taken as an example to describe the fog computing load balancing method provided by this embodiment in detail:
users 1, 2, 3 and 4 communicate with the load balancing layer, and their service requests 1, 2, 3 and 4 are uploaded to the queuing system of the load balancing layer to wait for processing;
the load balancing cluster evaluates the service requests of the users in sequence by using the RCE algorithm, and calculates resources required for completing the service request tasks of the users, wherein the resources comprise a CPU (Central processing Unit), an internal memory, a hard disk storage space and a network bandwidth utilization rate. Then classifying the request tasks into data acquisition tasks and calculation tasks, and adopting different processing strategies by a load balancing layer aiming at different tasks;
after the load balancing cluster confirms that service requests 1 and 2 are data acquisition tasks, it uses its real-time communication with the fog nodes to confirm whether the corresponding data is cached on a fog node in the area. If a single fog node caches the corresponding data, it checks whether that node is idle, i.e. has enough remaining resources, and if so that fog node responds to the user request. In this example, the load balancing cluster determines that the data requested by service request 1 is cached on fog nodes 1 and 2 and that fog node 1 is not idle, so fog node 2 completes the data acquisition task and sends the acquired data to the load balancing layer, which returns it to user 1. For the data requested by service request 2, no data cache exists on the fog nodes, so the request bypasses the fog layer and the data is requested directly from the cloud layer; the load balancing layer receives the cloud layer's response and returns it to user 2;
after the load balancing cluster determines that service requests 3 and 4 are computing tasks, it uses the RCE algorithm to judge whether the fog nodes have enough remaining resources. It determines that there are enough fog node resources to complete service request 3, and uses the task allocation optimization model to find the optimal task allocation strategy that achieves the lowest service delay and energy consumption; the tasks are distributed to fog nodes 3, 6 and 7 according to this strategy, those nodes process service request 3 and return their results to the load balancing layer, and the load balancing layer returns the result to user 3.
The remaining resources of the fog nodes cannot complete service request 4, so the load balancing cluster uploads that request to the cloud layer; the cloud layer processes it, and after processing is complete the load balancing layer receives the response and forwards it to user 4.
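The computing-task branch of this example can be summarized by the self-contained sketch below: if the fog nodes' combined remaining capacity cannot absorb the request it goes to the cloud, otherwise the task amount is split across the nodes. The proportional split and the numeric values are deliberate simplifications standing in for the delay-plus-energy optimization model above; all names and figures are hypothetical.

def allocate_compute_task(total_task: float, remaining: dict, required: float):
    """remaining: fog node id -> spare capacity (same unit as total_task)."""
    capacity = sum(remaining.values())
    if capacity < required:
        return "cloud"                        # bypass the fog layer
    # split the task amount x_i in proportion to each node's spare capacity
    return {node: total_task * spare / capacity
            for node, spare in remaining.items() if spare > 0}

# Service request 3: 100 units of work spread over fog nodes 3, 6 and 7
print(allocate_compute_task(100.0, {"fog3": 50.0, "fog6": 30.0, "fog7": 20.0}, 100.0))
# Service request 4: the fog nodes cannot absorb it, so it goes to the cloud
print(allocate_compute_task(500.0, {"fog3": 50.0, "fog6": 30.0, "fog7": 20.0}, 500.0))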
In conclusion, the fog computing load balancing method can achieve centralized scheduling of the fog nodes, manage the fog node load comprehensively, improve resource utilization efficiency and reduce service delay and energy consumption.
Example two
The present invention further provides a specific embodiment of a fog computing load balancing system. Since the fog computing load balancing system corresponds to the specific embodiment of the fog computing load balancing method above, and can achieve the object of the present invention by executing the flow steps of that method embodiment, the explanations given for the method embodiment also apply to the system embodiment and are not repeated in the following description.
As shown in fig. 4, an embodiment of the present invention further provides a fog computing load balancing system, including: the system comprises a load balancing layer 11, an end user layer 12 which communicates with the load balancing layer 11, a fog layer 13 and a cloud layer 14;
an end user layer 12 for sending service requests;
and the load balancing layer 11 is used for receiving the service request, performing centralized task allocation according to the task amount of the received service request and the residual resources and processing capacity of the fog nodes in the fog layer 13, and if the fog nodes cannot provide service for the service request, bypassing the fog layer 13 and directly uploading the service request to the cloud layer 14 for processing.
According to the fog computing load balancing system provided by the embodiment of the invention, the load balancing layer is used for performing task allocation in a centralized manner according to the fog node residual resources and the processing capacity, so that the task allocation process is optimized, and the service delay and the energy consumption can be reduced.
While the foregoing is directed to the preferred embodiment of the present invention, it will be appreciated by those skilled in the art that various changes and modifications may be made therein without departing from the principles of the invention as set forth in the appended claims.

Claims (1)

1. A fog computing load balancing method is characterized by comprising the following steps:
adding a load balancing layer between the terminal user layer and the fog layer, wherein the load balancing layer receives a service request sent by a terminal user;
the method comprises the steps that task allocation is carried out in a centralized mode according to the task quantity of a received service request and the residual resources and processing capacity of fog nodes in a fog layer, if the fog nodes cannot provide service for the service request, the fog layer is bypassed, and the service request is directly uploaded to a cloud layer for processing;
wherein, the load balancing layer is composed of load balancing clusters;
the step of performing task allocation according to the task amount of the received service request and the residual resources and processing capacity of the fog nodes in the fog layer in a centralized manner, bypassing the fog layer if the fog nodes cannot provide service for the service request, and directly uploading the service request to the cloud layer for processing comprises the following steps:
the load balancing cluster evaluates the service request by using a resource residual capacity classification evaluation algorithm, determines resources required by a task for completing the service request, and classifies the task to obtain a task category of the service request; wherein the task categories include: a data acquisition task and a calculation task;
if the fog nodes can provide service for the service request, the task distribution is carried out in a centralized mode according to the determined resources required for completing the task, the task types and the residual resources of the fog nodes, wherein if the fog nodes cannot provide service for the service request, the fog nodes bypass a fog layer and are directly uploaded to the cloud layer for processing;
wherein the resources include: CPU, memory, hard disk storage space and network bandwidth utilization rate;
if the fog node can provide service for the service request, the task is centrally distributed according to the determined resources required for completing the task, the task types and the residual resources of the fog node, wherein if the fog node cannot provide service for the service request, the fog layer is bypassed, and the process of directly uploading the fog node to the cloud layer comprises the following steps:
if the task type of the service request is a data acquisition task, the load balancing cluster judges whether corresponding data is cached on the fog nodes in the area or not by utilizing real-time communication with the fog nodes;
if the corresponding data cache exists, judging whether the fog node with the corresponding data cache has enough residual resources by using a resource residual capacity classification evaluation algorithm;
if enough residual resources exist, obtaining a fog node with the shortest network path by adopting a shortest path algorithm, distributing a data acquisition task to the fog node with the shortest network path, responding a user request by the fog node, receiving a fog node response and forwarding the fog node response to a terminal user;
wherein, if the fog node can provide service for the service request, the task is centrally distributed according to the determined resources required for completing the task, the task category and the residual resources of the fog node, wherein, if the fog node can not provide service for the service request, the fog layer is bypassed, and the process of directly uploading to the cloud layer further comprises:
if the corresponding data cache is not available on the fog node or the corresponding fog node does not have enough residual resources, the load balancing cluster directly distributes the data acquisition task to the cloud layer, receives the cloud layer response and forwards the cloud layer response to the terminal user;
wherein, if the fog node can provide service for the service request, the task is centrally distributed according to the determined resources required for completing the task, the task category and the residual resources of the fog node, wherein, if the fog node can not provide service for the service request, the fog layer is bypassed, and the process of directly uploading to the cloud layer comprises the following steps:
if the task type of the service request is a calculation task, the load balancing cluster judges whether the fog node has enough residual resources by using a resource residual capacity classification evaluation algorithm;
if the residual resources are enough, determining an optimal task allocation strategy by using a task allocation optimization model to perform task allocation; the task allocation optimization model takes the total task quantity of all the fog nodes receiving the tasks, which is equal to the total task quantity of the service request, as a first constraint condition, takes the resource quantity required for processing the tasks on the fog nodes, which cannot exceed the residual resource quantity of the fog nodes, as a second constraint condition, and simultaneously takes the sum of the processing delay and the energy consumption of the fog nodes as a target function;
after the fog node completes the calculation task, receiving the calculation result of the fog node and forwarding the calculation result to the terminal user;
the task allocation optimization model is as follows:
[Objective function and constraints given as equation images in the original: the objective minimizes the sum of fog-node processing delay and energy consumption, subject to Σ_{i∈N} x_i = X and S_ij ≤ L_ij]
wherein i denotes a fog node receiving tasks; N denotes the set of fog nodes receiving tasks; x_i denotes the task amount migrated to fog node i; δ_i is the weight of fog node i; v_i is the service rate of fog node i; a_i, b_i and c_i are coefficient parameters; X is the total task amount of the service request; j denotes the resource type, namely CPU, memory, hard disk storage space or network bandwidth utilization; S_ij denotes the demand for resource j migrated to fog node i; and L_ij is the remaining amount of resource j on fog node i;
wherein, if the fog node can provide service for the service request, the task is centrally distributed according to the determined resources required for completing the task, the task category and the residual resources of the fog node, wherein, if the fog node can not provide service for the service request, the fog layer is bypassed, and the process of directly uploading to the cloud layer further comprises:
if the residual resources are insufficient, the load balancing cluster distributes the computing task to the cloud layer, receives the cloud layer computing result and forwards the cloud layer computing result to the terminal user;
the method comprises the following steps that task allocation is carried out in a centralized mode according to the task quantity of a received service request and the residual resources and processing capacity of the fog nodes in the fog layer, if the fog nodes cannot provide service for the service request, the fog layer is bypassed, and the method further comprises the following steps that before the fog nodes are directly uploaded to the cloud layer for processing:
the received service request is input to a queuing system in the load balancing layer.
CN201911081892.1A 2019-11-07 2019-11-07 Fog calculation load balancing method and system Active CN111124662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911081892.1A CN111124662B (en) 2019-11-07 2019-11-07 Fog calculation load balancing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911081892.1A CN111124662B (en) 2019-11-07 2019-11-07 Fog calculation load balancing method and system

Publications (2)

Publication Number Publication Date
CN111124662A CN111124662A (en) 2020-05-08
CN111124662B true CN111124662B (en) 2022-11-08

Family

ID=70495747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911081892.1A Active CN111124662B (en) 2019-11-07 2019-11-07 Fog calculation load balancing method and system

Country Status (1)

Country Link
CN (1) CN111124662B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111641973B (en) * 2020-05-29 2022-04-01 重庆邮电大学 Load balancing method based on fog node cooperation in fog computing network
CN112004058B (en) * 2020-08-25 2022-03-11 重庆紫光华山智安科技有限公司 Intelligent resource allocation method, device and equipment for multi-level domain monitoring system
CN112398917A (en) * 2020-10-29 2021-02-23 国网信息通信产业集团有限公司北京分公司 Real-time task scheduling method and device for multi-station fusion architecture
CN113014649B (en) * 2021-02-26 2022-07-12 山东浪潮科学研究院有限公司 Cloud Internet of things load balancing method, device and equipment based on deep learning
CN112905350A (en) * 2021-03-22 2021-06-04 北京市商汤科技开发有限公司 Task scheduling method and device, electronic equipment and storage medium
CN112948128A (en) * 2021-03-30 2021-06-11 华云数据控股集团有限公司 Target terminal selection method, system and computer readable medium
CN113285988B (en) * 2021-05-14 2022-07-29 南京邮电大学 Energy consumption minimization fair calculation migration method based on fog calculation
CN113656170A (en) * 2021-07-27 2021-11-16 华南理工大学 Intelligent equipment fault diagnosis method and system based on fog calculation
CN113934545A (en) * 2021-12-17 2022-01-14 飞诺门阵(北京)科技有限公司 Video data scheduling method, system, electronic equipment and readable medium
CN114584627B (en) * 2022-05-09 2022-09-06 广州天越通信技术发展有限公司 Middle station dispatching system and method with network monitoring function

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107172166A (en) * 2017-05-27 2017-09-15 电子科技大学 The cloud and mist computing system serviced towards industrial intelligentization
CN108156267A (en) * 2018-03-22 2018-06-12 山东大学 Improve the method and system of website visiting time delay in a kind of mist computing architecture using caching
CN109831522A (en) * 2019-03-11 2019-05-31 西南交通大学 A kind of vehicle connection cloud and mist system dynamic resource Optimal Management System and method based on SMDP
CN110234127A (en) * 2019-06-11 2019-09-13 重庆邮电大学 A kind of mist network task discharging method based on SDN

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11140096B2 (en) * 2018-02-07 2021-10-05 Cisco Technology, Inc. Optimizing fog orchestration through edge compute resource reservation


Also Published As

Publication number Publication date
CN111124662A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN111124662B (en) Fog calculation load balancing method and system
CN107911478B (en) Multi-user calculation unloading method and device based on chemical reaction optimization algorithm
CN102611735B (en) A kind of load-balancing method of application service and system
CN107734558A (en) A kind of control of mobile edge calculations and resource regulating method based on multiserver
CN109788046B (en) Multi-strategy edge computing resource scheduling method based on improved bee colony algorithm
CN100440891C (en) Method for balancing gridding load
CN111930436A (en) Random task queuing and unloading optimization method based on edge calculation
CN110602156A (en) Load balancing scheduling method and device
WO2022057811A1 (en) Edge server-oriented network burst load evacuation method
CN108897606B (en) Self-adaptive scheduling method and system for virtual network resources of multi-tenant container cloud platform
CN111163143B (en) Low-delay task unloading method for mobile edge calculation
CN111475274A (en) Cloud collaborative multi-task scheduling method and device
CN108900626A (en) Date storage method, apparatus and system under a kind of cloud environment
CN111813330A (en) System and method for dispatching input-output
CN109947574A (en) A kind of vehicle big data calculating discharging method based on mist network
CN113011678A (en) Virtual operation platform operation control method based on edge calculation
CN113254095B (en) Task unloading, scheduling and load balancing system and method for cloud edge combined platform
CN113918240A (en) Task unloading method and device
CN112162789A (en) Edge calculation random unloading decision method and system based on software definition
CN115629865B (en) Deep learning inference task scheduling method based on edge calculation
WO2020134133A1 (en) Resource allocation method, substation, and computer-readable storage medium
WO2021120633A1 (en) Load balancing method and related device
CN104052677A (en) Soft load balancing method and apparatus of single data source
CN105282045B (en) A kind of distributed computing and storage method based on consistency hash algorithm
CN113419867A (en) Energy-saving service supply method in edge-oriented cloud collaborative computing environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant