CN111580978A - Edge computing server layout method and task allocation method - Google Patents


Info

Publication number: CN111580978A (application CN202010399084.6A)
Authority: CN (China)
Prior art keywords: node, service, edge, edge computing, nodes
Legal status: Granted; currently Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN111580978B (granted publication)
Inventors: 刘晶, 徐雷, 毋涛, 赵鹏, 卢莹
Current and original assignee: China United Network Communications Group Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by China United Network Communications Group Co Ltd
Priority to CN202010399084.6A

Classifications

    • G06F 9/505 — Allocation of resources to service a request, the resource being a machine (e.g. CPUs, servers, terminals), considering the load
    • G06F 9/5072 — Grid computing
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
    • Y02D 30/70 — Reducing energy consumption in wireless communication networks


Abstract

The present disclosure provides an edge computing server layout method and a task allocation method. The layout method includes: formulating a layout strategy for edge computing servers based on the service requirements of the users corresponding to all nodes in the access network and on the network state information of all nodes in the access network; and deploying edge computing servers at the corresponding nodes in the access network based on the layout strategy. Embodiments of the disclosure can at least alleviate two problems caused by unreasonable deployment in the related art: overload of some edge computing servers, and unnecessary energy consumption from idle resources on others.

Description

Edge computing server layout method and task allocation method
Technical Field
The present disclosure relates to the field of communications technologies, and in particular to a layout method for edge computing servers and a task allocation method for edge computing servers.
Background
With the continuous development of communication technology, edge computing has attracted extensive attention in academia and industry, yet there has been little research on deployment schemes for edge computing servers. It is currently widely assumed that edge computing servers are already placed at fixed positions, usually near each base station by default. However, users' demands for computing, bandwidth, storage and other resources are time-varying and unevenly distributed, which can overload some edge computing servers while leaving resources idle on others, causing unnecessary energy consumption.
Disclosure of Invention
The present disclosure provides an edge computing server layout method and a task allocation method, which are used to at least partially solve the problems of current edge computing server overload and resource waste.
According to an aspect of the embodiments of the present disclosure, there is provided an edge computing server layout method, including:
formulating a layout strategy for the edge computing servers based on the service requirements of the users corresponding to all nodes in the access network and on the network state information of all nodes in the access network; and
deploying edge computing servers at the corresponding nodes in the access network based on the layout strategy;
wherein the layout strategy comprises the placement position and/or the placement number of the edge computing server.
According to another aspect of the embodiments of the present disclosure, there is provided a task allocation method for edge computing servers, applied to edge computing servers deployed by the above layout method, including:
calculating the task load of each edge computing server;
identifying, for each edge computing server, the nodes without a deployed edge computing server that are the same hop count away from the node where that server is deployed; and
selecting, based on the task load of each edge computing server, a corresponding edge computing server to which the tasks of those nodes are allocated.
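The steps above can be sketched as follows. The graph encoding, the BFS hop-count computation, and the tie-break by least load are illustrative assumptions; the claims do not fix a concrete data structure or selection rule.

```python
from collections import deque

def hop_counts(adj, src):
    """BFS shortest hop distance from src to every reachable node of an undirected graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def assign_tasks(adj, server_nodes, load):
    """For each node without a server, find the deployed servers at the minimum
    (shared) hop count and assign the node's tasks to the least-loaded one."""
    dist = {s: hop_counts(adj, s) for s in server_nodes}
    assignment = {}
    for node in adj:
        if node in server_nodes:
            continue
        reachable = [(d[node], s) for s, d in dist.items() if node in d]
        if not reachable:
            continue  # node cannot reach any edge computing server
        min_hops = min(h for h, _ in reachable)
        candidates = [s for h, s in reachable if h == min_hops]
        assignment[node] = min(candidates, key=lambda s: load[s])
    return assignment
```

For example, a node that is two hops from two different servers (same hop count) is handed to whichever of the two currently carries the smaller task load.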
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
According to the edge computing server layout method and task allocation method provided by the embodiments of the present disclosure, a reasonable layout strategy for the edge computing servers is formulated from the service requirements of the users corresponding to all nodes in the access network and from the network state information of all nodes in the access network, and the edge computing servers are then deployed accordingly. This at least mitigates problems in the related art, such as idle or wasted resources caused by unreasonable deployment of edge computing servers.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the disclosure. The objectives and other advantages of the disclosure may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosed embodiments and are incorporated in and constitute a part of this specification; they illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure, not to limit it.
Fig. 1 is a schematic flowchart of an edge computing server layout method according to an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of an edge computing server layout system according to an embodiment of the present disclosure;
fig. 3 is another schematic flow chart of a layout method of an edge computing server according to an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a method for laying out edge computing servers according to another embodiment of the present disclosure;
fig. 5 is a schematic flowchart of a task allocation method of an edge computing server according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, specific embodiments of the present disclosure are described below in detail with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order; also, the embodiments and features of the embodiments in the present disclosure may be arbitrarily combined with each other without conflict.
Currently, research on edge computing task offloading mostly assumes that the edge computing servers are already placed at certain positions, without considering that the position of an edge computing server in the network strongly affects service performance. As the first step of deploying an edge computing architecture, the layout of edge computing servers is therefore fundamental and critical: where in the access network to place edge computing servers, and how many to place, are questions of great significance. If the deployment is not done scientifically, the time-varying and uneven user demands for computing, bandwidth, storage and other resources will overload some edge computing servers while leaving others idle, resulting in unnecessary energy consumption.
In view of the above problems, the embodiments of the present disclosure provide a layout method for edge computing servers, which formulates a layout policy for the edge computing servers according to service requirements of users corresponding to all nodes in an access network and network state information of all nodes in the access network, and then performs layout on the edge computing servers, so as to solve resource idleness or resource waste caused by unreasonable deployment of the edge computing servers in the related art.
Specifically, the embodiments of the present disclosure first propose an implementation scheme for edge computing server layout based on edge computing and access network virtualization, then propose a layout model that jointly considers end-to-end delay and energy consumption (both of which are decomposed in detail), and finally propose a layout algorithm that combines edge-service awareness with network-state awareness. In this embodiment, edge computing servers are deployed in the access network, and the edge computing infrastructure and the access network are virtualized using SDN and NFV technologies, so that service flows can be redirected and forwarded inside the access network by an SDN controller without traversing the core network, and service-bearing virtual machines or containers can be dynamically created, scaled, and deleted on the edge computing servers by an NFV orchestrator. Service flows are carried over the high-bandwidth, low-delay optical fiber links of the access network. This enhances the offloading capacity of edge computing and reduces unnecessary energy cost; moreover, forwarding a user's requested task over the optical links of the access network incurs lower delay than forwarding it through the central cloud behind the core network. A user terminal can connect to an edge computing server directly through a one-hop wireless link, or reach its current access device through a wireless link and then reach an edge computing server through an optical fiber link. The access scenarios in which a user terminal's request reaches an edge computing server thus fall into two cases: the edge computing server is placed near a device within the terminal's accessible range, or near a device outside that range.
In this embodiment, SDN and NFV virtualization technologies are first used to obtain a global view of, and centrally control, the entire network; each user's requests and service flows are scheduled, and network, computing, and storage resources are dynamically allocated and planned, taking into account the energy consumption and end-to-end service delay of the whole network and of the servers at the edge computing server positions. To save energy and reduce delay, this embodiment selectively places edge computing servers at some nodes of the converged access network (devices such as base stations and APs) that mobile users access, rather than near all base stations. Assuming that the total computing resources of all edge computing servers placed to process mobile user requests are fixed, the computing resources of a single edge computing server vary with the number of sites at which servers are placed. The problems solved by this embodiment are: the layout scheme determines the positions and the number of the edge computing servers, and the task allocation scheme determines how the tasks of each base station and its users are distributed to specific edge computing servers.
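As a toy illustration of the fixed-total assumption above (the total capacity figure is hypothetical, not from the patent): splitting an unchanged resource pool over q placement sites leaves each server with 1/q of the pool.

```python
TOTAL_CYCLES_PER_SEC = 120e9  # hypothetical fixed total CPU capacity across all edge servers

def per_server_capacity(q: int) -> float:
    """Capacity (cycles/sec) of a single edge server when the fixed total is split over q sites."""
    return TOTAL_CYCLES_PER_SEC / q

# More placement sites -> less computing capacity per server:
# per_server_capacity(2) is twice per_server_capacity(4).
```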
In order to more clearly understand the following technical solutions, the present embodiment first explains the network model and the service model used in the present embodiment:
1) Network model

The topology of the converged access network is represented by an undirected graph $G = (N, K)$, where $N = \{n_1, n_2, \ldots, n_{|N|}\}$ denotes the node devices in the access network, indexed by $i = 1, \ldots, |N|$, comprising the wireless access devices $N^w$ (APs, base stations, etc.) and the optical access devices $N^o$ (ONU/ONT, etc.); $K = \{k_1, \ldots, k_{|K|}\}$ denotes the links in the access network, indexed by $k = 1, \ldots, |K|$, comprising the wireless links $K^w$ and the optical fiber links $K^o$. The bandwidth of link $k$ is denoted $b_k$ and its real-time available bandwidth $b_k^a$. Let $h_{ij}$, $i, j \in N$, denote the shortest distance, expressed in hops, from node $n_i$ to node $n_j$ in the access network.

Let $M = \{m_1, \ldots, m_{|M|}\}$ denote the set of edge computing servers deployed near node devices, indexed by $q = 1, \ldots, |M|$; edge computing server $m_q$ has storage capacity $s_q$ and CPU computation speed $f_q$ (cycles/sec). An edge computing server can be deployed near only one network node, and a network node can host at most one edge computing server.
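The notation above can be mirrored in a small data-structure sketch (Python is used for illustration; all type and field names are our own, not from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    endpoints: tuple      # (i, j): indices of the two node devices the link joins
    wireless: bool        # True for a wireless link in K^w, False for a fiber link in K^o
    bandwidth: float      # b_k, total bandwidth in bit/s
    available: float      # b_k^a, real-time available bandwidth

@dataclass
class EdgeServer:
    node: int             # index of the node device the server is deployed near
    storage: float        # s_q, storage capacity
    cpu_speed: float      # f_q, CPU computation speed in cycles/sec

@dataclass
class AccessNetwork:
    wireless_nodes: set   # N^w: APs, base stations, ...
    optical_nodes: set    # N^o: ONU/ONT, ...
    links: list = field(default_factory=list)
    servers: dict = field(default_factory=dict)  # node index -> EdgeServer

    def add_server(self, srv: EdgeServer) -> None:
        # one edge server per node, and one node per edge server
        if srv.node in self.servers:
            raise ValueError("node already hosts an edge computing server")
        self.servers[srv.node] = srv
```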
2) Business model

Emerging edge services are diverse, and different service types differ in transmitted data volume, delay, and computing-resource requirements. Let $u_{i,r}^v$ denote the $r$-th service request of service type $v$ within the service range of node $n_i$; there are $|V|$ service types, and $r = 1, \ldots, |R_i|$, where $|R_i|$ is the number of user requests within the service range of node $n_i$. Let $c_{i,r}^v$, $d_{i,r}^v$, and $t_{i,r}^v$ respectively denote the computing capacity (expressed in CPU cycles), the amount of data transferred (expressed in bits per second), and the maximum tolerated delay (expressed in milliseconds) required to serve the $r$-th request of service type $v$ at node $n_i$. Service types can be classified by computation amount, uploaded data volume, delay requirement, energy-consumption requirement, and so on. When the computing resources of one edge computing server cannot meet the computing requirements of all the traffic of the base stations pre-assigned to it, the controller may reallocate that traffic to an edge computing server with sufficient resources.
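A request record carrying the triple (c, d, t) defined above, and the grouping of requests into the per-node sets R_i, might look like this (illustrative names only):

```python
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    node: int            # n_i within whose service range the request arose
    svc_type: int        # v, one of |V| service types
    cycles: float        # c^v_{i,r}: required computing capacity, in CPU cycles
    bits_per_sec: float  # d^v_{i,r}: amount of data transferred
    max_delay_ms: float  # t^v_{i,r}: maximum tolerated delay

def requests_by_node(requests):
    """Group a flat list of requests into the per-node sets R_i."""
    grouped = {}
    for req in requests:
        grouped.setdefault(req.node, []).append(req)
    return grouped
```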
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a layout method of an edge computing server according to an embodiment of the present disclosure, where the method includes step S101 and step S102.
In step S101, a layout policy of the edge computing server is formulated based on the service requirements of the user corresponding to all nodes in the access network and the network state information of all nodes in the access network.
It should be noted that the nodes in the access network include both access and forwarding devices, such as base stations, access points (APs), ONUs, and SDN switches, and edge computing server nodes. The user service requirements acquired here are those of access and forwarding device nodes such as base stations and APs, not of edge computing server nodes, since users do not send requests directly to the edge computing servers; the acquired network state information may cover all network state information of the edge computing servers and the access network.
In step S102, edge compute servers are deployed at respective nodes in the access network based on the placement policy.
In this embodiment, before step S101, the method further includes the following steps:
acquiring the service requirements of the users corresponding to each node in the access network; and
acquiring network state information of each node in an access network;
the service requirement comprises a user request and flow characteristics, and the network state information comprises network topology and resource information.
In this embodiment, an edge computing server layout scheme is implemented in a control and management layer based on a global view and centralized management, and accordingly, this embodiment further provides an edge computing server layout system, please refer to fig. 2, where fig. 2 is a schematic structural diagram of an edge computing server layout system, where the system includes a service request and traffic characteristic collection and analysis component 21, a layout algorithm component 22, a network topology and resource discovery component 23, and a deployment component 24.
In particular, as shown in fig. 2 and 3, the following steps Sa-Sf are implemented by the network topology and resource discovery component 23, the service request and traffic characteristics collection and analysis component 21 and the topology algorithm component 22:
step Sa: the service request and traffic characteristic collection and analysis component 21 obtains service requirements of users corresponding to each node in the access network, wherein the service requirements include user requests and traffic characteristics.
It will be appreciated that, prior to this, the users corresponding to each node send their service requirements to the service request and traffic characteristic collection and analysis component 21.
Specifically, the service requests sent by all user terminals corresponding to each site location (i.e., node location, such as a base station or AP) are first sent to the service request and traffic characteristic collection and analysis component. A user request includes the computation amount, priority, and QoS delay requirement of a computation offloading task, or the traffic size and bandwidth requirement of a content distribution request, and so on. The service request and traffic characteristic collection and analysis component 21 classifies the service requests across the access network over a period of time, aggregates the demand for computing or storage services provided via the edge computing servers, and analyzes and summarizes the traffic characteristic distribution from temporal and spatial perspectives.
And Sb: the service request and traffic characteristics collection and analysis component 21 sends the user request and traffic characteristics to the placement algorithm component 22.
Step Sc: the network topology and resource discovery component 23 obtains network status information of each node within the access network.
Further, before step Sc, the method further includes step Sd: the network topology and resource discovery component 23 sends a status request message to each node in the access network at a fixed interval; each node in the access network replies, and the network topology and resource discovery component 23 identifies and manages the network topology and resources and updates its database periodically.
Specifically, the network topology and resource discovery component 23 first periodically sends a request message for network topology and resource information to each device node in the access network; user terminals access the device nodes through the wireless access network, and the device nodes communicate with each other through the optical access network. The device nodes in the access network include access and forwarding devices such as base stations, access points (APs), ONUs, and SDN switches, as well as edge computing server nodes. This embodiment uses NFV to generalize proprietary hardware devices, implementing the required functions by deploying software modules on generic servers, and uses SDN to maintain a global view of all devices and links in the network, further virtualizing, slicing, and isolating network, computing, and storage resources for resource sharing and dynamic allocation. Each device node then replies to the network topology and resource discovery component with a message containing its topology, available port bandwidth, and available computing and storage resources, where the network topology information is obtained through the Link Layer Discovery Protocol (LLDP). The network topology and resource discovery component identifies and manages the topology and resources according to the content of the received messages; if the topology or resources have changed, the network state information database is updated promptly.
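The periodic poll-and-update cycle described above can be sketched as follows; the message format is abstracted into a plain dict, and the LLDP exchange is hidden behind the hypothetical `poll_node` callable:

```python
def refresh_state(db: dict, poll_node, node_ids) -> list:
    """One discovery round: poll every device node and update the network state
    database only for nodes whose reported topology/resources have changed."""
    changed = []
    for i in node_ids:
        reply = poll_node(i)  # e.g. {"port_bw": ..., "cpu_free": ..., "neighbors": ...}
        if db.get(i) != reply:
            db[i] = reply      # update the network state information database
            changed.append(i)
    return changed
```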
Step Se: the network topology and resource discovery component 23 sends the network topology and resource information to the placement algorithm component 22.
Based on the information sent by the network topology and resource discovery component 23 and the service request and traffic characteristic collection and analysis component 21, the layout algorithm component 22 executes step S101 to formulate the layout policy of the edge computing servers, then executes step Sf to issue the layout policy to the physical network, and the physical network executes step S102 to deploy the corresponding edge computing servers. In practice, the layout algorithm component takes the access network topology and resources, service requirements, traffic distribution, and other information sent by the two components as input parameters, and, according to the different layout strategies, outputs the deployment position information of the edge computing servers that meets the requirements.
It is understood that the physical network in fig. 3 corresponds to the access network and the edge computing network in fig. 2, and the deployment component 24 is included in the physical network, and the deployment component 24 receives the layout policy and performs layout of the edge computing server according to the layout policy. In this embodiment, a corresponding edge computing server is placed in an access network according to a deployment scheme obtained by a layout algorithm, and then a control and management layer allocates network and computing resources for a service of a user.
Specifically, the layout algorithm component 22 takes the information of the access network topology and resources, the service requirements, the traffic distribution, and the like sent by the two components 21 and 23 as input parameters, designs different layout strategies to output the required position and/or quantity information of the edge computing server deployment, and finally deploys the obtained layout scheme in the corresponding access network.
In this embodiment, the layout policy in step S101 determines the placement position of the edge computing server based on the service requirements of the user corresponding to all nodes in the access network and the network state information of all nodes in the access network, and in other embodiments, the layout policy in step S101 may further include: the number of the edge computing servers to be placed is determined based on the service requirements of the users corresponding to all the nodes in the access network and the network state information of all the nodes in the access network, which is detailed in another embodiment of the present disclosure and is not described herein again.
Further, because edge services are of many types, with differing bandwidth, storage, and computing-resource requirements, and because different services emphasize different requirements, this embodiment provides a layout algorithm that combines edge-service awareness with network-state awareness to formulate the layout policy. The algorithm selects the q most suitable site locations for placing edge computing servers according to users' preference degrees for different edge services and the network state evaluation results. Specifically, after acquiring the service requirements of the users corresponding to each node in the access network (i.e., step Sa), the method further includes the following steps:
analyzing the preference degree of each node to the edge service based on the service requirement of each node corresponding to the user in the access network;
calculating the influence of the preference degree of each node on the edge service in the converged access network based on the preference degree of each node on the edge service;
the determining the placement position of the edge computing server based on the service requirements of the user corresponding to all nodes in the access network and the network state information of all nodes in the access network comprises the following steps:
evaluating the suitability of placing an edge computing server at each node, based on the influence of each node's preference degree for edge services within the converged access network and on the network state information of each node; and
taking the nodes evaluated as most suitable for hosting an edge computing server as the placement positions of the edge computing servers.
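Assuming the per-node influence values and a per-node network-state score are already available, the final selection of the q placement sites reduces to a top-q ranking. Combining the two inputs by multiplication is an illustrative choice; the patent only states that both are evaluated.

```python
def choose_sites(influence, state_score, q):
    """Rank nodes by a suitability score combining service-preference influence
    and network state, and return the q most suitable placement sites."""
    suitability = {i: influence[i] * state_score[i] for i in influence}
    return sorted(suitability, key=suitability.get, reverse=True)[:q]
```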
It can be understood that, in practice, because one edge computing server needs to be shared among several sites (i.e., node positions), service data must be transmitted across the converged access network. To evaluate the distribution of the task volume as a whole, this embodiment computes the influence, within the converged access network, of each site's task-volume preference degree, and then evaluates how suitable each node is for hosting an edge computing server from that influence together with the node's network state information. The preset number of deployment nodes is the number of edge computing servers chosen by those skilled in the art according to the actual edge service conditions, so the M appearing in the following formulas is a known quantity.
Further, each user has different preferences for different types of edge services, and the preferences of the users in the coverage area of each base station (or AP) also differ. In this embodiment, analyzing the preference degree of each node for edge services based on the service requirements of the users corresponding to each node in the access network includes the following steps:
analyzing the service type preference probability of each node, the computing capacity required by the service and the bandwidth demand based on the service demand of the user corresponding to each node in the access network;
and calculating the preference degree of each node for the edge service based on the service category preference probability of each node, the required computing capacity of the service and the bandwidth demand.
Further, in the present embodiment, P_R(i) denotes the (task-amount) preference degree of the users within range of node device n_i for the different edge services. P_R(i) is the weighted sum, over the service classes of each site, of the class preference probability p_i^R(v) with the computing and bandwidth requirements of the service requests. Specifically, the preference degree of each node for the edge services is calculated from the service-class preference probability of each node, the computing capacity required by the service, and the bandwidth demand according to the following formula:

P_R(i) = Σ_{v=1}^{|V|} p_i^R(v) · Σ_{r=1}^{|R_i|} (b_{r_i}^v + c_{r_i}^v)

In the formula, P_R(i) represents the preference degree of node n_i for the edge services, |V| represents the total number of edge service types, |R_i| represents the service request volume of node n_i, p_i^R(v) represents the preference probability that a service request generated by a user within the access range of node n_i belongs to service type v, b_{r_i}^v represents the bandwidth requirement of the r-th service request of node n_i when its service type is v, and c_{r_i}^v represents the computing capacity required by the r-th service request of node n_i when its service type is v;

p(v) = Σ_{r=1}^{|R|} p(v|r) · p(r)

p_i^R(v) = Σ_{r=1}^{|R|} p(v|r) · p(i,r)

p(i,r) = p(r) · p_i(r)
wherein p(v) represents the probability that the edge service requested by a user belongs to service type v (i.e., a weighted sum over the service types requested by each user), p(v|r) represents the probability that the r-th service request belongs to service type v, |R| represents the total number of service requests, p(r) represents the probability of generating service request r, p(i,r) is the probability that node n_i generates service request r, and p_i(r) is the probability that the r-th service request is generated at node n_i.
It should be noted that p(r), the probability of generating service request r, may be uniform, i.e.

p(r) = 1/|R|

a constant; or it may follow a Zipf distribution,

p(r) = C / r^σ

where σ is the distribution parameter (the larger σ is, the more concentrated the distribution) and C is a constant. Here p(i,r) is the probability that service request r is generated within the range of node n_i; in the actual calculation, if the user request is generated within n_i, let p_i(r) = 1, otherwise p_i(r) = 0.
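The preference-degree computation above can be sketched in code. This is a minimal illustration, assuming a hypothetical data layout in which each request at a node is recorded as a tuple (service type v, bandwidth demand b, required computing capacity c); the patent states the formula only symbolically.

```python
from collections import Counter

def preference_degree(requests):
    """P_R(i): service-class preference probability weighted by the
    compute + bandwidth demand of the node's requests."""
    if not requests:
        return 0.0
    total = len(requests)
    # p_i^R(v): share of this node's requests that belong to service type v
    type_prob = {v: n / total for v, n in Counter(r[0] for r in requests).items()}
    # weighted sum over all requests of (bandwidth + compute), weighted by
    # the probability of the request's service class
    return sum(type_prob[v] * (b + c) for v, b, c in requests)

reqs = [("video", 8.0, 2.0), ("video", 6.0, 2.0), ("iot", 0.5, 0.1)]
score = preference_degree(reqs)  # sites dominated by heavy services score higher
```

A site whose users mostly request bandwidth- and compute-heavy services thus obtains a larger P_R(i) than a site dominated by light IoT traffic.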
In this embodiment, the influence of each node's preference degree for edge services in the converged access network is calculated from the preference degree of each node for edge services according to the following formula:

C_P(n_i) = Σ_{n_j ≠ n_i ≠ n_k ∈ N̂} P_R(j) · σ_jk(n_i) / σ_jk

In the formula, C_P(n_i) represents the influence of node n_i's preference degree for edge services in the converged access network, N̂ represents all nodes in the access network, P_R(j) represents the preference degree of node n_j for edge services, σ_jk represents the number of shortest paths from node n_j to node n_k in the converged access network, and σ_jk(n_i) represents the number of shortest paths from n_j to n_k that pass through n_i.
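The influence C_P(n_i) is a preference-weighted variant of betweenness centrality. The sketch below computes it by brute-force enumeration of shortest paths; this is only practical for tiny graphs (Brandes' algorithm is the scalable approach), and the adjacency-dictionary representation is an illustrative assumption.

```python
from itertools import permutations
from collections import deque

def all_shortest_paths(adj, s, t):
    """Enumerate all shortest s->t paths in an unweighted graph via BFS."""
    paths, best = [], None
    q = deque([[s]])
    while q:
        path = q.popleft()
        if best is not None and len(path) > best:
            break  # queue is in nondecreasing length order
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for nb in adj[node]:
            if nb not in path:
                q.append(path + [nb])
    return paths

def preference_influence(adj, pref, i):
    """C_P(n_i): for each ordered pair (j, k) with j, k != i, the fraction of
    shortest j->k paths through i, weighted by the preference degree P_R(j)."""
    total = 0.0
    for j, k in permutations(adj, 2):
        if i in (j, k):
            continue
        paths = all_shortest_paths(adj, j, k)
        if paths:
            through = sum(1 for p in paths if i in p)
            total += pref[j] * through / len(paths)
    return total

# Path graph a-b-c: every shortest path between a and c passes through b.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
pref = {"a": 1.0, "b": 1.0, "c": 2.0}
inf_b = preference_influence(adj, pref, "b")
```

On this path graph, b mediates both directed pairs (a, c) and (c, a), so its influence is pref["a"] + pref["c"].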
Further, the present embodiment evaluates the suitability of a node based on the network topology features of each node and on the preference characteristics of the node's users for edge services. The network topology features include the degree centrality of the network node (i.e., a node: a network device such as a base station or AP), its betweenness (intermediary) centrality, and its distance from the already deployed edge computing servers. The influence of the node's preference degree for edge services in the converged access network, the degree centrality, the betweenness centrality, and the distance from deployed edge computing servers are thus the four features used to evaluate a node's suitability, which is assessed comprehensively by calculating the total weight of these four features. Specifically, the method further includes the following steps:
obtaining the betweenness centrality and degree centrality of each node, and the distance influence between the node and the deployed edge computing servers, based on the network state information of each node;

evaluating the suitability of placing an edge computing server at each node based on the influence of each node's preference degree for edge services in the converged access network and the network state information of each node, which specifically includes the following steps:

respectively calculating, for each node, the total weight of the four features: the influence of the node's preference degree for edge services in the converged access network, the betweenness centrality, the degree centrality, and the distance influence between the node and the deployed edge computing servers; and,

evaluating the suitability of placing an edge computing server at each node based on the total feature weight of each node.

It should be noted that the total feature weight of each node is the weighted sum of these four feature values, i.e., of the influence of the node's preference degree for edge services in the converged access network, the betweenness centrality, the degree centrality, and the distance from the deployed edge computing servers, each multiplied by its feature weight.
In this embodiment, the betweenness centrality and degree centrality of each node, and the distance influence between the node and the deployed edge computing servers, are obtained according to the following formulas:

C_B(n_i) = Σ_{n_j ≠ n_i ≠ n_k ∈ N̂} σ_jk(n_i) / σ_jk

d(n_i) = degree(n_i) / (N − 1)

h_iq = ( Σ_{q=1}^{M} dist(n_i, m_q) )^{−θ}

In the formulas, C_B(n_i) represents the betweenness centrality of node n_i; d(n_i) represents the degree centrality of node n_i, and degree(n_i) represents the degree of node n_i; h_iq represents the distance influence between node n_i and the deployed edge computing servers, M represents the total number of deployed edge computing servers, q indexes the deployed edge computing servers, and dist(n_i, m_q) is the shortest distance from n_i to server m_q, so that the distance influence is the inverse power of the sum of shortest distances, where 0 < θ < 1.

It should be noted that the closer a node is to all deployed edge computing servers, the larger its distance influence. When no edge computing server has yet been deployed in the network, the distance influence of all nodes is 1; once edge computing servers have been deployed, the distance influence is the inverse power (0 < θ < 1) of the sum of all shortest distances.
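The two simpler topology features can be sketched directly. This is a minimal illustration assuming an adjacency-dictionary graph and a precomputed list of shortest distances from the node to each deployed server:

```python
def degree_centrality(adj, i):
    """d(n_i) = degree(n_i) / (N - 1), with N the number of nodes."""
    return len(adj[i]) / (len(adj) - 1)

def distance_influence(shortest_dists, theta=0.5):
    """Distance influence of a node: (sum of shortest distances to the
    deployed servers)^(-theta), with 0 < theta < 1; defined as 1 when no
    server has been deployed yet."""
    if not shortest_dists:
        return 1.0
    return sum(shortest_dists) ** (-theta)
```

As the text notes, a node closer to the already deployed servers gets a larger distance influence, which later lowers or raises its placement score relative to nodes farther away.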
The total feature weight of each node, i.e., of the influence of the node's preference degree for edge services in the converged access network, the betweenness centrality, the degree centrality, and the distance influence between the node and the deployed edge computing servers, is obtained according to the following formulas:

W_i = Σ_{I} ω_I · f̂_I(n_i)    (9)

ω_I = (1 − H_I) / (|I| − Σ_I H_I)    (10)

f̂_I(n_i) = (f_I(n_i) − min_I) / (max_I − min_I)    (11)

H_I = −(1 / ln N) Σ_{i=1}^{N} p_I(n_i) ln p_I(n_i)    (12)

p_I(n_i) = f̂_I(n_i) / Σ_{j=1}^{N} f̂_I(n_j)    (13)

In the formulas, W_i represents the total feature weight of node n_i, ω_I represents the weight of each feature, f_I(n_i) represents each feature of node n_i, where I ∈ {C_B, d, C_P, h}, i ∈ N, converted from the feature set {C_B(n_i), d(n_i), C_P(n_i), h_iq}; f̂_I(n_i) represents each feature after min-max extreme-value normalization, min_I and max_I respectively represent the minimum and maximum of the feature, H_I represents the entropy of each feature, and p_I(n_i) is the probability of each feature at node n_i.
It is to be understood that the present embodiment represents the above features as the set F = {C_B(n_i), d(n_i), C_P(n_i), h_iq} and normalizes each feature with min-max extreme-value normalization. To measure the amount of information carried by each feature, the entropy weight method is adopted: first the probability p_I(n_i) of each of the four features at each node is calculated, then the entropy H_I of each feature, from which the weight ω_I of each feature is derived; finally, for each node n_i, the weighted sum W_i of the feature values over all features is calculated, i.e., the total feature weight of node n_i.
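The normalize-then-entropy-weight pipeline just described can be sketched end to end. This is an illustrative implementation of the standard entropy weight method, not the patent's exact code; feature rows are assumed to be [C_B, d, C_P, h] values per node.

```python
import math

def entropy_weight_scores(features):
    """features[i][j]: j-th feature of node i. Returns the total weight W_i of
    each node: min-max normalize each feature column, derive entropy weights,
    and take the weighted sum per node."""
    n = len(features)
    cols = list(zip(*features))
    # min-max extreme-value normalization per feature column
    norm_cols = []
    for col in cols:
        lo, hi = min(col), max(col)
        norm_cols.append([(x - lo) / (hi - lo) if hi > lo else 0.0 for x in col])
    # entropy of each feature from the per-node probabilities
    H = []
    for col in norm_cols:
        s = sum(col)
        if s == 0:
            H.append(1.0)  # constant feature: carries no information
            continue
        probs = [x / s for x in col]
        H.append(-sum(p * math.log(p) for p in probs if p > 0) / math.log(n))
    # entropy weights: lower entropy -> more information -> larger weight
    denom = len(H) - sum(H)
    w = [(1 - h) / denom if denom else 1 / len(H) for h in H]
    # weighted sum of the normalized features = total weight of each node
    return [sum(wi * x for wi, x in zip(w, row)) for row in zip(*norm_cols)]
```

A feature that is identical across all nodes receives weight zero, so only features that actually discriminate between candidate sites affect the placement ranking.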
Referring to fig. 4, fig. 4 is a flow chart of an edge computing server layout method according to another embodiment of the present disclosure. In order to reasonably plan the deployment number of edge computing servers and further optimize the layout scheme, this embodiment, unlike the previous one, further includes a step S401, and step S101 is further refined into a step S402:
in step S401, the minimum value of the time delay and the energy consumption of the edge service processing is set.
The method for making the layout policy of the edge computing server based on the service requirements of the user corresponding to all nodes in the access network and the network state information of all nodes in the access network (i.e., step S101) includes step S402.
In step S402, the placement number of the edge computing servers is determined based on the service requirements of the users corresponding to all nodes in the access network and the network state information of all nodes in the access network.
In this embodiment, a model minimizing the weighted sum of end-to-end delay and energy consumption is provided, comprising a network model, a service model, a delay model, and an energy consumption model. The network model and service model were set forth in the preceding description. In this embodiment, the delay model is decomposed into access delay, network transmission delay, and processing delay, and the energy consumption model is decomposed into network energy consumption and processing energy consumption; a trade-off problem between delay and energy consumption is then defined, in which the weights of the two may be adjusted according to the service type and user requirements. Specifically, step S402 includes the following steps:
s402a, analyzing the average service response time delay and the average service transmission energy consumption of all nodes based on the service requirements of the users corresponding to all nodes in the access network and the network state information of all nodes in the access network;
s402b, determining the deployment number of the edge computing servers according to the minimum value of the delay and the energy consumption of the edge service processing, the average service response delay and the average service transmission energy consumption of all the nodes.
The minimum value of the delay and energy consumption of edge service processing is set according to the following formula:

min R_Eq = αT_avg + βE_avg    (14)

In the formula, min R_Eq represents the set minimum of the delay and energy consumption of edge service processing, α and β are constant weighting coefficients with α + β = 1, T_avg represents the average service response delay of all nodes, and E_avg represents the average service transmission energy consumption of all nodes.

It should be noted that equation (14) is the weighted sum of the min-max normalized delay and energy consumption, and is called the reward function. Since α + β = 1, the weights of delay and energy consumption can be adjusted according to the user requirements and service type: for example, α is increased for delay-sensitive services, and β is increased for IoT services with high energy consumption.
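The reward function and the selection of a deployment count that minimizes it can be sketched briefly. The candidate triples below are illustrative placeholders, not values from the patent:

```python
def reward(t_avg, e_avg, alpha=0.5):
    """alpha*T_avg + beta*E_avg with beta = 1 - alpha; inputs are assumed
    already min-max normalized, as in the reward function above."""
    return alpha * t_avg + (1 - alpha) * e_avg

def best_deployment_count(candidates, alpha=0.5):
    """candidates: (Q, t_avg, e_avg) triples; pick the Q with minimal reward."""
    return min(candidates, key=lambda c: reward(c[1], c[2], alpha))[0]

cands = [(2, 0.9, 0.2), (4, 0.5, 0.5), (6, 0.3, 0.8)]
q_for_delay_sensitive = best_deployment_count(cands, alpha=0.9)
```

Raising α pushes the choice toward more servers (lower delay); raising β pushes it toward fewer servers (lower energy).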
In this embodiment, a delay model is established in which the response delay of a user-requested service comprises the access delay T^a_ir from the user to the access node, the network transmission delay T^t_irq to the edge computing server m_q, and the processing delay T^p_irq of the edge computing server. The average response delay of user service requests is the average of the sum of these three terms; the average service response delay T_avg of all nodes (i.e., the delay model) is analyzed according to the following formulas:

T_avg = (1/|R|) Σ_{n_i ∈ N̂} Σ_{r=1}^{|R_i|} (T^a_ir + T^t_irq + T^p_irq)    (15)

T^a_ir = b_{r_i}^v / r_ik,  where  r_ik = b_{l_wk} · log2(1 + p_wk · g_{i,r} / σ²)    (16)

T^t_irq = Σ_{m_q ∈ M̂} Σ_{l_k ∈ L̂} y_iq^r · x_jq · b_{r_i}^v / bAl_k    (17)

T^p_irq = c_{r_i}^v / f_q^r    (18)
In the formulas, T_avg represents the average response delay of all nodes, N represents the total number of nodes, |R_i| represents the service request volume of node n_i, N̂ represents all nodes in the access network, and |R| = Σ_i |R_i| represents the total number of service requests. T^a_ir represents the access delay of a service request of node n_i; b_{r_i}^v represents the bandwidth demand of the r-th service request of node n_i when its service type is v; r_ik represents the upload data rate on the wireless link of a user within the service range of node n_i, obtained from the Shannon theorem, where b_{l_wk} is the channel bandwidth of wireless link l_wk, p_wk denotes the transmission power of wireless link l_wk, g_{i,r} represents the gain between the user terminal and the access node device, and σ² represents the channel background noise. T^t_irq represents the network transmission delay of a service request from node n_i to edge computing server m_q, M̂ represents all edge computing servers whose deployment is to be determined, L̂ represents all links in the access network, y_iq^r indicates that the r-th request of n_i is assigned to edge computing server m_q, x_jq indicates that edge computing server m_q is deployed at node n_j, and bAl_k represents the available bandwidth of link l_k. T^p_irq represents the processing delay of the service request at the edge computing server, f_q^r represents the computing resources that edge computing server m_q allocates to service request r, and summing over the assigned requests gives the processing delay at the edge computing server m_q to which the nodes are allocated.
It should be noted that, since an edge computing server is not deployed at every node n_i, service data flows need to be forwarded between node devices to reach an edge computing server that meets the requirements. The network transmission delay T^t_irq denotes the delay for a service request to travel from access node n_i over a particular set of links to the designated server m_q. In equation (17), the binary variable x_iq ∈ {0, 1} denotes whether edge computing server m_q is deployed at node n_i: if it is deployed at n_i, x_iq = 1; otherwise x_iq = 0. The binary variable y_iq^r denotes whether the r-th user request within the range of n_i is assigned to edge computing server m_q: if it is assigned to m_q, y_iq^r = 1; otherwise y_iq^r = 0. bAl_k represents the available bandwidth of link l_k, and x_jq indicates that m_q is deployed at the position of node n_j. The formula therefore expresses the transmission delay of tasks that must be forwarded through the access network between nodes. In equation (18), the processing delay T^p_irq of a service request is defined as the ratio of the CPU cycles required by the service to the CPU frequency allocated by the edge computing server, where the continuous variable λ_q^r represents the proportion of computing resources that edge computing server m_q allocates to each service request, and f_q^r represents the computing resources that m_q allocates to service request r. If the available resources of m_q are insufficient, the service request is rejected, so there is no queuing delay.
The average service transmission energy consumption E_avg of all nodes (i.e., the energy consumption model) is analyzed according to the following formulas:

E_avg = (1/|R|) Σ_{n_i ∈ N̂} Σ_{r=1}^{|R_i|} (E_ir^c + E_ir^t)    (19)

E_ir^c = η (f_q^r)² c_{r_i}^v    (20)

E_ir^t = p_k (T^a_ir + T^t_irq)    (21)

In the formulas, E_avg represents the average service transmission energy consumption of all nodes, Q represents the total number of edge computing server deployments to be determined, E_ir^c represents the processing energy consumption of the edge computing server for the r-th service request of node n_i, E_ir^t represents the network transmission energy consumption for the r-th service request of node n_i, η is a constant coefficient, c_{r_i}^v represents the computing capacity required by the r-th service request of node n_i when its service type is v, and p_ok and p_wk respectively represent the transmission power of optical link l_ok and wireless link l_wk.

It should be noted that Q is the unknown quantity, i.e., the deployment number of edge computing servers that this embodiment needs to compute: by calculating the weighted sum of delay and energy consumption for each candidate Q, the Q_min yielding the minimum weighted sum is selected. The energy consumption of one service comprises the processing energy consumption E_ir^c of the edge computing server and the network transmission energy consumption E_ir^t; the average energy consumption of the edge computing servers and the FiWi access network is defined in formula (19). Many factors affect server energy consumption, such as the states of the CPU, memory, hard disk, and network card; among them the CPU is the dominant consumer, and the energy each service request consumes at edge server m_q is closely related to the CPU cycles allocated to it, so the energy consumed by the edge computing server is defined as formula (20), where η is a constant coefficient of the edge computing server's CPU. In formula (21), the energy consumed to transmit the service to m_q is defined as the product of the link transmission power and the transmission time, where the transmission time includes the access delay and the network transmission delay; here p_k = {p_ok, p_wk} denotes the transmission power of optical or wireless link k and is typically constant.
In addition, the relevant parameters involved in equation (14) are subject to the following constraints:

s.t.  x_iq ∈ {0, 1}    (22)

Σ_{n_i ∈ N̂} x_iq = 1, ∀m_q ∈ M̂    (23)

Σ_{m_q ∈ M̂} x_iq ≤ 1, ∀n_i ∈ N̂    (24)

y_iq^r ∈ {0, 1}    (25)

Σ_{m_q ∈ M̂} y_iq^r = 1, ∀r    (26)

Σ_r f_q^r ≤ F_q    (27)

T_ir ≤ T_r^max, ∀r    (28)

Σ_r y_iq^r x_jq b_{r_i}^v ≤ bAl_k, ∀l_k ∈ L̂    (29)

Constraint (22) states that whether edge computing server m_q is deployed at node n_i is a 0-1 binary decision variable; (23) states that each edge computing server can only be deployed at one node device; (24) states that at most one edge computing server is deployed at a node device; (25) states that whether a service request within the range of node n_i is assigned to m_q is a binary decision variable; (26) states that each service request can be assigned to only one edge computing server; (27) states that the sum of the computing resources that edge computing server m_q allocates to all services must not exceed its total computing resources F_q; (28) states that the delay of each service must be less than its maximum tolerated delay; (29) guarantees that the traffic over each link l_k does not exceed its bandwidth.
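A candidate placement-and-assignment can be checked against these constraint types programmatically. The sketch below covers the placement, assignment, capacity, and delay constraints (it omits the per-link bandwidth check); the data layout is an assumption for illustration.

```python
def feasible(x, y, capacity, demand, delay, max_delay):
    """x[q]: node hosting server q (each server at exactly one node);
    y[r]: server index serving request r.
    capacity[q]/demand[r]: compute totals; delay/max_delay: per request."""
    # each server at one node, at most one server per node
    if len(set(x)) != len(x):
        return False
    # each request assigned to exactly one deployed server
    if any(q < 0 or q >= len(x) for q in y):
        return False
    # server capacity: resources allocated to all requests within the total
    load = [0.0] * len(x)
    for r, q in enumerate(y):
        load[q] += demand[r]
    if any(l > c for l, c in zip(load, capacity)):
        return False
    # each request's delay within its maximum tolerated delay
    return all(d <= m for d, m in zip(delay, max_delay))
```

In a search over candidate deployments, infeasible solutions are discarded before the reward of equation (14) is evaluated.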
For a better understanding of the present embodiment, the scheme is further illustrated by a computer algorithm (the pseudocode listing is given as an image in the original filing). As can be appreciated, the algorithm computes Q_min, the total deployment number Q of edge computing servers, and the set N̂_es of nodes at which the Q edge computing servers are deployed. Edge service awareness is based on the preference degrees of the users at different sites for different service types and resources; network state awareness is based on three factors: node degree centrality, betweenness centrality, and the influence of the distance between the edge computing servers and the site positions of the access network. Finally, based on this awareness information, the site position with the largest information-entropy weight is preferentially selected for deploying an edge computing server.
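The selection loop just described can be sketched generically. This is an illustrative greedy skeleton, not the patent's exact algorithm: `score_fn` stands in for the entropy-weighted total feature weight W_i, which must be re-evaluated after each placement because the distance-influence feature depends on the servers already deployed.

```python
def greedy_placement(sites, q_total, score_fn):
    """Deploy q_total servers one at a time, each at the currently
    highest-scoring unused site; score_fn(site, deployed) returns the
    site's total feature weight given the servers placed so far."""
    deployed = []
    for _ in range(q_total):
        candidates = [s for s in sites if s not in deployed]
        if not candidates:
            break
        deployed.append(max(candidates, key=lambda s: score_fn(s, deployed)))
    return deployed
```

Recomputing the scores inside the loop is what lets the distance-influence feature spread servers across the topology rather than clustering them.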
Based on the same technical concept, an embodiment of the present disclosure correspondingly provides a task allocation method for edge computing servers, applied to edge computing servers deployed by the above edge computing server layout method; as shown in fig. 5, the method includes steps S501 to S503.
Most research on edge computing server layout aims at reducing deployment cost, addressing where and how to place edge computing servers so as to minimize the cost of edge computing server providers. Some related technologies minimize the number of edge computing servers while guaranteeing QoS (such as access delay): they give an Integer Linear Programming (ILP) formulation of the problem, convert it into the minimum dominating set problem of graph theory, and provide a solution based on a greedy algorithm. Other related technologies propose Tentacle, an online support framework that provides edge computing server decisions for service providers to optimize the overall performance and cost of the edge infrastructure. Although in the above schemes the energy consumption of a single server is low, to avoid coverage holes the number of edge computing servers may be very large, and the total energy consumption is then certain to be huge; the power-saving problem in mobile edge computing is therefore a major challenge. To reduce the total energy consumption of the edge computing server layout, some schemes provide a prediction-mapping-optimization heuristic based on resource demand prediction for placing servers in edge computing: the algorithm divides a task into several subtasks, maps the subtasks to servers, completes the information interaction between servers and data sources through a proposed data naming mechanism, and obtains the final server layout strategy; but such schemes for reducing total energy consumption incur excessive processing delay.
When an edge computing server is placed near a device within the accessible range of a user terminal, three cases can be divided: firstly, the wireless link between the user and the current access device is in a good state, the edge computing server has resources capable of meeting the user request, and when the QoS requirement of the service request is met, the request of the user terminal is directly connected to the edge computing server. Secondly, when the wireless link between the user and the current access device is congested or failed and the edge computing server still has resources capable of meeting the user request, the request of the user terminal can reach another access device through other accessible wireless links and reach the server through an optical fiber link; or connect to other reachable servers that satisfy the condition. Thirdly, when the edge computing server cannot meet the resources requested by the user, the request of the user terminal reaches other edge computing servers which have sufficient resources and meet the QoS requirement through a wireless link and an optical fiber link with good network state.
Therefore, edge computing server layout still faces the challenge of balancing delay against energy consumption, and existing layouts are not combined with users' service requirements. Addressing these problems, this embodiment provides a base-station task allocation algorithm that assigns the base station with the largest computing and bandwidth demand to the edge computing server with the smallest task load; refer specifically to steps S501 to S503 in fig. 5.
In step S501, the task load of each edge computing server is calculated.
Specifically, the task load of an edge computing server, i.e., the computation and content transmission demand the server must handle for the services of its co-located node, in other words the computation and bandwidth demand of the base station (i.e., node) at the same position as the edge computing server, can be computed according to equations (1)-(4) mentioned in the above embodiments.
In step S502, a node of an undeployed edge computing server having the same hop count as the distance from each edge computing server deployment location node is acquired.
It can be understood that the node hop count is related to the service processing delay. In this embodiment, all nodes at the same hop count from the nodes where edge computing servers have been deployed are found, and the nodes without deployed edge computing servers are allocated according to the minimum service processing delay; that is, starting from the smallest hop count, an edge computing server is selected for task allocation for the nodes at each equal hop count.
In step S503, a corresponding edge computing server is selected based on the task load of each edge computing server to allocate a task to a node which is not deployed with an edge computing server and has the same hop count as the node at the deployment position of each edge computing server.
For example, the hop count h_ij is increased from 1 to h_Max, where h_ij represents the minimum hop count between two nodes in the network and h_Max represents the maximum hop count over all node pairs in the network. By traversing, for each hop count from 1 to h_Max, all nodes at that hop count, a suitable edge computing server is finally selected according to the task load of each edge computing server to allocate tasks to the relevant nodes.
Specifically, selecting a corresponding edge computing server based on the task load of each edge computing server to allocate tasks to the nodes without deployed edge computing servers at the same hop count from each edge computing server's deployment-position node (i.e., step S503) includes the following steps:

sorting the edge computing servers in ascending order of their task loads;

calculating the computing resource and bandwidth demand of all nodes without a deployed edge computing server at the same hop count from each edge computing server's deployment-position node;

sorting the nodes without deployed edge computing servers in descending order of their computing resource and bandwidth demand;

selecting, from the servers sorted in ascending order, the edge computing server with the smallest task load to allocate tasks for the node with the largest computing resource and bandwidth demand among the nodes sorted in descending order; and,

repeating the above steps until task allocation is completed for all nodes without a deployed edge computing server.
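The ascending/descending matching above can be sketched with a heap so the lightest-loaded server is always matched to the heaviest-demand node, with loads updated after each assignment. The data layout (plain dictionaries) is an illustrative assumption:

```python
import heapq

def allocate(node_demand, server_load):
    """node_demand: {node: compute+bandwidth demand};
    server_load: {server: current task load}.
    The heaviest-demand node is assigned to the lightest-loaded server,
    and the server's load grows by the assigned demand."""
    heap = [(load, s) for s, load in server_load.items()]
    heapq.heapify(heap)
    assignment = {}
    for node, demand in sorted(node_demand.items(), key=lambda kv: -kv[1]):
        load, s = heapq.heappop(heap)      # server with the smallest load
        assignment[node] = s
        heapq.heappush(heap, (load + demand, s))  # updated load re-enters the heap
    return assignment
```

This greedy pairing keeps the per-server loads balanced, which is the stated goal of matching the largest demand to the smallest load.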
In one embodiment, the method further comprises the following steps:

judging whether the selected edge computing server meets the task requirements of the node without a deployed edge computing server for which tasks are being allocated; and,

if it cannot meet them, reselecting, according to the task load of each edge computing server, another edge computing server that can meet the task requirements to allocate the tasks to the corresponding node.
In this embodiment, the undeployed nodes at the same hop count as the deployed nodes are first obtained, and tasks are then allocated under constraints such as delay, computation, and bandwidth resources. To avoid an edge computing server failing to meet a service requirement, this embodiment further determines whether the allocation scheme can process every service request within the range of the base station: if the current edge computing server does not meet the request conditions, the set of other edge computing servers that do is found and the task request is redirected to the one among them with the smallest computing task load; if no qualifying edge computing server can be found, the task is rejected.
For a better understanding of the present embodiment, the scheme is further illustrated by a computer algorithm (the pseudocode listing is given as an image in the original filing). It should be noted that the algorithm defines the condition under which, when the defined delay is exceeded, the edge computing server is considered unable to meet the service requirement; a further condition indicates that m_q′ has enough computing capacity to complete the service, expressed in terms of the computing resources that m_q allocates to the service, the computing resources of edge computing server m_q′, and the task load of m_q′.
In summary, the edge computing server layout method and task allocation method provided by the embodiments of the present disclosure offer a layout scheme that takes into account user requirements for different edge services, together with a layout model balancing end-to-end delay and energy consumption; they design a layout algorithm based on edge service and network state awareness, and finally, according to the algorithm result, deploy edge computing servers selectively at the best site positions rather than near every base station, thereby achieving the purpose of minimizing the weighted sum of end-to-end delay and energy consumption.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present disclosure, and not for limiting the same; while the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.

Claims (12)

1. An edge computing server placement method, comprising:

formulating a layout strategy for the edge computing servers based on the service requirements of the users corresponding to all nodes in the access network and the network state information of all nodes in the access network; and,

deploying edge computing servers at respective nodes in the access network based on the layout strategy;

wherein the layout strategy comprises the placement positions and/or the deployment number of the edge computing servers.
2. The method of claim 1, wherein, before formulating the layout strategy of the edge computing servers based on the service requirements of the users corresponding to all nodes in the access network and the network state information of all nodes in the access network, the method further comprises:

acquiring the service requirements of the users corresponding to each node in the access network; and,

acquiring the network state information of each node in the access network;

wherein the service requirements comprise user requests and traffic characteristics, and the network state information comprises network topology and resource information.
3. The method of claim 2, further comprising, after acquiring the service requirements of the users corresponding to each node in the access network:
analyzing the preference degree of each node for the edge services based on the service requirements of the users corresponding to each node in the access network; and
calculating the influence of the preference degree of each node for the edge services in the converged access network based on the preference degree of each node for the edge services;
wherein determining the placement positions of the edge computing servers based on the service requirements of the users corresponding to all nodes in the access network and the network state information of all nodes in the access network comprises:
evaluating the suitability of placing an edge computing server at each node based on the influence of the preference degree of each node for the edge services in the converged access network and on the network state information of each node; and
taking the node evaluated as most suitable for an edge computing server as the placement position of the edge computing server.
4. The method of claim 3, wherein analyzing the preference degree of each node for the edge services based on the service requirements of the users corresponding to each node in the access network comprises:
analyzing the service class preference probability of each node, the computing capacity required by the services, and the bandwidth demand based on the service requirements of the users corresponding to each node in the access network; and
calculating the preference degree of each node for the edge services based on the service class preference probability of each node, the computing capacity required by the services, and the bandwidth demand.
5. The method of claim 4, wherein the preference degree of each node for the edge services is calculated, based on the service class preference probability of each node, the computing capacity required by the services, and the bandwidth demand, according to the following formula:

P_R(i) = [formula image FDA0002488781980000021]

where P_R(i) denotes the preference degree of node n_i for the edge services; |V| denotes the total number of edge service types; |R_i| denotes the service request volume of node n_i; p_i^R(v) denotes the preference probability of node n_i for service type v; [formula image FDA0002488781980000022] denotes the bandwidth demand of the r-th service request of node n_i, whose service type is v; and [formula image FDA0002488781980000023] denotes the computing capacity required by the r-th service request of node n_i, whose service type is v;

[formula images FDA0002488781980000024, FDA0002488781980000025, FDA0002488781980000026]

where p(v) denotes the probability that the edge service requested by a user belongs to service type v; p(v|r) denotes the probability that the r-th service request belongs to type v; |R| denotes the total number of service requests; p(r) denotes the probability of generating r service requests; p(i,r) denotes the probability that node n_i generates r service requests; and p_i(r) denotes the probability that the r-th service request is generated at node n_i.
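Claim 5's formula survives only as an image in this text, but its variable definitions suggest a straightforward computation: weight each request's bandwidth and computing demands by the node's preference probability for that request's service type, and sum over all requests. The sketch below implements that plausible reading; the function name and data layout are illustrative, not from the patent.

```python
def preference_degree(type_prob, requests):
    """Node preference degree for edge services (plausible reading of claim 5).

    type_prob: {service_type v: node's preference probability p_i^R(v)}
    requests:  [(service_type, bandwidth_demand, compute_demand), ...]
    """
    # weight each request's bandwidth + compute demand by the node's
    # preference probability for that request's service type
    return sum(type_prob[v] * (bw + cpu) for v, bw, cpu in requests)
```

A node that issues many heavy requests of a type it strongly prefers thus scores higher than one with few, light, or weakly preferred requests.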
6. The method of claim 5, wherein the influence of the preference degree of each node for the edge services in the converged access network is calculated, based on the preference degree of each node for the edge services, according to the following formula:

C_P(n_i) = [formula image FDA0002488781980000031]

where C_P(n_i) denotes the influence of the preference degree of node n_i for the edge services in the converged access network; [formula image FDA0002488781980000032] denotes all nodes in the access network; P_R(j) denotes the preference degree of node n_j for the edge services; σ_jk denotes the number of shortest paths from node n_j to node n_k in the converged access network; and σ_jk(n_i) [formula image FDA0002488781980000033] denotes the number of shortest paths from node n_j to node n_k that pass through node n_i.
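The definitions in claim 6, shortest-path counts σ_jk and the counts through n_i, read as a preference-weighted variant of betweenness centrality. The following sketch works under that assumption, using BFS shortest-path counting on an unweighted topology; the function names and the ordered-pair convention are illustrative, not fixed by the patent.

```python
from collections import deque

def bfs_counts(adj, s):
    """Hop distance and number of shortest paths from s to every node."""
    dist, sigma = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                sigma[w] = 0
                q.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
    return dist, sigma

def preference_weighted_influence(adj, pref):
    """C_P(n_i): betweenness-style influence weighted by source preference."""
    info = {s: bfs_counts(adj, s) for s in adj}
    cp = {}
    for i in adj:
        di, si = info[i]
        total = 0.0
        for j in adj:
            if j == i:
                continue
            dj, sj = info[j]
            for k in adj:
                if k in (i, j) or k not in dj:
                    continue
                # shortest path j->k passes through i iff distances add up
                if i in dj and k in di and dj[i] + di[k] == dj[k]:
                    total += pref[j] * (sj[i] * si[k] / sj[k])
        cp[i] = total
    return cp
```

On a three-node path a-b-c with equal preferences, the middle node b collects all the pass-through influence while the endpoints collect none, matching the intuition that central nodes are better server sites.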
7. The method of claim 6, further comprising:
obtaining, based on the network state information of each node, the betweenness centrality and degree centrality of each node and the distance influence between the node and the deployed edge computing servers;
wherein evaluating the suitability of placing an edge computing server at each node based on the influence of the preference degree of each node for the edge services in the converged access network and on the network state information of each node specifically comprises:
calculating, for each node, the total feature weight over the influence of the node's preference degree for the edge services in the converged access network, its betweenness centrality, its degree centrality, and the distance influence between the node and the deployed edge computing servers; and
evaluating the suitability of placing an edge computing server at each node based on the total feature weight of each node.
8. The method of claim 7, wherein the betweenness centrality and degree centrality of each node and the distance influence between the node and the deployed edge computing servers are obtained according to the following formulas:

[formula images FDA0002488781980000034 to FDA0002488781980000036]

where C_B(n_i) denotes the betweenness centrality of node n_i; d(n_i) denotes the degree centrality of node n_i; degree(n_i) denotes the degree of node n_i; h_iq denotes the distance influence between node n_i and the deployed edge computing servers; M denotes the total number of deployed edge computing servers; q denotes the q-th deployed edge computing server; and [formula image FDA0002488781980000041] denotes the inverse power of the distance influence between node n_i and a deployed edge computing server, where 0 < θ < 1;

and wherein the total feature weight of each node over the influence of its preference degree for the edge services in the converged access network, its betweenness centrality, its degree centrality, and the distance influence between the node and the deployed edge computing servers is calculated according to the following formulas:

W_i = Σ_I ω_I f_i^I

norm(f_i^I) = (f_i^I − min_I) / (max_I − min_I)

[formula images FDA0002488781980000042 to FDA0002488781980000044]

where W_i denotes the total feature weight of node n_i; ω_I denotes the weight of each feature of node n_i; f_i^I denotes each feature of node n_i, where I ∈ {C_B, d, C_P, h} and i ∈ N, obtained by converting the feature set {C_B(n_i), d(n_i), C_P(n_i), h_iq}; norm(f_i^I) denotes each feature normalized by min-max normalization; min_I and max_I denote the feature minimum and maximum, respectively; H_I denotes the entropy of each feature; and [formula image FDA0002488781980000045] denotes the probability of each feature of node n_i.
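Claim 8's weighting step, min-max normalization followed by per-feature entropies H_I and weights ω_I, matches the conventional entropy weight method. The exact entropy and weight formulas are images in this text, so the sketch below follows the standard method under that assumption; the function name and column layout are illustrative.

```python
import math

def entropy_weights(features):
    """Entropy weight method over node feature rows.

    features: one row per node; columns are the node features,
              e.g. [C_P(n_i), C_B(n_i), d(n_i), h_iq].
    Returns (per-feature weights omega_I, per-node total weights W_i).
    """
    n = len(features)
    cols = list(zip(*features))
    # min-max normalisation, as in the claim's norm(f_i^I)
    normed = []
    for col in cols:
        lo, hi = min(col), max(col)
        normed.append([(x - lo) / (hi - lo) if hi > lo else 0.0 for x in col])
    # per-feature entropy H_I over each node's share of the feature
    entropies = []
    for col in normed:
        s = sum(col)
        probs = [x / s for x in col] if s > 0 else [1.0 / n] * n
        h = -sum(p * math.log(p) for p in probs if p > 0) / math.log(n)
        entropies.append(h)
    # lower entropy -> more discriminative feature -> larger weight
    total = sum(1.0 - h for h in entropies)
    if total == 0:
        weights = [1.0 / len(cols)] * len(cols)
    else:
        weights = [(1.0 - h) / total for h in entropies]
    # W_i = sum_I omega_I * f_i^I over the normalised features
    scores = [sum(w * normed[j][i] for j, w in enumerate(weights))
              for i in range(n)]
    return weights, scores
```

The node with the largest total weight W_i is then the candidate placement position under claim 3's "most suitable node" criterion.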
9. The method of claim 1, further comprising:
setting a minimization objective over the delay and energy consumption of edge service processing;
wherein determining the placement number of the edge computing servers based on the service requirements of the users corresponding to all nodes in the access network and the network state information of all nodes in the access network comprises:
analyzing the average service response delay and the average service transmission energy consumption of all nodes based on the service requirements of the users corresponding to all nodes in the access network and the network state information of all nodes in the access network; and
determining the deployment number of the edge computing servers according to the minimization objective, the average service response delay, and the average service transmission energy consumption of all nodes.
10. The method of claim 9, wherein the minimization objective for the delay and energy consumption of edge service processing is set according to the following formula:

min RF_q = α·T_avg + β·E_avg

where min RF_q denotes the minimization objective for the delay and energy consumption of edge service processing; α and β denote weighting coefficients with α + β = 1; T_avg denotes the average service response delay of all nodes; and E_avg denotes the average service transmission energy consumption of all nodes;

the average service response delay of all nodes is obtained according to the following formulas:

[formula images FDA0002488781980000051 to FDA0002488781980000054]

where T_avg denotes the average response delay of all nodes; N denotes the total number of nodes; |R_i| denotes the service request volume of node n_i; [formula image FDA0002488781980000055] denotes all nodes in the access network; [formula image FDA0002488781980000056] denotes the total service request volume; [formula image FDA0002488781980000057] denotes the access delay of node n_i's service request; [formula image FDA0002488781980000058] denotes the bandwidth demand of node n_i's r-th service request, whose service type is v; r_ik denotes the upload data rate of node n_i's user over the wireless link; b_lwk denotes the channel bandwidth of wireless link l_wk; p_wk denotes the transmission power of wireless link l_wk; g_i,r denotes the gain between the user terminal and the access-node device; and σ² denotes the channel background noise;

[formula image FDA0002488781980000059] denotes the network transmission delay of a service request from node n_i to edge computing server m_q; [formula image FDA00024887819800000510] denotes all edge computing servers whose deployment is to be determined; [formula image FDA00024887819800000511] denotes all links in the access network; [formula image FDA00024887819800000512] denotes that node n_i is assigned to edge computing server m_q; x_jq denotes that edge computing server m_q is deployed at node n_j; and b_lk denotes the available bandwidth of link l_k;

[formula image FDA00024887819800000513] denotes the processing delay of a service request at the edge computing server; [formula image FDA00024887819800000514] denotes the computing resources allocated by edge computing server m_q to service request r; and [formula image FDA00024887819800000515] denotes the processing delay at edge computing server m_q for all nodes assigned to it;

the average service transmission energy consumption of all nodes is obtained according to the following formulas:

[formula images FDA0002488781980000061 to FDA0002488781980000063]

where E_avg denotes the average service transmission energy consumption of all nodes; Q denotes the total number of edge computing server deployments to be determined; E_irc denotes the processing energy consumption of the edge computing server handling node n_i's r-th service request; E_irt denotes the network transmission energy consumption of node n_i's r-th service request; η is a constant coefficient; [formula image FDA0002488781980000064] denotes the computing capacity required by node n_i's r-th service request, whose service type is v; and p_ok and p_wk denote the transmission power of optical link l_ok and wireless link l_wk, respectively.
11. A task allocation method for edge computing servers, applied to edge computing servers laid out by the edge computing server layout method of any one of claims 1 to 10, the method comprising:
calculating the task load of each edge computing server;
acquiring the nodes without a deployed edge computing server that are the same hop count away from the node at each edge computing server's deployment position; and
selecting, based on the task load of each edge computing server, a corresponding edge computing server to allocate tasks for the nodes without a deployed edge computing server that are the same hop count away from the node at each edge computing server's deployment position.
12. The method of claim 11, wherein selecting, based on the task load of each edge computing server, a corresponding edge computing server to allocate tasks for the nodes without a deployed edge computing server that are the same hop count away from the node at each edge computing server's deployment position comprises:
sorting the edge computing servers in ascending order of task load;
calculating the computing resource and bandwidth demands of all nodes without a deployed edge computing server that are the same hop count away from the node at each edge computing server's deployment position;
sorting the nodes without a deployed edge computing server in descending order of their computing resource and bandwidth demands;
selecting the edge computing server with the smallest task load to allocate tasks for the node without a deployed edge computing server that has the largest computing resource and bandwidth demand; and
repeating the above steps until task allocation is completed for all nodes without a deployed edge computing server.
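The ascending/descending pairing in claim 12 amounts to a greedy matching: the most demanding unserved node goes to the least-loaded server, then the loads are updated. The sketch below assumes a server's load grows by the demand it absorbs; the claim does not specify the load-update rule, and the names are illustrative.

```python
def allocate_tasks(server_loads, node_demands):
    """Greedy task allocation per claim 12's sorting scheme.

    server_loads: {server_id: current task load}
    node_demands: {node_id: combined compute + bandwidth demand}
    Returns {node_id: server_id}.
    """
    assignment = {}
    loads = dict(server_loads)
    # nodes in descending order of demand (claim 12's node sort)
    for node in sorted(node_demands, key=node_demands.get, reverse=True):
        # ascending order of load -> take the minimum (claim 12's server sort)
        server = min(loads, key=loads.get)
        assignment[node] = server
        loads[server] += node_demands[node]  # assumed load-update rule
    return assignment
```

Re-picking the least-loaded server after each assignment is what "repeating the steps" achieves: a server that absorbs a heavy node naturally drops down the candidate order for the next node.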
CN202010399084.6A 2020-05-12 2020-05-12 Edge computing server layout method and task allocation method Active CN111580978B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010399084.6A CN111580978B (en) 2020-05-12 2020-05-12 Edge computing server layout method and task allocation method


Publications (2)

Publication Number Publication Date
CN111580978A true CN111580978A (en) 2020-08-25
CN111580978B CN111580978B (en) 2023-06-30

Family

ID=72110870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010399084.6A Active CN111580978B (en) 2020-05-12 2020-05-12 Edge computing server layout method and task allocation method

Country Status (1)

Country Link
CN (1) CN111580978B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108494612A (en) * 2018-01-19 2018-09-04 西安电子科技大学 A kind of network system and its method of servicing that mobile edge calculations service is provided
CN108933815A (en) * 2018-06-15 2018-12-04 燕山大学 A kind of control method of the Edge Server of mobile edge calculations unloading
WO2019117793A1 (en) * 2017-12-12 2019-06-20 Sony Mobile Communications Inc Edge computing relocation
CN110187973A (en) * 2019-05-31 2019-08-30 浙江大学 A kind of service arrangement optimization method towards edge calculations
CN110290011A (en) * 2019-07-03 2019-09-27 中山大学 Dynamic Service laying method based on Lyapunov control optimization in edge calculations
CN110308995A (en) * 2019-07-08 2019-10-08 童晓雯 A kind of edge cloud computing service system edges cloud node deployment device
CN110769059A (en) * 2019-10-28 2020-02-07 中国矿业大学 Collaborative service deployment and business distribution method for regional edge computing Internet of things


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Ruyan; NIE Xuan; WU Dapeng; LI Hongxia: "Social Attribute-Aware Task Scheduling Strategy for Edge Computing" *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112202603A (en) * 2020-09-25 2021-01-08 南京大学 Interactive service entity placement method in edge environment
CN112261667A (en) * 2020-10-19 2021-01-22 重庆大学 FIWI network media access control system and method based on edge calculation
CN114390549B (en) * 2020-10-19 2024-03-26 上海华为技术有限公司 User service method, access network equipment and system
CN114390549A (en) * 2020-10-19 2022-04-22 上海华为技术有限公司 User service method, access network equipment and system
CN114510247A (en) * 2020-11-16 2022-05-17 中国电信股份有限公司 Service deployment method, system and equipment of multi-access edge computing equipment
CN114944983A (en) * 2021-02-09 2022-08-26 深圳织算科技有限公司 Method and device for determining edge computing node position and electronic equipment
CN112787920A (en) * 2021-03-03 2021-05-11 厦门大学 Underwater acoustic communication edge calculation time delay and energy consumption optimization method for ocean Internet of things
CN112787920B (en) * 2021-03-03 2021-11-19 厦门大学 Underwater acoustic communication edge calculation time delay and energy consumption optimization method for ocean Internet of things
CN113115366A (en) * 2021-03-19 2021-07-13 北京邮电大学 Method, device and system for placing different service businesses into mobile edge node
CN113159620A (en) * 2021-05-11 2021-07-23 中国矿业大学 Mine mobile crowd sensing task distribution method based on weighted undirected graph
CN113159620B (en) * 2021-05-11 2023-08-18 中国矿业大学 Mine mobile crowd sensing task distribution method based on weighted undirected graph
CN113472844A (en) * 2021-05-26 2021-10-01 北京邮电大学 Edge computing server deployment method, device and equipment for Internet of vehicles
CN113259469A (en) * 2021-06-02 2021-08-13 西安邮电大学 Edge server deployment method, system and storage medium in intelligent manufacturing
CN113595801A (en) * 2021-08-09 2021-11-02 湘潭大学 Deployment method of edge cloud network server based on task flow and timeliness
CN113727358A (en) * 2021-08-31 2021-11-30 河北工程大学 KM and greedy algorithm-based edge server deployment and content caching method
CN113727358B (en) * 2021-08-31 2023-09-15 河北工程大学 Edge server deployment and content caching method based on KM and greedy algorithm
WO2023044673A1 (en) * 2021-09-23 2023-03-30 西门子股份公司 Method and apparatus for deploying industrial edge application, and computer-readable storage medium
CN113986486A (en) * 2021-10-15 2022-01-28 东华大学 Joint optimization method for data caching and task scheduling in edge environment
CN114006764A (en) * 2021-11-02 2022-02-01 北京天融信网络安全技术有限公司 Deployment method and device of safety network element based on super-fusion system
CN114006764B (en) * 2021-11-02 2023-09-26 北京天融信网络安全技术有限公司 Deployment method and device of safety network element based on super fusion system
CN114039868A (en) * 2021-11-09 2022-02-11 广东电网有限责任公司江门供电局 Value added service management method and device
CN114039868B (en) * 2021-11-09 2023-08-18 广东电网有限责任公司江门供电局 Value added service management method and device
WO2024007221A1 (en) * 2022-07-06 2024-01-11 华为技术有限公司 Communication method and apparatus
CN115277394A (en) * 2022-07-26 2022-11-01 江南大学 Fitness-based edge server deployment method in mobile edge network
CN115277394B (en) * 2022-07-26 2024-10-01 江南大学 Edge server deployment method based on fitness in mobile edge network

Also Published As

Publication number Publication date
CN111580978B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
CN111580978A (en) Edge computing server layout method and task allocation method
Islam et al. A survey on task offloading in multi-access edge computing
Shantharama et al. LayBack: SDN management of multi-access edge computing (MEC) for network access services and radio resource sharing
Sun et al. Autonomous resource slicing for virtualized vehicular networks with D2D communications based on deep reinforcement learning
CN110098969B (en) Fog computing task unloading method for Internet of things
CN102546379B (en) Virtualized resource scheduling method and system
US20040054766A1 (en) Wireless resource control system
Ibrahim et al. Heuristic resource allocation algorithm for controller placement in multi-control 5G based on SDN/NFV architecture
Xu et al. QoS-aware VNF placement and service chaining for IoT applications in multi-tier mobile edge networks
Bu et al. Enabling adaptive routing service customization via the integration of SDN and NFV
Al-Tarawneh Bi-objective optimization of application placement in fog computing environments
CN111953510A (en) Smart grid slice wireless resource allocation method and system based on reinforcement learning
Wang et al. Optimizing network slice dimensioning via resource pricing
Li et al. Service home identification of multiple-source IoT applications in edge computing
Tseng et al. Link-aware virtual machine placement for cloud services based on service-oriented architecture
Chu et al. Joint service caching, resource allocation and task offloading for MEC-based networks: A multi-layer optimization approach
Meng et al. Joint heterogeneous server placement and application configuration in edge computing
Tzanakaki et al. A converged network architecture for energy efficient mobile cloud computing
Wang et al. Task allocation mechanism of power internet of things based on cooperative edge computing
Bu et al. Towards delay-optimized and resource-efficient network function dynamic deployment for VNF service chaining
Gohar et al. Minimizing the cost of 5G network slice broker
Mesodiakaki et al. One: Online energy-efficient user association, VNF placement and traffic routing in 6G HetNets
Jasim et al. Efficient load migration scheme for fog networks
Chakravarthy et al. Software-defined network assisted packet scheduling method for load balancing in mobile user concentrated cloud
Araldo et al. EdgeMORE: improving resource allocation with multiple options from tenants

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant