CN112153147A - Method for placing chained service entities based on entity sharing in mobile edge environment - Google Patents

Method for placing chained service entities based on entity sharing in mobile edge environment

Info

Publication number
CN112153147A
Authority
CN
China
Prior art keywords
service
entity
server
entities
edge
Prior art date
Legal status (assumption, not a legal conclusion): Pending
Application number
CN202011028903.2A
Other languages
Chinese (zh)
Inventor
葛季栋
梁瑜
张胜
牛长安
骆斌
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Priority date
Filing date
Publication date
Application filed by Nanjing University
Priority to CN202011028903.2A
Publication of CN112153147A


Classifications

    • H04L 67/51 — Network services; discovery or management thereof, e.g. service location protocol [SLP] or web services
    • G06F 17/15 — Correlation function computation including computation of convolution operations
    • H04L 41/145 — Network analysis or design involving simulating, designing, planning or modelling of a network
    • H04L 43/0852 — Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters; delays
    • H04L 67/10 — Protocols in which an application is distributed across nodes in the network


Abstract

The invention discloses a method for placing chained service entities based on service entity sharing in mobile edge computing, comprising the following steps: (A) constructing a system model, a cost model and a delay model; the system model comprises a number of edge application services composed of service entity chains, users and edge servers; the delay model comprises the network transmission delay and the computation delay of a user's service request; the network transmission delay comprises the delay between a user and a server and the delay between servers; the computation delay comprises the sum of the computation delays of all entities of the service entity chain on their respective servers; (B) obtaining a placement scheme for the chained service entities by applying an entity-sharing-based greedy algorithm to the objective function and constraints of the chained service entity placement problem.

Description

Method for placing chained service entities based on entity sharing in mobile edge environment
Technical Field
The invention relates to the field of mobile edge computing and edge server deployment and distribution, in particular to a method for placing a chained service entity in mobile edge computing.
Background
With the development of cloud computing, the gradual maturation of fifth-generation mobile communication (5G) technology, the explosive growth of Internet data, and the popularization of mobile terminal devices, the traditional cloud computing model increasingly fails to process users' edge application requests in a timely and effective manner. Mobile edge computing is a new technology that pushes computation and network control from the traditional cloud computing center to the network edge, following the principle of moving computation and data closer to the user side; it is therefore widely regarded as an effective way to alleviate the network congestion and long communication delays of traditional cloud computing. By placing a large number of edge servers at the network edge near users, mobile edge computing can effectively meet the requirements of latency-sensitive mobile applications.
The edge service entity placement problem (ESEP) is a fundamental problem in mobile edge computing: it studies how to place service entities on network edge servers so as to achieve better quality of service (QoS) at lower economic cost. A service entity can be seen as a bundle of a user's personal data together with the processing logic for that data; it is responsible for user state, interactions between users, and compute-intensive tasks such as scene rendering, object recognition and tracking. With service entities deployed on edge servers, mobile users send requests to them to fulfill their application requirements.
Since many mobile edge applications share the same or similar service components and modules, resource sharing between service entities can be exploited at deployment time, thereby saving resources and cost. For example, the common mobile applications Virtual Reality (VR) and Augmented Reality (AR) both require a service component that performs object recognition in images. How to deploy edge application services as service entity chains while sharing service entities in mobile edge computing, so that the total user delay is minimized, is therefore a problem that needs to be solved and is worth studying.
Disclosure of Invention
Purpose of the invention: based on service entity resource sharing, the invention provides a method for placing chained service entities based on entity sharing in mobile edge computing, and also provides an edge computing device.
The technical scheme is as follows: in order to solve the above technical problem, the present invention provides a method for placing a chained service entity based on entity sharing in edge computing, which comprises the following steps:
Step A: construct a system model, a cost model and a delay model for mobile edge computing; the system model comprises a number of edge application services composed of service entity chains, users and edge servers; the delay model comprises the network transmission delay and computation delay incurred by a user requesting an application service; the network transmission delay comprises the transmission delay between a user and a server and the transmission delay between servers; the computation delay comprises the computation delays of the entities of the service entity chain on their respective servers.
Step B: obtain a placement scheme for the chained service entities by applying a greedy-strategy algorithm to the objective function and constraints of the chained service entity placement problem based on entity sharing in mobile edge computing.
Further, the cost model in step A constrains the sum of the placement costs of the service entities and the maximum number of service entities allowed on each edge server; edge application services generally have service entities that can be shared between entity chains. The objective function of the chained service entity placement problem in step B aims to minimize the total delay for users' application requests to complete.
Further, in step A, the system model of the mobile edge computing environment comprises m edge servers, n users, W edge application services, and a service entity chain L_w for each edge application service; for each edge application service, a user completes the service by requesting the service entities in its chain in turn. Shareable service entities exist between chains; E = {e_1, e_2, ..., e_K} denotes the set of all service entities of all users. Each service entity e_k may be placed multiple times on the same server, so that multiple users can share it simultaneously, but the number of entity instances placed on a server must satisfy the constraints.
Here d(s_i, s_j) is the transmission delay between servers s_i and s_j, and d(s_i, u_j) is the transmission delay between server s_i and user u_j.
Further, the computation delay t_c for user u_i (i ∈ [1, n]) to complete its requested application service is:

t_c = Σ_{h=1}^{|L_i|} G_(i,f(i,h)) / ( f_y(i,h) / Σ_{k=1}^{K} X_(k,y(i,h)) )

where |L_i| is the number of service entities user u_i needs to request in its service entity chain, f(i, h) is the index in the service entity set of the h-th entity requested by u_i, and G_(i,f(i,h)) is the computational complexity executed by the h-th requested entity. y(i, h) denotes the index of the server on which the h-th requested entity is placed, X_(k,y(i,h)) denotes the number of instances of service entity e_k placed on server s_y(i,h), and f_y(i,h) denotes the total computing resources of that server. Assuming that the computing resources of a server are evenly allocated to the service entities placed on it, f_y(i,h) / Σ_k X_(k,y(i,h)) is the amount of computing resources allocated to each service entity on server s_y(i,h).
The network transmission delay t_n for user u_i (i ∈ [1, n]) to complete its application request is:

t_n = d(u_i, s_y(i,1)) + Σ_{h=1}^{|L_i|−1} d(s_y(i,h), s_y(i,h+1))

where d(u_i, s_y(i,1)) is the transmission delay between user u_i and the server s_y(i,1) hosting the first service entity it requests, and d(s_y(i,h), s_y(i,h+1)) is the transmission delay between servers s_y(i,h) and s_y(i,h+1).
Further, the objective function of the chained service entity placement problem in edge computing in step B is:

min Σ_{i=1}^{n} T_i

where T_i = t_c + t_n is the total delay of user u_i. The constraints include:
(1) the sum of the service entity placement costs on all edge servers must not exceed C:

Σ_{k=1}^{K} Σ_{j=1}^{m} c_kj · X_kj ≤ C

(2) each edge server hosts at most Q service entities simultaneously:

Σ_{k=1}^{K} X_kj ≤ Q, ∀ j ∈ [1, m]

Here C is the upper bound on the total service entity placement cost over all edge servers, c_kj is the cost of placing service entity e_k on server s_j, and X_kj is the number of instances of e_k placed on s_j, not exceeding Q: X_kj = 0, 1, 2, ..., Q.
Further, the greedy-strategy algorithm in step B selects a suitable server for each service entity using the minimum delay increment Δ, where:

Δ = θ + t_n + t_c

Here Δ is the increase in total computation and transmission delay in the system when a service entity is newly placed on a server; t_n is the network transmission delay of the newly placed service entity and t_c its computation delay. θ is the increase in the computation delay of the service entities already deployed in the system: the newly placed entity reduces the computing resources occupied by the deployed entities and thus increases their computation time,

θ = Σ_{k=1}^{K} Σ_{a=1}^{X_kj} G_(r(a,k,j),k) · ( (N_j + 1)/f_j − N_j/f_j ), with N_j = Σ_{k=1}^{K} X_kj,

where r(a, k, j) denotes the index of the user whose a-th instance of entity e_k is placed on server s_j.
Further, the greedy-strategy algorithm in step B comprises:
(1) Loop over each user u_i and its edge application service a_i, fetch the corresponding service entity chain L_a_i, and loop over each entity in that chain.
(2) For each service entity e_ih in the chain, loop over every server and compute the increment Δ of the total computation and transmission delay that would result from deploying the current entity on that server.
(3) Select the server with the minimum Δ whose number of placed entities still satisfies the constraint, place e_ih on it, and update the total placement cost and the server's placement state. The loop terminates when all users have been processed.
(4) Collect the placement schemes of all users' service entity chains into a placement set, and return the chained service entity placement scheme.
Further, step (4) also includes computing the total response time of all users from each user's response time and returning it as output. The time complexity of the greedy-strategy algorithm is O(n · m · |L̄|), where m is the number of edge servers, n is the number of users, and |L̄| is the average length of a service entity chain.
The invention also provides an edge computing device, which comprises: a processor and a memory storing computer executable instructions which, when executed by the processor, implement the steps of any of the methods described above.
Advantageous effects: the invention provides a chained service entity placement method that minimizes user response time through service entity sharing. By constructing system, delay and cost models of the mobile edge network environment, and combining the objective function and constraints of the service entity chain placement problem with a greedy-strategy algorithm that shares service entities, it obtains a placement scheme for chained service entities that shares entity resources while effectively optimizing the entity chain placement result.
Drawings
FIG. 1 is a schematic block diagram of the method for placing chained service entities based on entity sharing in mobile edge computing in this example
FIG. 2 is a schematic diagram of the distribution of edge servers and users in a mobile edge computing environment in this example, with inter-server distances labeled (without loss of generality)
FIG. 3 shows the service entity chains of the application services in the example
FIG. 4 shows the correspondence between users and their requested services in the example
FIG. 5 shows the computing resources of the servers in the example
FIG. 6 shows the computational complexity of the service entities in the example
Detailed Description
The present invention will be described in further detail with reference to examples, which are not intended to limit the scope of the present invention.
The invention provides a method for placing chained service entities based on entity sharing in mobile edge computing. First, system, delay and cost models of the edge computing environment are constructed; then, combining the objective function and constraints of the chained service entity placement problem in mobile edge computing with a greedy-strategy algorithm, a placement scheme for the chained service entities is obtained. As shown in FIG. 1, this example specifically comprises the following steps:
(1) Modeling the chained service entity placement problem in mobile edge computing.
The system, delay and cost models of the edge computing environment are constructed. First, the edge network environment is modeled; the system model comprises the edge servers in the network, the users, the edge application services to be executed by the users, and the service entity chains of those application services.
(2) Defining the chained service entity placement problem in mobile edge computing:
the problem is first described, and then the objective function and constraints are defined.
(3) A greedy algorithm based on service entity sharing is provided to obtain the chained service entity placement scheme; it saves resources and effectively reduces cost when solving this problem.
(4) The time complexity of the algorithm is analyzed.
In this example, the process of modeling the placement problem of the chained service entities in edge computing specifically includes:
(1) In this example, the set S = {s_1, s_2, ..., s_m} denotes the m edge servers, the set U = {u_1, u_2, ..., u_n} denotes the n users in the mobile edge network, and A = {a_1, a_2, ..., a_W} denotes the W edge application services. Each edge application service is represented by its service entity chain L_w; for each edge application service, a user completes the service by requesting the service entities in its chain in turn. Shareable service entities exist between chains; E = {e_1, e_2, ..., e_K} denotes the set of all service entities of all users. Each service entity e_k may be placed multiple times on the same server, so that multiple users can share it simultaneously, but the number of entity instances placed on a server must satisfy the constraint

Σ_{k=1}^{K} X_kj ≤ Q, ∀ j ∈ [1, m]

where X_kj denotes the number of instances of service entity e_k placed on server s_j.
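As a minimal sketch of the system model above, the servers, their placed entity counts X_kj and the per-server constraint can be represented as follows; the class and field names are illustrative assumptions, not taken from the patent:

```python
# Minimal system-model sketch: a server holds entity instance counts X_kj,
# and a user carries the service entity chain of its requested application.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Server:
    f: float                                              # total computing resources f_j
    placed: Dict[str, int] = field(default_factory=dict)  # entity id e_k -> X_kj

    def load(self) -> int:
        """Total number of entity instances placed on this server."""
        return sum(self.placed.values())

    def can_place(self, Q: int) -> bool:
        """Placement constraint: sum_k X_kj must stay <= Q after placing one more."""
        return self.load() < Q

@dataclass
class User:
    chain: List[str]                                      # service entity chain L_w

s1 = Server(f=10.0, placed={"e1": 2, "e2": 1})            # e1 shared by two users
u1 = User(chain=["e1", "e2", "e3"])
print(s1.load(), s1.can_place(5))
```

The duplicate count for "e1" is how entity sharing shows up in the model: two users request the same entity type, and both instances live on one server subject to the Q bound.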
The delay model comprises the computation delay of the service entities on the edge servers; the transmission delay comprises the delay between servers and the delay between servers and users.
(2) Establishing the delay model: the delay in the network model consists of the computation time of the service entities on the edge servers and the transmission delays between servers and users; that is, the total delay T_i of user u_i is:

T_i = t_c + t_n

For the computation delay t_c: |L_i| denotes the number of service entities in the entity chain of the application service requested by user u_i, f(i, h) is the index in the service entity set of the h-th entity requested by u_i, and G_(i,f(i,h)) is the computational complexity executed by the h-th requested entity e_f(i,h). y(i, h) denotes the index of the server hosting the h-th entity requested by u_i, X_(k,y(i,h)) denotes the number of instances of service entity e_k placed on server s_y(i,h), and f_y(i,h) is the total computing resources of server s_y(i,h). Assuming that the computing resources of a server are evenly allocated to the service entities placed on it, f_y(i,h) / Σ_k X_(k,y(i,h)) is the computing resources allocated to each service entity on server s_y(i,h). The computation time of each service entity is therefore its computational complexity divided by the resources available to a single entity, and the total service entity computation time of user u_i is:

t_c = Σ_{h=1}^{|L_i|} G_(i,f(i,h)) / ( f_y(i,h) / Σ_{k=1}^{K} X_(k,y(i,h)) )
For the transmission delay t_n: network transmission delay is mainly determined by the network environment, data size, transmission distance and other factors. In the system model, fluctuations of the network environment and differences in the size of transmitted data are ignored, and the transmission delay is assumed to depend only on the transmission distance. d(u_i, s_j) denotes the transmission delay between user u_i and server s_j, and d(s_i, s_j) denotes the transmission delay between servers s_i and s_j, with d(s_i, s_j) = 0 when i = j. The network transmission delay t_n for user u_i to complete its request is:

t_n = d(u_i, s_y(i,1)) + Σ_{h=1}^{|L_i|−1} d(s_y(i,h), s_y(i,h+1))

where d(u_i, s_y(i,1)) is the transmission delay between user u_i and the server s_y(i,1) hosting the first requested service entity, and d(s_y(i,h), s_y(i,h+1)) is the transmission delay between servers s_y(i,h) and s_y(i,h+1).
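The two delay formulas above translate directly into code; the function names and the toy numbers below are assumptions for illustration, not values from the patent's tables:

```python
# Computation delay: each entity's complexity G divided by the per-entity
# resource share f/load of its server, summed along the chain.
def computation_delay(chain_G, servers_f, servers_load):
    """chain_G[h]: complexity of the h-th requested entity; servers_f[h]: total
    resources of its server; servers_load[h]: number of entity instances on
    that server (resources are divided evenly among them)."""
    return sum(G / (f / load)
               for G, f, load in zip(chain_G, servers_f, servers_load))

# Transmission delay: user-to-first-server hop plus the server-to-server hops.
def transmission_delay(d_user_first, d_between):
    """d_user_first: d(u_i, s_y(i,1)); d_between[h]: d(s_y(i,h), s_y(i,h+1))."""
    return d_user_first + sum(d_between)

# Toy chain of two entities: 2/(10/1) + 3/(6/2) = 1.2, and 0.2 + 0.3 = 0.5.
tc = computation_delay([2.0, 3.0], [10.0, 6.0], [1, 2])
tn = transmission_delay(0.2, [0.3])
print(round(tc + tn, 6))                       # total delay T_i = t_c + t_n
```

Note how the second entity's delay doubles (3/3 instead of 3/6) because its server already hosts two entity instances that split the resources evenly.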
(3) Establishing the cost model: in this example, the cost model constrains the maximum number Q of service entities allowed on each edge server, and requires the total placement cost on all edge servers not to exceed a given budget C; c_kj denotes the cost of placing service entity e_k on server s_j. The constraint model therefore includes:

the sum of the service entity placement costs on all edge servers must not exceed C:

Σ_{k=1}^{K} Σ_{j=1}^{m} c_kj · X_kj ≤ C

and each edge server hosts at most Q service entities simultaneously:

Σ_{k=1}^{K} X_kj ≤ Q, ∀ j ∈ [1, m]
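The two constraints can be checked with a short feasibility test; the nested-list layout for X and c is an assumption made for this sketch:

```python
# Feasibility check for a placement X: total cost <= budget C, and at most Q
# entity instances per edge server.
def satisfies_constraints(X, c, C, Q):
    """X[j][k]: copies of entity e_k on server s_j; c[j][k]: unit placement cost."""
    total_cost = sum(X[j][k] * c[j][k]
                     for j in range(len(X)) for k in range(len(X[j])))
    per_server_ok = all(sum(X[j]) <= Q for j in range(len(X)))
    return total_cost <= C and per_server_ok

X = [[2, 1], [0, 3]]          # 2 servers, 2 entity types
c = [[1.0, 2.0], [1.5, 1.0]]  # cost of one copy of e_k on s_j
print(satisfies_constraints(X, c, C=10.0, Q=5))   # cost 2+2+0+3 = 7 <= 10
```

A greedy placement step only needs to call such a check before committing an entity to its chosen server.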
the following table lists the symbols mentioned herein and their meanings:
Figure BSA0000220400790000056
Figure BSA0000220400790000061
In this example, the chained service entity placement problem based on entity sharing in mobile edge computing is defined as follows:
(1) Problem description (chained service entity placement based on entity sharing in mobile edge computing): given a mobile edge computing network comprising a set S of edge servers, a set U of users and a set A of application services, the application service a_i requested by any user u_i is represented by a service entity chain, and placement is completed by deploying these service entity chains with the goal of minimizing the total user response time.
(2) Defining the objective function and constraints:

min Σ_{i=1}^{n} T_i

The constraints include:
the sum of the service entity placement costs on all edge servers must not exceed C:

Σ_{k=1}^{K} Σ_{j=1}^{m} c_kj · X_kj ≤ C

and each edge server hosts at most Q service entities simultaneously:

Σ_{k=1}^{K} X_kj ≤ Q, ∀ j ∈ [1, m]
Based on the system, delay and cost models, a placement scheme for the chained service entities is obtained by an entity-sharing-based greedy-strategy algorithm according to the objective function and constraints of the service entity chain placement problem.
The specific operation steps of the algorithm are as follows:
(1) Loop over each user u_i and its edge application service a_i, fetch the corresponding service entity chain L_a_i, and loop over each entity in that chain.
(2) For each service entity e_ih in the chain, loop over every server and compute the increment Δ of the total computation and transmission delay that would result from deploying the current entity on that server, where:

Δ = θ + t_n + t_c

Here t_n is the network transmission delay of the newly placed service entity, t_c its computation delay, and θ the increase in the computation delay of the service entities already deployed in the system: the newly placed entity reduces the computing resources occupied by the deployed entities and thus increases their computation time,

θ = Σ_{k=1}^{K} Σ_{a=1}^{X_kj} G_(r(a,k,j),k) · ( (N_j + 1)/f_j − N_j/f_j ), with N_j = Σ_{k=1}^{K} X_kj,

where r(a, k, j) denotes the index of the user whose a-th instance of entity e_k is placed on server s_j.
(3) Select the server with the minimum Δ whose number of placed entities still satisfies the constraint, place e_ih on it, and update the total placement cost and the server's placement state. The loop terminates when all users have been processed.
(4) Collect the placement schemes of all users' service entity chains into a placement set, and return the chained service entity placement scheme.
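Steps (1)–(4) can be sketched as a short greedy loop; the data layout (nested lists indexed by user and server) and all names are assumptions made for this illustration, and servers with no remaining capacity are simply skipped:

```python
# Greedy placement sketch: for each entity of each user's chain, pick the
# server with the smallest delay increment delta that still satisfies Q.
def greedy_place(chains, G, f, d_user, d_srv, Q):
    """chains[i]: entity chain of user i; G[i][h]: complexity of its h-th
    entity; f[j]: computing resources of server j; d_user[i][j]: d(u_i, s_j);
    d_srv[j1][j2]: d(s_j1, s_j2). Returns one server index per chain entity."""
    m = len(f)
    loads = [[] for _ in range(m)]        # complexities already deployed per server
    placement = []
    for i, chain in enumerate(chains):
        servers_for_user = []
        prev = None                       # server of the previous entity in the chain
        for h in range(len(chain)):
            best_j, best_delta = None, float("inf")
            for j in range(m):
                if len(loads[j]) >= Q:    # constraint: at most Q entities per server
                    continue
                n = len(loads[j])
                # theta: each deployed entity's compute time grows by G*1/f[j]
                theta = sum(g / f[j] for g in loads[j])
                t_c = G[i][h] * (n + 1) / f[j]     # new entity's computation delay
                t_n = d_user[i][j] if prev is None else d_srv[prev][j]
                delta = theta + t_n + t_c
                if delta < best_delta:
                    best_j, best_delta = j, delta
            loads[best_j].append(G[i][h])          # place entity, update server state
            servers_for_user.append(best_j)
            prev = best_j
        placement.append(servers_for_user)
    return placement

# Toy instance: one user with chain e1 -> e2, two servers.
placement = greedy_place(
    chains=[["e1", "e2"]], G=[[2.0, 3.0]], f=[10.0, 5.0],
    d_user=[[0.2, 0.3]], d_srv=[[0.0, 0.1], [0.1, 0.0]], Q=5)
print(placement)  # -> [[0, 1]]
```

In the toy run, e_1 lands on the first server (Δ = 0.4 versus 0.7), while e_2 moves to the second server because θ and the shrunken resource share would make staying on the first server more expensive (Δ = 0.8 versus 0.7).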
In this example, the greedy policy algorithm has a temporal complexity of
Figure BSA0000220400790000072
Where m is the number of edge servers, n is the number of users,
Figure BSA0000220400790000073
is the length of the average service entity chain.
The following describes, with reference to FIG. 2 and without loss of generality, a scenario for placing a service entity chain in an edge computing network environment together with the algorithm provided by this example:
FIG. 2 shows a simulated mobile edge computing network environment in which each circle represents an edge server node, the label inside a circle is the server's number, a line between two circles represents a link between the two edge servers, and the number on the line is the distance between them. Q = 5 is the maximum number of entities allowed on a server.
Table 1 shows the 3 application services provided in the example system, represented by service entity chains, with application service set A = {a_1, a_2, a_3}; the user set is U = {u_1, u_2, ..., u_6}, the correspondence between users and their requested application services is shown in Table 2, and the server set is S = {s_1, s_2, s_3}. Table 3 lists the computing resources of each server and Table 4 lists the computational complexity of the entity services requested by each user.
According to the algorithm presented in the invention, we first start with user u_1, whose requested application service has the entity chain e_1 → e_2 → e_3. Consider placing entity e_1, looping from server s_1 to server s_3:

Checking server s_1: from the distance between user u_1 and server s_1, the transmission delay is d(u_1, s_1) = 0.2, and the computation delay of entity e_1 on server s_1 is 0.2. Since no other entity is deployed on server s_1, θ = 0. Therefore the total delay increment is Δ = 0 + 0.2 + 0.2 = 0.4.

Checking server s_2: from the distance between user u_1 and server s_2, the transmission delay is d(u_1, s_2) = 0.3, and the computation delay of entity e_1 on server s_2 is 0.13. Since no other entity is deployed on server s_2, θ = 0. Therefore the total delay increment is Δ = 0 + 0.3 + 0.13 = 0.43.

Checking server s_3: from the distance between user u_1 and server s_3, the transmission delay is d(u_1, s_3) = 0.4, and the computation delay of entity e_1 on server s_3 is 0.1. Since no other entity is deployed on server s_3, θ = 0. Therefore the total delay increment is Δ = 0 + 0.4 + 0.1 = 0.5.

Therefore server s_1, which yields the minimum delay increment Δ = 0.4, is finally selected; e_1 is deployed on server s_1 and X_1,1 = 1 is updated. This completes one entity placement; the process is repeated for each entity in each user's entity chain in turn until the entity chains of all users have been deployed.
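The Δ values of the worked example can be reproduced in a few lines. Since all three servers are empty, θ = 0 and Δ = d(u_1, s_j) + G/f_j; the complexity G = 2 and resources f = (10, 15, 20) below are assumed values chosen only to be consistent with the computation delays 0.2, 0.13 and 0.1 quoted above (Tables 3–4 are not reproduced here):

```python
# Recompute the three delta values of the worked example for empty servers.
G = 2.0                                        # assumed complexity of e_1
d = {"s1": 0.2, "s2": 0.3, "s3": 0.4}          # d(u_1, s_j) from the example
f = {"s1": 10.0, "s2": 15.0, "s3": 20.0}       # assumed server resources
deltas = {j: d[j] + G / f[j] for j in d}       # theta = 0 on empty servers
best = min(deltas, key=deltas.get)             # server with minimum delta
print(best, round(deltas[best], 2))            # -> s1 0.4
```

The greedy choice of s_1 with Δ = 0.4 matches the example; the other two candidates come out to 0.43 and 0.5 as in the text.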
While the invention has been described in connection with the preferred embodiments, it should be understood that the invention is not limited to the disclosed embodiments, but is intended to cover various changes and modifications within the spirit and scope of the appended claims.

Claims (9)

1. A method for placing chained service entities based on entity sharing in a mobile edge environment is characterized by comprising the following steps:
Step A: construct a system model, a cost model and a delay model for mobile edge computing; the system model comprises a number of edge application services composed of service entity chains, users and edge servers; edge application services generally have service entities that can be shared between entity chains; the delay model comprises the network transmission delay and computation delay incurred by a user requesting an application service; the network transmission delay comprises the transmission delay between a user and a server and the transmission delay between servers; the computation delay comprises the computation delays of the entities of the service entity chain on their respective servers.
Step B: obtain a placement scheme for the chained service entities by applying a greedy-strategy algorithm to the objective function and constraints of the chained service entity placement problem in mobile edge computing.
2. The method of claim 1, wherein:
the cost model in step A constrains the sum of the service entity placement costs across all edge servers and the maximum number of service entities placed simultaneously on each edge server;
the objective function of the chained service entity placement problem in mobile edge computing in step B aims to minimize user response time.
3. The method of claim 1, wherein: in step A, the system model of the mobile edge computing environment comprises m edge servers, n users, W edge application services, and a service entity chain L_w for each edge application service; shareable service entities exist between chains, and E = {e_1, e_2, ..., e_K} denotes the set of all service entities of all users.
Each service entity e_k may be placed multiple times on the same server, so that multiple users can share it simultaneously, but the number of entity instances placed on a server must satisfy the constraints.
4. The method of claim 2, wherein the method comprises: the user ui(i∈[1,n]) Calculated time delay t of completing its requested application servicecComprises the following steps:
Figure RE-FSB0000190532540000011
wherein | Li| is user uiThe number of service entities in the chain of service entities that need to be requested. f (i, h) is user uiSubscript of the requesting h-th entity in the set of serving entities, G(i,f(i,h))For user uiThe complexity of the computation performed by the requesting h-th entity.
Where y (i, h) represents user uiSubscript, X, of the server placed by the h-th entity of the request(k,y(i,h))Presentation Placement at Server sy(i,h)Service entity e ofkNumber of (a), (b), f)y(i,h)Representing the total amount of computing resources of the server. The present invention assumes that the computational resources on the server are evenly distributed to the service entities placed on it
Figure RE-FSB0000190532540000012
Is shown at server sy(i,h)To the computational resources allocated to each service entity.
The network transmission delay t_n of user u_i (i ∈ [1, n]) is:

t_n = d(u_i, s_y(i,1)) + Σ_{h=1}^{|L_i|-1} d(s_y(i,h), s_y(i,h+1))

where d(u_i, s_y(i,1)) is the transmission delay between user u_i and the server s_y(i,1) hosting the first requested service entity, and d(s_y(i,h), s_y(i,h+1)) is the transmission delay between servers s_y(i,h) and s_y(i,h+1).
Thus the total delay T_i of user u_i is:
Ti=tc+tn
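A minimal sketch (not the patent's own code) of the delay model of claim 4, assuming servers split their computing resources evenly among the entities placed on them; all function and variable names here are illustrative assumptions:

```python
def computation_delay(servers, G, X, F):
    """servers[h]: index of the server hosting the user's h-th entity;
    G[h]: computation complexity of that entity;
    X[j]: number of entities currently placed on server j;
    F[j]: total computing resources of server j."""
    # G[h] / (F[j] / X[j]) == G[h] * X[j] / F[j]
    return sum(G[h] * X[j] / F[j] for h, j in enumerate(servers))

def transmission_delay(servers, d_user, d):
    """d_user: delay from the user to the first server in the chain;
    d[(a, b)]: inter-server delay, with d[(a, a)] == 0.0."""
    return d_user + sum(d[(a, b)] for a, b in zip(servers, servers[1:]))

# Total delay T_i = t_c + t_n for one user's chain of three entities:
servers = [0, 0, 1]                  # entity h -> hosting server
G = [4.0, 2.0, 6.0]                  # per-entity computation complexity
X = [2, 1]                           # entities placed per server
F = [10.0, 6.0]                      # total resources per server
d = {(0, 0): 0.0, (0, 1): 2.0}
t_c = computation_delay(servers, G, X, F)     # 0.8 + 0.4 + 1.0 = 2.2
t_n = transmission_delay(servers, 1.0, d)     # 1.0 + 0.0 + 2.0 = 3.0
T = t_c + t_n                                 # 5.2
```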
5. The method of claim 2, wherein the method comprises:
the objective function of the chained service entity placement problem in mobile edge computing in step B is:

min Σ_{i=1}^{n} T_i
the constraint conditions include:
(1) the sum of the service entity placement costs on all edge servers must not exceed C:

Σ_{j=1}^{m} Σ_{k=1}^{K} c_kj · X_kj ≤ C
(2) each edge server computes at most Q service entities simultaneously:

Σ_{k=1}^{K} X_kj ≤ Q, ∀ j ∈ [1, m]
where C is the upper limit on the sum of the service entity placement costs over all edge servers; c_kj is the cost of placing service entity e_k on server s_j; and X_kj is the number of copies of service entity e_k placed on server s_j, which must not exceed Q: X_kj ∈ {0, 1, 2, ..., Q}.
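A hedged sketch of the two feasibility constraints of claim 5; the variable names follow the claim (X, C, Q), and taking the cost of X_kj copies as c_kj · X_kj is an assumption of this sketch:

```python
def feasible(X, c, C, Q):
    """X[k][j]: copies of entity e_k on server s_j; c[k][j]: unit placement cost."""
    K, m = len(X), len(X[0])
    # Constraint (1): total placement cost across all servers must not exceed C.
    if sum(c[k][j] * X[k][j] for k in range(K) for j in range(m)) > C:
        return False
    # Constraint (2): each server hosts at most Q service entities.
    return all(sum(X[k][j] for k in range(K)) <= Q for j in range(m))

print(feasible([[1, 0], [2, 1]], [[1, 1], [2, 3]], C=10, Q=3))  # True  (cost 8, loads 3 and 1)
print(feasible([[1, 0], [2, 1]], [[1, 1], [2, 3]], C=10, Q=2))  # False (server 0 holds 3 > 2)
```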
6. The method of claim 1, wherein the method comprises:
the greedy strategy algorithm in step B selects a suitable server for each service entity using the minimum delay increment Δ, where:

Δ = θ + t_n + t_c

Here Δ is the increase in global computation and transmission delay caused by newly placing a service entity on a server; t_n is the network transmission delay of the newly placed service entity, and t_c is its computation delay. θ is the increase in the computation delay of the service entities already deployed in the system: the newly placed entity reduces the resources available to the deployed entities, thereby increasing their computation time. Under the even resource distribution of claim 4,

θ = Σ_{k=1}^{K} Σ_{a=1}^{X_kj} G_(r(a,k,j), k) / F_j

where r(a, k, j) is the index of the user whose a-th copy of service entity e_k is placed on server s_j.
7. The method of claim 1, wherein the greedy strategy algorithm in step B comprises the following steps:
(1) Loop over each user u_i's application service request A_i, fetch its corresponding service entity chain L_i, and iterate over each entity in L_i.
(2) For each service entity e_ih in the chain, loop over every server and compute the increment Δ of the global computation and transmission delay that would result from deploying the current service entity on that server.
(3) Select the server with the minimum Δ whose number of placed service entities still satisfies the constraints, place service entity e_ih on that server, and update the total placement cost and the server's placement state. Repeat in turn until all users have been processed, then stop.
(4) Collect the placement plans of all users' service entity chains into a placement set, and return the chained service entity placement plan.
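The steps above can be sketched as the following skeleton (all names are assumptions; the cost-budget update of step (3) is omitted for brevity, and `delta(entity, server, load)` stands in for the Δ computation of claim 6):

```python
def greedy_place(chains, servers, delta, Q):
    """chains: one list of entity ids per user; Q: per-server capacity."""
    load = {j: 0 for j in servers}                         # entities placed per server
    placement = []
    for chain in chains:                                   # step (1): each user's chain
        plan = []
        for entity in chain:                               # step (2): each entity
            candidates = [j for j in servers if load[j] < Q]
            best = min(candidates, key=lambda j: delta(entity, j, load))
            load[best] += 1                                # step (3): update server state
            plan.append((entity, best))
        placement.append(plan)
    return placement                                       # step (4): placement set

# Toy delta favouring low server index and light load:
toy_delta = lambda e, j, load: j + load[j]
print(greedy_place([[0, 1]], [0, 1], toy_delta, Q=1))      # [[(0, 0), (1, 1)]]
```

With Q = 1 the capacity constraint forces the second entity off server 0; with Q = 2 both entities would land on server 0, since ties in Δ resolve to the first candidate.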
8. The method for placing chained service entities in edge computing according to claim 1, wherein: the time complexity of the heuristic algorithm is O(n · m · L̄), where m is the number of edge servers, n is the number of users, and L̄ is the average length of a service entity chain.
9. An edge computing device, comprising: a processor and a memory storing computer executable instructions which, when executed by the processor, implement the steps of the method of any of claims 1 to 8.
CN202011028903.2A 2020-09-25 2020-09-25 Method for placing chained service entities based on entity sharing in mobile edge environment Pending CN112153147A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011028903.2A CN112153147A (en) 2020-09-25 2020-09-25 Method for placing chained service entities based on entity sharing in mobile edge environment

Publications (1)

Publication Number Publication Date
CN112153147A true CN112153147A (en) 2020-12-29

Family

ID=73897641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011028903.2A Pending CN112153147A (en) 2020-09-25 2020-09-25 Method for placing chained service entities based on entity sharing in mobile edge environment

Country Status (1)

Country Link
CN (1) CN112153147A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114172951A (en) * 2021-12-07 2022-03-11 中国联合网络通信集团有限公司 MEC sharing method, communication device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019052376A1 (en) * 2017-09-12 2019-03-21 华为技术有限公司 Service processing method, mobile edge computing device, and network device
CN110290011A (en) * 2019-07-03 2019-09-27 中山大学 Dynamic Service laying method based on Lyapunov control optimization in edge calculations
CN110968920A (en) * 2019-11-29 2020-04-07 江苏方天电力技术有限公司 Method for placing chain type service entity in edge computing and edge computing equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHIQI CHEN et al.: "A Novel Algorithm for NFV Chain Placement in Edge Computing Environments", 2018 IEEE Global Communications Conference (GLOBECOM) *
HU Haiyang et al.: "Optimization Method for Workflow Task Allocation Based on Collaboration Compatibility", Journal of Computer Research and Development *

Similar Documents

Publication Publication Date Title
CN110262845B (en) Block chain enabled distributed computing task unloading method and system
CN110662238B (en) Reinforced learning scheduling method and device for burst request under edge network
CN113950066A (en) Single server part calculation unloading method, system and equipment under mobile edge environment
CN106056529B (en) Method and equipment for training convolutional neural network for picture recognition
CN112084038B (en) Memory allocation method and device of neural network
CN108304256B (en) Task scheduling method and device with low overhead in edge computing
CN111711962B (en) Cooperative scheduling method for subtasks of mobile edge computing system
CN113867843B (en) Mobile edge computing task unloading method based on deep reinforcement learning
CN110008015B (en) Online task dispatching and scheduling method with bandwidth limitation in edge computing system
CN111352731A (en) Method, system, apparatus and medium for distributing tasks in edge computing network
CN111601327A (en) Service quality optimization method and device, readable medium and electronic equipment
US11023825B2 (en) Platform as a service cloud server and machine learning data processing method thereof
CN116263681A (en) Mobile edge computing task unloading method, device, equipment and storage medium
CN112153147A (en) Method for placing chained service entities based on entity sharing in mobile edge environment
CN112989251B (en) Mobile Web augmented reality 3D model data service method based on collaborative computing
CN113032113B (en) Task scheduling method and related product
Ghasemi et al. Energy-efficient mapping for a network of dnn models at the edge
CN115955685A (en) Multi-agent cooperative routing method, equipment and computer storage medium
CN116755829A (en) Method for generating host PCIe topological structure and method for distributing container resources
JP2020137073A (en) Application arrangement device and application arrangement program
CN114201727A (en) Data processing method, processor, artificial intelligence chip and electronic equipment
CN109746918B (en) Optimization method for delay of cloud robot system based on joint optimization
CN115700482A (en) Task execution method and device
CN113747504A (en) Method and system for multi-access edge computing combined task unloading and resource allocation
CN117201319B (en) Micro-service deployment method and system based on edge calculation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201229