CN109684083B - Multistage transaction scheduling allocation strategy oriented to edge-cloud heterogeneous environment - Google Patents
- Publication number: CN109684083B (application CN201811512330.3A)
- Authority
- CN
- China
- Prior art keywords
- transaction
- resource
- transactions
- cloud
- edge
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention provides a multi-stage transaction scheduling and allocation strategy for edge-cloud heterogeneous environments, implemented in the following steps. First, information for all transactions generated at all data sources is collected, including each transaction's data size, computational load, the size and source of received data, and the size and destination of sent data. Second, this transaction information is used to form a complete transaction dependency graph, represented as a directed acyclic graph. Then, each transaction is judged to be processed either in the cloud or on an edge server; a transaction-priority heuristic algorithm determines the priority of each transaction and forms a transaction queue ordered from high to low priority. Finally, the resource node executing each transaction is determined according to load balance among resource nodes, transaction waiting time, transaction urgency, and resource-node power consumption, yielding an optimal resource allocation scheme and thereby improving system efficiency.
Description
Technical Field
The invention belongs to the field of wide-area network architecture, and particularly relates to a multi-stage transaction scheduling and allocation strategy for edge-cloud heterogeneous environments.
Background
With the continuous development of Internet technology, the era of big data has arrived. At the same time, the rapid development of Internet-of-Things technology and the spread of cloud services mean that the cloud computing model alone can no longer solve the problems it faces. According to the Cisco Global Cloud Index, nearly 50 billion things were expected to be connected to the Internet by 2019, with total global data-center traffic estimated to reach 10.4 ZB. In this highly information-driven age, the Internet faces multiple serious challenges: large amounts of redundant data, cloud processing capacity reaching a bottleneck, network bandwidth reaching its upper limit, declining transaction-processing efficiency, rising cloud power load, and increasing transaction-processing delay. These problems all stem from the contradiction between cloud computing's development bottleneck and the ever-increasing QoS requirements of the traffic to be processed. Edge computing compensates for the shortcomings of the traditional cloud computing model, and its concept fits the decentralization that is the basic structural requirement of the Internet's evolution toward the Internet of Things. The idea of edge computing is to extend cloud computing to processing at the network edge, so that data acquired from edge terminals can be computed close to where it is produced; data that does not need long-term storage need not be transmitted to the cloud service center for backup, which relatively reduces network-bandwidth occupation and lowers cloud-side power consumption and load. This distributed computing mode makes up for the deficiencies of the centralized mode, escapes the constraints of a centralized network environment, and protects the data security of edge nodes and their users.
However, edge computing has the following disadvantages: edge computing nodes lack the strong computing capability of the cloud computing model and do not have enough resources to handle complex, huge data sets and computational transactions; the edge computing model also struggles to accommodate heterogeneous transactions, i.e., a single edge computing resource node cannot integrate multiple intelligent information-processing modes; and while edge-side processing reduces the load and power consumption of cloud transaction processing, it introduces the problem of load imbalance among edge nodes.
A model combining edge computing and cloud computing can fully exploit the advantages of the whole network architecture. The edge-cloud collaborative architecture sends transactions with large computation, small data volume, and low delay sensitivity to the cloud for execution, and keeps transactions with small computation, large data volume, and high delay sensitivity on the edge servers; edge-cloud collaboration has therefore become the optimal choice of new network architecture. In this era in which everything is connected, the combination of edge and cloud computing will become the general trend of network-architecture development. Harshit Gupta et al. proposed a simulator named iFogSim for simulating heterogeneous environments combining the Internet of Things and fog, and for measuring the influence of resource-management techniques on delay, network congestion, energy consumption, and cost. The extensibility of the simulation toolkit was also verified under different circumstances in terms of RAM consumption and execution time. As the de facto standard edge-cloud collaborative simulator in the current academic community, iFogSim has been the basis of intensive research by many who seek technical breakthroughs in the "edge-cloud" heterogeneous network framework.
Under the edge-cloud collaborative architecture, the advantages of edge and cloud computing can be fully exploited through reasonable resource allocation and transaction scheduling. Faced with this NP-complete problem, some scholars have been working on algorithms for reasonable resource allocation and transaction scheduling in architectures composed of edge and cloud. Han Kui, Xie Peng, et al. studied an Improved Genetic Algorithm (IGA) that introduces a fitness-value judgment into the parental mutation operation, overcoming the blindness of the basic genetic algorithm (SGA) in mutation; but the genetic algorithm, as a random search technique, retains its complexity, and although the resulting allocation is more reasonable, it may incur high time overhead in the resource-allocation stage. Zhang Liming, Zhang Lidi, et al. addressed the problem that, in existing list-scheduling algorithms, transactions with identical priorities are scheduled randomly, delaying some important transactions; they proposed a dual-priority transaction scheduling algorithm (DPSA), but tested the algorithm only on particular transaction dependency relations, so its general applicability is not guaranteed, and load balancing among edge nodes was not considered. Xuan-Qui Pham et al. proposed a heuristic algorithm suitable for edge nodes processing various transaction types, mainly aiming to balance completion time against the overhead cost of cloud resources; the heuristic allocates resources quickly but does not consider load balance of the edge nodes.
Mohammed Islam Naas et al. proposed an extension to iFogSim that parallelizes the Floyd-Warshall algorithm, which iFogSim uses to compute all shortest paths between nodes when simulating data transmission. The goal was to model and simulate scenarios using strategies that optimize data placement in fog and IoT contexts, optimizing transaction execution time and memory utilization; however, this work overlooked the interdependencies between transactions.
Disclosure of Invention
The invention provides a multi-stage transaction scheduling and allocation strategy for edge-cloud heterogeneous environments, aiming to solve the task-scheduling problem of dependent transactions. The invention targets practical situations in which the following basic principles hold: (1) dependencies may exist between transactions collected from different data sources. Suppose three transactions A, B, and C exist, and transaction C needs the results of the completed execution of transactions A and B as its input data set; then a transaction dependency relationship exists between C and each of A and B. (2) The execution of some transactions depends on cloud-stored data, i.e., such a transaction can only be executed once the data it requires has been downloaded from the cloud to the resource node executing it. (3) Transactions are atomic: each transaction is a smallest fundamental unit and cannot be subdivided.
The multi-stage transaction scheduling management strategy for the edge-cloud heterogeneous collaborative network computing model comprises the following steps. First, information for all transactions generated at all data sources is collected, including each transaction's data size, computational load, the size and source of received data, and the size and destination of sent data. Second, this information is used to form a complete transaction dependency graph, represented as a directed acyclic graph. Then, a transaction-priority heuristic algorithm determines the priority of each transaction and forms a transaction queue ordered from high to low priority. Finally, the resource node executing each transaction is determined according to load balance among resource nodes, transaction waiting time, transaction urgency, and resource-node power consumption, yielding an optimal resource allocation scheme and improving system efficiency.
In order to achieve the purpose, the invention adopts the following technical scheme.
A multi-stage transaction scheduling allocation strategy for an edge-cloud heterogeneous environment, characterized by comprising the following steps:
Step 1: collect and sort the information of all pending transactions. The information of a transaction t_k includes its computation amount C(t_k), its maximum delay tolerance time D(t_k), the amount of data d(t_k, t_l) transferred to each other transaction t_l, the number of predecessor transactions N_p and of successor transactions N_s, the predecessor transaction set P(t_k), and the successor transaction set S(t_k). An entry transaction has no predecessor transaction; the predecessor set consists of all predecessor transactions of t_k, a predecessor transaction being one that must be executed before t_k. An exit transaction has no successor transaction; the successor set consists of all successor transactions of t_k, a successor transaction being one that can only be executed after t_k completes. All collected transaction information is arranged into a transaction set T = {t_1, t_2, t_3, ..., t_k, ..., t_n} and a transaction dependency set E containing the dependencies between transactions, where e(t_k, t_l) represents a dependency between transactions t_k and t_l. The transaction set and the transaction dependency set together form the T-DAG graph.
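The T-DAG construction of step 1 can be sketched as follows; `build_t_dag`, the triple encoding of dependency edges, and all identifiers are illustrative assumptions rather than the patent's notation:

```python
def build_t_dag(transactions, dependencies):
    """Build the T-DAG: per-transaction predecessor/successor sets plus
    the data volume carried on each dependency edge.

    `dependencies` is a list of (t_k, t_l, volume) triples meaning that
    transaction t_k must finish before t_l and transfers `volume` units
    of data to it.
    """
    succ = {t: set() for t in transactions}
    pred = {t: set() for t in transactions}
    data = {}
    for src, dst, volume in dependencies:
        succ[src].add(dst)
        pred[dst].add(src)
        data[(src, dst)] = volume
    return succ, pred, data

# Transaction C needs the results of A and B, as in principle (1) above.
succ, pred, data = build_t_dag(
    ["tA", "tB", "tC"],
    [("tA", "tC", 4.0), ("tB", "tC", 2.0)],
)
print(sorted(pred["tC"]))  # ['tA', 'tB']
```

The predecessor and successor sets produced here play the role of P(t_k) and S(t_k) in the steps below.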
Step 2: associate all resource nodes under the whole edge-cloud collaborative framework, including the cloud server and the edge servers, in pairs to form the R-CG graph. All points in the R-CG graph constitute a resource set R = {r_1, r_2, r_3, ..., r_i, ..., r_m}; each point represents a resource node r_i, and the information each resource node r_i must save includes its computing power ω(r_i) and its electrical power p(r_i). All edges in the R-CG graph form a resource association set; the information each edge e(r_i, r_j) must save includes the bandwidth B(r_i, r_j) for data transmission between resources r_i and r_j and the distance between r_i and r_j.
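A minimal sketch of the R-CG construction of step 2, under the assumption that bandwidth and distance are supplied as symmetric pairwise tables (all names and values are illustrative):

```python
import itertools

def build_r_cg(node_info, bandwidth, distance):
    """Build the fully connected R-CG: every pair of resource nodes
    (edge servers plus the cloud server) is associated with a bandwidth
    and a distance. `node_info` maps node -> (computing_power, power_draw).
    """
    edges = {}
    for ri, rj in itertools.combinations(node_info, 2):
        edges[(ri, rj)] = {
            "bandwidth": bandwidth[(ri, rj)],
            "distance": distance[(ri, rj)],
        }
    return edges

node_info = {"r1": (2.0, 5.0), "r2": (3.0, 7.0), "r_cloud": (50.0, 100.0)}
bw = {("r1", "r2"): 10.0, ("r1", "r_cloud"): 4.0, ("r2", "r_cloud"): 6.0}
dist = {("r1", "r2"): 1.0, ("r1", "r_cloud"): 20.0, ("r2", "r_cloud"): 25.0}
edges = build_r_cg(node_info, bw, dist)
print(len(edges))  # 3 undirected edges for 3 nodes
```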
Step 3: refine the transaction information stored in step 1 and determine the entry transaction set T_entry and the exit transaction set T_exit on which the transaction scheduling algorithm operates, where an entry transaction has no predecessor transaction and an exit transaction has no successor transaction.
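The entry/exit determination of step 3 follows directly from the dependency edges; a small sketch (function and variable names are assumptions):

```python
def entry_exit_sets(transactions, dependencies):
    """Step 3 sketch: entry transactions have no predecessors,
    exit transactions have no successors."""
    has_pred = {dst for _, dst in dependencies}
    has_succ = {src for src, _ in dependencies}
    t_entry = [t for t in transactions if t not in has_pred]
    t_exit = [t for t in transactions if t not in has_succ]
    return t_entry, t_exit

ts = ["t1", "t2", "t3", "t4"]
deps = [("t1", "t2"), ("t1", "t3"), ("t2", "t4"), ("t3", "t4")]
print(entry_exit_sets(ts, deps))  # (['t1'], ['t4'])
```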
Step 4: judge whether each transaction is executed on an edge server or in the cloud. The execution positions are determined in the order in which transactions are received from the edge devices. If the judgment is that a transaction executes in the cloud, the transaction waits to be sent to the cloud for processing in execution order and is put into the transaction priority queue Q after its depended-on predecessor transactions, and its execution time is calculated (when two transactions have no dependency relationship, a cloud-processed transaction is by default given higher priority than an edge-processed one). If the judgment is that the transaction executes on an edge server, its priority is calculated by the heuristic transaction scheduling algorithm of the invention, generating a non-decreasing transaction priority queue Q. When priorities have been judged for all transactions, proceed to step 5; otherwise, repeat step 4.
The method for judging the transaction execution position comprises the following steps:
First, the estimated cloud execution time ET_cloud(t_k) of a transaction t_k and its estimated transmission time TT(t_k) are calculated, as shown in formulas (1) and (2):
where ω_cloud represents the computing power of the cloud server; r_j ∈ P(r_i) indicates that the resource node r_j executing transaction t_l lies in the predecessor set of the resource node r_i executing transaction t_k; and B̄(r_i, cloud) represents the average of all bandwidths from resource node r_i to the cloud.
Second, the relation between the transaction's estimated cloud execution time and its estimated transmission time is compared: when the comparison condition of formulas (1) and (2) is satisfied, transaction t_k is transmitted to the cloud server for execution; otherwise, it is executed on an edge server.
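Since formulas (1) and (2) appear only as figures in the source, the following sketch encodes one plausible reading: cloud execution time as load over cloud computing power, transmission time as predecessor data over the average bandwidth to the cloud, and an assumed offloading rule that compares the two against local edge execution. All function names and the decision inequality are assumptions, not the patent's exact formulas:

```python
def estimated_cloud_time(computation, omega_cloud):
    """Formula (1) as reconstructed here: load over cloud computing power."""
    return computation / omega_cloud

def estimated_transmission_time(pred_data_volumes, avg_bandwidth_to_cloud):
    """Formula (2) as reconstructed here: total predecessor data over the
    average bandwidth from the current resource node to the cloud."""
    return sum(pred_data_volumes) / avg_bandwidth_to_cloud

def run_in_cloud(computation, pred_data_volumes, omega_cloud,
                 omega_edge, avg_bandwidth_to_cloud):
    """Assumed decision rule: offload when cloud execution plus transfer
    beats local edge execution."""
    cloud = estimated_cloud_time(computation, omega_cloud)
    transfer = estimated_transmission_time(pred_data_volumes,
                                           avg_bandwidth_to_cloud)
    edge = computation / omega_edge
    return cloud + transfer < edge

print(run_in_cloud(100.0, [5.0], omega_cloud=50.0, omega_edge=2.0,
                   avg_bandwidth_to_cloud=10.0))  # True: 2.0 + 0.5 < 50.0
```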
Wherein, the heuristic transaction scheduling algorithm calculates the transaction tkFormula (3) for priority is as follows:
where ω̄ represents the average working capacity of all resource nodes; B̄_in(r_i) represents the average bandwidth from all other possible resource nodes r_j to resource node r_i, with r_j ∈ P(r_i) meaning resource node r_j is in the predecessor set of r_i; B̄_out(r_i) represents the average bandwidth from resource node r_i to all other possible resource nodes r_j, with r_j ∈ S(r_i) meaning resource node r_j is in the successor set of r_i; priority(t_l) represents the priority value of a predecessor transaction t_l of t_k; t_k ∈ T_entry indicates that transaction t_k is an entry transaction, and t_k ∉ T_entry indicates that it is not.
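Formula (3) is likewise only a figure in the source. The sketch below assumes an upward recurrence consistent with the symbols described (average working capacity, average bandwidths, predecessor priority values): an entry transaction gets a base value, and every other transaction extends its largest predecessor priority by average execution and transfer costs, so sorting by non-decreasing priority automatically respects dependencies. The exact recurrence is an assumption:

```python
def priority(tk, pred, data, computation, avg_power, avg_bandwidth, memo=None):
    """Assumed formula-(3)-style heuristic: entry transactions get a base
    priority; others extend the largest predecessor priority by average
    execution and transfer costs (memoized recursion over the T-DAG)."""
    if memo is None:
        memo = {}
    if tk in memo:
        return memo[tk]
    if not pred[tk]:                      # t_k in T_entry
        memo[tk] = computation[tk] / avg_power
    else:
        memo[tk] = computation[tk] / avg_power + max(
            priority(tl, pred, data, computation, avg_power, avg_bandwidth, memo)
            + data[(tl, tk)] / avg_bandwidth
            for tl in pred[tk]
        )
    return memo[tk]

pred = {"t1": set(), "t2": {"t1"}, "t3": {"t1", "t2"}}
data = {("t1", "t2"): 10.0, ("t1", "t3"): 5.0, ("t2", "t3"): 5.0}
comp = {"t1": 4.0, "t2": 6.0, "t3": 2.0}
queue = sorted(pred, key=lambda t: priority(t, pred, data, comp, 2.0, 5.0))
print(queue)  # ['t1', 't2', 't3']
```

Because each successor's priority adds strictly positive terms to its predecessors', the non-decreasing sort yields a queue Q that never places a transaction before one it depends on.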
Step 5: calculate the evaluation function of each transaction at the different resource nodes according to the order of the transaction priority queue Q; the resource node with the minimum evaluation function is the optimal resource node for executing the current transaction.
The evaluation function for executing transaction t_k is as follows:
where D(t_k) is the maximum delay tolerance of transaction t_k, i.e., the waiting time cannot exceed the transaction's maximum delay tolerance time, and W(t_k) is the waiting time of transaction t_k. The calculation formulas for the remaining terms are as follows:
where r_j ∈ S(r_i) indicates that the resource node r_j executing transaction t_l lies in the successor set of the resource node r_i executing transaction t_k; P_comp is the computing power consumption and P_trans is the transmission power consumption; q_h represents the h-th transaction in the transaction priority queue Q; and p_send represents the power of data transmission per unit length and per unit time. The above formulas evaluate each transaction, in the order of the previously arranged priority queue Q, on the different resource nodes, so that the resource node minimizing the current transaction's evaluation function is the optimal resource node for executing it. When every transaction in the priority queue has completed resource allocation, the resource-allocation phase ends and a transaction-resource allocation mapping scheme is generated; otherwise, step 5 is repeated.
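Formulas (4) through (9) are figures in the source; this sketch assumes a weighted-sum evaluation over the stated factors (waiting time, computing and transmission power consumption) with the delay tolerance enforced as a hard constraint, and a greedy pass over the priority queue Q. Weights and input encoding are illustrative:

```python
def evaluation(wait_time, deadline, comp_power_cost, trans_power_cost,
               w_time=1.0, w_power=1.0):
    """Assumed formula-(4)-style evaluation: infeasible if the wait
    exceeds the maximum delay tolerance, otherwise a weighted sum of
    waiting time and power costs."""
    if wait_time > deadline:
        return float("inf")               # deadline violated on this node
    return w_time * wait_time + w_power * (comp_power_cost + trans_power_cost)

def allocate(queue, candidates):
    """Greedy step 5: walk the priority queue Q and map each transaction
    to the resource node minimizing its evaluation function.
    `candidates[t]` maps node -> (wait, deadline, comp_cost, trans_cost)."""
    mapping = {}
    for t in queue:
        mapping[t] = min(
            candidates[t],
            key=lambda r: evaluation(*candidates[t][r]),
        )
    return mapping

cands = {"t1": {"r1": (1.0, 5.0, 2.0, 0.5), "r2": (6.0, 5.0, 1.0, 0.1)}}
print(allocate(["t1"], cands))  # {'t1': 'r1'}; r2 misses the deadline
```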
Step 6: execute the transactions on their optimal resource nodes according to the transaction-resource allocation mapping scheme established in steps 4 and 5 and the order of the transaction priority queue Q.
Compared with the prior art, the invention has the following advantages:
the multi-stage transaction scheduling management strategy under the edge-cloud heterogeneous cooperative network computing model can be applied to heterogeneous network architectures with all edges and cloud ends combined, and meanwhile, the method is suitable for common situations that processing transactions have relevant dependencies. The strategy adopts multi-level judgment priority according to multiple factors, and reasonably schedules the transaction. Compared with other algorithms, the method has the advantages that the efficiency of executing the affairs is guaranteed, the resource performance is fully exerted, reasonable distribution among the resource nodes is guaranteed, the power consumption is reduced, and the service quality of the whole framework is improved.
Drawings
In order to make the purpose of the present invention more comprehensible, the present invention will be further described with reference to the accompanying drawings.
FIG. 1 is a flow chart of the present invention;
FIG. 2 is an edge-cloud heterogeneous network architecture diagram;
FIG. 3 is a transaction allocation diagram.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The invention relates to a multi-level transaction scheduling and allocation strategy for edge-cloud heterogeneous environments. As shown in fig. 2, the edge-cloud heterogeneous network architecture takes one cloud server and four edge servers as an example and needs to process 10 transactions with dependency relationships in total. Before transaction scheduling, the whole network architecture can acquire the relevant information of all transactions, and the optimal transaction scheduling and allocation scheme is made according to the transaction information, network bandwidth, and the computing capacity and power of the resource nodes.
Step 1: collect and sort all pending-transaction information received by the edge servers. Assume 10 transaction requests sent by edge nodes are received, forming, in request order, the transaction set T = {t_1, t_2, t_3, ..., t_10} and a transaction dependency set. The received transactions are arranged as shown in the T-DAG graph of FIG. 3, where the numbers on the directed edges represent the amount of data transferred from a predecessor transaction to a successor transaction. Taking transaction t_4 in the T-DAG graph as an example, the information shown in the figure is: the amount of data d(t_4, t_8) transferred from t_4 to t_8; the amount of data d(t_4, t_9) transferred from t_4 to t_9; the predecessor set P(t_4) = {t_1} with N_p(t_4) = 1; and the successor set S(t_4) = {t_8, t_9} with N_s(t_4) = 2.
Step 2: associate the cloud server in the whole edge-cloud heterogeneous network with all edge servers in pairs to form the R-CG graph of the computing-resource node group, as shown in fig. 2. The points of the R-CG graph form the resource set R = {r_1, r_2, r_3, r_4, r_cloud}, and all its edges form the resource association set. Each resource node has a certain working capacity and power; each undirected edge contains the bandwidth B(r_i, r_j) for data transmission between resources r_i and r_j and the transmission distance between them.
Step 3: refine the transaction information stored in step 1 and determine the entry and exit transaction sets on which the transaction scheduling algorithm operates. As shown in FIG. 3, for the whole batch of dependent transactions the entry transaction set is T_entry = {t_1} and the exit transaction set is T_exit = {t_10}.
Step 4: on the premise of following the transaction execution order, judge from each transaction's information whether it executes on an edge server or in the cloud. First, the estimated cloud execution time ET_cloud(t_k) and estimated transmission time TT(t_k) of transaction t_k are calculated, as in formulas (1) and (2) of the claims. When the offloading condition holds, transaction t_k is transmitted to the cloud server for execution and its execution time is calculated; otherwise, the transaction executes on an edge server, its priority is calculated by the heuristic algorithm of the invention, and a non-decreasing transaction priority queue Q is generated. Starting from transaction t_1, the priorities of all transactions are calculated one by one; the basic principle is that a smaller priority value is placed earlier and a larger one later, while observing the transaction dependency relationships. The transaction-priority heuristic is formula (3) in the claims. According to the calculated priorities, transactions enter the transaction priority queue Q in non-decreasing order.
Step 5: with the transaction priorities determined in step 4, calculate and weigh each transaction's evaluation function at the different resource nodes. The evaluation function comprehensively considers the transaction's delay sensitivity, its waiting time, the computing power consumption, and the transmission power consumption. When the evaluation function reaches its minimum, executing the current transaction t_k at resource node r_i is the best transaction-resource mapping under the multi-objective trade-off. The evaluation function for resource allocation is shown in formula (4) of the claims; the variables in formula (4) are explained by formulas (5)-(9). The resource-allocation evaluation stage proceeds in the order of the transaction priority queue Q, so that the resource node minimizing the current transaction's evaluation function is the optimal node for executing it. By iterating the transaction-to-resource mapping with these formulas, all transactions can be allocated to optimal resource nodes while preserving the transaction dependencies and priorities, realizing an optimal solution for transaction scheduling and allocation in the edge-cloud heterogeneous environment.
Step 6: with the transaction priority queue Q generated, execute the transactions on their optimal resource nodes according to the transaction-resource allocation mapping scheme established in steps 4 and 5 and the order of the transaction priority queue Q.
Claims (1)
1. A multi-stage transaction scheduling allocation strategy for an edge-cloud heterogeneous environment, characterized by comprising the following steps:
step 1, collecting and sorting the information of all pending transactions, wherein the information of a transaction t_k includes its computation amount C(t_k), its maximum delay tolerance time D(t_k), and the amount of data d(t_k, t_l) transferred to each other transaction t_l, wherein if transaction t_k is an entry transaction the amount of data transferred to it from predecessors is zero; the information of the transaction further includes: the number of predecessor transactions N_p(t_k) and of successor transactions N_s(t_k), the predecessor transaction set P(t_k), and the successor transaction set S(t_k), wherein an entry transaction has no predecessor transaction, the predecessor set consists of all predecessor transactions of t_k, and a predecessor transaction is one that must be executed before t_k; an exit transaction has no successor transaction, the successor set consists of all successor transactions of t_k, and a successor transaction is one that can only be executed after t_k completes; arranging all collected transaction information into a transaction set T = {t_1, t_2, t_3, ..., t_k, ..., t_n} and a transaction dependency set E containing the dependencies between transactions, where e(t_k, t_l) represents a dependency between transactions t_k and t_l; the transaction set and the transaction dependency set form the T-DAG graph;
step 2, associating all resource nodes under the whole edge-cloud collaborative framework, including the cloud server and the edge servers, in pairs to form an R-CG graph; all points in the R-CG graph constitute a resource set R = {r_1, r_2, r_3, ..., r_i, ..., r_m}, each point representing a resource node r_i, and the information each resource node r_i must save includes its computing power ω(r_i) and its electrical power p(r_i); all edges in the R-CG graph form a resource association set, and the information each edge e(r_i, r_j) must save includes the bandwidth B(r_i, r_j) for data transmission between resources r_i and r_j and the distance between r_i and r_j;
step 3, refining the transaction information stored in step 1 and determining the entry transaction set T_entry and the exit transaction set T_exit on which the transaction scheduling algorithm operates, wherein an entry transaction has no predecessor transaction and an exit transaction has no successor transaction;
step 4, on the premise of following the transaction execution order, judging whether each transaction is executed on an edge server or in the cloud; determining transaction execution locations in the order in which transactions are received from the edge devices; first defining an empty transaction priority queue; if the judgment result is that a transaction executes in the cloud, the transaction waits to be sent to the cloud for processing in execution order and is put into the transaction priority queue Q after its depended-on predecessor transactions, and its execution time is calculated (when two transactions have no dependency relationship, a cloud-processed transaction is by default given higher priority than an edge-processed one); if the judgment result is that the transaction executes on an edge server, calculating its priority through the heuristic transaction scheduling algorithm and generating a non-decreasing transaction priority queue Q; when priorities have been judged for all transactions, entering step 5, otherwise repeating step 4;
wherein the method for judging the execution location of a transaction t_k is as follows:
first, the estimated execution time ET^{cloud}_{t_k} of transaction t_k in the cloud and the estimated transmission time TT_{t_k} of the transaction are calculated according to formulas (1) and (2):

ET^{cloud}_{t_k} = w_{t_k} / ω_cloud    (1)

TT_{t_k} = Σ_{r_j ∈ P(r_i)} d_{t_l,t_k} / b̄_{r_i,cloud}    (2)

wherein w_{t_k} denotes the computation amount of transaction t_k; ω_cloud represents the computing power of the cloud server; r_j ∈ P(r_i) indicates that the resource node r_j executing transaction t_l is in the predecessor set of the resource node r_i executing transaction t_k; d_{t_l,t_k} denotes the volume of data transaction t_k receives from its predecessor t_l; and b̄_{r_i,cloud} represents the mean of all bandwidths from resource node r_i to the cloud;
secondly, the estimated execution time of the transaction in the cloud is compared with the estimated transmission time of the transaction: when ET^{cloud}_{t_k} > TT_{t_k}, transaction t_k is transmitted to the cloud server for execution; otherwise, it is executed on an edge server;
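Assuming the estimated cloud execution time is the transaction's workload over the cloud's computing power (formula (1)), that the transmission time is the predecessor data volume over the average bandwidth to the cloud (formula (2)), and that a transaction is offloaded when the cloud execution time outweighs the transmission cost, which is a reading inferred from the symbol definitions rather than stated verbatim, the placement test can be sketched as:

```python
def place_transaction(workload, data_from_preds, omega_cloud, avg_bw_to_cloud):
    """Decide 'cloud' or 'edge' for one transaction (illustrative reading)."""
    et_cloud = workload / omega_cloud                 # formula (1)
    tt = sum(data_from_preds) / avg_bw_to_cloud       # formula (2)
    return "cloud" if et_cloud > tt else "edge"

# compute-heavy transaction with little input data: offload to the cloud
a = place_transaction(1000, [10, 20], omega_cloud=100.0, avg_bw_to_cloud=50.0)
# data-heavy transaction over a slow link: keep at the edge
b = place_transaction(10, [500], omega_cloud=100.0, avg_bw_to_cloud=5.0)
```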
wherein the heuristic transaction scheduling algorithm calculates the priority of transaction t_k according to formula (3):

priority(t_k) = w_{t_k} / ω̄,    if t_k ∈ T_entry
priority(t_k) = w_{t_k} / ω̄ + max_{t_l ∈ P(t_k)} [ priority(t_l) + d_{t_l,t_k} / b̄_in(r_i) ],    otherwise    (3)

wherein ω̄ represents the average computing capability of all resource nodes; b̄_in(r_i) represents the average bandwidth from all other possible resource nodes r_j to resource node r_i; r_j ∈ P(r_i) indicates that resource node r_j is in the predecessor set of r_i; b̄_out(r_i) represents the average bandwidth from resource node r_i to all other possible resource nodes r_j; r_j ∈ S(r_i) indicates that resource node r_j is in the successor set of r_i; priority(t_l) represents the priority value of the predecessor transaction t_l of transaction t_k; t_k ∈ T_entry indicates that transaction t_k is an entry transaction, and t_k ∉ T_entry indicates that transaction t_k is not an entry transaction;
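The priority of formula (3) reads as a recursive, HEFT-style rank built from predecessors: an entry transaction's priority is its average execution cost, and any other transaction adds the maximum over its predecessors of predecessor priority plus average communication cost. The sketch below implements that reading; the function names and the exact combination are an interpretation of the text, not a verbatim copy of the patent's formula.

```python
import functools

def make_priority(workload, data, preds, avg_power, avg_bw):
    """Build a memoized priority function over a transaction DAG.

    workload[t]     - computation amount of transaction t
    data[(p, t)]    - data volume passed from predecessor p to t
    preds[t]        - list of predecessor transactions of t
    avg_power       - average computing capability of all resource nodes
    avg_bw          - average bandwidth between resource nodes
    """
    @functools.lru_cache(maxsize=None)
    def priority(t):
        base = workload[t] / avg_power
        if not preds[t]:                      # t is in T_entry
            return base
        return base + max(priority(p) + data[(p, t)] / avg_bw
                          for p in preds[t])
    return priority

workload = {"t1": 10.0, "t2": 20.0}
data = {("t1", "t2"): 5.0}
preds = {"t1": [], "t2": ["t1"]}
prio = make_priority(workload, data, preds, avg_power=10.0, avg_bw=5.0)
# prio("t1") = 1.0; prio("t2") = 2.0 + (1.0 + 1.0) = 4.0
```

The non-decreasing priority queue Q required by step 4 can then be obtained by sorting the transactions on these values.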
step 5, calculating the evaluation function of each transaction on the different edge nodes according to the order of the transaction priority queue Q, wherein the resource node with the minimum evaluation function value is the optimal resource node for executing the current transaction; the evaluation function for executing transaction t_k is built from the following quantities:

U_{t_k}, the pending urgency of transaction t_k, i.e. its waiting time cannot exceed the transaction's maximum delay tolerance time; W_{t_k}, the waiting time of transaction t_k; E_comp, the computing power consumption; and E_trans, the transmission power consumption;

wherein r_j ∈ S(r_i) indicates that the resource node r_j executing transaction t_l is in the successor set of the resource node r_i executing transaction t_k; q_h represents the h-th transaction in the transaction priority queue Q; and p_send represents the power consumed by data transmission per unit length and unit time; the evaluation functions are calculated on the different resource nodes following the transaction order of the previously arranged priority queue Q, so that the resource node minimizing the current transaction's evaluation function value is the optimal resource node for executing that transaction; when resource allocation has been completed for all transactions in the priority queue, the resource allocation phase ends and a transaction-resource allocation mapping scheme is generated; otherwise, step 5 is repeated;
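Step 5's greedy selection, walking the queue Q in priority order and mapping each transaction to the node that minimizes its evaluation function, can be sketched as follows. The evaluation used here (earliest finish time per node) is a toy stand-in, since the patent's exact combination of urgency, waiting time, and power consumption is not reproduced in this text.

```python
def allocate(queue, power, workload):
    """Map each transaction in priority order to its minimum-evaluation node.

    queue    - transactions in priority-queue order Q
    power    - node name -> computing power
    workload - transaction -> computation amount
    """
    ready = {r: 0.0 for r in power}       # earliest free time per node
    mapping = {}
    for t in queue:
        # toy evaluation function: estimated finish time on each node
        best = min(ready, key=lambda r: ready[r] + workload[t] / power[r])
        ready[best] += workload[t] / power[best]
        mapping[t] = best                 # transaction -> optimal resource node
    return mapping

mapping = allocate(["t1", "t2", "t3"],
                   {"edge1": 10.0, "edge2": 20.0},
                   {"t1": 40.0, "t2": 40.0, "t3": 5.0})
```

Tracking each node's earliest free time makes the evaluation load-aware, so consecutive heavy transactions spread across nodes instead of piling onto the fastest one.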
step 6, under the premise that the transaction priority queue Q has been arranged in steps 4 and 5, the transactions start executing in the order of the transaction priority queue Q, each executing on its optimal resource node according to the resource allocation mapping scheme generated in steps 4 and 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811512330.3A CN109684083B (en) | 2018-12-11 | 2018-12-11 | Multistage transaction scheduling allocation strategy oriented to edge-cloud heterogeneous environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811512330.3A CN109684083B (en) | 2018-12-11 | 2018-12-11 | Multistage transaction scheduling allocation strategy oriented to edge-cloud heterogeneous environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109684083A CN109684083A (en) | 2019-04-26 |
CN109684083B true CN109684083B (en) | 2020-08-28 |
Family
ID=66187558
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811512330.3A Active CN109684083B (en) | 2018-12-11 | 2018-12-11 | Multistage transaction scheduling allocation strategy oriented to edge-cloud heterogeneous environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109684083B (en) |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110493304B (en) * | 2019-07-04 | 2022-11-29 | 上海数据交易中心有限公司 | Edge computing system and transaction system |
CN110473015A (en) * | 2019-08-09 | 2019-11-19 | 南京智骋致想电子科技有限公司 | A kind of smart ads system and advertisement placement method |
CN111061547B (en) * | 2019-10-24 | 2023-04-11 | 中国科学院计算技术研究所 | Task scheduling method and system for heterogeneous system |
CN110856183B (en) * | 2019-11-18 | 2021-04-16 | 南京航空航天大学 | Edge server deployment method based on heterogeneous load complementation and application |
CN111090507B (en) * | 2019-11-25 | 2023-06-09 | 南京航空航天大学 | Task scheduling method and application based on cloud edge fusion server network architecture |
CN111404729B (en) * | 2020-03-04 | 2021-08-31 | 腾讯科技(深圳)有限公司 | Edge cloud cooperative system management method and device |
CN111399911B (en) * | 2020-03-24 | 2021-11-02 | 杭州博雅鸿图视频技术有限公司 | Artificial intelligence development method and device based on multi-core heterogeneous computation |
CN111597025B (en) * | 2020-05-14 | 2024-02-09 | 行星算力(深圳)科技有限公司 | Edge calculation scheduling algorithm and system |
CN111736990B (en) * | 2020-06-11 | 2024-04-02 | 武汉美和易思数字科技有限公司 | Teaching and scientific research platform resource allocation method and device based on load balancing |
CN111901435B (en) * | 2020-07-31 | 2021-09-17 | 南京航空航天大学 | Load-aware cloud-edge collaborative service deployment method |
CN113572848B (en) * | 2020-08-18 | 2022-07-08 | 北京航空航天大学 | Online service placement method with data refreshing based on value space estimation |
CN112118135A (en) * | 2020-09-14 | 2020-12-22 | 南昌市言诺科技有限公司 | Minimum resource configuration method and device for cloud edge cooperative architecture industrial internet platform |
CN112181617B (en) * | 2020-09-17 | 2024-05-17 | 东北大学 | Efficient scheduling algorithm based on specific index structure |
CN112395089A (en) * | 2020-11-19 | 2021-02-23 | 联通智网科技有限公司 | Cloud heterogeneous computing method and device |
CN112492612B (en) * | 2020-11-23 | 2023-07-21 | 中国联合网络通信集团有限公司 | Resource allocation method and device |
CN112650585A (en) * | 2020-12-24 | 2021-04-13 | 山东大学 | Novel edge-cloud collaborative edge computing platform, method and storage medium |
CN113094246B (en) * | 2021-03-30 | 2022-03-25 | 之江实验室 | Edge heterogeneous computing environment simulation system |
CN113141321B (en) * | 2021-03-31 | 2023-01-13 | 航天云网数据研究院(广东)有限公司 | Data transmission method based on edge calculation and electronic equipment |
CN113794646B (en) * | 2021-09-13 | 2024-04-02 | 国网数字科技控股有限公司 | Monitoring data transmission system and method for energy industry |
CN113742048B (en) * | 2021-11-03 | 2022-08-02 | 北京中科金马科技股份有限公司 | Hotel cloud service system and service method thereof |
CN114827142B (en) * | 2022-04-11 | 2023-02-28 | 浙江大学 | Scheduling method for ensuring real-time performance of containerized edge service request |
CN114885028B (en) * | 2022-05-25 | 2024-01-23 | 国网北京市电力公司 | Service scheduling method, device and computer readable storage medium |
CN116708451B (en) * | 2023-08-08 | 2023-10-20 | 广东奥飞数据科技股份有限公司 | Edge cloud cooperative scheduling method and system |
CN116775315B (en) * | 2023-08-22 | 2024-01-02 | 北京遥感设备研究所 | Multi-core CPU concurrent transaction allocation method based on dependency graph |
CN117714475B (en) * | 2023-12-08 | 2024-05-14 | 江苏云工场信息技术有限公司 | Intelligent management method and system for edge cloud storage |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102104631A (en) * | 2011-02-28 | 2011-06-22 | 南京邮电大学 | Method for ensuring quality of service of knowledge system based on cloud computing technology |
CN106534333A (en) * | 2016-11-30 | 2017-03-22 | 北京邮电大学 | Bidirectional selection computing unloading method based on MEC and MCC |
CN107087019A (en) * | 2017-03-14 | 2017-08-22 | 西安电子科技大学 | A kind of end cloud cooperated computing framework and task scheduling apparatus and method |
CN108540406A (en) * | 2018-07-13 | 2018-09-14 | 大连理工大学 | A kind of network discharging method based on mixing cloud computing |
CN108920280A (en) * | 2018-07-13 | 2018-11-30 | 哈尔滨工业大学 | A kind of mobile edge calculations task discharging method under single user scene |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130132948A1 (en) * | 2011-11-21 | 2013-05-23 | Adiseshu Hari | Personal cloud computing and virtual distributed cloud computing system |
- 2018-12-11: CN application CN201811512330.3A, patent CN109684083B (status: Active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102104631A (en) * | 2011-02-28 | 2011-06-22 | 南京邮电大学 | Method for ensuring quality of service of knowledge system based on cloud computing technology |
CN106534333A (en) * | 2016-11-30 | 2017-03-22 | 北京邮电大学 | Bidirectional selection computing unloading method based on MEC and MCC |
CN107087019A (en) * | 2017-03-14 | 2017-08-22 | 西安电子科技大学 | A kind of end cloud cooperated computing framework and task scheduling apparatus and method |
CN108540406A (en) * | 2018-07-13 | 2018-09-14 | 大连理工大学 | A kind of network discharging method based on mixing cloud computing |
CN108920280A (en) * | 2018-07-13 | 2018-11-30 | 哈尔滨工业大学 | A kind of mobile edge calculations task discharging method under single user scene |
Non-Patent Citations (1)
Title |
---|
Grid Resource Management Algorithm Based on Cooperative Game; Fang Juan, Xu Tao; China National Computer Congress 2010 (submission); 2010-12-31; pp. 1-13 *
Also Published As
Publication number | Publication date |
---|---|
CN109684083A (en) | 2019-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109684083B (en) | Multistage transaction scheduling allocation strategy oriented to edge-cloud heterogeneous environment | |
Ning et al. | Deep reinforcement learning for intelligent internet of vehicles: An energy-efficient computational offloading scheme | |
Cui et al. | A novel offloading scheduling method for mobile application in mobile edge computing | |
CN103179052B (en) | A kind of based on the central virtual resource allocation method and system of the degree of approach | |
CN102281290B (en) | Emulation system and method for a PaaS (Platform-as-a-service) cloud platform | |
CN108092706B (en) | Mapping method | |
CN113282409B (en) | Edge calculation task processing method and device and computer equipment | |
CN114595049A (en) | Cloud-edge cooperative task scheduling method and device | |
Wen et al. | Load balancing job assignment for cluster-based cloud computing | |
CN111984419A (en) | Complex task computing and transferring method for marginal environment reliability constraint | |
CN111131447A (en) | Load balancing method based on intermediate node task allocation | |
CN106502790A (en) | A kind of task distribution optimization method based on data distribution | |
Fizza et al. | PASHE: Privacy aware scheduling in a heterogeneous fog environment | |
Nguyen et al. | Flexible computation offloading in a fuzzy-based mobile edge orchestrator for IoT applications | |
CN113709249A (en) | Safe balanced unloading method and system for driving assisting service | |
CN113360245A (en) | Internet of things equipment task downloading method based on mobile cloud computing deep reinforcement learning | |
CN105049315A (en) | Improved virtual network mapping method based on virtual network partition | |
Proietti Mattia et al. | A latency-levelling load balancing algorithm for Fog and edge computing | |
CN116501483A (en) | Vehicle edge calculation task scheduling method based on multi-agent reinforcement learning | |
CN113079053B (en) | Virtual resource reconfiguration method and system based on particle swarm theory under network slice | |
Trabelsi et al. | Global aggregation node selection scheme in federated learning for vehicular ad hoc networks (VANETs) | |
Li et al. | Task allocation based on task deployment in autonomous vehicular cloud | |
CN114138466A (en) | Task cooperative processing method and device for intelligent highway and storage medium | |
Silva et al. | Task offloading optimization in mobile edge computing based on deep reinforcement learning | |
Li et al. | Energy-efficient offloading based on hybrid bio-inspired algorithm for edge–cloud integrated computation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||