CN113918321A - Reliable edge-cloud computing service delay optimization method for a cyber-physical system

Info

Publication number: CN113918321A (granted as CN113918321B)
Application number: CN202111048618.1A
Authority: CN (China)
Inventors: 曹坤, 贾韵凝, 刘志全, 翁健
Assignee (original and current): Jinan University
Other languages: Chinese (zh)
Legal status: Active (granted)


Classifications

    • G06F 9/5072 - Grid computing (allocation of resources, e.g. of the central processing unit)
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a reliable edge-cloud computing service delay optimization method for a cyber-physical system (CPS), which combines a static stage and a dynamic stage. In the static stage, Monte Carlo simulation and integer linear programming are used to find the optimal computation offloading mapping and number of task backups. In the dynamic stage, an adaptive backup mechanism avoids redundant data transmission and execution, yielding additional energy savings and further reducing the service delay. The invention addresses the problem of minimizing the edge-cloud computing service delay of a coupled CPS under energy budget and reliability constraints, and effectively reduces the system service delay by combining static-stage and dynamic-stage optimization.

Description

Reliable edge-cloud computing service delay optimization method for a cyber-physical system
Technical Field
The invention belongs to the technical field of edge computing in computer networks, and particularly relates to a reliable edge-cloud computing service delay optimization method for a cyber-physical system.
Background
In recent years, with the development of information technology, cyber-physical systems (CPS) have been widely deployed, for example in autonomous automotive systems, healthcare monitoring and process control systems, as shown in FIG. 1. For CPS applications, service delay management is very important for providing a high-quality experience to end users. Edge-cloud computing, which combines edge computing and cloud computing, is considered a promising computing paradigm that can achieve low service delay for end users in a CPS. However, existing delay-aware edge-cloud computing methods for CPS fail to consider the energy budget and the reliability requirement at the same time, which greatly reduces the sustainability of CPS applications.
A cyber-physical system deeply interweaves physical objects and software components by integrating intelligent sensing, computing, control and networking technologies. In recent years, advances in information technology have driven the deployment of many emerging CPS applications, such as automotive systems, healthcare monitoring and process control systems. For these CPS applications, service delay is the primary concern when providing a high-quality experience for end users. Edge-cloud computing, which combines edge computing and cloud computing, is considered a promising computing paradigm that can achieve low service delay for end users in a CPS.
Disclosure of Invention
In order to overcome the defects and shortcomings of the prior art, the invention provides a reliable edge-cloud computing service delay optimization method for a cyber-physical system.
A second purpose of the invention is to provide a cyber-physical system.
In order to achieve the first purpose, the invention adopts the following technical scheme:
A reliable edge-cloud computing service delay optimization method for a cyber-physical system comprises the following steps:
Step 1: modeling the service delay of a base station based on the computation offloading transmission delay and execution delay, and setting a service delay objective for edge-cloud computing according to the energy budget and reliability characteristics, wherein the service delay objective comprises a static-stage objective and a dynamic-stage objective: the static-stage objective is to search for the optimal computation offloading mapping and number of task backups, and the dynamic-stage objective is to avoid the transmission and execution of redundant tasks at run time;
Step 2: calculating the system service delay, and converting the service delay objective of edge-cloud computing into 5 scheduling constraints, namely a first, a second, a third, a fourth and a fifth scheduling constraint;
the first scheduling constraint is that each base station is only allowed to forward its computation tasks to one edge/cloud server, the second scheduling constraint is that the workload of any edge/cloud server cannot exceed its maximum processing capacity, the third scheduling constraint is that the energy consumed by the whole system cannot exceed a given energy threshold, the fourth scheduling constraint is that the number of task backups of each base station cannot exceed the maximum number of backups specified by the system, and the fifth scheduling constraint is that the reliability of the fault-tolerant system is higher than a preset reliability threshold;
Step 3: obtaining the number of backups from a backup number calculation formula by using an error adaptation factor, wherein the error adaptation factor represents the uncertainty of the average arrival rate caused by bit errors and soft errors; letting n_{j,m} denote the number of backups between the jth base station b_j and the mth edge/cloud server s_m, λ^ft_{j,m} the average fault-tolerant arrival rate from base station b_j to edge/cloud server s_m, and λ^best_j the average arrival rate in the best case in which no error occurs while the base station forwards computation tasks, the backup number calculation formula is

n_{j,m} = ⌈ λ^ft_{j,m} / λ^best_j ⌉,

i.e. the rounded value of λ^ft_{j,m} divided by λ^best_j;
Step 4: in the static stage, determining the optimal computation offloading mapping and number of task backups through Monte Carlo simulation and integer linear programming so as to minimize the system service delay;
Step 5: in the dynamic stage, based on an online backup-adaptive dynamic strategy, determining that a backup task has been successfully transmitted and executed once; traversing all base stations, finding, for each base station, all edge/cloud servers with which it has a communication connection, obtaining the updated number of task backups of each base station after the traversal, and executing the backup among all task backups of each base station.
As a preferred technical solution, in step 4, determining the optimal computation offloading mapping and number of task backups through Monte Carlo simulation and integer linear programming to minimize the system service delay specifically comprises: searching for the optimal computation offloading mapping by repeating the Monte Carlo simulation process with an error adaptation factor and the ILP algorithm, obtaining the number of backups for each base station and each edge/cloud server according to the backup number calculation formula, obtaining the system reliability by Monte Carlo simulation, and screening and outputting, according to the 5 scheduling constraints, the optimal computation offloading mapping and the number of task backups of each base station in the static optimization stage.
As a preferred technical solution, in step 5, once the first successful backup is detected, the transmission and execution of the other task backups are cancelled.
As a preferred technical scheme, the step 1 specifically comprises the following steps:
Step A1: modeling the computation offloading transmission delay of the base station; since the computation task requests sent by multiple end users to a base station follow a Poisson distribution, the communication delay between the jth base station b_j and the mth edge/cloud server s_m is computed as

T^comm_{j,m} = D_{j,m}/ξ + W_j/C_{j,m},

where D_{j,m} is the distance between b_j and s_m, ξ is the electromagnetic wave propagation speed, W_j is the total amount of task data of the end users at the jth base station b_j, and C_{j,m} is the communication bandwidth between b_j and s_m; let S = {s_0, s_1, ..., s_κ} be the set formed by the κ edge servers together with the cloud server;
Step A2: modeling the computation offloading execution delay of the base station; the execution delay at the edge/cloud server connected with the base station is quantified based on an M/G/1 queue model, in which the execution time of a task on an edge/cloud server follows a general probability distribution function with mean μ_m and standard deviation δ_m;
the execution delay T^exec_{j,m} between b_j and s_m is computed from this queue model, where the computation tasks sent by multiple end users to base station b_j follow a Poisson distribution, λ_j is the average arrival rate of computation tasks at the jth base station b_j, f_m denotes the computation speed supported by s_m, μ_m and δ_m respectively denote the mean and standard deviation of the probability distribution function obeyed by the task execution time on edge/cloud server s_m, and Φ_m is the sum of the task arrival rates of the base stations other than b_j that are mapped to s_m;
Step A3: computing the total service delay when base station b_j establishes a connection with edge/cloud server s_m, i.e. the sum of the communication delay and the execution delay:

T_{j,m} = T^comm_{j,m} + T^exec_{j,m};

Step A4: computing the system service delay, expressed as the average service delay of all base stations:

T_sys = (1/J) Σ_{j=1}^{J} Σ_{s_m ∈ S} x_{j,m} · T_{j,m},

where x_{j,m} denotes the communication connection state, a binary decision variable taking the value 0 or 1: x_{j,m} = 1 when b_j is determined to communicate with s_m, and x_{j,m} = 0 otherwise;
Step A5: calculating the energy consumption E_j of the jth base station b_j, i.e. the energy dissipated by b_j in transmitting the computation tasks of the end users it is responsible for, where p_j is the power consumption constant of base station b_j;
Step A6: computing the energy consumption E_m of the mth edge/cloud server s_m, where P^s_m is a static power constant, α_m is a power consumption parameter of edge/cloud server s_m (a constant associated with the processor architecture), and v_m is the processor supply voltage of s_m;
Step A7: combining step A5 and step A6, computing the system energy consumption as the sum of the energy consumed by all base stations and all edge/cloud servers:

E_sys = Σ_{j=1}^{J} E_j + Σ_{s_m ∈ S} E_m;

Step A8: calculating the transmission reliability R^trans_{j,m} from base station b_j to edge/cloud server s_m, where ε_{j,m} denotes the constant bit error rate of the link from b_j to s_m;
Step A9: computing the average fault occurrence rate λ^fault_m of edge/cloud server s_m, where C_m and d_m are respectively the first and second fault occurrence parameters of the mth edge/cloud server s_m; C_m and d_m are both constants which, in practical use, depend on the hardware architecture of the actual device.
As a preferred technical solution, the step 2 specifically comprises the following steps:
Step B1: establishing, based on the service delay objective of edge-cloud computing, an undirected graph G for describing the topological relationship between the base stations and the edge/cloud servers, and computing the system service delay T_sys;
Step B2: ensuring, based on the first scheduling constraint, that each base station b_j is mapped to exactly one edge/cloud server, i.e.

Σ_{s_m ∈ S} x_{j,m} = 1 for every base station b_j;

Step B3: ensuring, based on the second scheduling constraint, that each edge/cloud server satisfies its maximum processing capacity constraint;
Step B4: ensuring, based on the third scheduling constraint, that the energy upper-limit constraint is satisfied, i.e. that the system energy consumption E_sys does not exceed the given energy threshold;
Step B5: ensuring, based on the fourth scheduling constraint, that the backup number constraint n_{j,m} ≤ n^max is satisfied, where n^max is the maximum number of backups specified by the system;
Step B6: ensuring, based on the fifth scheduling constraint, that the system reliability constraint R_sys ≥ R^th is satisfied, where R^th denotes the preset system reliability threshold.
As a preferred technical solution, the specific steps of step 3 include:
Step C1: in the worst case, base station b_j completes n^max backups in total; let λ^worst_j be the corresponding worst-case average arrival rate, and let λ^best_j be the average arrival rate in the best case in which no error occurs while the base station forwards computation tasks;
Step C2: introducing an error adaptation factor φ, which represents the uncertainty of the average arrival rate caused by bit errors and soft errors, and computing the average fault-tolerant arrival rate λ^ft_{j,m} from base station b_j to edge/cloud server s_m;
Step C3: obtaining the number of backups from the average fault-tolerant arrival rate λ^ft_{j,m} based on the backup number calculation formula

n_{j,m} = ⌈ λ^ft_{j,m} / λ^best_j ⌉,

where n_{j,m} denotes the number of backups between the jth base station b_j and the mth edge/cloud server s_m.
As a preferred technical solution, the step 4 specifically comprises the following steps:
Step D1: constructing the undirected graph G = (V, E), where V and E respectively represent the position information and the link communication information and serve as the input of G; the undirected graph G describes the topological relationship between the base stations and the edge/cloud servers;
Step D2: assigning each binary decision variable x_{j,m} the value 0, i.e. x_{j,m} ← 0;
Step D3: assigning Φ_start the value 0 and Φ_end the value 1, i.e. Φ_start ← 0, Φ_end ← 1;
Step D4: judging whether Φ_start ≤ Φ_end holds; if yes, going to step D5; otherwise, going to step D12;
Step D5: assigning Φ_start + (Φ_end − Φ_start)/2 to Φ, i.e. Φ ← Φ_start + (Φ_end − Φ_start)/2;
Step D6: for each base station b_j and each edge/cloud server s_m, calculating the number of backups n_{j,m} using step C3;
Step D7: solving, with an ILP solver, the ILP program subject to the 5 scheduling constraints, the 5 scheduling constraints being those of step 2;
Step D8: obtaining the current system reliability R_sys by Monte Carlo simulation;
Step D9: judging whether R_sys ≥ R^th holds; if yes, assigning Φ + 1 to Φ_start, i.e. Φ_start ← Φ + 1, and going to step D10; otherwise, assigning Φ − 1 to Φ_end, i.e. Φ_end ← Φ − 1, and going to step D10;
Step D10: outputting the optimal computation offloading mapping and the number of task backups of each base station in the static optimization stage.
As a preferred technical solution, the step D8 specifically includes:
Step D8-1: calculating, using an exponential distribution, the system execution reliability R^exec_{j,m} of base station b_j;
Step D8-2: calculating, based on the system execution reliability, the system backup reliability R^bak_{j,m} of base station b_j when n_{j,m} backups of base station b_j are retained;
Step D8-3: obtaining the characterization of the system reliability from the system backup reliabilities of all base stations;
the system reliability is characterized as the product of the system backup reliabilities of all base stations that establish a connection with an edge/cloud server in the system.
as a preferred technical solution, the step 5 specifically comprises the following steps:
Step E1: assigning j the value 1, i.e. j ← 1;
Step E2: judging whether j ≤ J holds; if yes, performing step E3, otherwise exiting;
Step E3: assigning m the value 0, i.e. m ← 0;
Step E4: judging whether m ≤ κ holds; if yes, performing step E5, otherwise performing step E13;
Step E5: judging whether x_{j,m} = 1 holds; if yes, performing step E6, otherwise performing step E12;
Step E6: assigning i the value 1, i.e. i ← 1;
Step E7: judging whether i ≤ n_{j,m} holds; if yes, performing step E8, otherwise performing step E12;
Step E8: judging whether the current backup is successfully transmitted and executed; if yes, performing step E9, otherwise performing step E11;
Step E9: taking this backup as the backup executed among all task backups of base station b_j, the transmission and execution of the remaining task backups being cancelled;
Step E10: updating the number of task backups n_{j,m}, and performing step E12;
Step E11: updating i, i.e. i ← i + 1;
Step E12: updating m, i.e. m ← m + 1;
Step E13: updating j, i.e. j ← j + 1, and returning to step E2.
In order to achieve the second object, the invention adopts the following technical scheme:
the cyber-physical system is a CPS formed by typical edge/cloud computing coupling and comprises a plurality of terminal users, a plurality of base stations, a plurality of heterogeneous edge servers and a cloud server, wherein the plurality of heterogeneous edge servers and the cloud server form the edge/cloud server, the plurality of terminal users are in wireless connection with adjacent base stations, and the edge/cloud server is in wireless connection with the adjacent base stations.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) For the technical problem of minimizing the service delay of edge-cloud-computing-embedded CPS applications, and in particular in the service delay optimization process, optimization is carried out in a static stage and a dynamic stage respectively, taking into account the energy budget and reliability requirements of the CPS application; in the static stage, Monte Carlo simulation and integer linear programming (ILP) are used to find the optimal computation offloading mapping and number of task backups, and in the dynamic stage a backup-adaptive mechanism is adopted to avoid the transmission and execution of redundant tasks at run time; the invention effectively reduces the system service delay by combining static and dynamic optimization.
Drawings
FIG. 1 is a schematic diagram of a prior-art edge/cloud-computing-assisted cyber-physical system application;
FIG. 2 is a flowchart of the steps of the reliable edge-cloud computing service delay optimization method for a cyber-physical system according to embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of the location deployment of Shanghai Telecom base stations in embodiment 2 of the present invention;
FIG. 4 is a schematic diagram comparing the effect on system service delay of the reliable edge-cloud computing service delay optimization method for a cyber-physical system with GAES and RTWI, for a fixed edge server location and different base station workloads, in embodiment 2 of the present invention;
FIG. 5 is a schematic diagram comparing the effect on system service delay of the reliable edge-cloud computing service delay optimization method for a cyber-physical system with GAES and RTWI, for a fixed base station workload and different edge server locations, in embodiment 2 of the present invention;
FIG. 6 is a schematic diagram comparing the reliable edge-cloud computing service delay optimization method for a cyber-physical system with the GAES and RTWI benchmark solutions in terms of task scheduling feasibility in embodiment 2 of the present invention.
Detailed Description
In the description of the present disclosure, it should be noted that the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Also, the use of the terms "a," "an," or "the" and similar referents do not denote a limitation of quantity, but rather denote the presence of at least one. The word "comprising" or "comprises", and the like, means that the element or item appearing before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
In the description of the present disclosure, it is to be noted that the terms "mounted," "connected," and "connected" are to be construed broadly unless otherwise explicitly stated or limited. For example, the connection can be fixed, detachable or integrated; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present disclosure can be understood in specific instances by those of ordinary skill in the art. In addition, technical features involved in different embodiments of the present disclosure described below may be combined with each other as long as they do not conflict with each other.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Embodiments
Embodiment 1
As shown in FIG. 2, this embodiment provides a reliable edge-cloud computing service delay optimization method for a cyber-physical system, which comprises the following steps:
Step 1: modeling the service delay of a base station based on the computation offloading transmission delay and execution delay, and setting a service delay objective for edge-cloud computing according to the energy budget and reliability characteristics, wherein the service delay objective comprises a static-stage objective and a dynamic-stage objective: the static-stage objective is to search for the optimal computation offloading mapping and number of task backups, and the dynamic-stage objective is to avoid the transmission and execution of redundant tasks at run time;
Step 2: calculating the system service delay, and converting the service delay objective of edge-cloud computing into 5 scheduling constraints, namely a first, a second, a third, a fourth and a fifth scheduling constraint;
the first scheduling constraint is that each base station is only allowed to forward its computation tasks to one edge/cloud server, the second scheduling constraint is that the workload of any edge/cloud server cannot exceed its maximum processing capacity, the third scheduling constraint is that the energy consumed by the whole system cannot exceed a given energy threshold, the fourth scheduling constraint is that the number of task backups of each base station cannot exceed the maximum number of backups specified by the system, and the fifth scheduling constraint is that the reliability of the fault-tolerant system is higher than a preset reliability threshold;
Step 3: obtaining the number of backups from a backup number calculation formula by using an error adaptation factor, wherein the error adaptation factor represents the uncertainty of the average arrival rate caused by bit errors and soft errors; letting n_{j,m} denote the number of backups between the jth base station b_j and the mth edge/cloud server s_m, λ^ft_{j,m} the average fault-tolerant arrival rate from base station b_j to edge/cloud server s_m, and λ^best_j the average arrival rate in the best case in which no error occurs while the base station forwards computation tasks, the backup number calculation formula is

n_{j,m} = ⌈ λ^ft_{j,m} / λ^best_j ⌉,

i.e. the rounded value of λ^ft_{j,m} divided by λ^best_j;
Step 4: in the static stage, minimizing the system service delay by determining the optimal computation offloading mapping and number of task backups;
searching for the optimal computation offloading mapping by repeating the Monte Carlo simulation process with an error adaptation factor and the ILP algorithm, obtaining the number of backups for each base station and each edge/cloud server according to the backup number calculation formula, obtaining the system reliability by Monte Carlo simulation, and screening and outputting, according to the 5 scheduling constraints, the optimal computation offloading mapping and the number of task backups of each base station in the static optimization stage;
Step 5: in the dynamic stage, based on an online backup-adaptive dynamic strategy, determining that a backup task has been successfully transmitted and executed once; traversing all base stations, finding, for each base station, all edge/cloud servers with which it has a communication connection, obtaining the updated number of task backups of each base station after the traversal, and executing the backup among all task backups of each base station. In practical application, once the first successful backup is detected, the transmission and execution of the other task backups are cancelled.
In this embodiment, the step 1 specifically includes:
Step A1: modeling the computation offloading transmission delay of the base station; since the computation task requests sent by multiple end users to a base station follow a Poisson distribution, the communication delay between the jth base station b_j and the mth edge/cloud server s_m is computed as

T^comm_{j,m} = D_{j,m}/ξ + W_j/C_{j,m},

where D_{j,m} is the distance between b_j and s_m, ξ is the electromagnetic wave propagation speed, W_j is the total amount of task data of the end users at the jth base station b_j, and C_{j,m} is the communication bandwidth between b_j and s_m; let S = {s_0, s_1, ..., s_κ} be the set formed by the κ edge servers together with the cloud server;
Step A2: modeling the computation offloading execution delay of the base station; the execution delay at the edge/cloud server connected with the base station is quantified based on an M/G/1 queue model, in which the execution time of a task on an edge/cloud server follows a general probability distribution function with mean μ_m and standard deviation δ_m;
the execution delay T^exec_{j,m} between b_j and s_m is computed from this queue model, where the computation tasks sent by multiple end users to base station b_j follow a Poisson distribution, λ_j is the average arrival rate of computation tasks at the jth base station b_j, f_m denotes the computation speed supported by s_m, μ_m and δ_m respectively denote the mean and standard deviation of the probability distribution function obeyed by the task execution time on edge/cloud server s_m, and Φ_m is the sum of the task arrival rates of the base stations other than b_j that are mapped to s_m;
Step A3: computing the total service delay when base station b_j establishes a connection with edge/cloud server s_m, i.e. the sum of the communication delay and the execution delay:

T_{j,m} = T^comm_{j,m} + T^exec_{j,m};

Step A4: computing the system service delay, expressed as the average service delay of all base stations:

T_sys = (1/J) Σ_{j=1}^{J} Σ_{s_m ∈ S} x_{j,m} · T_{j,m},

where x_{j,m} denotes the communication connection state, a binary decision variable taking the value 0 or 1: x_{j,m} = 1 when b_j is determined to communicate with s_m, and x_{j,m} = 0 otherwise (a code sketch of this delay model is given after step A9);
Step A5: calculating the energy consumption E_j of the jth base station b_j, i.e. the energy dissipated by b_j in transmitting the computation tasks of the end users it is responsible for, where p_j is the power consumption constant of base station b_j;
Step A6: computing the energy consumption E_m of the mth edge/cloud server s_m, where P^s_m is a static power constant, α_m is a power consumption parameter of edge/cloud server s_m (a constant associated with the processor architecture), and v_m is the processor supply voltage of s_m;
Step A7: combining step A5 and step A6, computing the system energy consumption as the sum of the energy consumed by all base stations and all edge/cloud servers:

E_sys = Σ_{j=1}^{J} E_j + Σ_{s_m ∈ S} E_m;

Step A8: calculating the transmission reliability R^trans_{j,m} from base station b_j to edge/cloud server s_m, where ε_{j,m} denotes the constant bit error rate of the link from b_j to s_m;
Step A9: computing the average fault occurrence rate λ^fault_m of edge/cloud server s_m, where C_m and d_m are respectively the first and second fault occurrence parameters of the mth edge/cloud server s_m; C_m and d_m are both constants which, in practical use, depend on the hardware architecture of the actual device.
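For illustration, the following minimal Python sketch renders the delay model of steps A1 to A4. The patent gives its exact expressions only in its figures, so the closed forms used here (propagation plus transmission delay for step A1, and the standard Pollaczek-Khinchine M/G/1 waiting time for step A2), as well as the function and variable names, are assumptions made for illustration rather than the patent's formulas.

```python
import math

def comm_delay(d_jm, xi, w_j, c_jm):
    """Step A1 (assumed form): propagation delay plus transmission delay."""
    return d_jm / xi + w_j / c_jm

def exec_delay(lambda_j, phi_m, mu_m, delta_m):
    """Step A2 (assumed form): M/G/1 response time at server s_m.

    lambda_j + phi_m is the total task arrival rate at the server; mu_m and
    delta_m are the mean and standard deviation of the task execution time.
    """
    lam = lambda_j + phi_m
    rho = lam * mu_m                      # server utilization, must be < 1
    if rho >= 1.0:
        return math.inf                   # queue is unstable
    waiting = lam * (mu_m ** 2 + delta_m ** 2) / (2.0 * (1.0 - rho))
    return mu_m + waiting                 # mean service time + mean waiting time

def total_delay(t_comm, t_exec):
    """Step A3: total service delay of a (base station, server) pair."""
    return t_comm + t_exec

def system_delay(x, T):
    """Step A4: average service delay over all J base stations.

    x[j][m] is the binary offloading decision and T[j][m] the total delay."""
    J = len(x)
    return sum(x[j][m] * T[j][m]
               for j in range(J) for m in range(len(x[j]))) / J
```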
In this embodiment, step 2 specifically includes the following steps:
Step B1: establishing, based on the service delay objective of edge-cloud computing, an undirected graph G for describing the topological relationship between the base stations and the edge/cloud servers, and computing the system service delay T_sys;
Step B2: ensuring, based on the first scheduling constraint, that each base station b_j is mapped to exactly one edge/cloud server, i.e.

Σ_{s_m ∈ S} x_{j,m} = 1 for every base station b_j;

Step B3: ensuring, based on the second scheduling constraint, that each edge/cloud server satisfies its maximum processing capacity constraint;
Step B4: ensuring, based on the third scheduling constraint, that the energy upper-limit constraint is satisfied, i.e. that the system energy consumption E_sys does not exceed the given energy threshold;
Step B5: ensuring, based on the fourth scheduling constraint, that the backup number constraint n_{j,m} ≤ n^max is satisfied, where n^max is the maximum number of backups specified by the system;
Step B6: ensuring, based on the fifth scheduling constraint, that the system reliability constraint R_sys ≥ R^th is satisfied, where R^th denotes the preset system reliability threshold (a code sketch of these five constraints follows).
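As a reference for steps B2 to B6, the sketch below checks the five scheduling constraints for a candidate solution. It is a simplified illustration in which the server workload is taken as the sum of the offloaded arrival rates, and the energy and reliability values are assumed to have been computed elsewhere; the variable names are not taken from the patent.

```python
def feasible(x, n, lam, capacity, E_sys, E_budget, n_max, R_sys, R_th):
    """Return True only if all five scheduling constraints hold."""
    J, M = len(x), len(x[0])
    # B2: each base station is mapped to exactly one edge/cloud server.
    if any(sum(x[j][m] for m in range(M)) != 1 for j in range(J)):
        return False
    # B3: the workload offloaded to a server must not exceed its capacity
    #     (workload approximated here by the sum of offloaded arrival rates).
    if any(sum(x[j][m] * lam[j] for j in range(J)) > capacity[m]
           for m in range(M)):
        return False
    # B4: total system energy must stay within the energy budget.
    if E_sys > E_budget:
        return False
    # B5: per-link backup counts must not exceed the system maximum.
    if any(n[j][m] > n_max for j in range(J) for m in range(M)):
        return False
    # B6: fault-tolerant system reliability must reach the threshold.
    return R_sys >= R_th
```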
In this embodiment, the step 3 specifically includes:
Step C1: in the worst case, base station b_j completes n^max backups in total; let λ^worst_j be the corresponding worst-case average arrival rate, and let λ^best_j be the average arrival rate in the best case in which no error occurs while the base station forwards computation tasks; in practical application, λ^best_j and λ^worst_j are constants for each base station;
Step C2: introducing an error adaptation factor φ, which represents the uncertainty of the average arrival rate caused by bit errors and soft errors, and computing the average fault-tolerant arrival rate λ^ft_{j,m} from base station b_j to edge/cloud server s_m;
Step C3: obtaining the number of backups from the average fault-tolerant arrival rate λ^ft_{j,m} based on the backup number calculation formula

n_{j,m} = ⌈ λ^ft_{j,m} / λ^best_j ⌉,

where n_{j,m} denotes the number of backups between the jth base station b_j and the mth edge/cloud server s_m (see the code sketch below).
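The following small sketch illustrates steps C1 to C3. The linear interpolation between the best-case and worst-case arrival rates via the error adaptation factor φ, and the ceiling used for the rounding, are assumed forms chosen for illustration; the patent states only that the factor lies in [0, 1] and captures the uncertainty of the arrival rate.

```python
import math

def fault_tolerant_rate(lam_best, lam_worst, phi):
    """Step C2 (assumed form): average fault-tolerant arrival rate,
    interpolated between the best and worst case for phi in [0, 1]."""
    return lam_best + phi * (lam_worst - lam_best)

def backup_count(lam_ft, lam_best):
    """Step C3 (assumed rounding): n_{j,m} = ceil(lam_ft / lam_best)."""
    return math.ceil(lam_ft / lam_best)
```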
In this embodiment, the step 4 specifically includes:
Step D1: constructing the undirected graph G = (V, E), where V and E respectively represent the position information and the link communication information and serve as the input of G; the undirected graph G describes the topological relationship between the base stations and the edge/cloud servers;
Step D2: assigning each binary decision variable x_{j,m} the value 0, i.e. x_{j,m} ← 0;
Step D3: assigning Φ_start the value 0 and Φ_end the value 1, i.e. Φ_start ← 0, Φ_end ← 1;
Step D4: judging whether Φ_start ≤ Φ_end holds; if yes, going to step D5; otherwise, going to step D12;
Step D5: assigning Φ_start + (Φ_end − Φ_start)/2 to Φ, i.e. Φ ← Φ_start + (Φ_end − Φ_start)/2;
Step D6: for each base station b_j and each edge/cloud server s_m, calculating the number of backups n_{j,m} using step C3;
Step D7: solving, with an ILP solver, the ILP program subject to the 5 scheduling constraints, the 5 scheduling constraints being those of step 2;
Step D8: obtaining the current system reliability R_sys by Monte Carlo simulation;
In this embodiment, the step D8 includes the following specific steps:
Step D8-1: calculating, using an exponential distribution, the system execution reliability R^exec_{j,m} of base station b_j;
Step D8-2: calculating, based on the system execution reliability, the system backup reliability R^bak_{j,m} of base station b_j when n_{j,m} backups of base station b_j are retained;
Step D8-3: obtaining the characterization of the system reliability from the system backup reliabilities of all base stations, the system reliability being characterized as the product of the system backup reliabilities of all base stations that establish a connection with an edge/cloud server in the system;
Step D9: judging whether R_sys ≥ R^th holds; if yes, assigning Φ + 1 to Φ_start, i.e. Φ_start ← Φ + 1, and going to step D10; otherwise, assigning Φ − 1 to Φ_end, i.e. Φ_end ← Φ − 1, and going to step D10;
Step D10: outputting the optimal computation offloading mapping and the number of task backups of each base station in the static optimization stage (the number of task backups being calculated through step C3), and then exiting (a code sketch of this static-stage procedure follows).
In this embodiment, the step 5 specifically includes the following steps:
Step E1: assigning j the value 1, i.e. j ← 1;
Step E2: judging whether j ≤ J holds; if yes, performing step E3, otherwise exiting;
Step E3: assigning m the value 0, i.e. m ← 0;
Step E4: judging whether m ≤ κ holds; if yes, performing step E5, otherwise performing step E13;
Step E5: judging whether x_{j,m} = 1 holds; if yes, performing step E6, otherwise performing step E12;
Step E6: assigning i the value 1, i.e. i ← 1;
Step E7: judging whether i ≤ n_{j,m} holds; if yes, performing step E8, otherwise performing step E12;
Step E8: judging whether the current backup is successfully transmitted and executed; if yes, performing step E9, otherwise performing step E11;
Step E9: taking this backup as the backup executed among all task backups of base station b_j, the transmission and execution of the remaining task backups being cancelled;
Step E10: updating the number of task backups n_{j,m}, and performing step E12;
Step E11: updating i, i.e. i ← i + 1;
Step E12: updating m, i.e. m ← m + 1;
Step E13: updating j, i.e. j ← j + 1, and returning to step E2 (a code sketch of this dynamic-stage procedure follows).
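A compact Python rendering of the dynamic stage of steps E1 to E13 is given below: for each base station and each connected edge/cloud server, backups are attempted in order and, as soon as one backup is transmitted and executed successfully (the acceptance test passes), the remaining backups are cancelled. The acceptance-test callback and the update of n[j][m] to the index of the first successful backup are illustrative assumptions.

```python
def dynamic_stage(x, n, try_backup):
    """x[j][m]: offloading decision; n[j][m]: planned backup counts;
    try_backup(j, m, i): True if the i-th backup succeeds."""
    for j in range(len(x)):                 # steps E1-E2: traverse base stations
        for m in range(len(x[j])):          # steps E3-E4: traverse servers
            if x[j][m] != 1:                # step E5: only connected pairs
                continue
            for i in range(1, n[j][m] + 1): # steps E6-E7: traverse backups
                if try_backup(j, m, i):     # step E8: acceptance test
                    n[j][m] = i             # steps E9-E10: cancel the rest
                    break                   # remaining backups are not attempted
    return n
```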
Embodiment 2
This embodiment takes a cyber-physical system as an example to describe how the reliable edge-cloud computing service delay optimization method for a cyber-physical system is applied. The cyber-physical system is a typical edge/cloud-computing-coupled CPS and specifically comprises a plurality of end users, a plurality of base stations, a plurality of heterogeneous edge servers and a cloud server, wherein the heterogeneous edge servers and the cloud server form the edge/cloud servers, the end users are wirelessly connected with the adjacent base stations, and the edge/cloud servers are wirelessly connected with the adjacent base stations.
In this embodiment, the base stations are denoted as B = {b_1, ..., b_J}, where J is the number of base stations; the heterogeneous edge servers form a set of κ servers, where κ is the number of heterogeneous edge servers; and the edge/cloud servers are denoted as S = {s_0, s_1, ..., s_κ}, the set formed by the κ edge servers together with the cloud server. In practical applications, each heterogeneous edge server exhibits server heterogeneity, which is mainly expressed in computing power, i.e. any two different edge servers have different computing power.
Base station b_j is the jth base station. Each base station offloads the computation tasks of the end users connected within its service range to a processing server, where the processing server is an edge server or the cloud server, and the selected edge server or cloud server is entrusted with processing the computation tasks (illustrative data structures for this system model are sketched below).
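For illustration, the system model described above can be represented with data structures such as the following; the field names are assumptions chosen to mirror the quantities used in this embodiment and are not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Server:                 # an element s_m of the set S
    capacity: float           # maximum processing capacity (constraint B3)
    mu: float                 # mean task execution time
    delta: float              # standard deviation of task execution time

@dataclass
class BaseStation:            # a base station b_j
    lam: float                # average task arrival rate from its end users
    data_volume: float        # total task data W_j (Mb)
    links: List[int] = field(default_factory=list)  # reachable servers
```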
This embodiment computes the offloading transmission delay and the execution delay separately to obtain the service delay of base station b_j.
Computation offloading transmission delay model:
the computation task requests sent by multiple end users to base station b_j follow a Poisson distribution; let λ_j be the average arrival rate of computation tasks at the jth base station b_j. Specifically, following step A1, the communication delay T^comm_{j,m} between the jth base station b_j and the mth server s_m is computed.
Computation offloading execution delay model:
an M/G/1 queue model is selected to quantify the execution at the edge/cloud server s_m connected to base station b_j. In this model, the execution time of a task on edge/cloud server s_m is not restricted to any given probability distribution, i.e. it is allowed to obey a general probability distribution function with mean μ_m and standard deviation δ_m. It should be noted that this general probability distribution function must be given in advance, before the system starts to operate; once the system is in operation, tuning of the probability distribution function is not allowed. Specifically, as shown in step A2, the execution delay T^exec_{j,m} between b_j and s_m is computed.
Step A3 is then executed: combining the results of step A1 and step A2, the total service delay T_{j,m} when base station b_j establishes a connection with edge/cloud server s_m is computed. From step A4, the system service delay T_sys is calculated.
The overall energy consumption of the edge-cloud-computing-coupled CPS mainly comprises two parts: the energy consumed by the base stations to offload the computation tasks of the end users to the edge/cloud servers, and the energy consumed by the edge/cloud servers to process the offloaded computation tasks. The transmission power of base station b_j is a constant p_j; therefore, in step A5, the energy E_j dissipated by base station b_j in transmitting the computation tasks of the end users it is responsible for is calculated.
In practical applications, the power consumption of an edge/cloud server depends to a large extent on several functional components such as the processor, disks, memory, fans and the cooling system. The processor accounts for a large part of the total power consumption of the edge/cloud server; therefore, the power consumption of the processor is used as the power consumption of the edge/cloud server when modeling it. According to the energy model and step A6, the energy E_m consumed by edge/cloud server s_m is estimated. From step A7, the system energy consumption, denoted E_sys, is calculated (a code sketch of this energy model is given below).
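The following sketch illustrates the energy model of steps A5 to A7 as used in this embodiment. The exact expressions appear only in the patent figures, so the transmission-energy and processor-power forms below (power constant times transmission time, and a static term plus an α·v²-style dynamic term) are assumptions made for illustration.

```python
def base_station_energy(p_j, w_j, c_jm):
    """Energy of b_j: transmit power constant times transmission time (assumed)."""
    return p_j * (w_j / c_jm)

def server_energy(p_static, alpha_m, v_m, busy_time):
    """Energy of s_m: static power plus voltage-dependent dynamic power (assumed)."""
    return (p_static + alpha_m * v_m ** 2) * busy_time

def system_energy(E_bs, E_srv):
    """Step A7: total system energy consumption."""
    return sum(E_bs) + sum(E_srv)
```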
The reliability of the base station tasks is defined as the probability that these tasks are first successfully transmitted to the target edge/cloud server without bit errors and then successfully executed by the target edge/cloud server without soft errors. In digital transmission, the bit error rate is mainly caused by environmental factors, which derive from noise, interference, distortion and bit synchronization errors on the link. Following step A8, the transmission reliability R^trans_{j,m} from b_j to edge/cloud server s_m is calculated.
Unlike bit errors, soft errors are mainly caused by transient faults due to cosmic radiation or electromagnetic interference. According to step A9, the average fault occurrence rate λ^fault_m of edge/cloud server s_m is computed. Using an exponential-distribution assumption, the system execution reliability R^exec_{j,m} is calculated according to step D8-1.
In order to meet the reliability requirement of the system, this embodiment uses a backup technique to tolerate bit errors and soft errors simultaneously. In addition, in order to check whether a task has been processed successfully, an acceptance test is performed after the current backup is executed on any edge/cloud server. If the acceptance test detects no error, the output result of the current backup is accepted; otherwise it is discarded directly. When n_{j,m} backups of base station b_j are retained, the backup reliability R^bak_{j,m} is calculated according to step D8-2. From step D8-3, the system reliability R_sys is characterized (a code sketch of this reliability model is given below).
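The reliability model of steps A8, A9 and D8-1 to D8-3 can be sketched as follows; the specific forms ((1 − ε)^{W_j} for transmission, an exponential law for error-free execution, and 1 − (1 − R)^n for n independent backups) are standard fault-tolerance expressions assumed here for illustration, since the patent's exact formulas appear only in its figures.

```python
import math

def transmission_reliability(ber, w_j):
    """Step A8 (assumed form): probability that the W_j bits arrive without a bit error."""
    return (1.0 - ber) ** w_j

def execution_reliability(fault_rate, exec_time):
    """Step D8-1 (assumed form): exponential soft-error-free execution probability."""
    return math.exp(-fault_rate * exec_time)

def backup_reliability(r_single, n_backups):
    """Step D8-2 (assumed form): at least one of n independent backups succeeds."""
    return 1.0 - (1.0 - r_single) ** n_backups

def system_reliability(per_link_reliability):
    """Step D8-3: product over all connected (base station, server) pairs."""
    r = 1.0
    for r_jm in per_link_reliability:
        r *= r_jm
    return r
```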
In order to reduce the system service delay, this embodiment minimizes the system service delay by determining, under the given scheduling constraints, the optimal computation offloading and task backup strategy for each base station.
The service delay optimization problem is defined as: for the system described by the undirected graph G, determine
i) the computation offloading mapping from the base stations to the edge/cloud servers; and
ii) the number of task backups of each base station,
so as to minimize the system service delay.
In order to ensure feasibility of system scheduling, five scheduling constraints, namely a first scheduling constraint to a fifth scheduling constraint, need to be satisfied.
The first scheduling constraint is that each base station is only allowed to forward its computation tasks to one edge/cloud server, the second scheduling constraint is that the workload of any edge/cloud server cannot exceed its maximum processing capacity, the third scheduling constraint is that the energy consumed by the whole system cannot exceed a given energy threshold, the fourth scheduling constraint is that the number of task backups of each base station cannot exceed the maximum number of backups specified by the system, and the fifth scheduling constraint is that the reliability of the fault-tolerant system is higher than a preset reliability threshold. The mathematical expression of the optimization problem is determined using step 2.
To solve the problem defined above, a two-stage approach consisting of static and dynamic service delay optimization is employed:
in the static optimization stage, Monte Carlo simulation and LLP technology are adopted to perform static calculation on the loading mapping and the task backup quantity of each base station. To describe the random nature of error occurrence, the definition of the error adaptation factor is first introduced. And solving the optimization problem determined under the energy consumption constraint by utilizing an ILP algorithm based on the error adaptive factor, and judging whether the system reliability constraint is met or not through Monte Carlo simulation. Through a plurality of attempts of adjusting the error adaptive factor, an optimal solution which meets both the energy consumption and the system reliability constraint is obtained.
Further, in order to reduce the system runtime service delay caused by redundant backup transmission and execution operations generated in the static optimization stage, the present embodiment utilizes a backup adaptive dynamic optimization mechanism to reduce the enhanced system runtime service delay: in the dynamic optimization phase, as soon as the first successful backup is detected, the transmission is cancelled and other unnecessary task backups are performed immediately.
Through the two stages, the system can achieve the aim of minimizing the service delay of the system.
And (3) a static stage:
As previously described, backup techniques are employed to tolerate bit errors and soft errors. For a base station b_j, in the best case where no error occurs during computation forwarding and processing, redundant backups are obviously not needed to provide fault tolerance; let λ^best_j be the average arrival rate of base station b_j in this case. On the contrary, in the worst case, base station b_j has to complete n^max backups in total; let λ^worst_j be the worst-case average arrival rate of base station b_j, which is given by step C1. Clearly, λ^best_j and λ^worst_j are constants for each base station b_j. However, due to the randomness of errors, the actual average arrival rate of base station b_j is a random variable. Thus, an error adaptation factor φ ∈ [0,1] is defined to describe the uncertainty of the average arrival rate caused by the occurrence of bit errors and soft errors. Using the error adaptation factor, the average fault-tolerant arrival rate λ^ft_{j,m} from base station b_j to edge/cloud server s_m is calculated by step C2, and the corresponding backup quantity n_{j,m} is calculated by step C3. The goal is to minimize the system service delay by determining the optimal computation offload mapping and number of task backups.
To determine the optimal computation offload mapping, the present embodiment adopts an effective computation offload mapping method based on Monte Carlo simulation and the ILP algorithm. Specifically, the ILP algorithm is first adopted to obtain the optimal offload mapping variables x_{j,m} of the individual base stations under the current error adaptation factor φ, where the system service delay computed in step B1 is set as the linear objective and the first to fourth scheduling constraint conditions in steps B2 to B5 are set as linear constraints. Bit errors are then generated for each communication link and soft errors are generated for each edge/cloud server according to the probability distributions of error occurrence. Next, the system reliability corresponding to the current bit errors and soft errors is calculated using step D8-3. These two steps produce a single Monte Carlo sample, and sufficient samples can be obtained by repeating the process. From the ratio of the feasible samples satisfying step B6 to the total number of samples, the system reliability over a large number of Monte Carlo samples can be safely estimated. If the estimated system reliability is not less than the predefined reliability threshold, the computation offload mapping variables x_{j,m} are output; otherwise, the current value of the error adaptation factor is adjusted and the ILP-plus-Monte-Carlo procedure is repeated until the first feasible computation offload mapping solution is found, thereby obtaining the optimal computation offload mapping.
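As a rough illustration of the static-stage search just described, the following Python sketch can be considered; it is a simplified sketch, not the patented implementation: a greedy delay-minimizing assignment stands in for the ILP solve of step D7, the fault-tolerant arrival rate of step C2 is assumed to be a linear interpolation between the best-case and worst-case rates controlled by the error adaptation factor, and reliability is estimated with a simplified per-backup success probability p_success instead of the bit-error/soft-error model of steps A8 and A9. All function names and numeric values are illustrative assumptions.

import math
import random

def backups_needed(lam_best, lam_worst, phi, n_max):
    # Assumed form of step C2: interpolate between best- and worst-case rates.
    lam_ft = lam_best + phi * (lam_worst - lam_best)
    # Step C3: ceiling of lam_ft / lam_best, capped by the system maximum n_max.
    return min(n_max, max(1, math.ceil(lam_ft / lam_best)))

def plan_offloading(delay, lam_best, lam_worst, phi, n_max):
    # Greedy stand-in for the ILP: map each base station to the server with the
    # smallest service delay, then size its backups under the current factor.
    mapping, backups = [], []
    for j in range(len(delay)):
        m = min(range(len(delay[j])), key=lambda s: delay[j][s])
        mapping.append(m)
        backups.append(backups_needed(lam_best[j], lam_worst[j], phi, n_max))
    return mapping, backups

def estimate_reliability(mapping, backups, p_success, samples=2000):
    # Monte Carlo estimate: a sample is feasible if every base station has at
    # least one successful backup on its chosen server.
    ok = 0
    for _ in range(samples):
        if all(any(random.random() < p_success[m] for _ in range(n))
               for m, n in zip(mapping, backups)):
            ok += 1
    return ok / samples

def static_stage(delay, p_success, lam_best, lam_worst, n_max, r_req):
    # Sweep the error adaptation factor until the reliability target is met.
    for phi in [k / 10 for k in range(11)]:
        mapping, backups = plan_offloading(delay, lam_best, lam_worst, phi, n_max)
        if estimate_reliability(mapping, backups, p_success) >= r_req:
            return phi, mapping, backups
    return None

# Toy instance: 3 base stations, 2 servers.
print(static_stage(delay=[[5, 9], [7, 4], [6, 6]], p_success=[0.95, 0.9],
                   lam_best=[10, 12, 8], lam_worst=[30, 24, 16],
                   n_max=4, r_req=0.95))

In the actual method, the coarse sweep over the factor corresponds to the adjustment loop described above, and the greedy assignment is replaced by the ILP solve with the linear objective and constraints of steps B1 to B5.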
Dynamic stage:
As described above with respect to the reliability model, to meet the reliability requirements of the system, the present embodiment employs the task backup technique to tolerate bit errors and soft errors. The task backup technique has a strong capability to handle various errors, but inevitably increases the system service delay due to redundant backup transmission and execution. For example, even when the first backup of a task has already been processed successfully without bit errors or soft errors, the task backup technique still performs the unnecessary transmission and execution of the remaining backups. Obviously, as long as one backup of a task is successfully transmitted and executed, the correctness of the processing result is ensured. On this basis, the present embodiment adopts the online backup-adaptive dynamic policy of step 5. In this stage, once the first successful backup is detected using an acceptance-test method, the transmission and execution of the other task backups are cancelled, thereby reducing the system service delay.
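A minimal sketch of this backup-adaptive behaviour is given below, assuming a hypothetical acceptance_test with a 0.9 per-backup success probability; the only point it illustrates is that the remaining backups of a task are skipped as soon as one backup is accepted.

import random

def acceptance_test() -> bool:
    # Stand-in for the runtime acceptance test (bit-/soft-error check);
    # the 0.9 success probability is purely illustrative.
    return random.random() < 0.9

def run_task_with_backups(planned_backups: int) -> int:
    # Issue the planned backups one by one; stop as soon as one passes the
    # acceptance test, cancelling the remaining transmissions and executions.
    for attempt in range(1, planned_backups + 1):
        if acceptance_test():
            return attempt              # later backups are cancelled
    return planned_backups              # worst case: every backup was needed

print([run_task_with_backups(4) for _ in range(10)])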
In order to evaluate the effectiveness of the reliable edge-cloud computing service delay optimization method for the cyber-physical system as a reliable edge-cloud computing solution, a large number of experiments are performed on a real-world telecom base station database. Specifically, as shown in fig. 3, fig. 3 shows the location distribution of the 3233 base stations in the database, where the numbers in the red circles indicate the number of base stations deployed in the corresponding area. For each base station, the task arrival rate and the data volume are drawn uniformly from the intervals [4 × 10^6, 6 × 10^8] and [1, 100] Mb, respectively. In addition, a set of heterogeneous edge/cloud servers is constructed based on five real-world commercial servers. The first type of server is from Microsoft Azure China (Shanghai); a server containing 10 processor cores, each operating at 3.6 GHz, is randomly selected from these servers and required to play the role of the cloud server.
The second type of server is built on the HPE ProLiant MicroServer Gen10, each containing four processor cores with an operating frequency of 3.4 GHz per core. The third type is built on the Dell R230, each containing 6 processor cores with an operating frequency of 3.0 GHz per core. The fourth type is built on the Lenovo TS250, each containing two processor cores with an operating frequency of 3.9 GHz per core. The fifth type is built on the Inspur NP3020, each containing four processor cores with an operating frequency of 3.0 GHz per core.
A set of heterogeneous edge servers is constructed from the latter four server types. The number of edge servers of each type is set to 50, so the size of the edge server set is 200. The task execution time on each edge server is assumed to follow a normal distribution, whose mean-variance parameters are set to (20, 5), (14, 8), (35, 15), and (17, 10) for the four types, in this order. The location distribution of the edge servers is randomly generated, with each edge server deployed at a selected base station. The communication capacity of each link between a base station and an edge/cloud server lies in the interval [100, 1000] KB/s. The propagation speed of the electromagnetic wave is 2 × 10^5 km/s.
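For reference, the heterogeneous edge-server pool described above could be generated in simulation roughly as follows; the dictionary layout, key names, and the per-server draw of the link capacity are illustrative choices rather than details specified in the text.

import random

EDGE_TYPES = [
    # (model, cores, GHz per core, (mean, variance) of task execution time)
    ("HPE ProLiant MicroServer Gen10", 4, 3.4, (20, 5)),
    ("Dell R230",                      6, 3.0, (14, 8)),
    ("Lenovo TS250",                   2, 3.9, (35, 15)),
    ("Inspur NP3020",                  4, 3.0, (17, 10)),
]

def build_edge_servers(per_type=50):
    servers = []
    for model, cores, ghz, exec_params in EDGE_TYPES:
        for _ in range(per_type):
            servers.append({
                "model": model,
                "cores": cores,
                "ghz": ghz,
                "exec_mean_var": exec_params,
                # link capacity in KB/s, drawn from the stated interval
                "link_kbps": random.uniform(100, 1000),
            })
    return servers

servers = build_edge_servers()
print(len(servers))   # 200 edge servers in total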
As shown in fig. 4, the system service delays achieved by three solutions are compared under a fixed edge server placement and different base station workloads; the three solutions are the reliable edge-cloud computing service delay optimization method for the cyber-physical system, GAES, and RTWI. Each data point in the figure is the average of 100 simulation experiments. Compared with the baseline solution GAES, the reliable edge-cloud computing service delay optimization method for the cyber-physical system provided by this embodiment shortens the service delay by 18.3%.
The baseline solution GAES is a computation offloading mechanism based on an enhanced non-dominated sorting genetic algorithm, which jointly optimizes energy consumption and service delay; it does not take task reliability constraints into account. In addition, as can be seen from fig. 4, the service delay of the reliable edge-cloud computing service delay optimization method for the cyber-physical system is lower than that of the baseline solution RTWI, with an average gap of 13.2%. The baseline solution RTWI minimizes not only the average response time of all base stations but also the response time of each individual base station; however, it does not take the energy budget and reliability requirement constraints into account.
As shown in fig. 5, the system service delays achieved by the three solutions are compared under a fixed base station workload and different edge server placements. Similar to fig. 4, each data point in the figure is also the average of 100 simulation experiments. As can be seen from the figure, the system service delay of the reliable edge-cloud computing service delay optimization method for the cyber-physical system is 17.4% lower than that of the baseline solution GAES, but 19.1% higher than that of the baseline solution RTWI. This is mainly because the reliable edge-cloud computing service delay optimization method for the cyber-physical system allows the same task to be executed multiple times to provide the required fault tolerance, whereas the baseline solution RTWI ignores the fault tolerance requirement and executes each task only once, even if bit errors or soft errors occur.
As shown in fig. 6, the task scheduling feasibility of the reliable edge-cloud computing service delay optimization method for the cyber-physical system is compared with that of the two baseline solutions. The task scheduling feasibility is defined as the ratio of the number of tested simulations in which the tasks are successfully scheduled under the energy budget and reliability requirement constraints to the total number of tested simulations. In this evaluation, the total number of tested simulations is set to 10000; those skilled in the art can adjust this number according to practical situations, which is not limited herein.
The results of fig. 6 show that the reliable edge-cloud computing service delay optimization method for the cyber-physical system maintains 100% task scheduling feasibility, while the other two baseline solutions cannot guarantee task scheduling feasibility. This is because the method takes the energy budget and reliability requirements into account, while the other two baseline solutions ignore the energy and reliability constraints.
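As a small illustration, the task scheduling feasibility metric reduces to the fraction of feasible simulation runs; the boolean flags below are assumed to come from the individual tested simulations.

def scheduling_feasibility(feasible_flags):
    # Fraction of tested simulations in which a schedule meeting both the
    # energy budget and the reliability requirement was found.
    flags = list(feasible_flags)
    return sum(flags) / len(flags) if flags else 0.0

print(scheduling_feasibility([True] * 9990 + [False] * 10))   # 0.999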
In this embodiment, the reliable edge-cloud computing service delay optimization method for the cyber-physical system solves the problem of minimizing the service delay of edge-cloud computing with a two-stage method that considers the energy budget and the reliability requirement. The goal of the static stage is to find the optimal computation offload mapping and number of task backups, and the goal of the dynamic stage is to avoid the transmission and execution of redundant tasks at runtime. Extensive experimental results show that the method reduces the system service delay by 18.3% while ensuring that the specified energy budget and reliability requirements are met.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (10)

1. A reliable edge-cloud computing service delay optimization method facing a cyber-physical system is characterized by comprising the following steps:
step 1: modeling service delay of a base station based on computation unloading transmission delay and execution delay, and setting a service delay target of edge cloud computing according to energy budget and reliability characteristics, wherein the service delay target of the edge cloud computing comprises a static stage target and a dynamic stage target, the static stage target is used for searching optimal computation unloading mapping and task backup quantity, and the dynamic stage target is used for avoiding transmission and execution of redundant tasks during operation;
step 2: calculating system service delay, and converting a service delay target of edge cloud computing into 5 scheduling constraint conditions, wherein the 5 scheduling constraint conditions comprise a first scheduling constraint condition, a second scheduling constraint condition, a third scheduling constraint condition, a fourth scheduling constraint condition and a fifth scheduling constraint condition;
the first scheduling constraint condition is that each base station is only allowed to forward the computing task to one edge/cloud server, the second scheduling constraint condition is that the workload of any edge/cloud server cannot exceed the maximum processing capacity of the edge/cloud server, the third scheduling constraint condition is that the energy consumed by the whole system cannot exceed a given energy threshold, the fourth scheduling constraint condition is that the task backup quantity of each base station cannot exceed the maximum backup quantity specified by the system, and the fifth scheduling constraint condition is that the reliability of the system with fault tolerance is higher than a preset reliability threshold;
step 3: obtaining the backup number according to a backup number calculation formula by using an error adaptive factor, wherein the error adaptive factor is used for representing the uncertainty of the average arrival rate caused by the occurrence of bit errors and soft errors, and the backup number calculation formula is specifically expressed as:
n_{j,m} = ⌈ λ^ft_{j,m} / λ^best_j ⌉,
wherein n_{j,m} denotes the number of backups between the jth base station b_j and the mth edge/cloud server s_m, λ^ft_{j,m} denotes the average fault-tolerant arrival rate from base station b_j to edge/cloud server s_m, λ^best_j denotes the average arrival rate in the best case where no error occurs in the computation forwarding process of the base station, and ⌈·⌉ denotes rounding up the quotient of λ^ft_{j,m} divided by λ^best_j;
step 4: in the static stage, determining the optimal computation offload mapping and number of task backups through Monte Carlo simulation and integer linear programming to minimize the system service delay;
step 5: in the dynamic stage, based on an online backup self-adaptive dynamic strategy, determining that a backup of a task has been successfully transmitted and executed once, traversing all base stations, respectively finding all edge/cloud servers in communication connection with each base station, obtaining the updated task backup quantity of each base station after the traversal, and executing the backups among all task backups of each base station.
2. The cyber-physical system-oriented reliable edge-cloud computing service delay optimization method according to claim 1, wherein in step 4, the determining an optimal computation offload mapping and task backup number through monte carlo simulation and integer linear programming to minimize the system service delay specifically includes: and searching for an optimal calculation unloading mapping by repeating the Monte Carlo simulation process by using an error adaptive factor and an ILP algorithm, obtaining the backup quantity of each base station and each edge/cloud server according to a backup quantity calculation formula, obtaining the system reliability by using Monte Carlo simulation, and screening and outputting the optimal calculation unloading mapping and the task backup quantity of each base station in a static optimization stage according to 5 scheduling constraint conditions.
3. The cyber-physical system-oriented reliable edge-cloud computing service latency optimization method according to claim 1, wherein in step 5, once the first successful backup is detected, the transmission and execution of other task backups are cancelled.
4. The cyber-physical system-oriented reliable edge-cloud computing service delay optimization method according to claim 1, wherein the specific steps of the step 1 include:
step A1: modeling the computation offloading transmission delay of the base station: given that the computation task services sent to the base station by a plurality of end users follow a Poisson distribution, the communication delay between the jth base station b_j and the mth edge/cloud server s_m is calculated, wherein D_{j,m} is the distance between b_j and s_m, ξ is the electromagnetic wave propagation speed, W_j is the total amount of task data of the end users at the jth base station b_j, and C_{j,m} is the communication bandwidth between b_j and s_m; letting S be the set consisting of the k edge servers and the cloud server, the communication delay is specifically expressed as:
T^comm_{j,m} = D_{j,m}/ξ + W_j/C_{j,m};
step A2: modeling the computation offloading execution delay of the base station: the execution delay of the edge/cloud servers connected to the base stations is quantified based on an M/G/1 queue model, with the execution time of a task on an edge/cloud server following a general probability distribution function with mean μ_m and standard deviation δ_m;
the execution delay T^exe_{j,m} between b_j and s_m is then calculated, wherein the computation tasks sent by the plurality of end users to base station b_j follow a Poisson distribution, λ_j is the average task arrival rate of the jth base station b_j, f_m denotes the computation speed supported by s_m, μ_m and δ_m respectively denote the mean and the standard deviation of the probability distribution function obeyed by the task execution time on edge/cloud server s_m, and Φ_m is the sum of the task arrival rates of the base stations other than b_j that are mapped to s_m;
step A3: calculating the total service delay when base station b_j establishes a connection with edge/cloud server s_m, the total service delay being composed of the transmission delay and the execution delay:
T_{j,m} = T^comm_{j,m} + T^exe_{j,m};
step A4: expressing the system service delay as the average service delay of all base stations:
T^sys = (1/J) Σ_j Σ_{s_m ∈ S} x_{j,m} · T_{j,m},
wherein J is the number of base stations and x_{j,m} denotes the communication connection state identification, a binary decision variable taking the value 0 or 1; when b_j is determined to communicate with s_m, x_{j,m} = 1, otherwise x_{j,m} = 0;
step A5: calculating the energy consumption E_j of the jth base station b_j, wherein E_j denotes the energy dissipated by base station b_j in transmitting the computation tasks of the end users it is responsible for, and p_j is the power consumption constant of base station b_j;
step A6: calculating the energy consumption E_m of the mth edge/cloud server s_m, wherein E_m denotes the energy consumed by edge/cloud server s_m, P^static_m is a static power constant, α_m is the power consumption parameter of edge/cloud server s_m, α_m being a constant related to the processor architecture, and v_m is the processor supply voltage of edge/cloud server s_m;
step A7: combining step A5 and step A6, calculating the system energy consumption:
E^sys = Σ_j E_j + Σ_{s_m ∈ S} E_m;
step A8: calculating the transmission reliability R^tran_{j,m} from base station b_j to edge/cloud server s_m, wherein β_{j,m} represents the constant bit error rate of the link from b_j to edge/cloud server s_m;
step A9: calculating the average fault occurrence rate of edge/cloud server s_m, wherein C_m and c_m are respectively the first and second fault occurrence parameters of the mth edge/cloud server s_m; C_m and c_m are both constants and, in practical use, depend on the hardware architecture of the actual device.
5. The cyber-physical system-oriented reliable edge-cloud computing service delay optimization method according to claim 1, wherein the step 2 specifically includes the steps of:
step B1: based on the service delay target of edge cloud computing, establishing an undirected graph G for describing the topological relationship between the base stations and the edge/cloud servers, and computing the system service delay T^sys;
step B2: based on the first scheduling constraint condition, ensuring that each base station b_j is mapped to exactly one edge/cloud server:
Σ_{s_m ∈ S} x_{j,m} = 1;
step B3: based on the second scheduling constraint condition, ensuring that each edge/cloud server satisfies the maximum processing capacity constraint;
step B4: based on the third scheduling constraint condition, ensuring that the energy upper-limit constraint is satisfied:
E^sys ≤ E^max, wherein E^max is the given energy threshold;
step B5: based on the fourth scheduling constraint condition, ensuring that the backup quantity constraint is satisfied:
n_{j,m} ≤ n^max, wherein n^max is the maximum number of backups specified by the system;
step B6: based on the fifth scheduling constraint condition, ensuring that the system reliability constraint is satisfied:
R^sys ≥ R^req, wherein R^req represents the preset system reliability threshold.
6. The cyber-physical system-oriented reliable edge-cloud computing service delay optimization method according to claim 1, wherein the step 3 includes the following specific steps:
step C1: for the worst case, base station b_j needs to complete n^max backups in total; let λ^best_j be the average arrival rate of the base station in the best case where no error occurs in the computation forwarding process of the base station, and let λ^worst_j be the worst-case average arrival rate;
step C2: introducing an error adaptive factor φ representing the uncertainty of the average arrival rate caused by the occurrence of bit errors and soft errors, and calculating from it the average fault-tolerant arrival rate λ^ft_{j,m} from base station b_j to edge/cloud server s_m;
step C3: according to the average fault-tolerant arrival rate λ^ft_{j,m}, obtaining the number of backups based on the backup number calculation formula, which is expressed as:
n_{j,m} = ⌈ λ^ft_{j,m} / λ^best_j ⌉,
wherein n_{j,m} denotes the number of backups between the jth base station b_j and the mth edge/cloud server s_m.
7. The cyber-physical system-oriented reliable edge-cloud computing service delay optimization method according to claim 1, wherein the specific step of the step 4 comprises:
step D1: constructing the undirected graph G, wherein V and ε respectively represent the position information and the link communication information; the undirected graph G is used for describing the topological relationship between the base stations and the edge/cloud servers, and V and ε are used as the input of the undirected graph G;
step D2: assigning 0 to the computation offload mapping variables, namely x_{j,m} ← 0;
step D3: assigning 0 to Φ_start and 1 to Φ_end, namely Φ_start ← 0, Φ_end ← 1;
step D4: judging whether Φ_start ≤ Φ_end holds;
if yes, going to step D5;
otherwise, going to step D12;
step D5: assigning Φ_start + (Φ_end − Φ_start)/2 to Φ, namely Φ ← Φ_start + (Φ_end − Φ_start)/2;
step D6: for each base station b_j and each edge/cloud server s_m, calculating the number of backups n_{j,m} using step C3;
step D7: processing the ILP program with the 5 scheduling constraint conditions by adopting an ILP solver, the 5 scheduling constraint conditions being the 5 scheduling constraint conditions of step 2;
step D8: obtaining the current system reliability R^sys through Monte Carlo simulation;
step D9: judging whether R^sys ≥ R^req holds;
if so, assigning Φ + 1 to Φ_start, namely Φ_start ← Φ + 1, and going to step D10;
otherwise, assigning Φ − 1 to Φ_end, namely Φ_end ← Φ − 1, and going to step D10;
step D10: outputting the optimal computation offload mapping and the task backup quantity of each base station in the static optimization stage.
8. The cyber-physical system-oriented reliable edge-cloud computing service delay optimization method according to claim 7, wherein the step D8 includes the following steps:
step D8-1: calculating the system execution reliability of base station b_j using an exponential distribution;
step D8-2: calculating the system backup reliability of base station b_j based on the system execution reliability, the system backup reliability being the reliability obtained when the n_{j,m} backups retained for base station b_j are taken into account;
step D8-3: obtaining the system reliability characteristic according to the system backup reliabilities of all base stations;
the system reliability is characterized as the product of the system backup reliabilities of all base stations establishing connections with the edge/cloud servers in the system, specifically expressed as:
R^sys = Π_j R^bak_j,
wherein R^bak_j denotes the system backup reliability of base station b_j.
9. the cyber-physical system-oriented reliable edge-cloud computing service delay optimization method according to claim 1, wherein the specific step of the step 5 comprises:
step E1: assigning 1 to j, namely j ← 1;
step E2: judging whether j ≤ J holds; if yes, executing step E3, otherwise exiting;
step E3: assigning 0 to m, namely m ← 0;
step E4: judging whether m ≤ k holds; if yes, executing step E5, otherwise executing step E13;
step E5: judging whether x_{j,m} = 1 holds; if yes, executing step E6, otherwise executing step E12;
step E6: assigning 1 to i, namely i ← 1;
step E7: judging whether i ≤ n_{j,m} holds; if yes, executing step E8, otherwise executing step E12;
step E8: determining whether the transmission was propagated successfully; if yes, executing step E9, otherwise executing step E11;
step E9: executing this backup among all the task backups of base station b_j;
step E10: updating n_{j,m}, namely n_{j,m} ← i, and executing step E12;
step E11: updating i, namely i ← i + 1;
step E12: updating m, namely m ← m + 1;
step E13: updating j, namely j ← j + 1, and going to step E2.
10. A cyber-physical system applying the reliable edge-cloud computing service delay optimization method for the cyber-physical system according to any one of claims 1 to 9, wherein the cyber-physical system is a CPS formed by coupling typical edge/cloud computing, and the cyber-physical system includes a plurality of end users, a plurality of base stations, a plurality of heterogeneous edge servers, and a cloud server, the plurality of heterogeneous edge servers and the cloud server form the edge/cloud server, the plurality of end users are wirelessly connected with adjacent base stations, and the edge/cloud server is wirelessly connected with adjacent base stations.
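For readers who prefer pseudocode, the following Python sketch gives one possible reading of the backup-count formula of claims 1 and 6 and of the error-adaptive-factor search of claim 7. The exact expression of step C2 is not reproduced in the text above, so a linear interpolation between the best-case and worst-case arrival rates is assumed; the bisection runs over an integer grid of factor values, which is one way to interpret the +1/-1 updates of step D9; and constraints_satisfied is a stub standing in for the ILP solve of step D7 and the Monte Carlo reliability check of step D8. It is an illustration only, not the claimed implementation.

import math
import random

def fault_tolerant_rate(lam_best, lam_worst, phi):
    # Assumed form of step C2: interpolate between best- and worst-case rates.
    return lam_best + phi * (lam_worst - lam_best)

def backup_count(lam_ft, lam_best):
    # Claim 1 / step C3: round up the quotient of lam_ft divided by lam_best.
    return math.ceil(lam_ft / lam_best)

def constraints_satisfied(phi) -> bool:
    # Stub for steps D6-D8: in the claimed method this is an ILP solve followed
    # by a Monte Carlo reliability estimate; here feasibility is simply assumed
    # to hold once phi is large enough, for illustration only.
    return phi >= 0.37

def search_error_adaptive_factor(grid=100):
    lo, hi, best = 0, grid, None          # integer grid over [0, 1]
    while lo <= hi:                       # cf. step D4
        mid = lo + (hi - lo) // 2         # cf. step D5
        phi = mid / grid
        if constraints_satisfied(phi):
            best, hi = phi, mid - 1       # feasible: try a smaller factor
        else:
            lo = mid + 1                  # infeasible: need a larger factor
    return best

phi = search_error_adaptive_factor()
print(phi)                                            # 0.37 for the stub above
lam_ft = fault_tolerant_rate(lam_best=10.0, lam_worst=30.0, phi=phi)
print(backup_count(lam_ft, lam_best=10.0))            # 2 backups for this toy case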
CN202111048618.1A 2021-09-08 2021-09-08 Reliable edge-cloud computing service delay optimization method for information physical system Active CN113918321B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111048618.1A CN113918321B (en) 2021-09-08 2021-09-08 Reliable edge-cloud computing service delay optimization method for information physical system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111048618.1A CN113918321B (en) 2021-09-08 2021-09-08 Reliable edge-cloud computing service delay optimization method for information physical system

Publications (2)

Publication Number Publication Date
CN113918321A true CN113918321A (en) 2022-01-11
CN113918321B CN113918321B (en) 2022-09-09

Family

ID=79234157

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111048618.1A Active CN113918321B (en) 2021-09-08 2021-09-08 Reliable edge-cloud computing service delay optimization method for information physical system

Country Status (1)

Country Link
CN (1) CN113918321B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180046503A1 (en) * 2016-08-09 2018-02-15 International Business Machines Corporation Data-locality-aware task scheduling on hyper-converged computing infrastructures
CN110266744A (en) * 2019-02-27 2019-09-20 中国联合网络通信集团有限公司 Location-based edge cloud resource dispatching method and system
CN110222379A (en) * 2019-05-17 2019-09-10 井冈山大学 Manufacture the optimization method and system of network service quality

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XU, JIUQIANG ET AL.: "Research on CPS Resource Service Model and Resource Scheduling", CHINESE JOURNAL OF COMPUTERS *
SHAO, YALI ET AL.: "Real-time Data Services for Dynamic Cyber-Physical Fusion Systems", COMPUTER TECHNOLOGY AND DEVELOPMENT *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114444240A (en) * 2022-01-28 2022-05-06 暨南大学 Delay and service life optimization method for cyber-physical system
CN114444240B (en) * 2022-01-28 2022-09-09 暨南大学 Delay and service life optimization method for cyber-physical system

Also Published As

Publication number Publication date
CN113918321B (en) 2022-09-09

Similar Documents

Publication Publication Date Title
CN110928654B (en) Distributed online task unloading scheduling method in edge computing system
CN113225377B (en) Internet of things edge task unloading method and device
US20210042578A1 (en) Feature engineering orchestration method and apparatus
US11831708B2 (en) Distributed computation offloading method based on computation-network collaboration in stochastic network
CN113918321B (en) Reliable edge-cloud computing service delay optimization method for information physical system
CN113645637B (en) Method and device for unloading tasks of ultra-dense network, computer equipment and storage medium
CN115292032A (en) Task unloading method in multi-user accessed intelligent edge computing system
CN112231085A (en) Mobile terminal task migration method based on time perception in collaborative environment
CN117135131A (en) Task resource demand perception method for cloud edge cooperative scene
CN115480882A (en) Distributed edge cloud resource scheduling method and system
CN113515378A (en) Method and device for migration and calculation resource allocation of 5G edge calculation task
CN112437468A (en) Task unloading algorithm based on time delay and energy consumption weight calculation
CN109450684B (en) Method and device for expanding physical node capacity of network slicing system
Lin et al. Aoi research on pmu cloud side cooperative system of active distribution network
CN116437341A (en) Computing unloading and privacy protection combined optimization method for mobile blockchain network
CN116562364A (en) Deep learning model collaborative deduction method, device and equipment based on knowledge distillation
CN113297152B (en) Method and device for updating cache of edge server of power internet of things
CN115955479A (en) Task rapid scheduling and resource management method in cloud edge cooperation system
CN113498077B (en) Communication method and device for guaranteeing low-delay transmission of intelligent Internet of things
Shi et al. Workflow migration in uncertain edge computing environments based on interval many-objective evolutionary algorithm
CN112203309B (en) Joint task unloading and caching method based on server cooperation
CN113518122A (en) Task unloading method, device, equipment and medium for ensuring low-delay transmission by edge intelligent network
CN114189756A (en) Information updating method, device, equipment and medium for multi-equipment cooperative Internet of things
CN111784029A (en) Fog node resource allocation method
CN114339796B (en) Cell dormancy data processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant