CN116521377B - Service computing offloading method, system, device, equipment and medium - Google Patents

Service computing offloading method, system, device, equipment and medium

Info

Publication number
CN116521377B
CN116521377B (application CN202310802419.8A)
Authority
CN
China
Prior art keywords
service, calculation, user, edge, edge server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310802419.8A
Other languages
Chinese (zh)
Other versions
CN116521377A (en)
Inventor
宋雅奇
丁鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd
Priority to CN202310802419.8A
Publication of CN116521377A
Application granted
Publication of CN116521377B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061: Partitioning or combining of resources
    • G06F 9/5072: Grid computing
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5044: considering hardware capabilities
    • G06F 9/5094: where the allocation takes into account power or heat criteria
    • G06F 2209/00: Indexing scheme relating to G06F 9/00
    • G06F 2209/50: Indexing scheme relating to G06F 9/50
    • G06F 2209/502: Proximity
    • G06F 2209/505: Clust
    • G06F 2209/509: Offload
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The disclosure provides a service computing offloading method, system, device, equipment and medium, relating to the technical field of computers. The method maps a real physical edge server cluster through a digital twin layer, acquires various kinds of information in the physical edge server cluster in real time, and constructs a twin model. By simulation, it acquires the user decision vector for offloading at which local service calculation and edge service calculation take the minimum joint cost in time delay and energy consumption, improving the speed of acquiring the user decision vector. Services initiated by users in a plurality of physical edge server clusters are then computation-offloaded through a trained decision model, which solves the problem that the user's edge device cannot bear the service load and saves time delay and energy consumption costs.

Description

Service computing offloading method, system, device, equipment and medium
Technical Field
The disclosure relates to the technical field of computers, and in particular to a service computing offloading method, system, device, equipment and medium.
Background
With the development of Internet technology, a large number of applications and services have brought a qualitative leap to people's lives. Today, as all kinds of Internet services explode, whenever a user handles a task, the question of how and where to process and compute that service arises.
In the related art, the edge devices of users often cannot keep up with the ever-increasing demand for computation.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The disclosure provides a service computation offloading method, system, device, equipment and medium, which at least overcome, to a certain extent, the problem in the related art that a user's edge device cannot bear the processing and calculation of the service.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
In a first aspect, embodiments of the present disclosure provide a service computation offloading method, the method including:
mapping a plurality of physical edge server clusters, and constructing a twin model in the digital twin layer;
in the twin model, for any one physical edge server cluster, determining the minimum joint cost of local service calculation and edge service calculation in terms of time delay and energy consumption according to the data information of the user's local service calculation on the service and the data information of the edge service calculation;
acquiring the user decision vector of service computation offloading when each physical edge server cluster takes the minimum joint cost in time delay and energy consumption;
training a decision model in the digital twin layer through the user decision vectors acquired multiple times;
and according to the trained decision model, performing computation offloading on the services initiated by the user in the plurality of physical edge server clusters.
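Read together, the five steps above form a map, simulate, record, train, and offload loop. The sketch below is only an illustrative outline of that loop, not the patented implementation: the cluster fields, the cost comparison stubs, and the majority-vote "training" are all hypothetical stand-ins for the twin model, joint-cost minimization, and decision model of the claims.

```python
# Hypothetical outline of the five claimed steps; all names are illustrative.

def offload_services(physical_clusters, num_rounds=10):
    # Step 1: map the physical clusters into a twin model in the digital twin layer.
    twin = {c["id"]: dict(c) for c in physical_clusters}  # simplistic mirror

    decision_vectors = []
    for _ in range(num_rounds):
        for cluster in twin.values():
            # Step 2: joint delay/energy cost of local vs. edge computation (stubbed).
            local_cost = cluster["local_delay"] + cluster["local_energy"]
            edge_cost = cluster["edge_delay"] + cluster["edge_energy"]
            # Step 3: record the decision vector taken at the minimum joint cost.
            decision_vectors.append(("local" if local_cost <= edge_cost else "edge",
                                     cluster["id"]))

    # Step 4: "train" a decision model from the collected vectors (majority-vote stub).
    votes = {}
    for mode, cid in decision_vectors:
        votes.setdefault(cid, []).append(mode)
    model = {cid: max(set(ms), key=ms.count) for cid, ms in votes.items()}

    # Step 5: the trained model now decides offloading for newly initiated services.
    return model

clusters = [
    {"id": "q1", "local_delay": 4.0, "local_energy": 2.0,
     "edge_delay": 1.5, "edge_energy": 1.0},
    {"id": "q2", "local_delay": 0.5, "local_energy": 0.2,
     "edge_delay": 2.0, "edge_energy": 1.0},
]
print(offload_services(clusters))  # {'q1': 'edge', 'q2': 'local'}
```

Cluster q1's local cost exceeds its edge cost, so its services are offloaded to the edge; q2's local cost is lower, so it computes locally.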
In a possible embodiment, mapping the plurality of physical edge server clusters and building a twin model in the digital twin layer includes:
obtaining edge server information, user information, calculation information and network information in the plurality of physical edge server clusters by mapping the plurality of physical edge server clusters;
and building a twin model of the plurality of physical edge server clusters in the digital twin layer.
In one possible embodiment, determining the minimum joint cost of the local service calculation and the edge service calculation in terms of time delay and energy consumption according to the data information of the user's local service calculation on the service and the data information of the edge service calculation includes:
determining the time delay of the user's local service calculation according to the acquired local service data volume of the user, the estimated running speed of the first processor, and the actual running speed of the first processor;
determining the maximum time delay of the user's edge service calculation according to the acquired service data volume of the user at any one physical edge server cluster, the estimated running speed of the second processor, and the actual running speed of the second processor;
acquiring the local calculation energy consumption and the edge calculation energy consumption of the user;
and determining the minimum joint cost of the local service calculation and the edge service calculation in terms of time delay and energy consumption according to the time delay of the local service calculation, the local calculation energy consumption, the maximum time delay of the edge service calculation, and the edge calculation energy consumption.
In one possible embodiment, determining the time delay of the user's local service calculation according to the acquired local service data volume of the user, the estimated running speed of the first processor, and the actual running speed of the first processor includes:
obtaining the estimated time delay of the local service calculation according to the ratio of the local service data volume to the estimated running speed of the first processor;
obtaining the time delay difference value of the local service calculation according to the local service data volume, the estimated running speed of the first processor, and the actual running speed of the first processor;
and determining the time delay of the local service calculation as the sum of the estimated time delay of the local service calculation and the time delay difference value of the local service calculation.
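The three substeps above can be followed numerically. This is a minimal sketch under one stated assumption: the patent does not spell out the form of the "time delay difference value", so it is taken here as the gap between running at the actual and the estimated processor speed.

```python
def local_delay(data_bits, f_estimated, f_actual):
    """Time delay of local service calculation, per the three substeps above.

    Assumption: the delay difference value is data/f_actual - data/f_estimated;
    the patent leaves its exact form unspecified.
    """
    t_estimated = data_bits / f_estimated                     # estimated time delay
    t_diff = data_bits / f_actual - data_bits / f_estimated   # time delay difference
    return t_estimated + t_diff                               # their sum

# A task of 8e6 bits on a CPU estimated at 2e6 bit/s but actually running at 1e6 bit/s:
print(local_delay(8e6, 2e6, 1e6))  # 8.0
```

Under this assumption the sum collapses to data volume divided by the actual speed, i.e. the true execution time.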
In a possible embodiment, determining the maximum time delay of the user's edge service calculation according to the acquired service data volume of the user at the any one physical edge server cluster, the estimated running speed of the second processor, and the actual running speed of the second processor includes:
determining the estimated time delay of the edge service calculation according to the ratio of the product of the service data volume at the any one physical edge server cluster and the service calculation proportion of the edge server in that cluster to the estimated running speed of the second processor;
determining the time delay difference value of the edge service calculation according to the acquired service data volume of the user at the any one physical edge server cluster, the estimated running speed of the second processor, the actual running speed of the second processor, and the service calculation proportion of the edge server in that cluster;
and determining the maximum time delay of the user's edge service calculation as the maximum, over the edge servers, of the sum of the estimated time delay of the edge service calculation, the time delay difference value of the edge service calculation, and the transmission time delay of transmitting the service data to the edge server cluster.
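A sketch of the edge-side counterpart follows. Two simplifications are assumed beyond the text: the delay difference value takes the same form as in the local case, and a single estimated/actual second-processor speed is shared by the K servers (the patent allows per-server speeds).

```python
def edge_delay_max(data_bits, ratios, f_estimated, f_actual, transmit_delays):
    """Maximum time delay of edge service calculation over the K edge servers.

    ratios[k] is the service calculation proportion assigned to edge server k;
    transmit_delays[k] is the transmission delay of sending its share of data.
    The delay-difference form and the shared processor speed are assumptions.
    """
    delays = []
    for ratio, t_tx in zip(ratios, transmit_delays):
        share = data_bits * ratio
        t_estimated = share / f_estimated            # estimated time delay
        t_diff = share / f_actual - share / f_estimated  # time delay difference
        delays.append(t_estimated + t_diff + t_tx)   # compute + transmission delay
    return max(delays)

# Three servers sharing one 9e6-bit task, each with a 0.4 s transmission delay:
print(edge_delay_max(9e6, [0.5, 0.3, 0.2], 3e6, 2e6, [0.4, 0.4, 0.4]))
```

The server carrying the largest share (0.5) dominates, so the maximum is its compute time plus transmission delay.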
In one possible embodiment, determining the minimum joint cost of the local service calculation and the edge service calculation in terms of time delay and energy consumption according to the time delay of the local service calculation, the local calculation energy consumption, the maximum time delay of the edge service calculation, and the edge calculation energy consumption includes:
determining a first joint cost according to the time delay of the local service calculation and the local calculation energy consumption;
determining a second joint cost according to the maximum time delay of the edge service calculation and the edge calculation energy consumption;
and taking the minimum of the first joint cost and the second joint cost as the minimum joint cost.
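The three steps above can be written directly. The weighted-sum form and the 0.5/0.5 weights are assumptions: the patent only states that time delay and energy consumption are combined into a joint cost, not how they are weighted.

```python
def min_joint_cost(t_local, e_local, t_edge_max, e_edge, w_time=0.5, w_energy=0.5):
    """Minimum joint cost of local vs. edge service calculation.

    Assumption: joint cost = w_time * delay + w_energy * energy; the weights
    and the weighted-sum form are not specified by the patent.
    """
    cost_local = w_time * t_local + w_energy * e_local      # first joint cost
    cost_edge = w_time * t_edge_max + w_energy * e_edge     # second joint cost
    return min(cost_local, cost_edge), ("local" if cost_local <= cost_edge else "edge")

# Local: 8 s and 3 J vs. edge: 2.65 s (max) and 1.2 J; the edge cost wins.
cost, mode = min_joint_cost(8.0, 3.0, 2.65, 1.2)
print(mode)  # edge
```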
In a possible embodiment, obtaining the user decision vector of service computation offloading when each physical edge server cluster takes the minimum joint cost in time delay and energy consumption includes:
acquiring the service calculation mode of the user when each physical edge server cluster is at the minimum joint cost;
if the user performs local service calculation, taking the first preset label and a null service calculation proportion of the edge servers in the any one physical edge server cluster as the user decision vector;
and if the user performs edge service calculation, taking the second preset label and the service calculation proportion of the edge servers in the any one physical edge server cluster as the user decision vector.
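The two decision-vector cases can be sketched as a small constructor. The concrete label values (0 for local, 1 for edge) are illustrative: the patent only speaks of a "first preset label" and a "second preset label".

```python
def user_decision_vector(mode, edge_ratios=None):
    """Build the user decision vector described above.

    Assumption: label 0 stands for the first preset label (local calculation),
    label 1 for the second preset label (edge calculation).
    """
    if mode == "local":
        # First preset label; the edge-server calculation proportions are null.
        return (0, None)
    # Second preset label plus the per-server calculation proportions.
    return (1, tuple(edge_ratios))

print(user_decision_vector("local"))                  # (0, None)
print(user_decision_vector("edge", [0.5, 0.3, 0.2]))  # (1, (0.5, 0.3, 0.2))
```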
In a second aspect, embodiments of the present disclosure provide a service computation offloading system, comprising:
a central cloud computing system and a plurality of physical edge server clusters; the central cloud computing system is used for constructing a digital twin layer;
the central cloud computing system is configured to perform the method of any of the first aspects.
In a third aspect, an embodiment of the present disclosure provides a service computation offloading apparatus, including:
the establishing unit, used for mapping a plurality of physical edge server clusters and establishing a twin model in the digital twin layer;
the joint optimization unit, used for determining, in the twin model and for any one physical edge server cluster, the minimum joint cost of the local service calculation and the edge service calculation in terms of time delay and energy consumption according to the data information of the user's local service calculation on the service and the data information of the edge service calculation;
the acquisition unit, used for acquiring the user decision vector of service computation offloading when each physical edge server cluster takes the minimum joint cost in time delay and energy consumption;
the training unit, used for training a decision model in the digital twin layer through the user decision vectors acquired multiple times;
and the computation offloading unit, used for performing computation offloading on the services initiated by the user in the plurality of physical edge server clusters according to the trained decision model.
In a fourth aspect, an embodiment of the present disclosure provides an electronic device, including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method described in the first aspect above via execution of the executable instructions.
In a fifth aspect, embodiments of the present disclosure provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the method described in the first aspect above.
In a sixth aspect, the present disclosure further provides a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method of any of the above.
The embodiments of the present disclosure provide a service computation offloading method, system, device, equipment and medium. The method maps a plurality of physical edge server clusters through a digital twin layer and constructs a twin model in the digital twin layer. In the twin model, for any one physical edge server cluster, the minimum joint cost of local service calculation and edge service calculation in terms of time delay and energy consumption is determined according to the data information of the user's local service calculation on the service and the data information of the edge service calculation; the user decision vector of service computation offloading is acquired when each physical edge server cluster takes the minimum joint cost in time delay and energy consumption; in the digital twin layer, a decision model is trained through the user decision vectors acquired multiple times; and according to the trained decision model, computation offloading is performed on the services initiated by the user in the plurality of physical edge server clusters. On the one hand, by mapping the real physical edge server clusters through the digital twin layer, various kinds of information in the clusters is acquired in real time and a twin model is constructed; by simulation, the user decision vector for offloading is acquired when local service calculation and edge service calculation take the minimum joint cost in time delay and energy consumption, which improves the speed of acquiring the user decision vector. On the other hand, performing computation offloading on the services initiated by users through the trained decision model can solve the problem in the related art that a user's edge device cannot bear the processing and calculation of the service, guarantee that the service is processed and calculated, and save time delay and energy consumption costs.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 shows a schematic structural diagram of a service computation offloading system in an embodiment of the disclosure.
Fig. 2 shows a schematic architecture of another service computation offloading system in an embodiment of the present disclosure.
Fig. 3 is a flowchart of a service computation offloading method in an embodiment of the disclosure.
Fig. 4 is a flowchart of calculating the minimum joint cost in an embodiment of the present disclosure.
Fig. 5 is a schematic flowchart of determining the time delay of a user's local service calculation in an embodiment of the disclosure.
Fig. 6 is a schematic flowchart of determining the maximum time delay of a user's edge service calculation in an embodiment of the disclosure.
Fig. 7 is a schematic structural diagram of a service computation offloading apparatus according to an embodiment of the disclosure.
Fig. 8 shows a schematic structural diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
With the development of Internet technology, a large number of applications and services have brought a qualitative leap to people's lives. Today, as all kinds of Internet services explode, whenever a user handles a task, the question of how and where to process and compute that service arises. In the related art, the edge devices of users often cannot keep up with the ever-increasing demand for computation.
Based on this, the disclosure provides a service computation offloading method, system, device and medium. A plurality of physical edge server clusters are mapped through a digital twin layer, and a twin model is constructed in the digital twin layer. In the twin model, for any one physical edge server cluster, the minimum joint cost of local service calculation and edge service calculation in terms of time delay and energy consumption is determined according to the data information of the user's local service calculation on the service and the data information of the edge service calculation; the user decision vector of service computation offloading is acquired when each physical edge server cluster takes the minimum joint cost in time delay and energy consumption; in the digital twin layer, a decision model is trained through the user decision vectors acquired multiple times; and computation offloading is performed on the services initiated by the user in the plurality of physical edge server clusters according to the trained decision model. On the one hand, by mapping the real physical edge server clusters through the digital twin layer, various kinds of information in the clusters is acquired in real time and a twin model is constructed; by simulation, the user decision vector of service computation offloading is acquired at the minimum joint cost in time delay and energy consumption, which improves the speed of acquiring the user decision vector. On the other hand, performing computation offloading on the services initiated by users through the trained decision model can solve the problem in the related art that a user's edge device cannot bear the processing and calculation of the service, guarantee that the service is processed and calculated, and save time delay and energy consumption costs.
The service computation offloading method in the present disclosure can be applied to an electronic device and to a service computation offloading system.
The service computation offloading system in the present disclosure may be jointly formed by a digital twin layer constructed by a central cloud computing system and a physical layer formed by a plurality of physical edge servers. Fig. 1 shows a schematic structural diagram of a service computation offloading system 100 provided by an embodiment of the present disclosure.
As shown in Fig. 1, the service computation offloading system 100 comprises a central cloud computing system 101 and physical edge server clusters 102, 103 and 104. The central cloud computing system 101 may build a digital twin layer, and the plurality of physical edge servers constitute a physical layer.
Through the service computation offloading system 100, various kinds of information of the plurality of physical edge server clusters in the physical layer can be acquired in real time, such as user information, server information, calculation information and network information, as well as the data information of the user's local service calculation on a service and the data information of the edge service calculation.
Further, a physical edge server cluster may also be used to build the digital twin layer.
Based on the service computation offloading system 100, a twin model of the plurality of physical edge servers can further be constructed in the digital twin layer of the system 100, and the twin model can be managed and processed to obtain the user decision vector for computation offloading when a user initiates a service. A decision model is then constructed and trained, and the trained decision model is used to perform computation offloading on the service initiated by the user.
Fig. 2 shows a schematic structural diagram of a service computation offloading system provided by an embodiment of the present disclosure.
As shown in Fig. 2, the service computation offloading system 200 may include a central cloud computing system 201 and physical edge server clusters 202, 203 and 204.
The central cloud computing system 201 may include a data center 211, where the data center 211 stores information of the acquired plurality of physical edge server clusters, such as user information, server information, computing information, network information, and the like.
The central cloud computing system 201 may also include a twin model 221 constructed from the plurality of physical edge server clusters. The twin model 221 includes user models, edge server models, and various cache resources, computing resources and the like.
A cache resource may be understood as a resource used by a user and an edge server for caching, and a computing resource may be understood as a resource used by a user and a server for processing a service and computing the service.
The central cloud computing system 201 may further include a twin management module 231, where the twin management module 231 is configured to manage and update a twin model, obtain information of a plurality of physical edge server clusters, update information, update a decision model, and so on.
The central cloud computing system 201 is configured to: map the plurality of physical edge server clusters and construct a twin model in the digital twin layer; in the twin model, determine the minimum joint cost of the local service calculation and the edge service calculation in terms of time delay and energy consumption according to the data information of the user's local service calculation on the service and the data information of the edge service calculation; acquire the user decision vector of service computation offloading when each physical edge server cluster takes the minimum joint cost in time delay and energy consumption; train a decision model in the digital twin layer through the user decision vectors acquired multiple times; and perform computation offloading on the services initiated by users in the plurality of physical edge server clusters according to the trained decision model.
The present exemplary embodiment will be described in detail below with reference to the accompanying drawings and examples.
First, an embodiment of the present disclosure provides a service computation offloading method, which may be executed by any electronic device with computation processing capability; in the following, the central cloud computing system is taken as an example of such an electronic device.
Fig. 3 shows a flowchart of a service computation offloading method in an embodiment of the disclosure. As shown in Fig. 3, the method provided in the embodiment of the disclosure includes the following steps:
s302: a plurality of physical edge server clusters are mapped, and a twin model is built in a digital twin layer.
In one possible embodiment, a digital twin layer is built in a central cloud computing system, edge server information, user information, computing information and network information in a plurality of physical edge server clusters are acquired by mapping the plurality of physical edge server clusters, and a twin model of the plurality of physical edge server clusters is built in the digital twin layer.
A physical edge server cluster can be understood as a real edge server cluster at the physical layer, that is, real devices.
The twin model may be a plurality of virtual edge server clusters constructed in the digital twin layer by mapping the obtained edge server information, user information, computation information, and network information, and be mirror images of a plurality of physical edge server clusters.
It should be noted that, for the processing and calculation of a service, the user may choose whether the calculation of the service is performed at the edge device or at the edge server. Computation offloading may be understood as offloading the calculation of the service to an edge server when the edge device cannot bear it, so as to release the computing resources of the edge device.
In a multimedia application scenario, for example, a user watching a video, looking up data or shopping online is performing Internet services, and the user's mobile phone and computer can be understood as edge devices.
In an industrial scenario, e.g. the Internet of Things or industrial visual identification in a factory, a sensor may act as an edge device. The service computation offloading method in the present disclosure can be applied to multimedia application scenarios, industrial scenarios and other scenarios; it is mainly aimed at performing computation offloading on the service initiated by the user.
S304: in the twin model, for any one physical edge server cluster, the minimum joint cost of the local service calculation and the edge service calculation in terms of time delay and energy consumption is determined according to the data information of the user's local service calculation on the service and the data information of the edge service calculation.
In one possible embodiment, the physical layer formed by a plurality of physical edge server clusters may be divided into areas, with each edge server cluster being one area. There may be Q areas in total, M users in each area, and K edge servers in each physical edge server cluster.
The central cloud computing system can acquire information of any one physical edge server cluster in real time, and can comprise data information of local service calculation of a user on the service and data information of edge service calculation. Local traffic computation may be understood as the computation of traffic by a user on an edge device, and edge traffic computation may be understood as the computation of traffic by a user on an edge server.
The data information of the local service calculation may include the local service data volume of the user, the actual running speed of the first processor, the local calculation energy consumption, and the like.
The amount of traffic data local to a user can be understood as the size of the task that the user performs the computation locally.
The actual operation speed of the first processor may be understood as the actual CPU clock speed used for local service calculation; the CPU clock speed may be understood as the operation speed of the CPU, such as its operating frequency.
The data information of the edge service calculation can comprise the service data volume of the user in the physical edge server cluster, the actual running speed of the second processor, the transmission delay and the edge calculation energy consumption of the service when the physical edge server cluster calculates, and the like.
The traffic data volume of a user in a physical edge server cluster can be understood as the task size of the user performing computation in the physical edge server cluster.
The actual operating speed of the second processor may be understood as the CPU clock speed at which tasks perform calculations in the physical edge server cluster.
The transmission delay refers to a transmission delay for transmitting the service data and the calculation request to the physical edge server cluster, and the service data and the calculation request are transmitted to the physical edge server cluster together, so the transmission delay can be understood as a transmission delay for transmitting the service data to the physical edge server cluster.
The computation offloading performed when a user initiates a service is simulated through this data information, so as to obtain the minimum joint cost of time delay and energy consumption generated when the user processes and calculates the service.
The time delay comprises time delay calculated by local service and time delay calculated by edge service.
The energy consumption comprises the local computing energy consumption consumed by local service calculation and the energy consumption consumed by the cluster for edge service calculation, namely the edge computing energy consumption.
Minimum joint cost refers to the minimum joint cost of latency and energy consumption.
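As a minimal illustration of the joint cost described above, the following sketch assumes a weighted-sum form with a preset delay/energy ratio constant (here `lam`); the function and parameter names are illustrative assumptions, not taken from the disclosure:

```python
def joint_cost(delay: float, energy: float, lam: float = 0.5) -> float:
    """Weighted joint cost of time delay and energy consumption.
    `lam` is the assumed preset constant trading delay against energy."""
    return lam * delay + (1.0 - lam) * energy

def minimum_joint_cost(local_delay: float, local_energy: float,
                       edge_delay_max: float, edge_energy: float,
                       lam: float = 0.5) -> float:
    """Pick the smaller of the local and edge joint costs."""
    c_local = joint_cost(local_delay, local_energy, lam)
    c_edge = joint_cost(edge_delay_max, edge_energy, lam)
    return min(c_local, c_edge)
```

Under this form, a larger `lam` favors decisions that reduce delay even at higher energy cost.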
In one possible embodiment, a local calculation model and an edge calculation model may be constructed; the time delay of the user in local service calculation is calculated through the local calculation model, the maximum time delay of the user in edge service calculation is calculated through the edge calculation model, and the joint cost of the user in local service calculation and the joint cost of the user in edge service calculation are determined accordingly.
In one possible embodiment, the delay of the user in local service calculation can be obtained by calculating, in the twin model, the estimated delay of the user in local service calculation and the delay difference value between the estimated delay and the real delay, so as to obtain the real delay of the user in local service calculation; the joint cost of the user in local service calculation is then determined from the real delay and the local calculation energy consumption.
The delay of the user in edge service calculation can be obtained by calculating, in the twin model, the estimated delay of the user in edge service calculation, the delay difference value between the estimated delay and the real delay, and the transmission delay, so as to obtain the maximum value of the real delay of the user in edge service calculation; the joint cost of the user in edge service calculation is then determined from the maximum value of the real delay and the edge calculation energy consumption.
And determining the minimum joint cost of the local service calculation and the edge service calculation on time delay and energy consumption through the joint cost of the local service calculation and the joint cost of the edge service calculation.
In another possible embodiment, the minimum joint cost may also be calculated specifically as shown in fig. 4, which shows a schematic flow chart of calculating the minimum joint cost, comprising the following steps:
S402: and determining the time delay of the local service calculation of the user according to the acquired local service data volume of the user, the estimated running speed of the first processor and the actual running speed of the first processor.
S404: and determining the maximum time delay calculated by the user at the edge service according to the acquired service data volume of the user at any one physical edge server cluster, the estimated running speed of the second processor and the actual running speed of the second processor.
S406: and acquiring the local calculation energy consumption and the edge calculation energy consumption of the user.
S408: and determining the minimum joint cost of the local service calculation and the edge service calculation on the time delay and the energy consumption according to the time delay of the local service calculation, the energy consumption of the local calculation, the time delay maximum value of the edge service calculation and the energy consumption of the edge calculation.
In a possible embodiment, the joint cost of the user in terms of time delay and energy consumption in local service calculation and in edge service calculation can each be calculated through the data information, and the smaller of the two is selected as the minimum joint cost of the physical edge server cluster in terms of time delay and energy consumption.
In one possible embodiment, the delay of local service calculation is determined as shown in fig. 5, which shows a schematic flow chart of determining the delay of local service calculation, comprising the following steps:
S502: and obtaining the estimated time delay of the local service calculation according to the ratio of the local service data volume to the estimated running speed of the first processor.
S504: and obtaining a time delay difference value calculated by the local service according to the local service data volume, the estimated running speed of the first processor and the actual running speed of the first processor.
S506: and determining the time delay calculated by the local service according to the sum of the estimated time delay calculated by the local service and the time delay difference value calculated by the local service.
Illustratively, it can be calculated specifically by the following formulas:

$$T_m^{l,\mathrm{est}} = \frac{D_m^{l}}{\hat{f}_m^{l}} \qquad (1)$$

wherein $T_m^{l,\mathrm{est}}$ represents the estimated time delay of the mth user in local service calculation; $D_m^{l}$ represents the local service data volume of the mth user; $\hat{f}_m^{l}$ represents the estimated running speed of the first processor for the mth user.

Through formula (1), the estimated time delay of each user in local service calculation can be determined.

$$\Delta T_m^{l} = D_m^{l}\left(\frac{1}{f_m^{l}} - \frac{1}{\hat{f}_m^{l}}\right) \qquad (2)$$

wherein $\Delta T_m^{l}$ represents the time delay difference value of the mth user in local service calculation; $f_m^{l}$ represents the actual running speed of the first processor.

The time delay difference value of each user in local service calculation can be determined through formula (2).

$$T_m^{l} = T_m^{l,\mathrm{est}} + \Delta T_m^{l} \qquad (3)$$

wherein $T_m^{l}$ is the time delay of the mth user in local service calculation, which can also be understood as the real time delay, calculated by the twin model, of the mth user in local service calculation.
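The local-delay computation described in formulas (1) through (3) can be sketched as follows; this is a minimal illustration, and the function and variable names are assumptions:

```python
def local_delay(data_volume: float, f_est: float, f_actual: float) -> float:
    """Real local-computation delay: estimated delay plus the delay
    difference between estimated and actual processor speed."""
    t_est = data_volume / f_est                            # formula (1)
    t_diff = data_volume * (1.0 / f_actual - 1.0 / f_est)  # formula (2)
    return t_est + t_diff                                  # formula (3)
```

Note that the sum algebraically reduces to `data_volume / f_actual`: the estimated delay is corrected to the delay at the actual running speed.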
In one possible embodiment, the maximum value of the delay of the user in edge service calculation is determined as shown in fig. 6, which shows a schematic flow chart of determining this maximum delay value, comprising the following steps:
S602: and determining the estimated time delay of edge service calculation according to the ratio of the product of the service data volume of any one physical edge server cluster and the service calculation proportion of the edge servers in any one physical edge server cluster and the estimated running speed of the second processor.
S604: and determining the delay difference value of edge service calculation according to the acquired service data volume of the user in any one physical edge server cluster, the estimated running speed of the second processor, the actual running speed of the second processor and the service calculation proportion of the edge servers in any one physical edge server cluster.
S606: and determining the maximum value of the time delay calculated by the user at the edge service according to the estimated time delay calculated by the edge service, the maximum value of the sum of the time delay difference calculated by the edge service and the transmission time delay of the service data transmitted to the edge server cluster.
Illustratively, it can be calculated specifically by the following formulas:

$$T_{m,k}^{e,\mathrm{est}} = \frac{D_m^{e}\,\alpha_k}{\hat{f}_k^{e}} \qquad (4)$$

wherein $T_{m,k}^{e,\mathrm{est}}$ represents the estimated time delay of the mth user performing edge service calculation on the kth edge server of the physical edge server cluster; $D_m^{e}$ represents the service data volume sent by the mth user to the physical edge server cluster; $\alpha_k$ represents the service calculation proportion of the service data volume taken by the kth edge server in the physical edge server cluster; $\hat{f}_k^{e}$ represents the estimated running speed of the second processor.

Through formula (4), the estimated time delay of any user performing edge service calculation on any edge server can be determined, so that the estimated time delay of the user in edge service calculation is obtained.

$$\Delta T_{m,k}^{e} = D_m^{e}\,\alpha_k\left(\frac{1}{f_{m,k}^{e}} - \frac{1}{\hat{f}_k^{e}}\right) \qquad (5)$$

wherein $\Delta T_{m,k}^{e}$ represents the time delay difference value of the mth user performing edge service calculation on the kth edge server of the physical edge server cluster; $f_{m,k}^{e}$ represents the actual running speed of the second processor of the kth edge server of the physical edge server cluster for the mth user.

Through formula (5), the time delay difference value of any user performing edge service calculation on any edge server can be determined, so that the time delay difference value of edge service calculation is obtained.

$$T_{m,k}^{e} = T_{m,k}^{e,\mathrm{est}} + \Delta T_{m,k}^{e} \qquad (6)$$

wherein $T_{m,k}^{e}$ represents the calculation time delay of the mth user performing edge service calculation on the kth edge server in the physical edge server cluster.

Through formula (6), the calculation time delay of the user performing edge service calculation on the physical edge server cluster can be determined.
$$T_m^{e} = \max_{k \in \{1,\dots,K\}}\left(T_{m,k}^{e} + T_m^{\mathrm{tr}}\right) \qquad (7)$$

wherein $T_m^{e}$ represents the maximum time delay of the mth user on the physical edge server cluster; $T_{m,k}^{e}$ represents the calculation time delay of the mth user on the kth edge server; $T_m^{\mathrm{tr}}$ represents the transmission delay of the mth user transmitting the service data to the edge server cluster; K denotes the K edge servers in the physical edge server cluster.

$$T_m^{\mathrm{tr}} = \frac{D_m^{e}}{r_m} \qquad (8)$$

wherein $B_m$ represents the bandwidth with which the mth user transmits the service data to the edge server cluster; $r_m$ represents the rate at which the mth user transmits the service data to the edge server cluster, which depends on the bandwidth $B_m$.

By formulas (7) and (8), the maximum value of the time delay of the user in edge service calculation can be determined.
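The edge-delay maximum described by formulas (4) through (8) can be sketched as follows; this is a minimal illustration assuming the transmission delay is the data volume divided by the transmission rate, and all names are illustrative:

```python
def edge_delay_max(data_volume: float, proportions, f_est, f_actual,
                   rate: float) -> float:
    """Maximum per-server delay for edge computation of one user's task.
    Each server k receives the share proportions[k] of the data; the
    transmission delay is shared across all servers."""
    t_trans = data_volume / rate  # transmission delay of sending the data
    per_server = []
    for alpha, fe, fa in zip(proportions, f_est, f_actual):
        t_est = data_volume * alpha / fe                      # formula (4)
        t_diff = data_volume * alpha * (1.0 / fa - 1.0 / fe)  # formula (5)
        per_server.append(t_est + t_diff + t_trans)           # formulas (6)-(7)
    return max(per_server)                                    # formula (7)
```

The max over servers reflects that the user's edge result is ready only once the slowest partial computation finishes.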
In one possible embodiment, after determining the time delay of the user in the local service calculation and the time delay maximum value of the user in the edge service calculation, the minimum joint cost of the local service calculation and the edge service calculation on the time delay and the energy consumption can be determined according to the time delay of the local service calculation, the local calculation energy consumption, the time delay maximum value of the edge service calculation and the edge calculation energy consumption.
In an exemplary embodiment, the first joint cost is determined according to the time delay calculated by the local service and the local calculation energy consumption, the second joint cost is determined according to the time delay maximum value calculated by the edge service and the edge calculation energy consumption, and the minimum value of the first joint cost and the second joint cost is taken as the minimum joint cost.
Illustratively, the following formula may be specifically taken as an example:

$$C_{q,m,k} = \lambda T + (1-\lambda) E \qquad (9)$$

wherein $C_{q,m,k}$ represents the joint cost of the mth user on the kth edge server in the physical edge server cluster of the qth region, with the delay $T$ and the energy consumption $E$ taken from local service calculation for the first joint cost and from edge service calculation for the second joint cost; $E_m^{e}$ represents the edge computing energy consumption of the mth user on the physical edge server cluster; $E_m^{l}$ represents the local computing energy consumption of the mth user; $\lambda$ represents the ratio relation between time delay and energy consumption in the joint cost, and is a preset constant adjusted according to actual conditions.

According to the above formula (9), the first joint cost and the second joint cost of the user can be determined; if the first joint cost of the user is smaller, the minimum joint cost is the first joint cost, and if the second joint cost is smaller, the minimum joint cost is the second joint cost.
It should be noted that, in the embodiment of the present disclosure, one physical edge server cluster is taken as one area, so when q represents an area in the above formula (9), it essentially denotes the users and edge servers within the area of one physical edge server cluster.
The determination formula for the minimum joint cost is as follows:

$$\min_{\{x_m\},\,\{\boldsymbol{\alpha}_m\}} \sum_{q=1}^{Q} \sum_{m=1}^{M} \sum_{k=1}^{K} C_{q,m,k} \qquad (10)$$

wherein $x_m$ indicates whether the user decides to perform local service calculation or edge service calculation on the service: if local service calculation is performed, $x_m$ is 0; if edge service calculation is performed, as in formula (9), $x_m$ is 1.

$\boldsymbol{\alpha}_m$ is the vector in the user decision vector representing the service calculation proportions for distributing the service data to the K edge servers in any one physical edge server cluster; if local service calculation is performed, $\boldsymbol{\alpha}_m$ is null; if edge service calculation is performed, $\boldsymbol{\alpha}_m$ is the calculated vector.

M represents the total number of users, K represents the total number of edge servers, and Q represents the total number of areas.
Further, in determining the minimum joint cost, the parameters are subject to the following restrictions, specifically expressed by formulas (11) to (18):

wherein the symbols carrying the region index q have the same meaning as the corresponding symbols above; only the concept of the region is added here.

$f_{q,m,k}^{e} \le f^{\max}$, wherein $f_{q,m,k}^{e}$ represents the actual operating speed of the processor of the kth edge server for the mth user in the qth area, and $f^{\max}$ represents the preset actual operating speed of the physical edge server cluster.

$\hat{f}_{q,m,k}^{e} \le \hat{f}^{\max}$, wherein $\hat{f}_{q,m,k}^{e}$ represents the estimated operating speed of the processor of the kth edge server for the mth user in the qth area, and $\hat{f}^{\max}$ represents the preset estimated running speed of the physical edge server cluster.

$B_{q,m} \le B^{\max}$, wherein $B^{\max}$ represents the maximum value of the bandwidth.

$\sum_{k=1}^{K} \alpha_{q,m,k} = 1$, representing that the service calculation proportions over the total of K edge servers sum to 1.

The constraint of formula (18) indicates that the minimum joint cost across the physical edge server cluster needs to be less than the preset joint cost, wherein the ratio relation of time delay and energy consumption in the preset joint cost is a preset constant adjusted according to actual conditions.
Formula (9) is calculated under the constraint conditions of formulas (11) to (18), and the value of the minimum joint cost corresponding to formula (10) is determined.
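The constrained minimization above can be illustrated by a brute-force sketch over the offload flag and a coarse grid of split proportions; the grid search, the cost callback, and all names are illustrative assumptions (the disclosure itself trains a decision model rather than enumerating):

```python
from itertools import product

def best_offload_decision(local_cost, edge_cost_fn, k_servers,
                          grid_steps=4, cost_cap=float("inf")):
    """Enumerate the offload flag x and the split proportions alpha
    (which must sum to 1, per the proportion constraint) over a coarse
    grid; returns (cost, x, alpha) for the minimum joint cost found."""
    best = (local_cost, 0, None) if local_cost <= cost_cap \
        else (float("inf"), None, None)
    steps = [i / grid_steps for i in range(grid_steps + 1)]
    for alpha in product(steps, repeat=k_servers):
        if abs(sum(alpha) - 1.0) > 1e-9:  # constraint: proportions sum to 1
            continue
        cost = edge_cost_fn(alpha)        # joint cost of this edge split
        if cost <= cost_cap and cost < best[0]:
            best = (cost, 1, alpha)
    return best
```

The `cost_cap` argument plays the role of the preset joint cost upper bound in formula (18).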
After determining the minimum joint cost, S304 execution is completed and S306 is executed.
S306: and acquiring a user decision vector of service calculation unloading when the minimum joint cost is taken by time delay and energy consumption of each physical edge server cluster.
In one possible embodiment, the value of the minimum joint cost is determined, the mode of service calculation performed by the user is obtained, and the user decision vector is determined according to the mode of service calculation performed by the user.
In an exemplary embodiment, if the minimum joint cost is taken in local service calculation, the first preset label is used, the service calculation proportion of the edge servers in any one physical edge server cluster is null, and the label together with the null value is used as the user decision vector.
The first preset label can be understood as the decision variable taking the value 0: when the label is 0, local service calculation is executed for the service, and the service calculation proportion is null.
For example, if the user performs edge service calculation, the second preset label and the service calculation proportion of the edge server in any one of the physical edge server clusters are used as the user decision vector when the user performs edge service calculation.
The second preset label can be understood as the decision variable taking the value 1: when the label is 1, edge service calculation is performed on the service, and the calculation result of the service calculation proportion, i.e. the share of the service data volume distributed to each edge server in the physical edge server cluster to which the service belongs, is obtained. For example, if the physical edge server cluster to which a certain service belongs comprises 3 edge servers, the service calculation proportion relation may be 0.3:0.2:0.5. A corresponding relation exists between the service calculation proportion and the edge server.
The determined user decision vector may thus be, for example: {0}, or {1, 0.2, 0.3, 0.5}, etc.
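The decision-vector encoding just described can be sketched as follows; a minimal illustration with assumed names:

```python
def user_decision_vector(use_edge: bool, proportions=None):
    """Encode the offload decision: [0] for local computation, or
    [1, a_1, ..., a_K] for edge computation with split proportions."""
    if not use_edge:
        return [0]  # first preset label: local service calculation
    # second preset label: edge calculation, proportions must sum to 1
    assert proportions and abs(sum(proportions) - 1.0) < 1e-9
    return [1, *proportions]
```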
S308: in the digital twin layer, a decision model is trained by a plurality of acquired user decision vectors.
In one possible embodiment, in the digital twin layer, through the acquired information of the plurality of physical edge server clusters, the various estimated data, and the simulation process, the user decision vector used by each user in each area for each service execution can be determined, and these user decision vectors are used as a data training set to train the decision model.
The decision model can be a deep learning network model, which is continuously updated with the acquired user decision vectors so as to learn their decision pattern, until, for a group of random input data, the decision output by the deep learning network model is similar to the user decision vectors in the acquired data training set with a difference within a preset range; at this point, the decision model training is finished and the model can be put into use.
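As a toy stand-in for the deep learning network model described above, the sketch below trains a single logistic unit to predict the offload flag from one scalar state feature; the network, features, and stopping threshold are all illustrative assumptions, not the disclosure's actual model:

```python
import math

def train_decision_model(samples, lr=0.5, epochs=500, tol=0.05):
    """Train a one-feature logistic unit on (state, offload_flag) pairs.
    Training stops once the mean absolute error falls within `tol`,
    mirroring the "difference within a preset range" stopping criterion."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        total_err = 0.0
        for state, x in samples:
            p = 1.0 / (1.0 + math.exp(-(w * state + b)))  # offload probability
            grad = p - x                                  # logistic-loss gradient
            w -= lr * grad * state
            b -= lr * grad
            total_err += abs(grad)
        if total_err / len(samples) < tol:
            break
    return w, b
```

A real implementation would use a deep network over the full cluster state and output both the flag and the split proportions; this sketch only shows the supervised-training shape of the loop.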
S310: and according to the trained decision model, calculating and unloading the service initiated by the user in the plurality of physical edge server clusters.
In one possible embodiment, the trained decision model can be directly deployed in the twin model; information of the physical edge server clusters is acquired in real time, a decision is generated through the decision model in the digital twin layer and sent to the physical layer, and calculation offloading is performed on the services initiated by each user.
In another possible embodiment, the trained decision model may be deployed directly into multiple physical edge server clusters in the physical layer, where the multiple physical edge server clusters compute offload user-initiated traffic through the decision model.
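Whether the decision model runs in the digital twin layer or in the physical layer, its output vector must be turned into an offload action. A minimal sketch, where the model interface and field names are assumptions:

```python
def apply_decision(model, cluster_state):
    """Run a trained decision callable on a cluster-state snapshot and
    return the offload action to be executed at the physical layer."""
    vec = model(cluster_state)  # e.g. [0] or [1, a_1, ..., a_K]
    if vec[0] == 0:
        return {"mode": "local", "proportions": None}
    return {"mode": "edge", "proportions": list(vec[1:])}
```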
By the service calculation offloading method, a twin model can be built in the digital twin layer, information can be acquired in real time, and the minimum joint cost of a user executing calculation on a service is determined in the two dimensions of time delay and energy consumption, saving the cost of executing calculation on the service; the user decision vectors can be obtained through the digital twin layer to train the decision model, and highly intelligent, digitalized offloading decisions are made through the trained decision model. Practical application of multimedia service computation offloading can be supported, and digital twin techniques and deep reinforcement learning algorithms are introduced into the computation offloading of multimedia services.
Based on the same inventive concept, the embodiments of the present disclosure also provide a service computing and unloading device, as follows. Since the principle of solving the problem of the embodiment of the device is similar to that of the embodiment of the method, the implementation of the embodiment of the device can be referred to the implementation of the embodiment of the method, and the repetition is omitted.
Fig. 7 is a schematic structural diagram of a service computing and offloading apparatus according to an embodiment of the disclosure. As shown in fig. 7, the service computing and offloading apparatus 70 includes: a building unit 701, a combined optimizing unit 702, an obtaining unit 703, a training unit 704, and a calculation unloading unit 705. The building unit 701 is used for mapping a plurality of physical edge server clusters and building a twin model in a digital twin layer; the combined optimizing unit 702 is used for determining, in the twin model, for any physical edge server cluster, the minimum joint cost of local service calculation and edge service calculation in terms of time delay and energy consumption, according to the data information of the user's local service calculation of the service and the data information of edge service calculation; the obtaining unit 703 is used for obtaining the user decision vector of service calculation offloading when each physical edge server cluster takes the minimum joint cost in terms of time delay and energy consumption; the training unit 704 is used for training the decision model in the digital twin layer through the plurality of obtained user decision vectors; and the calculation unloading unit 705 is used for performing calculation offloading, according to the trained decision model, on the services initiated by users in the plurality of physical edge server clusters.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
An electronic device 800 according to such an embodiment of the present disclosure is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of embodiments of the present disclosure in any way.
As shown in fig. 8, the electronic device 800 is embodied in the form of a general purpose computing device. Components of electronic device 800 may include, but are not limited to: the at least one processing unit 810, the at least one memory unit 820, and a bus 830 connecting the various system components, including the memory unit 820 and the processing unit 810.
Wherein the storage unit stores program code that is executable by the processing unit 810 such that the processing unit 810 performs steps according to various exemplary embodiments of the present disclosure described in the above section of the present specification. For example, the processing unit 810 may perform the steps of any of the method embodiments described above.
The storage unit 820 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 8201 and/or cache memory 8202, and may further include Read Only Memory (ROM) 8203.
Storage unit 820 may also include a program/utility 8204 having a set (at least one) of program modules 8205, such program modules 8205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 830 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 840 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 800, and/or any device (e.g., router, modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 850. Also, electronic device 800 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 860. As shown, network adapter 860 communicates with other modules of electronic device 800 over bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 800, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method in the above-described embodiment.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium, which may be a readable signal medium or a readable storage medium, is also provided. On which a program product is stored which enables the implementation of the method described above of the present disclosure. In some possible implementations, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of this specification, when the program product is run on the terminal device.
More specific examples of the computer readable storage medium in the present disclosure may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In this disclosure, a computer readable signal medium may include a propagated data signal, in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Alternatively, the program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
In particular implementations, the program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
From the description of the above embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a mobile terminal, a network device, etc.) to perform the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A service computation offloading method, the method comprising:
mapping a plurality of physical edge server clusters and constructing a twin model in a digital twin layer;
in the twin model, for any one of the physical edge server clusters, determining the minimum joint cost, in terms of delay and energy consumption, of local service computation and edge service computation according to the user's data information for local service computation and for edge service computation;
acquiring a user decision vector for service computation offloading when each physical edge server cluster attains the minimum joint cost in delay and energy consumption;
training a decision model in the digital twin layer with the user decision vectors acquired over multiple rounds;
performing computation offloading, according to the trained decision model, for services initiated by users in the plurality of physical edge server clusters;
wherein acquiring the user decision vector for service computation offloading when each physical edge server cluster attains the minimum joint cost in delay and energy consumption comprises:
acquiring the manner in which the user performs service computation when each physical edge server cluster is at the minimum joint cost;
if the user performs local service computation, setting the service computation proportion of the edge servers in the any one physical edge server cluster to null, and taking a first preset label together with the null value as the user decision vector; and
if the user performs edge service computation, taking a second preset label together with the service computation proportion of the edge servers in the any one physical edge server cluster as the user decision vector.
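The branch logic of claim 1's decision-vector construction can be sketched as follows. This is an illustrative assumption: the label values, function name, and tuple layout are not specified in the patent, which only requires a preset label paired with a (possibly null) edge computation proportion.

```python
# Hypothetical sketch of the user decision vector in claim 1.
# LOCAL_LABEL / EDGE_LABEL stand in for the "first preset label"
# and "second preset label"; their values are assumptions.

LOCAL_LABEL = 0  # user computes the service locally
EDGE_LABEL = 1   # user offloads the service to the edge cluster

def build_decision_vector(offload_to_edge, edge_proportion=None):
    """Return (label, proportion) for one user's offloading decision.

    For local computation the proportion handled by edge servers is
    null (None); for edge computation the measured proportion of the
    cluster's edge servers is recorded alongside the second label.
    """
    if not offload_to_edge:
        return (LOCAL_LABEL, None)
    return (EDGE_LABEL, edge_proportion)
```

Vectors of this shape, collected over many rounds, would then serve as training samples for the decision model in the digital twin layer.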
2. The method of claim 1, wherein mapping the plurality of physical edge server clusters and constructing the twin model in the digital twin layer comprises:
obtaining edge server information, user information, computation information, and network information of the plurality of physical edge server clusters by mapping the plurality of physical edge server clusters; and
building the twin model of the plurality of physical edge server clusters in the digital twin layer.
3. The method of claim 1, wherein determining the minimum joint cost, in terms of delay and energy consumption, of local service computation and edge service computation according to the user's data information for local service computation and for edge service computation comprises:
determining the delay of the user's local service computation according to the acquired local service data volume of the user, the estimated running speed of a first processor, and the actual running speed of the first processor;
determining the maximum delay of the user's edge service computation according to the acquired service data volume of the user at the any one physical edge server cluster, the estimated running speed of a second processor, and the actual running speed of the second processor;
acquiring the user's local computation energy consumption and edge computation energy consumption; and
determining the minimum joint cost of local service computation and edge service computation in terms of delay and energy consumption according to the delay of local service computation, the local computation energy consumption, the maximum delay of edge service computation, and the edge computation energy consumption.
4. The method of claim 3, wherein determining the delay of the user's local service computation according to the acquired local service data volume of the user, the estimated running speed of the first processor, and the actual running speed of the first processor comprises:
obtaining the estimated delay of local service computation from the ratio of the local service data volume to the estimated running speed of the first processor;
obtaining the delay difference of local service computation from the local service data volume, the estimated running speed of the first processor, and the actual running speed of the first processor; and
determining the delay of local service computation as the sum of the estimated delay and the delay difference of local service computation.
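A minimal numeric sketch of the local-delay computation in claim 4, under an assumption the claim does not spell out: the delay difference is taken as the correction between delay at the actual speed and delay at the estimated speed, so the estimated delay plus the difference recovers the delay at the actual running speed.

```python
# Hedged sketch of claim 4's local delay. The exact form of the
# "delay difference" is an assumption, not taken from the patent.

def local_delay(data_volume, est_speed, actual_speed):
    est = data_volume / est_speed  # estimated delay (ratio in claim 4)
    # correction from estimated to actual processor running speed
    diff = data_volume / actual_speed - data_volume / est_speed
    return est + diff  # sum of estimated delay and delay difference
```

Under this reading, the sum collapses to `data_volume / actual_speed`, i.e. the delay the first processor actually incurs.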
5. The method of claim 3, wherein determining the maximum delay of the user's edge service computation according to the acquired service data volume of the user at the any one physical edge server cluster, the estimated running speed of the second processor, and the actual running speed of the second processor comprises:
determining the estimated delay of edge service computation as the product of the service data volume at the any one physical edge server cluster and the service computation proportion of the edge servers in that cluster, divided by the estimated running speed of the second processor;
determining the delay difference of edge service computation according to the acquired service data volume of the user at the any one physical edge server cluster, the estimated running speed of the second processor, the actual running speed of the second processor, and the service computation proportion of the edge servers in that cluster; and
determining the maximum delay of the user's edge service computation as the maximum of the sums of the estimated delay of edge service computation, the delay difference of edge service computation, and the transmission delay of transmitting the service data to the edge server cluster.
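The edge-side counterpart in claim 5 can be sketched the same way: each edge server's delay combines its estimated delay, its speed-correction difference, and the transmission delay, and the cluster's edge delay is the maximum over its servers. The per-server tuple layout and the form of the difference are illustrative assumptions.

```python
# Hedged sketch of claim 5's maximum edge delay. Each server is a
# hypothetical tuple (proportion, est_speed, actual_speed, tx_delay).

def edge_delay_max(data_volume, servers):
    delays = []
    for proportion, est_speed, actual_speed, tx_delay in servers:
        share = data_volume * proportion        # data handled by this server
        est = share / est_speed                 # estimated delay
        diff = share / actual_speed - est       # speed-correction difference
        delays.append(est + diff + tx_delay)    # sum per claim 5
    return max(delays)                          # maximum over the cluster
```

The slowest server in the cluster thus bounds the edge-side delay that enters the joint cost.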
6. The method of claim 3, wherein determining the minimum joint cost of local service computation and edge service computation in terms of delay and energy consumption according to the delay of local service computation, the local computation energy consumption, the maximum delay of edge service computation, and the edge computation energy consumption comprises:
determining a first joint cost from the delay of local service computation and the local computation energy consumption;
determining a second joint cost from the maximum delay of edge service computation and the edge computation energy consumption; and
taking the minimum of the first joint cost and the second joint cost as the minimum joint cost.
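Claim 6 leaves open how delay and energy are combined into each joint cost; a common choice, assumed here, is a weighted sum with tunable factors `w_t` and `w_e` (not specified by the patent).

```python
# Hedged sketch of claim 6: weighted delay-energy costs for the local
# and edge options, with the smaller one as the minimum joint cost.
# The weighted-sum form and the weights are assumptions.

def min_joint_cost(t_local, e_local, t_edge_max, e_edge, w_t=0.5, w_e=0.5):
    cost_local = w_t * t_local + w_e * e_local      # first joint cost
    cost_edge = w_t * t_edge_max + w_e * e_edge     # second joint cost
    return min(cost_local, cost_edge)
```

Whichever option yields the smaller cost determines whether the decision vector records local computation or edge offloading.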
7. A service computation offloading system, comprising: a central cloud computing system and a plurality of physical edge server clusters, wherein the central cloud computing system is used to construct a digital twin layer;
the central cloud computing system being configured to perform the method of any one of claims 1-6.
8. A service computation offloading apparatus, comprising:
a building unit, configured to map a plurality of physical edge server clusters and build a twin model in a digital twin layer;
a joint optimization unit, configured to determine, in the twin model and for any one of the physical edge server clusters, the minimum joint cost, in terms of delay and energy consumption, of local service computation and edge service computation according to the user's data information for local service computation and for edge service computation;
an acquisition unit, configured to acquire a user decision vector for service computation offloading when each physical edge server cluster attains the minimum joint cost in delay and energy consumption;
a training unit, configured to train a decision model in the digital twin layer with the user decision vectors acquired over multiple rounds;
a computation offloading unit, configured to perform computation offloading, according to the trained decision model, for services initiated by users in the plurality of physical edge server clusters;
wherein the acquisition unit is further configured to acquire the manner in which the user performs service computation when each physical edge server cluster is at the minimum joint cost; if the user performs local service computation, set the service computation proportion of the edge servers in the any one physical edge server cluster to null and take a first preset label together with the null value as the user decision vector; and if the user performs edge service computation, take a second preset label together with the service computation proportion of the edge servers in the any one physical edge server cluster as the user decision vector.
9. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1-6 via execution of the executable instructions.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1-6.
CN202310802419.8A 2023-06-30 2023-06-30 Service computing unloading method, system, device, equipment and medium Active CN116521377B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310802419.8A CN116521377B (en) 2023-06-30 2023-06-30 Service computing unloading method, system, device, equipment and medium


Publications (2)

Publication Number Publication Date
CN116521377A CN116521377A (en) 2023-08-01
CN116521377B true CN116521377B (en) 2023-09-29

Family

ID=87406761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310802419.8A Active CN116521377B (en) 2023-06-30 2023-06-30 Service computing unloading method, system, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN116521377B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113590232A (en) * 2021-08-23 2021-11-02 南京信息工程大学 Relay edge network task unloading method based on digital twinning
CN113852994A (en) * 2021-11-18 2021-12-28 南京信息工程大学 High-altitude base station cluster auxiliary edge calculation method used in emergency communication
CN114945044A (en) * 2022-07-25 2022-08-26 北京智芯微电子科技有限公司 Method, device and equipment for constructing digital twin platform based on federal learning
CN115119234A (en) * 2022-06-14 2022-09-27 浙江工业大学 Method for optimizing task processing of wireless equipment in wireless energy supply edge computing network


Non-Patent Citations (3)

Title
Cross-Layer Optimization for Industrial Internet of Things in Real Scene Digital Twins; Zhihan Lv et al.; IEEE Internet of Things Journal, vol. 9, no. 17; full text *
Digital-Twin-Assisted Task Offloading Based on Edge Collaboration in the Digital Twin Edge Network; Tong Liu et al.; IEEE Internet of Things Journal, vol. 9, no. 2; full text *
Digital Twin Edge Networks; Zhang Yan et al.; ZTE Technology Journal; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant