CN113419853A - Task execution strategy determining method and device, electronic equipment and storage medium

Task execution strategy determining method and device, electronic equipment and storage medium

Info

Publication number
CN113419853A
Authority
CN
China
Prior art keywords
task
computing
candidate
execution
energy consumption
Prior art date
Legal status
Pending
Application number
CN202110695651.7A
Other languages
Chinese (zh)
Inventor
郭金林
刘炼
霍志翠
陈涛
Current Assignee
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202110695651.7A
Publication of CN113419853A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 - Partitioning or combining of resources
    • G06F 9/5072 - Grid computing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The disclosure provides a task execution strategy determination method and device, an electronic device and a storage medium. The method and the device can be used in the technical field of cloud computing and can also be used in the technical field of finance. The task execution strategy determination method comprises the following steps: determining a candidate task execution strategy related to the computing task, wherein the computing task comprises a plurality of computing sub-tasks, the candidate task execution strategy is used for characterizing a candidate execution mode of each computing sub-task in the computing task, and the candidate execution mode comprises one of the following modes: executing at a local device, executing at a remote cloud server, executing at an edge cloud server; and determining a target task execution strategy from the candidate task execution strategies by using a joint optimization model and a preset algorithm, wherein the joint optimization model is used for calculating a weighted value of task delay and equipment energy consumption of the calculation task under each candidate task execution strategy, and the weighted value is smaller than a preset threshold under the target task execution strategy.

Description

Task execution strategy determining method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of cloud computing technologies, and in particular, to a method and an apparatus for determining a task execution policy, an electronic device, a computer-readable storage medium, and a computer program product.
Background
In recent years, with the popularity and popularization of smart mobile devices such as smart phones, smart bracelets and VR (virtual reality) glasses, many novel applications with intensive computing tasks and high computing energy consumption, such as face recognition, VR applications and interactive games, have appeared on mobile devices.
In the process of implementing the concept of the present disclosure, the inventors found that the related art has at least the following problem: due to the limitations of the local mobile device in computing resources and battery endurance, it is difficult to meet the requirements of such applications in terms of time delay and energy consumption, which seriously affects the user experience.
Disclosure of Invention
In view of the above, the present disclosure provides a task execution policy determination method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
One aspect of the present disclosure provides a task execution policy determination method, including:
determining a candidate task execution strategy related to the computing task, wherein the computing task comprises a plurality of computing sub-tasks, the candidate task execution strategy is used for characterizing a candidate execution mode of each computing sub-task in the computing task, and the candidate execution mode comprises one of the following modes: executing at a local device, executing at a remote cloud server, executing at an edge cloud server;
and determining a target task execution strategy from the candidate task execution strategies by using a joint optimization model and a preset algorithm, wherein the joint optimization model is used for calculating a weighted value of task delay and equipment energy consumption of the calculation task under each candidate task execution strategy, and the weighted value is smaller than a preset threshold under the target task execution strategy.
According to the embodiment of the present disclosure, the joint optimization model includes a time delay parameter, an energy consumption parameter, a task number parameter and a balance parameter, where the time delay parameter is used for representing the execution time of a computing subtask, the energy consumption parameter is used for representing the execution energy consumption of a computing subtask, the task number parameter is used for representing the number of computing subtasks, and the balance parameter is used for representing the weight given to the task delay and the weight given to the device energy consumption of a computing subtask.
According to the embodiment of the disclosure, each parameter in the joint optimization model satisfies a preset constraint condition, and the preset constraint condition includes: the numerical value of the delay parameter is less than or equal to the maximum delay allowed by the calculation task, the numerical value of the energy consumption parameter is less than or equal to the maximum energy consumption allowed by the calculation task, the numerical value of the task number parameter is less than or equal to the number of wireless channels, and the numerical value of the balance parameter is more than or equal to 0 and less than or equal to 1.
According to an embodiment of the present disclosure, determining a candidate task execution policy related to a computing task includes:
determining a candidate execution mode of each computing sub-task in the computing tasks;
and determining a candidate task execution strategy related to the calculation task according to the candidate execution mode of each calculation sub-task.
According to an embodiment of the present disclosure, determining the candidate execution manner of each computing sub-task in the computing task includes:
determining the maximum time delay allowed by the calculation task and the maximum energy consumption allowed by the calculation task;
respectively calculating the task delay and the equipment energy consumption of each calculation sub task in each original execution mode, wherein the original execution mode comprises one of the following modes: executing at a local device, executing at a remote cloud server, executing at an edge cloud server;
determining an original execution mode meeting the initial selection condition as a candidate execution mode, wherein the initial selection condition is as follows: the task delay of the calculation subtask in the original execution mode is less than or equal to the maximum delay allowed by the calculation task, and the equipment energy consumption of the calculation subtask in the original execution mode is less than or equal to the maximum energy consumption allowed by the calculation task.
According to an embodiment of the present disclosure, the preset algorithm employs a simulated annealing algorithm.
According to the embodiment of the disclosure, determining the target task execution strategy from the candidate task execution strategies by using the joint optimization model and the preset algorithm comprises the following steps:
setting iteration conditions of a simulated annealing algorithm;
setting a termination condition of the simulated annealing algorithm according to a preset threshold value;
performing one or more times of iterative solution according to the iterative conditions to obtain one or more times of current task execution strategies;
calculating an objective function increment by using a joint optimization model, wherein the objective function increment is the difference between a weighted value under the current task execution strategy in the current iteration and a weighted value under the current task execution strategy in the last iteration;
and determining a target task execution strategy from the current task execution strategies according to the target function increment and the termination condition.
Another aspect of the present disclosure provides a task execution policy determination apparatus including a first determination module and a second determination module.
The first determining module is configured to determine a candidate task execution policy related to the computing task, where the computing task includes multiple computing sub-tasks, and the candidate task execution policy is used to characterize a candidate execution manner of each computing sub-task in the computing task, where the candidate execution manner includes one of the following: execution at the local device, execution at the remote cloud server, or execution at the edge cloud server.
And the second determining module is used for determining a target task execution strategy from the candidate task execution strategies by using a joint optimization model and a preset algorithm, wherein the joint optimization model is used for calculating a weighted value of task delay and equipment energy consumption of the calculation task under each candidate task execution strategy, and the weighted value is smaller than a preset threshold under the target task execution strategy.
According to the embodiment of the present disclosure, the joint optimization model includes a time delay parameter, an energy consumption parameter, a task number parameter and a balance parameter, where the time delay parameter is used for representing the execution time of a computing subtask, the energy consumption parameter is used for representing the execution energy consumption of a computing subtask, the task number parameter is used for representing the number of computing subtasks, and the balance parameter is used for representing the weight given to the task delay and the weight given to the device energy consumption of a computing subtask.
According to the embodiment of the disclosure, each parameter in the joint optimization model satisfies a preset constraint condition, and the preset constraint condition includes: the numerical value of the delay parameter is less than or equal to the maximum delay allowed by the calculation task, the numerical value of the energy consumption parameter is less than or equal to the maximum energy consumption allowed by the calculation task, the numerical value of the task number parameter is less than or equal to the number of wireless channels, and the numerical value of the balance parameter is more than or equal to 0 and less than or equal to 1.
According to an embodiment of the present disclosure, the first determination module includes a first determination unit, a second determination unit.
The first determining unit is used for determining a candidate execution mode of each computing sub-task in the computing tasks; and the second determining unit is used for determining a candidate task execution strategy related to the computing task according to the candidate execution mode of each computing subtask.
According to an embodiment of the present disclosure, the first determination unit includes a determination subunit, a calculation subunit, and a screening subunit.
The determining subunit is configured to determine a maximum time delay allowed by the computation task and a maximum energy consumption allowed by the computation task; the calculation subunit is configured to calculate task delay and device energy consumption of each calculation sub-task in each original execution mode, where the original execution mode includes one of: executing at a local device, executing at a remote cloud server, executing at an edge cloud server; a screening subunit, configured to determine an original execution manner that satisfies a primary selection condition as a candidate execution manner, where the primary selection condition is: the task delay of the calculation subtask in the original execution mode is less than or equal to the maximum delay allowed by the calculation task, and the equipment energy consumption of the calculation subtask in the original execution mode is less than or equal to the maximum energy consumption allowed by the calculation task.
According to an embodiment of the present disclosure, the preset algorithm employs a simulated annealing algorithm.
According to an embodiment of the present disclosure, the second determination module includes a first setting unit, a second setting unit, an iteration unit, a calculation unit, and a third determination unit.
The first setting unit is used for setting iteration conditions of the simulated annealing algorithm; the second setting unit is used for setting the termination condition of the simulated annealing algorithm according to the preset threshold value; the iteration unit is used for carrying out one or more times of iteration solution according to the iteration conditions so as to obtain one or more times of current task execution strategies; the calculating unit is used for calculating an objective function increment by using the joint optimization model, wherein the objective function increment is the difference between the weighted value under the current task execution strategy in the current iteration and the weighted value under the current task execution strategy in the last iteration; and the third determining unit is used for determining the target task execution strategy from the current task execution strategies according to the target function increment and the termination condition.
Another aspect of the present disclosure provides an electronic device including: one or more processors, and a memory; wherein the memory is for storing one or more programs; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the task execution policy determination method as above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the task execution policy determination method as above when executed.
Another aspect of the present disclosure provides a computer program product comprising computer executable instructions for implementing the task execution policy determination method as above when executed.
According to the embodiment of the disclosure, the candidate execution mode of each computing subtask is determined to be execution at the local device, at the remote cloud server, or at the edge cloud server, and a target task execution strategy is then determined by using the joint optimization model and the preset algorithm. Because the joint optimization model calculates the weighted value of the task delay and the device energy consumption of the computing task under each candidate task execution strategy, and thus comprehensively considers the influence of both factors, the method of the present disclosure combines the abundant computing resources of the remote cloud with the proximity of the edge cloud to the mobile device, takes minimizing the weighted sum of the computing task delay and the mobile device energy consumption as the objective, and jointly optimizes the computing task delay and the mobile device energy consumption, so that the problems of large delay of mobile computing tasks and high energy consumption of mobile devices can be solved at the same time.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an exemplary system architecture to which the task execution policy determination methods and apparatus of the present disclosure may be applied;
FIG. 2 schematically illustrates a flow chart of a task execution policy determination method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow diagram for determining candidate task execution policies related to a computing task, according to an embodiment of the disclosure;
fig. 4 schematically shows a block diagram of a task execution policy determination apparatus according to an embodiment of the present disclosure; and
fig. 5 schematically shows a block diagram of an electronic device for implementing a task execution policy determination method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
Before the embodiments of the present disclosure are explained in detail, the system structure and the application scenario related to the method provided by the embodiments of the present disclosure are described as follows.
Fig. 1 schematically illustrates an exemplary system architecture 100 to which the task execution policy determination methods and apparatus of the present disclosure may be applied. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include local devices 101, 102, 103, a network 104, and a remote cloud server 105, an edge cloud server 106. Network 104 is the medium used to provide communication links between local devices 101, 102, 103 and remote cloud servers 105, edge cloud servers 106. Network 104 may include various connection types, such as wired and/or wireless communication links, and so forth.
The local devices 101, 102, 103 may be various electronic devices having display screens and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The remote cloud server 105 and the edge cloud server 106 can provide remote computing platform services for the local devices 101, 102, 103 through the network 104, and each may be a service cluster formed by multiple servers; the remote cloud server 105 may be a server provided at a cloud computing center far from the local devices 101, 102, 103, and the edge cloud server 106 may be a server provided at a base station close to the local devices 101, 102, 103.
According to the embodiment of the present disclosure, the local devices 101, 102, 103 may be configured to perform various computing tasks, for example, computing tasks for applications such as face recognition, VR applications, and interactive games, where the computing tasks may include multiple computing sub-tasks, and the multiple computing sub-tasks perform parallel computing to collectively complete the computing task. According to an embodiment of the present disclosure, in order to overcome the defects of the local devices 101, 102, 103 in terms of computing resources and endurance, some or all of the computing subtasks may be migrated to the remote cloud server 105 or/and the edge cloud server 106 for execution.
It should be understood that the number of local devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of local devices, networks, and servers, as desired for an implementation.
It should be noted that the task execution strategy determination method and device of the present disclosure can be used in the technical field of cloud computing, in the technical field of finance, or in any other field; the present disclosure does not limit the application field of the task execution strategy determination method and device.
In recent years, with the popularity and popularization of smart mobile devices such as smart phones, smart bracelets, VR (virtual reality) glasses and the like, many new applications with intensive computation and energy consumption, such as face recognition, VR applications, interactive games and the like, appear on the mobile devices, however, due to the limitations of the mobile devices on computing resources and cruising ability, the requirements of the applications in terms of time delay and energy consumption are difficult to meet, and user experience is seriously affected.
In the process of implementing the present disclosure, it is found that, in order to solve the above problems, a cloud computing technology may be introduced into a mobile application platform, a computing task on a mobile device is migrated to a remote cloud computing center for execution, and a result is returned finally, so that the problem of shortage of computing resources of the mobile device is solved to some extent, and energy consumption of the mobile device for executing the task is reduced. However, the remote cloud center is often far away from the mobile device, data needs to be transmitted through a wide area network, and network transmission delay is high.
Further, in order to solve the above problem, Mobile Edge Computing (MEC) may be adopted: a server is disposed at a base station closer to the mobile device, and the computing task on the mobile device is migrated to the mobile edge cloud server for execution, which effectively reduces the delay caused by network transmission and performs well in reducing both the energy consumption of the mobile device and the execution time of the computing task. However, as mobile devices and applications grow exponentially, the computing tasks migrated onto the mobile edge cloud become more and more numerous, and the resource bottleneck problem of the MEC server becomes increasingly prominent.
For the problem of mobile computing migration, whether a mobile cloud computing migration scheme or a mobile edge computing migration scheme is adopted, shortcomings remain: adopting only the mobile cloud computing migration scheme cannot effectively solve the problem of long computing task delay, while when only the mobile edge computing migration scheme is adopted, its computing resources become the bottleneck. Moreover, in existing approaches to optimizing the mobile computing task delay and the mobile device energy consumption, if the objective is to optimize only the energy consumption of the mobile device, the delay of the computing task is merely set as a threshold condition of the optimization problem; yet because the computing task delay is a very important index for evaluating the user experience, especially for online applications, technical solutions based on that concept cannot completely solve the above technical problem.
Therefore, the present disclosure combines the characteristic that remote cloud computing resources are abundant with the characteristic that the edge cloud is close to the mobile device, simultaneously considers the remote cloud computing and mobile edge cloud computing schemes based on an optical fiber-wireless network structure, takes minimizing the weighted sum of the computing task delay and the mobile device energy consumption as the objective, and performs joint optimization on the computing task delay and the mobile device energy consumption. The disclosure provides a task execution strategy determination method, which aims to simultaneously solve the problems of long delay of mobile computing tasks and high energy consumption of mobile devices.
Fig. 2 schematically shows a flowchart of a task execution policy determination method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S201 to S202.
In operation S201, a candidate task execution policy related to a computing task is determined, where the computing task includes a plurality of computing sub-tasks, and the candidate task execution policy is used to characterize a candidate execution manner of each computing sub-task in the computing task, where the candidate execution manner includes one of the following: execution at the local device, execution at the remote cloud server, or execution at the edge cloud server.
In operation S202, a target task execution policy is determined from the candidate task execution policies by using a joint optimization model and a preset algorithm, where the joint optimization model is used to calculate a weighted value of task delay and device energy consumption of a calculation task under each candidate task execution policy, and the weighted value is smaller than a preset threshold under the target task execution policy.
According to the embodiment of the disclosure, the computing task may be a computing task for any application, for example, the computing task may be a computing task for applications such as face recognition, VR applications, and interactive games, and the computing task may include multiple computing sub-tasks, and the multiple computing sub-tasks perform parallel computing to jointly complete the entire computing task.
According to the embodiment of the disclosure, the computing tasks may be executed on the local device, and in order to overcome the defects of the local device in computing resources and endurance, part or all of the computing subtasks in the computing tasks may be migrated to the remote cloud server or/and the edge cloud server for execution. The remote cloud server may be a server provided at a cloud computing center farther from the local device, and the edge cloud server may be a server provided at a base station closer to the local device.
According to an embodiment of the disclosure, the candidate task execution strategy is used for characterizing the candidate execution manner of each computing sub-task in the computing task. For example, in one candidate task execution strategy for a computing task containing 170 computing sub-tasks, 100 computing sub-tasks are executed at the local device, 30 computing sub-tasks are executed at the remote cloud server, and the remaining 40 computing sub-tasks are executed at the edge cloud server. A possible in-code representation of such a strategy is sketched below.
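By way of illustration only (the following code does not appear in the original disclosure, and all class, field and function names are hypothetical), a candidate task execution strategy can be modelled as one execution mode per computing sub-task, matching the three candidate execution manners described above:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class ExecMode(Enum):
    """Candidate execution mode of a computing sub-task (lambda_i in the text)."""
    LOCAL = 0          # executed on the local device
    EDGE_CLOUD = 1     # migrated to the edge cloud server for execution
    REMOTE_CLOUD = -1  # migrated to the remote cloud server for execution


@dataclass
class TaskExecutionStrategy:
    """One candidate strategy: an execution mode for every computing sub-task."""
    modes: List[ExecMode]

    def migrated_count(self) -> int:
        # Sub-tasks migrated to either cloud; per the later constraint this
        # count must not exceed the number of wireless channels K.
        return sum(1 for mode in self.modes if mode is not ExecMode.LOCAL)


# The example from the text: 100 sub-tasks executed locally, 30 at the remote
# cloud server, and the remaining 40 at the edge cloud server.
strategy = TaskExecutionStrategy(
    modes=[ExecMode.LOCAL] * 100
    + [ExecMode.REMOTE_CLOUD] * 30
    + [ExecMode.EDGE_CLOUD] * 40
)
print(len(strategy.modes), strategy.migrated_count())  # 170 70
```

Representing a strategy this way makes it straightforward to check the later constraint that the number of migrated sub-tasks must not exceed the number of wireless channels.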
In the above operation S201, determining the candidate task execution strategies related to the computing task may be implemented by pre-screening all possible task execution strategies based on certain initial selection conditions and removing the task execution strategies that cannot be realized.
The main purpose of the above operation S202 is to determine a target task execution policy from the candidate task execution policies, where the target task execution policy can minimize the computation task delay and the energy consumption of the mobile device. According to the embodiment of the disclosure, in order to determine the target task execution strategy from the candidate task execution strategies, a preset algorithm can be adopted for solving the optimization problem, and the target task execution strategy is obtained through consuming less computing resources.
According to the embodiment of the disclosure, the joint optimization model is used for calculating the weighted values of the task delay and the equipment energy consumption of the calculation task under each candidate task execution strategy, and under the target task execution strategy, the weighted values are smaller than the preset threshold value, that is, the weighted values of the task delay and the equipment energy consumption are minimized.
According to the embodiment of the disclosure, the candidate execution mode of each computing subtask is determined to be execution at the local device, at the remote cloud server, or at the edge cloud server, and a target task execution strategy is then determined by using the joint optimization model and the preset algorithm. Because the joint optimization model calculates the weighted value of the task delay and the device energy consumption of the computing task under each candidate task execution strategy, and thus comprehensively considers the influence of both factors, the method of the present disclosure combines the abundant computing resources of the remote cloud with the proximity of the edge cloud to the mobile device, takes minimizing the weighted sum of the computing task delay and the mobile device energy consumption as the objective, and jointly optimizes the computing task delay and the mobile device energy consumption, so that the problems of large delay of mobile computing tasks and high energy consumption of mobile devices can be solved at the same time.
According to the embodiment of the present disclosure, the joint optimization model includes a time delay parameter, an energy consumption parameter, a task number parameter and a balance parameter, where the time delay parameter is used for representing the execution time of a computing subtask, the energy consumption parameter is used for representing the execution energy consumption of a computing subtask, the task number parameter is used for representing the number of computing subtasks, and the balance parameter is used for representing the weight given to the task delay and the weight given to the device energy consumption of a computing subtask.
According to the embodiment of the disclosure, each parameter in the joint optimization model satisfies a preset constraint condition, and the preset constraint condition includes: the numerical value of the delay parameter is less than or equal to the maximum delay allowed by the calculation task, the numerical value of the energy consumption parameter is less than or equal to the maximum energy consumption allowed by the calculation task, the numerical value of the task number parameter is less than or equal to the number of wireless channels, and the numerical value of the balance parameter is more than or equal to 0 and less than or equal to 1.
According to an embodiment of the present disclosure, the joint optimization model may be expressed as the following formula (one):

min Σ_{i=1}^{D} [ α·t'_i + (1 - α)·e'_i ]    (one)

The preset constraint conditions satisfied by the parameters in the joint optimization model are as follows:

α ∈ [0, 1]    (two)

t'_i = t_i / t_i^max    (three)

t_i ≤ t_i^max    (four)

e'_i = e_i / e_i^max    (five)

e_i ≤ e_i^max    (six)

Σ_{i=1}^{D} |λ_i| ≤ K    (seven)

λ_i ∈ {-1, 0, 1}    (eight)
In the above formulas, D is the task number parameter representing the number of computing subtasks, and K represents the number of wireless channels; formula (seven) indicates that, in a computing task migration strategy, the number of tasks actually migrated to the cloud (executed at the remote cloud server or at the edge cloud server) must not exceed the number of wireless channels.
In the above formulas, the delay parameters characterizing the execution time of a computing subtask are: t_i, which represents the completion delay of computing task i (i being the index of the computing task); t_i^max, which represents the maximum delay allowed for the computing task; and t'_i, which is the normalized value of the actual execution time of the computing task.
In the above formulas, the energy consumption parameters characterizing the execution energy consumption of a computing subtask are: e_i, which represents the energy consumption of the mobile device to complete the computing task; e_i^max, which represents the maximum energy consumption allowed by the mobile device to complete the computing task; and e'_i, which is the normalized value of the actual energy consumption of the mobile device.
Formulas (three) and (five) normalize the computation task delay and the mobile device energy consumption to the same dimension; formula (four) indicates that the execution time of the computing task must not exceed the maximum delay allowed by the application; and formula (six) indicates that, when the computing task is executed in a given manner, the actual mobile device energy consumption must not exceed the maximum energy consumption allowed by the mobile device.
In the above formulas, α is a trade-off (balance) parameter characterizing the weight given to the task delay versus the device energy consumption of the computing subtasks; its range is [0, 1], and it may be adjusted according to the application or the state of the mobile device. For example, when the mobile device has sufficient battery or an extreme application experience is pursued, the value of α may be set larger, even to 1, in which case the model reduces to minimizing the computation task delay alone.
In the above formulas, λ_i represents the execution mode of computing task i: the value 0 represents local execution, the value 1 represents migration to the edge cloud server for execution, and the value -1 represents migration to the remote cloud server for execution; D = {T_1, T_2, ..., T_N} represents the set of computing tasks. Formula (eight) indicates that, in a given computing task, the candidate execution modes of each computing sub-task are the following three: execution at the local device, execution at the remote cloud server, and execution at the edge cloud server, but only one of them can be selected in the actual execution.
According to the embodiment of the disclosure, the weights of the delay and the energy consumption can be set according to actual requirements through the balance parameter: when a real-time application pursues an extreme experience, the delay weight can be set to 1, and when saving device energy is preferred, the energy consumption weight can be set to 1. The model is therefore also compatible with methods that perform single-objective optimization on the task delay index alone or on the device energy consumption index alone. A sketch of how this weighted objective could be evaluated is given below.
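The following Python sketch (not part of the original disclosure) shows one way the weighted value could be evaluated for a candidate strategy, under the assumption that the objective of formula (one) is the per-subtask sum of α·t'_i + (1 - α)·e'_i with the normalizations of formulas (three) and (five); the function and parameter names are illustrative.

```python
from typing import List


def weighted_cost(delays: List[float], energies: List[float],
                  max_delays: List[float], max_energies: List[float],
                  alpha: float) -> float:
    """Weighted value of task delay and device energy under one strategy.

    Assumes formula (one) is the per-sub-task sum of
    alpha * t'_i + (1 - alpha) * e'_i, with t'_i and e'_i normalized by the
    allowed maxima as in formulas (three) and (five).
    """
    assert 0.0 <= alpha <= 1.0  # constraint (two): the balance parameter
    total = 0.0
    for t, e, t_max, e_max in zip(delays, energies, max_delays, max_energies):
        t_norm = t / t_max  # formula (three)
        e_norm = e / e_max  # formula (five)
        total += alpha * t_norm + (1.0 - alpha) * e_norm
    return total


# alpha = 1 optimizes only the task delay; alpha = 0 only the device energy.
print(weighted_cost([0.2, 0.5], [1.0, 3.0], [1.0, 1.0], [5.0, 5.0], alpha=0.7))
```

Setting alpha to 1 or 0 recovers the single-objective cases discussed above.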
FIG. 3 schematically illustrates a flow diagram for determining candidate task execution policies related to a computing task, according to an embodiment of the disclosure.
As shown in FIG. 3, determining a candidate task execution policy related to a computing task according to an embodiment of the present disclosure includes operations S301-S304.
Firstly, determining a candidate execution mode of each computing sub-task in the computing task, specifically:
in operation S301, the maximum time delay allowed by the computing task, the maximum energy consumption allowed by the computing task, and the occupation situation of the wireless channel are determined.
In operation S302, the task delay and the device energy consumption of each computing sub-task in each original execution mode are calculated respectively, where the original execution mode includes one of the following: execution at the local device, execution at the remote cloud server, or execution at the edge cloud server. That is, for every computing sub-task, the time t_i^loc and energy consumption e_i^loc of local execution without channel interference, the time t_i^mec and energy consumption e_i^mec of offloading to the edge cloud server for execution, and the time t_i^ccc and energy consumption e_i^ccc of offloading to the centralized remote cloud server for execution are calculated separately.
In operation S303, an original execution manner satisfying a primary selection condition is determined as a candidate execution manner, where the primary selection condition is: the task delay of the computing subtask in the original execution mode is less than or equal to the maximum delay allowed by the computing task, and the device energy consumption of the computing subtask in the original execution mode is less than or equal to the maximum energy consumption allowed by the computing task. That is, if for any computing task i the minimum of these three times is greater than the maximum delay t_i^max allowed for that task, or the minimum of the three corresponding energy consumptions is greater than the maximum energy consumption e_i^max allowed by the mobile device, the computing task cannot be completed in any of the three ways, and the task is removed from the task set.
In operation S304, a candidate task execution strategy related to the computing task is determined according to the candidate execution manner of each computing sub-task.
According to the embodiment of the disclosure, the task delay and the device energy consumption of each computing subtask in each original execution mode are calculated from the following quantities:
1) The computing task is executed on the local mobile device: f_i^loc represents the computing capability of the mobile device, i.e. its CPU frequency, and the energy consumption per CPU cycle of the mobile device executing the computing task is also taken into account; t_i^loc and e_i^loc respectively represent the time required and the mobile device energy consumption when the computing task is executed locally.
2) The computing task is offloaded to the edge cloud server (MEC) for execution: m_i represents the input data size of the task, c_i represents the number of CPU cycles required to complete the task, f_i^mec represents the CPU frequency of the MEC server, c represents the data upload rate of the dedicated optical fiber network, r_i represents the data upload rate of the wireless network channel, and p_i represents the transmission power of the mobile device; t_i^mec and e_i^mec respectively represent the processing time and the mobile device energy consumption when the computing task is offloaded to the MEC server for execution.
3) The computing task is offloaded to the remote cloud server for execution: n is the number of optical amplifiers between the wireless base station and the centralized cloud computing center, and τ is the uplink transmission delay; t_i^ccc and e_i^ccc respectively represent the processing time and the mobile device energy consumption when the computing task is offloaded to the remote cloud server for execution.
According to the embodiment of the disclosure, by determining the candidate execution manner of each computing subtask in the computing task in the above manner, the original execution manners of the computing subtasks are pre-screened, so that task execution strategies which cannot be realized are excluded in advance from the candidate task execution strategies determined according to the candidate execution manners of the computing subtasks, thereby saving unnecessary computing resource overhead. A sketch of this pre-screening is given below.
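The following Python sketch (not part of the original disclosure) illustrates operations S302 and S303. The per-mode delay and energy expressions are conventional mobile-edge-computing approximations inferred from the variable descriptions above, not the disclosure's own formulas, and every name (including the remote-cloud CPU frequency f_ccc) is an assumption:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Subtask:
    m: float      # input data size of the task (m_i)
    c: float      # number of CPU cycles required (c_i)
    t_max: float  # maximum delay allowed for the task (t_i^max)
    e_max: float  # maximum device energy allowed (e_i^max)


# Assumed per-mode expressions: delay = upload time + computation time,
# device energy = local computation energy or wireless transmission energy.
def local_cost(task: Subtask, f_loc: float, energy_per_cycle: float) -> Tuple[float, float]:
    return task.c / f_loc, task.c * energy_per_cycle


def edge_cost(task: Subtask, r_i: float, c_fiber: float, f_mec: float,
              p_tx: float) -> Tuple[float, float]:
    t = task.m / r_i + task.m / c_fiber + task.c / f_mec
    return t, p_tx * task.m / r_i


def remote_cost(task: Subtask, r_i: float, c_fiber: float, f_ccc: float,
                p_tx: float, n_amps: int, tau: float) -> Tuple[float, float]:
    t = task.m / r_i + task.m / c_fiber + n_amps * tau + task.c / f_ccc
    return t, p_tx * task.m / r_i


def candidate_modes(task: Subtask, costs: Dict[str, Tuple[float, float]]) -> List[str]:
    """Operation S303: keep only the modes whose delay and energy both satisfy
    the primary selection condition (t <= t_i^max and e <= e_i^max)."""
    return [mode for mode, (t, e) in costs.items()
            if t <= task.t_max and e <= task.e_max]
```

For each subtask, only the modes returned by candidate_modes would then be combined into candidate task execution strategies (operation S304); a subtask with an empty candidate list would be removed from the task set.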
According to an embodiment of the present disclosure, the preset algorithm employs a simulated annealing algorithm.
Specifically, according to the embodiment of the present disclosure, determining a target task execution policy from candidate task execution policies by using a joint optimization model and a preset algorithm includes:
setting iteration conditions of a simulated annealing algorithm;
setting a termination condition of the simulated annealing algorithm according to a preset threshold value;
performing one or more times of iterative solution according to the iterative conditions to obtain one or more times of current task execution strategies;
calculating an objective function increment by using a joint optimization model, wherein the objective function increment is the difference between a weighted value under the current task execution strategy in the current iteration and a weighted value under the current task execution strategy in the last iteration;
and determining a target task execution strategy from the current task execution strategies according to the target function increment and the termination condition.
The specific implementation process of the method is as follows:
Step 1: set the initial temperature T_0, the termination temperature T_f, the temperature change rate u, and the number of inner-loop iterations L; perform initialization by randomly selecting a task execution strategy as the initial task execution strategy, i.e., randomly generate a solution as the initial solution S_init.
Step 2: iteratively execute Step 3 to Step 5 in a loop.
Step 3: generate a new solution S_new and calculate the objective function increment Δf.
Step 4: if Δf < 0, accept the new solution S_new as the current solution (the current task execution strategy); otherwise, accept it with probability exp(-Δf / T_k).
Step 5: lower the temperature to T_k; if T_k < T_f, take the current solution as the optimal solution (i.e., the target task execution strategy) and terminate the loop; otherwise, continue.
According to the embodiment of the disclosure, the simulated annealing algorithm is used to determine the target task execution strategy from the candidate task execution strategies, so that the target task execution strategy can be found more quickly among a massive number of task execution strategies; meanwhile, by virtue of the characteristics of the algorithm, taking a local optimum as the global optimum is avoided, which improves the calculation accuracy. A sketch of this search procedure is given below.
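The following Python sketch (not part of the original disclosure) mirrors Steps 1 to 5 above, assuming a geometric cooling schedule T_k = u·T_{k-1} and the Metropolis acceptance probability exp(-Δf / T_k); the default parameters and the cost and random_neighbor callables are illustrative placeholders for the joint-optimization-model evaluation and the strategy perturbation:

```python
import math
import random


def simulated_annealing(initial_strategy, cost, random_neighbor,
                        t0=100.0, tf=1e-3, u=0.95, inner_loops=50):
    """Search for a task execution strategy with a small weighted value.

    cost(strategy) evaluates the joint optimization model for a strategy;
    random_neighbor(strategy) returns a perturbed candidate strategy.
    """
    current = initial_strategy            # Step 1: random initial solution S_init
    t_k = t0
    while t_k >= tf:                      # Step 5: stop once T_k < T_f
        for _ in range(inner_loops):      # inner loop of length L
            new = random_neighbor(current)           # Step 3: new solution S_new
            delta_f = cost(new) - cost(current)      # objective function increment
            if delta_f < 0 or random.random() < math.exp(-delta_f / t_k):
                current = new                        # Step 4: accept the new solution
        t_k *= u                          # lower the temperature (assumed T_k = u * T_{k-1})
    return current                        # the target task execution strategy
```

Here cost would evaluate the weighted value from the joint optimization model for a candidate strategy, and random_neighbor could, for example, switch the execution mode of one randomly chosen computing subtask while keeping the wireless-channel constraint satisfied.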
Fig. 4 schematically shows a block diagram of a task execution policy determination apparatus 400 according to an embodiment of the present disclosure.
The task execution policy determination apparatus 400 may be used to implement the method described with reference to fig. 2.
As shown in fig. 4, the apparatus includes a first determining module 401 and a second determining module 402.
The first determining module 401 is configured to determine a candidate task execution policy related to a computing task, where the computing task includes multiple computing sub-tasks, and the candidate task execution policy is used to characterize a candidate execution manner of each computing sub-task in the computing task, where the candidate execution manner includes one of the following: execution at the local device, execution at the remote cloud server, or execution at the edge cloud server.
A second determining module 402, configured to determine a target task execution policy from the candidate task execution policies by using a joint optimization model and a preset algorithm, where the joint optimization model is used to calculate a weighted value of task delay and device energy consumption of a computation task under each candidate task execution policy, and the weighted value is smaller than a preset threshold under the target task execution policy.
According to the embodiment of the disclosure, the first determining module 401 determines that the candidate execution mode of each computing subtask is execution at the local device, at the remote cloud server, or at the edge cloud server, and the second determining module 402 then determines the target task execution strategy by using the joint optimization model and the preset algorithm. Because the joint optimization model calculates the weighted value of the task delay and the device energy consumption of the computing task under each candidate task execution strategy, and thus comprehensively considers the influence of both factors, the apparatus of the present disclosure combines the abundant computing resources of the remote cloud with the proximity of the edge cloud to the mobile device, takes minimizing the weighted sum of the computing task delay and the mobile device energy consumption as the objective, and jointly optimizes the computing task delay and the mobile device energy consumption, so that the problems of large delay of mobile computing tasks and high energy consumption of mobile devices can be solved at the same time.
According to the embodiment of the present disclosure, the joint optimization model includes a time delay parameter, an energy consumption parameter, a task number parameter and a balance parameter, where the time delay parameter is used for representing the execution time of a computing subtask, the energy consumption parameter is used for representing the execution energy consumption of a computing subtask, the task number parameter is used for representing the number of computing subtasks, and the balance parameter is used for representing the weight given to the task delay and the weight given to the device energy consumption of a computing subtask.
According to the embodiment of the disclosure, each parameter in the joint optimization model satisfies a preset constraint condition, and the preset constraint condition includes: the numerical value of the delay parameter is less than or equal to the maximum delay allowed by the calculation task, the numerical value of the energy consumption parameter is less than or equal to the maximum energy consumption allowed by the calculation task, the numerical value of the task number parameter is less than or equal to the number of wireless channels, and the numerical value of the balance parameter is more than or equal to 0 and less than or equal to 1.
According to an embodiment of the present disclosure, the first determination module includes a first determination unit, a second determination unit.
The first determining unit is used for determining a candidate execution mode of each computing sub-task in the computing tasks; and the second determining unit is used for determining a candidate task execution strategy related to the computing task according to the candidate execution mode of each computing subtask.
According to an embodiment of the present disclosure, the first determination unit includes a determination subunit, a calculation subunit, and a screening subunit.
The determining subunit is configured to determine a maximum time delay allowed by the computation task and a maximum energy consumption allowed by the computation task; the calculation subunit is configured to calculate task delay and device energy consumption of each calculation sub-task in each original execution mode, where the original execution mode includes one of: executing at a local device, executing at a remote cloud server, executing at an edge cloud server; a screening subunit, configured to determine an original execution manner that satisfies a primary selection condition as a candidate execution manner, where the primary selection condition is: the task delay of the calculation subtask in the original execution mode is less than or equal to the maximum delay allowed by the calculation task, and the equipment energy consumption of the calculation subtask in the original execution mode is less than or equal to the maximum energy consumption allowed by the calculation task.
According to an embodiment of the present disclosure, the preset algorithm employs a simulated annealing algorithm.
According to an embodiment of the present disclosure, the second determination module includes a first setting unit, a second setting unit, an iteration unit, a calculation unit, and a third determination unit.
The first setting unit is used for setting iteration conditions of the simulated annealing algorithm; the second setting unit is used for setting the termination condition of the simulated annealing algorithm according to the preset threshold value; the iteration unit is used for carrying out one or more times of iteration solution according to the iteration conditions so as to obtain one or more times of current task execution strategies; the calculating unit is used for calculating an objective function increment by using the joint optimization model, wherein the objective function increment is the difference between the weighted value under the current task execution strategy in the current iteration and the weighted value under the current task execution strategy in the last iteration; and the third determining unit is used for determining the target task execution strategy from the current task execution strategies according to the target function increment and the termination condition.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any number of the first determining module 401 and the second determining module 402 may be combined and implemented in one module/unit/sub-unit, or any one of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the first determining module 401 and the second determining module 402 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware by any other reasonable way of integrating or packaging a circuit, or in any one of three implementations of software, hardware, and firmware, or in a suitable combination of any of them. Alternatively, at least one of the first determining module 401 and the second determining module 402 may be at least partly implemented as a computer program module, which when executed may perform a corresponding function.
Fig. 5 schematically shows a block diagram of an electronic device for implementing a task execution policy determination method according to an embodiment of the present disclosure.
Fig. 5 schematically shows a block diagram of an electronic device adapted to implement the above described method according to an embodiment of the present disclosure. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, an electronic device 500 according to an embodiment of the present disclosure includes a processor 501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. The processor 501 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 501 may also include onboard memory for caching purposes. Processor 501 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the disclosure.
In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are stored. The processor 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. The processor 501 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 502 and/or the RAM 503. Note that the programs may also be stored in one or more memories other than the ROM 502 and the RAM 503. The processor 501 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, electronic device 500 may also include an input/output (I/O) interface 505, input/output (I/O) interface 505 also being connected to bus 504. The system 500 may also include one or more of the following components connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card, a modem, or the like. The communication section 509 performs communication processing via a network such as the internet. The driver 510 is also connected to the I/O interface 505 as necessary. A removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 510 as necessary, so that a computer program read out therefrom is mounted into the storage section 508 as necessary.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program, when executed by the processor 501, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include ROM 502 and/or RAM 503 and/or one or more memories other than ROM 502 and RAM 503 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program that contains program code for performing the method provided by the embodiments of the present disclosure. When the computer program product runs on an electronic device, the program code causes the electronic device to implement the task execution policy determination method provided by the embodiments of the present disclosure.
The computer program, when executed by the processor 501, performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted and distributed in the form of a signal over a network medium, and downloaded and installed through the communication section 509, and/or installed from the removable medium 511. The computer program containing the program code may be transmitted using any suitable network medium, including but not limited to wireless and wired media, or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, the program code for carrying out the computer programs provided by the embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high level procedural and/or object oriented programming languages, and/or assembly/machine languages. The programming languages include, but are not limited to, Java, C++, Python, the "C" language, and the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. Where a remote computing device is involved, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit and teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these embodiments are described for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the respective embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and their equivalents. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to fall within the scope of the present disclosure.

Claims (11)

1. A task execution policy determination method includes:
determining a candidate task execution strategy related to a computing task, wherein the computing task comprises a plurality of computing subtasks, and the candidate task execution strategy is used for characterizing a candidate execution mode of each computing subtask in the computing task, and the candidate execution mode comprises one of the following modes: executing at a local device, executing at a remote cloud server, executing at an edge cloud server;
and determining a target task execution strategy from the candidate task execution strategies by using a joint optimization model and a preset algorithm, wherein the joint optimization model is used for calculating weighted values of task delay and equipment energy consumption of the computing task under each candidate task execution strategy, and the weighted values are smaller than a preset threshold under the target task execution strategy.
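As a purely illustrative reading of claim 1 (not part of the claimed subject matter), the weighted value can be understood as a weighted combination of the task delay and the device energy consumption accumulated over the computing subtasks under one candidate task execution strategy. The Python sketch below makes one such reading concrete; the function name, the aggregation by summation, and all numeric values are assumptions introduced here for illustration only.

# Illustrative sketch only; names and values are assumptions, not part of the claims.
from typing import List

def weighted_value(delays: List[float], energies: List[float], balance: float) -> float:
    """Weighted value of task delay and device energy consumption for one
    candidate task execution strategy; `balance` is a delay/energy trade-off
    weight in [0, 1] (the role played by the balance parameter of claim 2 below)."""
    total_delay = sum(delays)      # task delay of the computing task
    total_energy = sum(energies)   # device energy consumption of the computing task
    return balance * total_delay + (1.0 - balance) * total_energy

# Example with three computing subtasks under one candidate strategy (assumed values).
delays = [0.12, 0.30, 0.08]        # seconds per subtask
energies = [0.50, 0.10, 0.45]      # joules per subtask
preset_threshold = 0.5
value = weighted_value(delays, energies, balance=0.6)
is_below_threshold = value < preset_threshold   # a target strategy must fall below the threshold

Under this reading, searching the candidate task execution strategies for one whose weighted value drops below the preset threshold is the job delegated to the preset algorithm in the second step of claim 1.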
2. The method of claim 1, wherein the joint optimization model comprises a time delay parameter, an energy consumption parameter, a task number parameter, and a balance parameter, wherein the time delay parameter is used for representing the execution time of the computing subtasks, the energy consumption parameter is used for representing the execution energy consumption of the computing subtasks, the task number parameter is used for representing the number of the computing subtasks, and the balance parameter is used for representing the task delay weight and the equipment energy consumption weight of the computing subtasks.
3. The method of claim 2, wherein each parameter in the joint optimization model satisfies a preset constraint condition, and the preset constraint condition includes: the numerical value of the delay parameter is less than or equal to the maximum delay allowed by the computing task, the numerical value of the energy consumption parameter is less than or equal to the maximum energy consumption allowed by the computing task, the numerical value of the task number parameter is less than or equal to the number of wireless channels, and the numerical value of the balance parameter is greater than or equal to 0 and less than or equal to 1.
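The preset constraint condition of claim 3 can be read as a plain feasibility check over the four model parameters. The sketch below assumes scalar inputs and a hypothetical helper name; it is not part of the claims.

# Minimal feasibility-check sketch for the constraints of claim 3; names are assumptions.
def satisfies_constraints(delay: float, energy: float, num_subtasks: int, balance: float,
                          max_delay: float, max_energy: float, num_channels: int) -> bool:
    """True when the delay parameter, energy consumption parameter, task number
    parameter, and balance parameter all satisfy the preset constraint condition."""
    return (delay <= max_delay
            and energy <= max_energy
            and num_subtasks <= num_channels
            and 0.0 <= balance <= 1.0)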
4. The method of claim 1, wherein determining the candidate task execution policy related to the computing task comprises:
determining the candidate execution mode of each of the computing subtasks in the computing task;
and determining the candidate task execution strategy related to the computing task according to the candidate execution mode of each computing subtask.
5. The method of claim 4, wherein determining the candidate execution mode of each of the computing subtasks comprises:
determining the maximum time delay allowed by the computing task and the maximum energy consumption allowed by the computing task;
respectively calculating the task delay and the equipment energy consumption of each computing subtask under each original execution mode, wherein the original execution mode comprises one of the following modes: executing at a local device, executing at a remote cloud server, executing at an edge cloud server;
and determining the original execution mode meeting the initial selection condition as the candidate execution mode, wherein the initial selection condition is as follows: the task delay of the computing subtask in the original execution mode is less than or equal to the maximum time delay allowed by the computing task, and the equipment energy consumption of the computing subtask in the original execution mode is less than or equal to the maximum energy consumption allowed by the computing task.
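Read operationally, the initial selection condition of claim 5 filters, for each computing subtask, the original execution modes whose task delay and device energy consumption both stay within the maxima allowed by the computing task; only the surviving modes become candidate execution modes. The sketch below illustrates one possible filtering routine; the data layout and all names are assumptions rather than part of the claims.

# Illustrative filtering sketch for the initial selection condition of claim 5.
from typing import Dict, List

MODES = ("local", "edge_cloud", "remote_cloud")   # assumed labels for the three execution modes

def candidate_modes(delay: Dict[int, Dict[str, float]],
                    energy: Dict[int, Dict[str, float]],
                    max_delay: float,
                    max_energy: float) -> Dict[int, List[str]]:
    """delay[i][m] and energy[i][m] hold the task delay and device energy
    consumption of computing subtask i under original execution mode m; the
    result maps each subtask to its candidate execution modes."""
    result: Dict[int, List[str]] = {}
    for i in delay:
        result[i] = [m for m in MODES
                     if delay[i][m] <= max_delay and energy[i][m] <= max_energy]
    return result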
6. The method of claim 1, wherein the preset algorithm employs a simulated annealing algorithm.
7. The method of claim 6, wherein determining the target task execution strategy from the candidate task execution strategies by using the joint optimization model and the preset algorithm comprises:
setting iteration conditions of the simulated annealing algorithm;
setting a termination condition of the simulated annealing algorithm according to the preset threshold;
performing one or more iterations of solving according to the iteration conditions to obtain a current task execution strategy in each iteration;
calculating an objective function increment by using the joint optimization model, wherein the objective function increment is the difference between the weighted value under the current task execution strategy in the current iteration and the weighted value under the current task execution strategy in the last iteration;
and determining the target task execution strategy from the current task execution strategies according to the objective function increment and the termination condition.
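Claim 7 describes a simulated annealing search driven by the objective function increment between the current and previous task execution strategies and stopped once the termination condition derived from the preset threshold is met. The sketch below shows one conventional realization of such a search; the geometric cooling schedule, the neighbor-generation helper, and all names are assumptions and not part of the claims.

# Illustrative simulated annealing sketch for claim 7; weighted_value and random_neighbor
# are assumed callables supplied by the caller, and the cooling schedule is an assumption.
import math
import random

def anneal(initial_strategy, weighted_value, random_neighbor, preset_threshold,
           t_start=1.0, t_end=1e-3, alpha=0.95, steps_per_t=50):
    current = initial_strategy
    current_value = weighted_value(current)
    best, best_value = current, current_value
    t = t_start
    while t > t_end:                               # iteration condition on the temperature
        for _ in range(steps_per_t):
            candidate = random_neighbor(current)   # e.g. change one subtask's execution mode
            delta = weighted_value(candidate) - current_value   # objective function increment
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_value = candidate, current_value + delta
                if current_value < best_value:
                    best, best_value = current, current_value
        if best_value < preset_threshold:          # termination condition from the preset threshold
            return best, best_value
        t *= alpha                                 # assumed geometric cooling
    return best, best_value

Accepting some uphill moves (positive increments) with probability exp(-delta / t) is what lets such a search escape locally good but globally poor strategies before the temperature cools.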
8. A task execution policy determination apparatus comprising:
a first determining module, configured to determine a candidate task execution policy related to a computing task, wherein the computing task comprises a plurality of computing subtasks, and the candidate task execution policy is used for characterizing a candidate execution mode of each of the computing subtasks in the computing task, wherein the candidate execution mode comprises one of the following: executing at a local device, executing at a remote cloud server, executing at an edge cloud server;
and a second determining module, configured to determine a target task execution strategy from the candidate task execution strategies by using a joint optimization model and a preset algorithm, wherein the joint optimization model is used for calculating a weighted value of task delay and equipment energy consumption of the computing task under each candidate task execution strategy, and the weighted value is smaller than a preset threshold under the target task execution strategy.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 7.
11. A computer program product comprising computer executable instructions for implementing the method of any one of claims 1 to 7 when executed.
CN202110695651.7A 2021-06-22 2021-06-22 Task execution strategy determining method and device, electronic equipment and storage medium Pending CN113419853A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110695651.7A CN113419853A (en) 2021-06-22 2021-06-22 Task execution strategy determining method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113419853A true CN113419853A (en) 2021-09-21

Family

ID=77717430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110695651.7A Pending CN113419853A (en) 2021-06-22 2021-06-22 Task execution strategy determining method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113419853A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310922A (en) * 2020-03-27 2020-06-19 北京奇艺世纪科技有限公司 Method, device, equipment and storage medium for processing deep learning calculation task
CN111953759A (en) * 2020-08-04 2020-11-17 国网河南省电力公司信息通信公司 Collaborative computing task unloading and transferring method and device based on reinforcement learning
CN112817741A (en) * 2021-01-05 2021-05-18 中国科学院计算技术研究所 DNN task control method for edge calculation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
郭金林 (Guo Jinlin): "光纤-无线网络中协同计算迁移策略的研究" [Research on collaborative computation offloading strategies in fiber-wireless networks], 中国优秀硕士学位论文全文数据库•信息科技辑 (China Master's Theses Full-text Database, Information Science and Technology), page 4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114217933A (en) * 2021-12-27 2022-03-22 北京百度网讯科技有限公司 Multi-task scheduling method, device, equipment and storage medium
WO2023155820A1 (en) * 2022-02-21 2023-08-24 阿里巴巴(中国)有限公司 Method and system for processing computing task
CN114548830A (en) * 2022-04-18 2022-05-27 支付宝(杭州)信息技术有限公司 Selection operator determining method, strategy combination optimizing method and device
CN114548830B (en) * 2022-04-18 2022-07-29 支付宝(杭州)信息技术有限公司 Selection operator determining method, strategy combination optimizing method and device

Similar Documents

Publication Publication Date Title
US11848826B2 (en) Hyperparameter and network topology selection in network demand forecasting
CN113419853A (en) Task execution strategy determining method and device, electronic equipment and storage medium
CN110198244B (en) Heterogeneous cloud service-oriented resource configuration method and device
CN108885571B (en) Input of batch processing machine learning model
US9612878B2 (en) Resource allocation in job scheduling environment
US9524009B2 (en) Managing the operation of a computing device by determining performance-power states
US11321123B2 (en) Determining an optimum number of threads to make available per core in a multi-core processor complex to executive tasks
US11315120B2 (en) Implementing a marketplace for risk assessed smart contracts issuers and execution providers in a blockchain
CN112579194A (en) Block chain consensus task unloading method and device based on time delay and transaction throughput
US20150234677A1 (en) Dynamically adjusting wait periods according to system performance
CN113867843B (en) Mobile edge computing task unloading method based on deep reinforcement learning
US20230244530A1 (en) Flexible optimized data handling in systems with multiple memories
CN110766185A (en) User quantity determination method and system, and computer system
CN112905315A (en) Task processing method, device and equipment in Mobile Edge Computing (MEC) network
CN116700931A (en) Multi-target edge task scheduling method, device, equipment, medium and product
US11824731B2 (en) Allocation of processing resources to processing nodes
CN111859775A (en) Software and hardware co-design for accelerating deep learning inference
CN112527509B (en) Resource allocation method and device, electronic equipment and storage medium
CN110635961A (en) Pressure measurement method, device and system of server
CN113076224A (en) Data backup method, data backup system, electronic device and readable storage medium
CN114785693B (en) Virtual network function migration method and device based on layered reinforcement learning
CN117311973A (en) Computing device scheduling method and device, nonvolatile storage medium and electronic device
CN116450290A (en) Computer resource management method and device, cloud server and storage medium
CN115658287A (en) Method, apparatus, medium, and program product for scheduling execution units
CN114637809A (en) Method, device, electronic equipment and medium for dynamic configuration of synchronous delay time

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination