CN117707797A - Task scheduling method and device based on distributed cloud platform and related equipment

Info

Publication number
CN117707797A
Authority
CN
China
Prior art keywords
task
slave node
target slave
node
task scheduling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410168488.2A
Other languages
Chinese (zh)
Other versions
CN117707797B (en)
Inventor
陈晓红
唐鸿凯
梁伟
杨秋月
石家帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiangjiang Laboratory
Original Assignee
Xiangjiang Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiangjiang Laboratory filed Critical Xiangjiang Laboratory
Priority to CN202410168488.2A priority Critical patent/CN117707797B/en
Publication of CN117707797A publication Critical patent/CN117707797A/en
Application granted granted Critical
Publication of CN117707797B publication Critical patent/CN117707797B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to the field of task scheduling, and discloses a task scheduling method and device based on a distributed cloud platform and related equipment. The method comprises: acquiring a task request sent by a target slave node, wherein the task request comprises graph structural features determined based on each task on the target slave node; based on a feature extraction module on the master node of the area where the target slave node is located, performing feature aggregation processing, for each task node of the graph structural features, on the task node and its neighborhood information to obtain aggregated graph structural features; inputting the aggregated graph structural features into a deep reinforcement learning module to obtain a task scheduling policy of the target slave node; and scheduling and distributing each task on the target slave node according to the task scheduling policy. By adopting the method and the device, the intelligence of task scheduling based on the distributed cloud platform is improved, and the network load of the scheduling node is reduced.

Description

Task scheduling method and device based on distributed cloud platform and related equipment
Technical Field
The present invention relates to the field of task scheduling, and in particular, to a task scheduling method and apparatus based on a distributed cloud platform, a computer device, and a storage medium.
Background
In the era of explosive growth of network data, cloud platforms are becoming increasingly popular due to their high-performance computing capabilities. The distributed cloud platform currently in use is an emerging cloud platform architecture that sinks cloud computing services to edge nodes in a distributed manner; it is driven by virtual machines or containers so that computing resources can be better isolated and managed. However, as the number of terminal devices accessing the cloud platform grows explosively, the scheduling mode of the cloud platform, in which a user sends a service to a scheduling node and the scheduling node transfers the service data to a target computing node after generating a scheduling policy, places a heavy network load on the scheduling stage.
Therefore, the scheduling node of the existing cloud platform has the problem of heavy network load.
Disclosure of Invention
The embodiment of the invention provides a task scheduling method and device based on a distributed cloud platform, computer equipment and a storage medium, which are used for improving the intelligence of task scheduling based on the distributed cloud platform and reducing the network load of scheduling nodes.
In order to solve the above technical problems, an embodiment of the present application provides a task scheduling method based on a distributed cloud platform, including:
Acquiring a task request sent by a target slave node, wherein the task request comprises graph structural characteristics determined based on each task on the target slave node;
based on a feature extraction module on the master node of the area where the target slave node is located, performing, for each task node of the graph structural features, feature aggregation processing on the task node and its neighborhood information to obtain aggregated graph structural features;
inputting the structural features of the aggregation graph into a deep reinforcement learning module to obtain a task scheduling strategy of the target slave node;
and according to the task scheduling strategy, scheduling and distributing each task on the target slave node.
In order to solve the above technical problem, an embodiment of the present application further provides a task scheduling device based on a distributed cloud platform, including:
the task request acquisition module is used for acquiring a task request sent by a target slave node, wherein the task request comprises graph structural features determined based on each task on the target slave node;
the aggregation module is used for performing, based on the feature extraction module on the master node of the area where the target slave node is located, feature aggregation processing on each task node of the graph structural features and its neighborhood information to obtain aggregated graph structural features;
The task scheduling strategy determining module is used for inputting the structural characteristics of the aggregation graph into the deep reinforcement learning module to obtain the task scheduling strategy of the target slave node;
and the scheduling module is used for scheduling and distributing each task on the target slave node according to the task scheduling strategy.
In order to solve the above technical problems, the embodiments of the present application further provide a computer device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the steps of the task scheduling method based on the distributed cloud platform are implemented when the processor executes the computer program.
In order to solve the above technical problem, the embodiments of the present application further provide a computer readable storage medium, where a computer program is stored, where the computer program, when executed by a processor, implements the steps of the task scheduling method based on the distributed cloud platform.
According to the task scheduling method and device, the computer equipment and the storage medium based on the distributed cloud platform provided by the embodiment of the invention, a task request sent by a target slave node is acquired, wherein the task request comprises graph structural features determined based on each task on the target slave node; based on a feature extraction module on the master node of the area where the target slave node is located, feature aggregation processing is performed on each task node of the graph structural features and its neighborhood information to obtain aggregated graph structural features; the aggregated graph structural features are input into a deep reinforcement learning module to obtain a task scheduling policy of the target slave node; and each task on the target slave node is scheduled and distributed according to the task scheduling policy. By adopting the method and the device, the intelligence of task scheduling based on the distributed cloud platform is improved, and the network load of the scheduling node is reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a distributed cloud platform based task scheduling method of the present application;
FIG. 3 is a schematic diagram of a scheduling application framework of a specific embodiment of a task scheduling method based on a distributed cloud platform according to the present application;
FIG. 4 is a schematic diagram of a scheduling application framework of yet another specific embodiment of a task scheduling method based on a distributed cloud platform of the present application;
FIG. 5 is a schematic diagram of a scheduling application framework of another specific embodiment of a task scheduling method based on a distributed cloud platform of the present application;
FIG. 6 is a schematic structural diagram of one embodiment of a distributed cloud platform based task scheduler according to the present application;
FIG. 7 is a schematic structural diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the applications herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description and claims of the present application and in the description of the figures above are intended to cover non-exclusive inclusions. The terms first, second and the like in the description and in the claims or in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, as shown in fig. 1, a distributed cloud platform 100 may include a plurality of regions, each of which may include slave nodes 101, 102, 103, a network 104, and a master node 105. Network 104 is the medium used to provide communication links between slave nodes 101, 102, 103 and master node 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the master node 105 over the network 104 using the slave nodes 101, 102, 103 to receive or send messages, etc.
Slave nodes 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
The master node 105 may be a master node that provides various services, such as a background server that provides support for pages displayed on the slave nodes 101, 102, 103.
It should be noted that, because the distributed cloud platform is widely distributed across geographic locations, it provides clients with services of lower delay and better fault tolerance and availability. In the embodiment of the application, the distributed cloud platform is divided into different service areas, each area comprising a plurality of distributed cloud nodes; within one area, the distributed cloud nodes comprise a master node and a plurality of slave nodes, and the master node is responsible for managing and coordinating all the slave nodes in the same area. The load conditions of the slave nodes in the same area are periodically reported to the master node of the area.
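As a concrete illustration of this region structure, the following Python sketch shows one way a master node might collect the periodic load reports of the slave nodes in its area; the class and field names are illustrative assumptions and do not appear in the patent.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ContainerLoad:
    cpu_util: float   # CPU utilization
    mem_util: float   # memory utilization
    disk_io: float    # disk I/O rate

@dataclass
class LoadReport:
    slave_id: str
    containers: List[ContainerLoad]

class RegionMaster:
    # Master node of one service area: manages and coordinates its slave nodes.
    def __init__(self, region_id: str):
        self.region_id = region_id
        self.latest_load: Dict[str, LoadReport] = {}

    def receive_load_report(self, report: LoadReport) -> None:
        # Slave nodes in the same area report their load periodically (or in real time).
        self.latest_load[report.slave_id] = report

master = RegionMaster("base-station-7")
master.receive_load_report(LoadReport("slave-1", [ContainerLoad(0.4, 0.6, 0.2)]))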
Illustratively, the signal coverage of one base station is taken as one service area. The master node may be a cloud node selected directly within or near the base station. With the development of edge computing and the internet of things, some base stations have begun to integrate a certain degree of computing power. A plurality of devices under a service area, such as roadside monitoring units, can each be counted as a slave node and can provide services.
It should be noted that, the task scheduling method based on the distributed cloud platform provided by the embodiment of the application is executed by the master node of the target area, and accordingly, the task scheduling device based on the distributed cloud platform is set in the master node of the target area.
It should be understood that the number of slave nodes, networks and master nodes in fig. 1 is merely illustrative. According to implementation requirements, there may be any number of slave nodes, networks and master nodes, and the slave nodes 101, 102, 103 in the embodiment of the present application may specifically correspond to an application system in actual production.
Referring to fig. 2, fig. 2 shows a task scheduling method based on a distributed cloud platform, illustrated by taking the master node of the target area in fig. 1 as the executing entity. The distributed cloud platform comprises a plurality of areas, and each area comprises a master node and a plurality of slave nodes. The method is described in detail as follows:
s201, acquiring a task request sent by a target slave node, wherein the task request comprises graph structural characteristics determined based on each task on the target slave node.
The target slave node refers to a slave node initiating a task request in a target area. The task request refers to a request for task scheduling of a task on a target slave node.
In the embodiment of the application, the task request sent by the target slave node is acquired based on the master node of the area where the target slave node is located.
In the embodiment of the application, the graph structural features are constructed according to the priority constraints and dependency relationships among the tasks on the target slave node. The graph structural features consist of vertices and of edges connecting pairs of vertices. A vertex is a task node and represents a task on the target slave node, so the number of vertices equals the number of tasks on the target slave node. Edges represent the inheritance (precedence) order between tasks.
In embodiments of the present application, the graph structural features may be directed acyclic graphs. The graph structure feature may be a structure diagram including only task vertices and connection relationships between task vertices. The specific content of the structural features of the drawing can be adjusted according to practical situations, and the embodiment of the application is not limited.
Illustratively, the graph structural features are a directed acyclic graph, and a workflow is typically represented by a directed acyclic graph consisting of vertices and edges. The directed acyclic graph is a tuple G = <T, E>, where T = {t1, t2, ..., tN} is the vertex set corresponding to the workflow tasks, N is the total number of tasks, and E is a directed edge set that reflects the data dependencies between tasks. For example, an edge (ti, tj) means that there is a precedence constraint between ti and tj: ti is the direct predecessor (parent) of tj, and tj is the direct successor (child) of ti. Each edge carries a weight representing the size of the data transmitted from ti to tj. A task may have one or more parent or child tasks, and a task cannot be executed until all its parent tasks have been executed and all input data required by the task has been received.
Through the directed acyclic graph, the master node (scheduling node) of the area where the target slave node is located can understand, at fine granularity, the characteristics of the service (the composition of its tasks) on the target slave node. Meanwhile, the graph structural features that are sent are much smaller than the total data of the tasks, which improves the sending rate of task requests.
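The following Python sketch illustrates such a workflow graph of task vertices and weighted dependency edges; the class and method names are assumptions made for illustration and are not defined by the patent.

from collections import defaultdict

class WorkflowDAG:
    def __init__(self):
        self.tasks = set()
        self.parents = defaultdict(set)    # child -> set of parent tasks
        self.children = defaultdict(set)   # parent -> set of child tasks
        self.edge_data = {}                # (parent, child) -> size of transferred data

    def add_edge(self, parent, child, data_size):
        # Edge (parent, child): 'parent' must finish before 'child' can start.
        self.tasks.update((parent, child))
        self.children[parent].add(child)
        self.parents[child].add(parent)
        self.edge_data[(parent, child)] = data_size

    def ready(self, task, finished):
        # A task can run only after all its parents finished and their data has arrived.
        return self.parents[task] <= set(finished)

dag = WorkflowDAG()
dag.add_edge("t1", "t3", data_size=4.0)
dag.add_edge("t2", "t3", data_size=1.5)
print(dag.ready("t3", finished={"t1"}))   # False: t2 has not finished yet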
S202, based on the feature extraction module on the master node of the area where the target slave node is located, performing feature aggregation processing, for each task node of the graph structural features, on the task node and its neighborhood information to obtain aggregated graph structural features.
The feature extraction module is a module for extracting global features of the graph structural features. The feature extraction module is implemented as a graph neural network, including but not limited to a recurrent neural network or a Markov network; the specific choice can be adjusted according to the graph structural features, and the embodiment of the application is not limited thereto.
In the embodiment of the present application, the neighborhood information refers to the information of the nodes that have a connection relationship with a task node. The nodes with a connection relationship comprise neighbor nodes and multi-hop nodes, where a multi-hop node refers to a non-neighbor node that is still connected to the task node through a path.
In the embodiment of the present application, specifically, based on the feature extraction module on the master node of the area where the target slave node is located, the neighborhood information is aggregated onto each task node of the graph structural features, and the dependency relationships between tasks (the relationship information of the edges) and the feature representations are learned, so as to form the global features corresponding to the graph structural features, that is, the aggregated graph structural features.
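A minimal sketch of one such neighborhood-aggregation step is given below, using a generic mean aggregation in the style of a graph convolution; the patent does not fix the exact aggregator, so the concrete update rule and all names here are assumptions.

import numpy as np

def aggregate_once(features, adjacency, weight):
    # features: (N, d) task-node features; adjacency: (N, N) 0/1 matrix of the DAG,
    # made symmetric so that information flows along edges; weight: (d, d_out).
    a = np.maximum(adjacency, adjacency.T) + np.eye(adjacency.shape[0])  # add self-loops
    deg = a.sum(axis=1, keepdims=True)
    neigh_mean = (a @ features) / deg          # average each node with its neighborhood
    return np.maximum(neigh_mean @ weight, 0)  # ReLU

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                    # 4 task nodes with 8-dimensional features
adj = np.array([[0, 0, 1, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)
h = aggregate_once(x, adj, rng.normal(size=(8, 16)))
# Stacking several such steps lets multi-hop information reach each task node; a global
# readout (for example, the mean over nodes) then yields the aggregated graph feature.
graph_feature = h.mean(axis=0)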
S203, inputting the structure characteristics of the aggregate graph into a deep reinforcement learning module to obtain a task scheduling strategy of the target slave node.
The deep reinforcement learning module comprises an agent and is used for receiving relevant information of the target slave node and intelligently deciding a task scheduling strategy for each task on the target slave node according to the relevant information of the target slave node. The task scheduling policy includes a computing task scheduling policy and a communication task scheduling policy.
In the embodiment of the application, the task request includes channel state information of at least one other slave node connected with the target slave node. The target slave node can send and receive probe data packets to evaluate the channel state between itself and each slave node of the area before sending the task request, and it sends this channel state information as one item of data in the task request to the master node of the area where the target slave node is located.
S204, scheduling and distributing each task on the target slave node according to the task scheduling strategy.
The specific process is that the master node replies the task scheduling policy to the target slave node, and the target slave node schedules and distributes each task on the target slave node according to the task scheduling policy.
In this embodiment, a task request sent by a target slave node is acquired, wherein the task request comprises graph structural features determined based on each task on the target slave node; based on a feature extraction module on the master node of the area where the target slave node is located, feature aggregation processing is performed on each task node of the graph structural features and its neighborhood information to obtain aggregated graph structural features; the aggregated graph structural features are input into a deep reinforcement learning module to obtain a task scheduling policy of the target slave node; and each task on the target slave node is scheduled and distributed according to the task scheduling policy. By adopting the method and the device, the intelligence of task scheduling based on the distributed cloud platform is improved, and the network load of the scheduling node is reduced.
Referring to fig. 3, fig. 3 is a schematic diagram of a scheduling application framework of a specific embodiment of the task scheduling method based on a distributed cloud platform. As shown in fig. 3, a client (the target slave node) prepares to send a service request (a task request) and sends it to the master node of the area where the client is located; the master node determines a resource scheduling policy according to the obtained information and replies the resource scheduling policy (the task scheduling policy) to the target slave node, and the client then sends the service data (the data related to each task) according to the task scheduling policy.
In this embodiment, before the aggregate graph structural feature is input into the deep reinforcement learning module to obtain the task scheduling policy of the target slave node, the method further includes:
and extracting the characteristics of the channel state information based on a perceptron module on the master node of the area where the target slave node is located, and obtaining the channel state characteristics.
The perceptron module can be a single-layer perceptron module or a multi-layer perceptron module, and can be specifically adjusted according to actual conditions.
In this embodiment, the channel state information is processed using a perceptron module on the master node of the area where the target slave node is located. The channel state information from the target slave node to each slave node is expressed as H = {h1, h2, ..., hJ}, where hj represents the channel state information from the client to the j-th slave node and J is the number of slave nodes in the service area. Taking the channel state information from the target slave node to each slave node as input, the feature representation of the channel state information is extracted through a multi-layer perceptron network to obtain the channel state features.
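The following sketch shows a small multi-layer perceptron of this kind mapping the J channel state values to a fixed-length channel state feature; the layer sizes, weights and names are illustrative assumptions.

import numpy as np

def mlp_channel_feature(h, w1, b1, w2, b2):
    # h: (J,) channel state values from the target slave node to each slave node.
    z = np.maximum(h @ w1 + b1, 0)   # hidden layer with ReLU
    return z @ w2 + b2               # linear output layer: the channel state feature

rng = np.random.default_rng(1)
J = 6                                             # slave nodes in the service area
h = rng.uniform(0.1, 1.0, size=J)                 # e.g. estimated channel quality per node
w1, b1 = rng.normal(size=(J, 32)), np.zeros(32)
w2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
channel_feature = mlp_channel_feature(h, w1, b1, w2, b2)   # shape (16,)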
In this embodiment of the present application, before obtaining the task request sent by the target slave node, the method further includes:
and acquiring the load characteristics of each slave node in the area where the target slave node is located at each time step.
And when the time sequence feature extraction condition is met, performing time sequence feature extraction on the load feature based on the master node of the area where the target slave node is located, and determining the time sequence feature of each slave node of the area where the target slave node is located.
In this embodiment, it should be understood that the master node is responsible for managing and coordinating all the slave nodes in the same area, and that the load conditions of the slave nodes are reported, periodically or in real time, to the master node of the area in which they are located.
In this embodiment, the master node in the area where the target slave node is located periodically or in real time acquires the load characteristics of each slave node in the area where the target slave node is located at each time step.
It will be appreciated that each slave node contains a plurality of containers, and within a slave node one container may compute one or several tasks. The overall load condition of the k-th container on the n-th slave node is described by three indicators: the CPU utilization, the memory utilization and the disk I/O rate. The load situation of the regional distributed cloud in each time step is represented by a matrix M. There are N slave nodes in the service area, each slave node having K containers, so M collects the load indicators of all N x K containers, as expressed in the following formula (1).
(1)
In this embodiment, the manner in which the load characteristics of each slave node in the area where the target slave node is located at each time step are obtained includes, but is not limited to, a recurrent neural network. The specific selection of the device can be adjusted according to actual conditions, and the embodiment of the application is not limited.
Illustratively, the recurrent neural network extracts time sequence features from the regional distributed cloud load situation over T time steps (that is, it extracts time sequence features from the matrix M) in the order of the slave nodes, where the input of each time step is one row of the matrix. T is the total number of time steps, and the inputs of all time steps form a sequence in time order for each node. The recurrent neural network traverses the input sequence by time step, processes each row of data to form the time sequence feature of each node, and the obtained time sequence features are concatenated to obtain the time sequence features of each slave node in the area where the target slave node is located.
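A minimal sketch of this step is shown below: a plain recurrent cell walks over the T time steps of one slave node's load history (its K containers, three indicators each) and the per-node results are concatenated into the time sequence features; the cell form, shapes and names are assumptions.

import numpy as np

def rnn_timing_feature(seq, w_x, w_h, b):
    # seq: (T, K*3) flattened container loads per time step; returns the last hidden state.
    h = np.zeros(w_h.shape[0])
    for x_t in seq:                               # traverse the sequence by time step
        h = np.tanh(x_t @ w_x + h @ w_h + b)
    return h

rng = np.random.default_rng(2)
T, K, hidden = 10, 4, 8
w_x = rng.normal(size=(K * 3, hidden))
w_h = rng.normal(size=(hidden, hidden))
b = np.zeros(hidden)

node_histories = [rng.uniform(0, 1, size=(T, K * 3)) for _ in range(3)]   # three slave nodes
time_sequence_features = np.concatenate([rnn_timing_feature(s, w_x, w_h, b) for s in node_histories])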
In the embodiment of the present application, the time sequence feature extraction condition includes obtaining the task request sent by the target slave node. Inputting the aggregated graph structural features into the deep reinforcement learning module to obtain the task scheduling policy of the target slave node then includes: inputting the channel state features, the aggregated graph structural features and the time sequence features into the deep reinforcement learning module to obtain the task scheduling policy of the target slave node.
In the embodiment of the application, the task scheduling policy includes a calculation task scheduling policy. The deep reinforcement learning module includes an estimation network sub-module and a scheduling sub-module, where the output of the estimation network sub-module is the input of the scheduling sub-module. The estimation network sub-module includes a state cost function estimation network and a dominance function estimation network, and the two networks share the features input into the deep reinforcement learning module.
Obtaining a task scheduling strategy of a target slave node, comprising:
a first output value of the state-cost function estimation network and a second output value of the dominance function estimation network are obtained.
The first output value and the second output value are used as inputs to a scheduling sub-module, which also includes a state value network.
And based on a preset state value condition set in the state value network, performing calculation resource analysis on the first output value and the second output value, and determining a calculation task scheduling strategy for the target slave node according to the obtained calculation resource result information, wherein the calculation resource result information at least comprises the slave node corresponding to the calculation resource result information.
The scheduling sub-module is a state value network. It can be understood that the state cost function estimation network, the dominance function estimation network and the state value network are all implemented by corresponding functions, which can be adjusted according to actual conditions; the embodiment of the application is not limited thereto.
And taking the channel state characteristics, the aggregate graph structural characteristics and the time sequence characteristics as state inputs, and inputting the state inputs into an agent of the deep reinforcement learning module so that the deep reinforcement learning module determines a calculation task scheduling strategy of the target slave node.
The deep reinforcement learning module comprises a state space, wherein the state space comprises the computing resource state (state of each slave node) of the distributed cloud platform, channel state information and each task of the target slave node.
The channel state features, the aggregated graph structural features and the time sequence features are shared by the estimation networks in the estimation network sub-module; the estimation network sub-module comprises the state cost function estimation network and the dominance function estimation network, and the output values of the two estimation networks form the input of the scheduling sub-module. The scheduling sub-module is a state cost function Q that aims at maximizing the reward value and outputs delta values in total, where delta is determined by the action space. The action space is the policy space of the deep reinforcement learning module and has a plurality of dimensions, including the slave node sequence number, the specification sizes of the four computing resources and the reference codebook. The value of each dimension of an action must not exceed the corresponding resource constraint range, and delta is the total number of values across the different dimensions of an action. The action that maximizes the output sequence of Q values is taken as the resource scheduling policy for the current state; that is, the calculation task scheduling policy for the target slave node is determined according to the obtained computing resource result information, where the computing resource result information at least comprises the slave node corresponding to the computing resource result information.
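The combination of a state value estimate and per-action advantage estimates described above can be sketched as follows; combining them as V + A - mean(A) is a common dueling-network convention that is assumed here, and the dimensions and names are illustrative.

import numpy as np

def dueling_q(state_feature, v_net, a_net):
    v = v_net(state_feature)            # first output value: state value estimate
    a = a_net(state_feature)            # second output values: one advantage per candidate action
    return v + a - a.mean()             # Q value per candidate action

rng = np.random.default_rng(3)
state = np.concatenate([np.ones(16), np.ones(16), np.ones(24)])  # channel + graph + time sequence features
wv = rng.normal(size=(56, 1))
wa = rng.normal(size=(56, 12))                                   # 12 = delta candidate actions

q = dueling_q(state, lambda s: s @ wv, lambda s: s @ wa)
best_action = int(np.argmax(q))   # index into the action space (slave node, resource spec, codebook)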
In the process of determining the calculation task scheduling strategy, an optimization process of an objective function is also needed.
Illustratively, with the execution time and cost of each task as dual optimization objectives, the execution time T of each task is divided into the transmission time and the computation time. The cost of task execution is mainly the computational power consumption P on the slave node.
The optimization of the objective function can be performed according to the following formula (2):
(2)
wherein the average execution time of all past tasks and the average computational energy consumption serve as normalization terms, and a weighting factor is used to adjust whether the scheduling of the deep reinforcement learning module focuses on time or on energy consumption. A further term monitors the case of execution failure: if the task execution fails, or unreasonable resource allocation causes the transmission time, the computation time or the power consumption P to tend to infinity, a negative value is introduced to penalize this case.
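The exact form of formula (2) is not reproduced here; as a purely illustrative sketch, under the assumption that it trades off normalized time against normalized energy with a weighting factor and adds a penalty on failure, the objective could look like the following.

def reward(t_exec, p_energy, t_avg, p_avg, weight=0.5, failed=False, penalty=-10.0):
    # Assumed shape of the objective: normalized time and energy traded off by a
    # weighting factor, with a negative value punishing failed or degenerate runs.
    if failed or t_exec == float("inf") or p_energy == float("inf"):
        return penalty
    return -(weight * t_exec / t_avg + (1.0 - weight) * p_energy / p_avg)

print(reward(t_exec=2.0, p_energy=5.0, t_avg=2.5, p_avg=6.0, weight=0.7))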
The deep reinforcement learning module makes a decision on the computation of the current task based on the state, i.e., it selects the most suitable slave node and computing resources. Notably, the selection of computing resources limits the CPU utilization, the memory usage and the disk I/O rate of the current service's computation to their respective maximum allowed usage, in order to reduce the dimension of the action space.
Specifically, referring to fig. 4, fig. 4 is a schematic diagram of a scheduling application framework of still another specific embodiment of the task scheduling method based on a distributed cloud platform of the present application. As shown in fig. 4, the agent of the deep reinforcement learning module receives multiple inputs from the distributed cloud platform (the channel state features, the aggregated graph structural features and the time sequence features), selects a slave node according to the agent's decision network and reward function, and determines the computing resource specification and selects the reference codebook, where the reference codebook may be selected from a customized codebook pool. In this way the calculation task scheduling policy for the target slave node (selection of a slave node, determination of the computing resource specification, and so on) and the communication task scheduling policy (selection of the reference codebook) are determined.
In the embodiment of the present application, the task scheduling policy includes a communication task scheduling policy, and obtaining a task scheduling policy of a target slave node includes:
clustering the structure characteristics of the aggregation graph and the history records of all tasks on the target slave node, and determining a first reference codebook corresponding to the structure characteristics of the aggregation graph, wherein the history records of all tasks on the target slave node comprise the structure characteristics of the history aggregation graph and a second reference codebook corresponding to the structure characteristics of the history aggregation graph.
And based on the preset codebook pool, performing similarity calculation on the first reference codebook and the codebooks in the preset codebook pool, and determining a communication task scheduling strategy for the target slave node according to a calculation result.
In the embodiment of the present application, the first reference codebook is a reference codebook when the target slave node sends a task to each slave node.
It will be appreciated that in a conventional communication resource scheduling scenario, transmitting a large amount of data at once (e.g., large files or large data sets) can place a significant load on the wireless communication network and exacerbate resource contention. The decision on the communication aspect is kept fuzzy: the codebook used when the target slave node transmits tasks to each slave node can be determined from the reference codebook, and the specification of the reference codebook is closely related to the data transmission time.
In the embodiments of the present application, the reference codebook includes, but is not limited to, an IrSCMA (Irregular Sparse Code Multiple Access) codebook, an LDS-CDMA (Low-Density Spreading CDMA) codebook, an LDS-OFDM (Low-Density Spreading OFDM) codebook and a MUSA (Multi-User Shared Access) codebook. The predetermined codebook pool is the codebook pool corresponding to the reference codebook. For example, when the reference codebook is an IrSCMA codebook, the predetermined codebook pool is the codebook pool corresponding to IrSCMA codebooks. The specific contents of the reference codebook and the predetermined codebook pool can be adjusted according to practical situations, and the embodiment of the application is not limited thereto.
For example, in order to solve the above problem, the IrSCMA codebook given by the deep reinforcement learning module is only used as the reference codebook for the task. From the codebooks corresponding to historical tasks belonging to the same class, the codebooks unused at that moment and the total IrSCMA codebook pool (the predetermined codebook pool), a customized codebook pool for the current task is constructed, supporting fine-grained data transmission of each task on the target slave node, thereby improving the communication efficiency and reducing the network load. In other words, the codebook relates to communication; since the action space would otherwise be too large, the communication is scheduled directly according to the reference codebook.
In the embodiment of the application, the specific process is as follows: the aggregated graph structural features (or graph structural features) are clustered with the history records (each record consisting of historical aggregated graph structural features and a second reference codebook), and from these records the reference codebook of a similar task is obtained as the first reference codebook. Then, starting from the first reference codebook, the structural similarity with the other codebooks in the IrSCMA codebook pool is calculated, where the structural similarity is computed from the number of rows, the number of columns, the positions and the number of non-zero rows of the codebook, and the values of the elements in the codebook.
Illustratively, take the reference codebook as A and another codebook in the IrSCMA pool as B. NZ(A) represents the index set of non-zero rows in A, and NZ(B) represents the index set of non-zero rows in B. The comparison involves the numbers of rows of the two codebooks, their numbers of columns, their numbers of non-zero rows, and the standard deviations of the non-zero row elements of matrices A and B, respectively. The similarity is calculated according to the following formula (3):
(3)
In this way, the codebooks whose similarity to the reference codebook falls within a preset range are screened out: too small a codebook similarity affects the interference among the data in the transmission process, while too large a similarity prevents the transmission of tasks from reaching fine granularity. The screened codebooks together form a customized IrSCMA codebook pool for the current task. The number of codebooks is not less than the number of tasks within the target slave node.
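Formula (3) itself is not reproduced above; the following sketch only illustrates a structural comparison built from the quantities the text names (row and column counts, non-zero row index sets, and the spread of non-zero row elements), and the way they are combined here is an assumption, not the patent's formula.

import numpy as np

def nonzero_rows(cb):
    return {i for i, row in enumerate(cb) if np.any(row != 0)}

def codebook_similarity(a, b):
    shape_match = 1.0 if a.shape == b.shape else 0.0                 # rows and columns
    nz_a, nz_b = nonzero_rows(a), nonzero_rows(b)
    jaccard = len(nz_a & nz_b) / max(len(nz_a | nz_b), 1)            # non-zero row positions
    std_a = np.std([a[i] for i in nz_a]) if nz_a else 0.0
    std_b = np.std([b[i] for i in nz_b]) if nz_b else 0.0
    value_term = 1.0 / (1.0 + abs(std_a - std_b))                    # element-value spread
    return (shape_match + jaccard + value_term) / 3.0

def screen_pool(reference, pool, low=0.3, high=0.9):
    # Keep only codebooks whose similarity to the reference falls within [low, high].
    return [cb for cb in pool if low <= codebook_similarity(reference, cb) <= high]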
It should be understood that, in general, inheritance relationships mean that some tasks must be completed before others, which significantly affects the task computation order. For data transmission, however, the dependency between tasks is more concerned with the availability of data than with the ordering of tasks. Thus, the inheritance relationship between tasks has no direct effect on the order of data transmission, and the availability of data may be adjusted and optimized according to the actual situation and requirements of task execution. Furthermore, transmitting data in parallel means that multiple data transmission tasks can be processed simultaneously without having to wait for the previous transmission to complete.
Referring to fig. 5, fig. 5 is a schematic diagram of a scheduling application framework of the Johnson rule in IrSCMA in the task scheduling method based on the distributed cloud platform.
As shown in fig. 5, the tasks are divided into different stages of data transmission according to the inheritance relationships among the tasks in the graph structural features: all entry tasks are regarded as the first stage, the direct successors of the entry tasks are regarded as the second stage, and so on, until the exit tasks without direct successors are regarded as the last stage. Each task is encoded and modulated; each parallel IrSCMA transmitting device can perform the encoding and transmission operations of a data block simultaneously, but both operations of one task are performed by the same IrSCMA transmitting device. The overlap time of the two procedures executed by different tasks is optimized using the Johnson rule, with the aim of minimizing the makespan. In each stage, the order of the task packets is readjusted according to the Johnson rule to minimize the end time (makespan) of the last task completing the second-procedure transmission. First, the time required for the two operations, encoding and transmission, is determined for each task. All tasks in each stage are ordered according to the time required by the first procedure. The order of the tasks in each stage is then readjusted according to the shortest transmission time of the second procedure, and the tasks are transmitted in the adjusted order so as to minimize the end time of the last task completing the second-procedure transmission. The data of each task (service) on the target slave node is transmitted directly by the target slave node (client) to the slave node designated by the decision.
The Johnson rule is an existing flow-shop (pipeline) scheduling method. The specific process comprises the following steps (a minimal sketch follows these steps):
a. List the procedure matrix of the service, where the procedure matrix of the tasks of each stage is regarded as a single sub-procedure matrix;
b. Select the procedure with the shortest processing time from each sub-procedure matrix. If this procedure is the first procedure, the job to which it belongs is placed at the front; otherwise, the shortest procedure is the second procedure and the job to which it belongs is placed at the rear;
c. Remove the ordered tasks from the procedure matrix;
d. Continue sorting according to steps a, b and c; the sorting ends when all tasks have been scheduled.
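The sketch below applies the classical two-machine form of the Johnson rule to the two operations of each task in a stage, taking encoding as the first operation and transmission as the second; the task names and times are illustrative.

def johnson_order(tasks):
    # tasks: list of (name, encode_time, transmit_time); returns the order that
    # minimizes the completion time (makespan) of the two-operation pipeline.
    group1 = sorted((t for t in tasks if t[1] <= t[2]), key=lambda t: t[1])   # shortest encoding first
    group2 = sorted((t for t in tasks if t[1] > t[2]), key=lambda t: t[2], reverse=True)
    return group1 + group2

def makespan(order):
    t_encode = t_transmit = 0.0
    for _, enc, tx in order:
        t_encode += enc                              # the encoder works on tasks sequentially
        t_transmit = max(t_transmit, t_encode) + tx  # transmission waits for encoding to finish
    return t_transmit

stage_tasks = [("t1", 3.0, 6.0), ("t2", 5.0, 2.0), ("t3", 1.0, 4.0)]
order = johnson_order(stage_tasks)
print([name for name, _, _ in order], makespan(order))   # ['t3', 't1', 't2'] 13.0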
In the embodiment of the application, the data transmission time is reduced by the Johnson rule in IrSCMA.
In the embodiment of the application, the scheduling policy is issued to the user while the service data is sent directly to the slave node, and combining this with the IrSCMA communication technology effectively relieves the network congestion of the scheduling node. Furthermore, the hybrid resource allocation method based on deep reinforcement learning uses a graph convolutional neural network to extract the directed acyclic graph features of the service, shifts the attention of the deep reinforcement learning to the allocation of computing resources, and handles the communication resource allocation in a fuzzy manner to accelerate the decision process. Finally, a clustering algorithm is used to construct, from the historical scheduling records and the IrSCMA codebook pool, a customized codebook pool for the current service; the inheritance relationships between tasks are converted into different stages of data transmission, and the Johnson rule is used to optimize the time overhead of the whole transmission process, so as to improve the transmission efficiency and the overall performance.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Fig. 6 shows a schematic block diagram of a task scheduling device based on a distributed cloud platform, which corresponds one-to-one to the task scheduling method based on the distributed cloud platform in the above embodiment. As shown in fig. 6, the task scheduling device based on the distributed cloud platform includes a task request acquisition module 301, an aggregation module 302, a task scheduling policy determination module 303, and a scheduling module 304. The functional modules are described in detail as follows:
the task request acquisition module 301 is configured to acquire a task request sent by a target slave node, where the task request includes determining corresponding graph structural features based on each task on the target slave node.
The aggregation module 302 is configured to perform feature aggregation processing on the task node and domain information of the task node for each task node on the graph structural feature based on the feature extraction module on the master node in the area where the target slave node is located, so as to obtain an aggregate graph structural feature.
The task scheduling policy determining module 303 is configured to input the aggregate graph structural feature into the deep reinforcement learning module, and obtain a task scheduling policy of the target slave node.
And the scheduling module 304 is configured to schedule and allocate each task on the target slave node according to the task scheduling policy.
Optionally, in an apparatus in an embodiment of the present application, the task request includes channel state information of at least one other slave node connected to the target slave node.
Before the task scheduling policy determining module 303, the method further includes:
and the channel state characteristic determining module is used for extracting the characteristics of the channel state information based on the perceptron module on the master node of the area where the target slave node is located, and obtaining the channel state characteristics.
Optionally, in the apparatus in this embodiment of the present application, before acquiring the task request sent by the target slave node, the apparatus further includes:
and the load characteristic acquisition module is used for acquiring the load characteristics of each slave node in the area where the target slave node is located at each time step.
And the time sequence feature acquisition module is used for extracting the time sequence feature of the load feature based on the master node of the area where the target slave node is located when the time sequence feature extraction condition is met, and determining the time sequence feature of each slave node of the area where the target slave node is located.
Optionally, in the apparatus in the embodiment of the present application, the time sequence feature extraction condition includes obtaining the task request sent by the target slave node, and in the task scheduling policy determining module 303, inputting the aggregated graph structural features into the deep reinforcement learning module to obtain the task scheduling policy of the target slave node includes:
and the task scheduling strategy determining unit is used for inputting the channel state characteristics, the aggregate graph structural characteristics and the time sequence characteristics into the deep reinforcement learning module to obtain the task scheduling strategy of the target slave node.
Optionally, in the apparatus in the embodiment of the present application, the task scheduling policy includes a calculation task scheduling policy, the deep reinforcement learning module includes an estimation network sub-module and a scheduling sub-module, the output of the estimation network sub-module is the input of the scheduling sub-module, the estimation network sub-module includes a state cost function estimation network and a dominance function estimation network, and the state cost function estimation network and the dominance function estimation network share the features input into the deep reinforcement learning module.
In the task scheduling policy determining module 303, obtaining a task scheduling policy of the target slave node includes:
and the output value determining unit is used for acquiring a first output value of the state cost function estimation network and a second output value of the dominance function estimation network.
And the input unit is used for taking the first output value and the second output value as the input of the scheduling sub-module, and the scheduling sub-module further comprises a state value network.
The computing task scheduling strategy determining unit is used for analyzing the computing resources of the first output value and the second output value based on the preset state value condition set in the state value network, and determining the computing task scheduling strategy of the target slave node according to the obtained computing resource result information, wherein the computing resource result information at least comprises the slave node corresponding to the computing resource result information.
Optionally, in the device in the embodiment of the present application, the task scheduling policy includes a communication task scheduling policy, and the task scheduling policy determining module 303 obtains a task scheduling policy of a target slave node, where the task scheduling policy includes:
and the second reference codebook determining unit is used for clustering the structure characteristics of the aggregation graph and the history records of the tasks on the target slave node, and determining a first reference codebook corresponding to the structure characteristics of the aggregation graph, wherein the history records of the tasks on the target slave node comprise the structure characteristics of the history aggregation graph and a second reference codebook corresponding to the structure characteristics of the history aggregation graph.
And the communication task scheduling strategy determining unit is used for carrying out similarity calculation on the first reference codebook and the codebook in the preset codebook pool based on the preset codebook pool, and determining the communication task scheduling strategy of the target slave node according to the calculation result.
For specific limitation of the task scheduling device based on the distributed cloud platform, reference may be made to the limitation of the task scheduling method based on the distributed cloud platform, which is not described herein. The modules in the task scheduling device based on the distributed cloud platform can be all or partially realized by software, hardware and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 7, fig. 7 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 4 comprises a memory 41, a processor 42 and a network interface 43 which are communicatively connected to each other via a system bus. It is noted that only a computer device 4 having the components memory 41, processor 42 and network interface 43 is shown in the figure, but it should be understood that not all of the illustrated components need to be implemented, and more or fewer components may be implemented instead. It will be appreciated by those skilled in the art that the computer device here is a device capable of automatically performing numerical calculations and/or information processing in accordance with preset or stored instructions, and its hardware includes, but is not limited to, microprocessors, application specific integrated circuits (Application Specific Integrated Circuits, ASICs), field-programmable gate arrays (Field-Programmable Gate Arrays, FPGAs), digital signal processors (Digital Signal Processors, DSPs), embedded devices, and the like.
The computer equipment can be a desktop computer, a notebook computer, a palm computer, a cloud host node and other computing equipment. The computer equipment can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 41 includes at least one type of readable storage medium including flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or D interface display memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the storage 41 may be an internal storage unit of the computer device 4, such as a hard disk or a memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the computer device 4. Of course, the memory 41 may also comprise both an internal memory unit of the computer device 4 and an external memory device. In this embodiment, the memory 41 is typically used for storing an operating system and various application software installed on the computer device 4, such as program codes for controlling electronic files, etc. Further, the memory 41 may be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute a program code stored in the memory 41 or process data, such as a program code for executing control of an electronic file.
The network interface 43 may comprise a wireless network interface or a wired network interface, which network interface 43 is typically used for establishing a communication connection between the computer device 4 and other electronic devices.
The present application also provides another embodiment, namely, a computer readable storage medium, where an interface display program is stored, where the interface display program is executable by at least one processor, so that the at least one processor performs the steps of the task scheduling method based on the distributed cloud platform.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a slave node (which may be a mobile phone, a computer, a master node, an air conditioner, or a network device, etc.) to perform the method described in the embodiments of the present application.
It is apparent that the embodiments described above are only some embodiments of the present application, not all of them; the preferred embodiments of the present application are given in the drawings, but they do not limit the patent scope of the present application. This application may be embodied in many different forms; rather, these embodiments are provided so that the disclosure of the present application will be understood more thoroughly. Although the present application has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalents may be substituted for some of their features. All equivalent structures made using the specification and the drawings of the application and applied directly or indirectly in other related technical fields are likewise within the protection scope of the application.

Claims (10)

1. A task scheduling method based on a distributed cloud platform, applied to a master node of a target area in the distributed cloud platform, wherein the distributed cloud platform comprises a plurality of areas and each area comprises a master node and a plurality of slave nodes, the task scheduling method based on the distributed cloud platform comprising:
acquiring a task request sent by a target slave node, wherein the task request comprises graph structural features determined based on each task on the target slave node;
performing, based on a feature extraction module on the master node of the area where the target slave node is located, feature aggregation processing on each task node in the graph structural features together with the neighborhood information of that task node, to obtain aggregate graph structural features;
inputting the aggregate graph structural features into a deep reinforcement learning module to obtain a task scheduling policy for the target slave node;
and scheduling and allocating each task on the target slave node according to the task scheduling policy.
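For readers less familiar with graph-based scheduling, the following is a minimal, illustrative sketch of the two steps claim 1 describes: aggregating each task node's features with those of its neighbouring task nodes, and letting a (here trivially linear) policy map the aggregated features to a slave-node assignment. The mean aggregation rule, the linear scoring, the feature sizes, and all function names are assumptions for illustration, not the claimed implementation.

```python
# Illustrative sketch only; not the patented implementation.
import numpy as np

def aggregate_graph_features(node_feats, adjacency):
    """One aggregation round: each task node's feature vector is concatenated
    with the mean of its neighbours' feature vectors (assumed aggregation rule)."""
    n, d = node_feats.shape
    aggregated = np.zeros((n, 2 * d))
    for i in range(n):
        neighbours = np.nonzero(adjacency[i])[0]
        neigh_mean = node_feats[neighbours].mean(axis=0) if len(neighbours) else np.zeros(d)
        aggregated[i] = np.concatenate([node_feats[i], neigh_mean])
    return aggregated

def schedule_tasks(aggregated_feats, policy_weights):
    """Toy stand-in for the learned policy: score each task against every
    candidate slave node and pick the arg-max as the assignment."""
    scores = aggregated_feats @ policy_weights      # shape: (num_tasks, num_slave_nodes)
    return scores.argmax(axis=1)                    # chosen slave node index per task

# Usage with random data, purely to show the shapes involved.
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))                     # 5 tasks, 8 features each
adj = (rng.random((5, 5)) > 0.5).astype(float)      # task relation graph (illustrative)
np.fill_diagonal(adj, 0)
policy_w = rng.normal(size=(16, 3))                 # 3 candidate slave nodes
print(schedule_tasks(aggregate_graph_features(feats, adj), policy_w))
```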
2. The task scheduling method based on a distributed cloud platform according to claim 1, wherein the task request comprises channel state information of at least one other slave node connected to the target slave node, and before the aggregate graph structural features are input into the deep reinforcement learning module to obtain the task scheduling policy for the target slave node, the method further comprises:
performing feature extraction on the channel state information based on a perceptron module on the master node of the area where the target slave node is located, to obtain channel state features.
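A small multilayer perceptron is one plausible reading of the perceptron module in claim 2; the sketch below maps the channel state information reported by the connected slave nodes to a fixed-length channel state feature. The dimensions, the activation, and the mean pooling over neighbours are assumptions, not details from the specification.

```python
# Hedged sketch of a perceptron-style channel state feature extractor.
import torch
import torch.nn as nn

class ChannelStatePerceptron(nn.Module):
    def __init__(self, csi_dim: int = 4, hidden_dim: int = 32, feat_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(csi_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, feat_dim),
        )

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        # csi: (num_connected_slave_nodes, csi_dim); pooling over neighbours keeps the
        # output size independent of how many slave nodes are connected.
        return self.net(csi).mean(dim=0)

csi = torch.randn(3, 4)             # CSI reported by 3 connected slave nodes (toy data)
feature = ChannelStatePerceptron()(csi)
print(feature.shape)                # torch.Size([16])
```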
3. The task scheduling method based on a distributed cloud platform according to claim 2, further comprising, before acquiring the task request sent by the target slave node:
acquiring, at each time step, load features of each slave node in the area where the target slave node is located;
and when a temporal feature extraction condition is met, performing temporal feature extraction on the load features based on the master node of the area where the target slave node is located, to determine temporal features of each slave node in the area where the target slave node is located.
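Claim 3 buffers per-time-step load features and extracts temporal features from them once the extraction condition holds. The sketch below uses a GRU as one possible temporal encoder; the recurrent architecture, buffer length, and dimensions are assumptions rather than the claimed design.

```python
# Hedged sketch of temporal feature extraction over buffered load features.
import torch
import torch.nn as nn

class LoadHistoryEncoder(nn.Module):
    def __init__(self, load_dim: int = 3, temporal_dim: int = 8):
        super().__init__()
        self.rnn = nn.GRU(input_size=load_dim, hidden_size=temporal_dim, batch_first=True)

    def forward(self, load_history: torch.Tensor) -> torch.Tensor:
        # load_history: (num_slave_nodes, time_steps, load_dim)
        _, last_hidden = self.rnn(load_history)
        return last_hidden.squeeze(0)            # (num_slave_nodes, temporal_dim)

history = torch.randn(4, 10, 3)     # 4 slave nodes, 10 time steps, 3 load metrics (toy data)
temporal_features = LoadHistoryEncoder()(history)
print(temporal_features.shape)      # torch.Size([4, 8])
```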
4. The task scheduling method based on a distributed cloud platform according to claim 3, wherein the temporal feature extraction condition comprises acquiring the task request sent by the target slave node, and the inputting the aggregate graph structural features into the deep reinforcement learning module to obtain the task scheduling policy for the target slave node comprises:
inputting the channel state features, the aggregate graph structural features and the temporal features into the deep reinforcement learning module to obtain the task scheduling policy for the target slave node.
5. The task scheduling method based on a distributed cloud platform according to claim 4, wherein the task scheduling policy comprises a computing task scheduling policy, the deep reinforcement learning module comprises an estimation network sub-module and a scheduling sub-module, an output of the estimation network sub-module is an input of the scheduling sub-module, the estimation network sub-module comprises a state cost function estimation network and an advantage function estimation network, and the state cost function estimation network and the advantage function estimation network share the features input into the deep reinforcement learning module;
the obtaining the task scheduling policy for the target slave node comprises:
acquiring a first output value of the state cost function estimation network and a second output value of the advantage function estimation network;
taking the first output value and the second output value as inputs of the scheduling sub-module, the scheduling sub-module further comprising a state value network;
and performing computing resource analysis on the first output value and the second output value based on a preset state value condition set in the state value network, and determining a computing task scheduling policy for the target slave node according to the obtained computing resource result information, wherein the computing resource result information at least comprises the slave node corresponding to the computing resource result information.
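Claim 5 reads like a dueling-style value decomposition: a state cost (value) head produces the first output value, an advantage head produces the second, and the scheduling sub-module combines them to rank candidate slave nodes. The sketch below assumes the input state is the concatenation of the channel state, aggregate graph, and temporal features of claims 2 to 4, and uses the standard dueling combination Q = V + (A - mean(A)) as a stand-in for the preset state value condition; both are illustrative assumptions, not the claimed mechanism.

```python
# Hedged sketch of the two-head estimation plus scheduling step.
import torch
import torch.nn as nn

class DuelingEstimator(nn.Module):
    def __init__(self, state_dim: int = 40, num_slave_nodes: int = 4):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.value_head = nn.Linear(64, 1)                   # state cost (value) function estimation network
        self.advantage_head = nn.Linear(64, num_slave_nodes) # advantage function estimation network

    def forward(self, state: torch.Tensor):
        h = self.shared(state)
        return self.value_head(h), self.advantage_head(h)    # first and second output values

def schedule_compute_task(value: torch.Tensor, advantage: torch.Tensor) -> int:
    """Assumed combination rule standing in for the preset state value condition."""
    q = value + (advantage - advantage.mean())
    return int(q.argmax())                                   # index of the chosen slave node

state = torch.randn(40)             # assumed fused state: channel + graph + temporal features
v, a = DuelingEstimator()(state)
print(schedule_compute_task(v, a))
```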
6. The task scheduling method based on a distributed cloud platform according to claim 4, wherein the task scheduling policy comprises a communication task scheduling policy, and the obtaining the task scheduling policy for the target slave node comprises:
clustering the aggregate graph structural features together with the history records of the tasks on the target slave node, and determining a first reference codebook corresponding to the aggregate graph structural features, wherein the history records of the tasks on the target slave node comprise historical aggregate graph structural features and second reference codebooks corresponding to the historical aggregate graph structural features;
and performing, based on a preset codebook pool, similarity calculation between the first reference codebook and the codebooks in the preset codebook pool, and determining a communication task scheduling policy for the target slave node according to the calculation result.
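One way to picture claim 6 is nearest-centroid matching: the current aggregate graph structural features are matched against historical ones to pick the first reference codebook, which is then compared with a preset codebook pool to select the communication task scheduling policy. Cosine similarity and the nearest-match rule below are assumptions; the claim does not fix the clustering method or similarity measure.

```python
# Hedged sketch of codebook selection by similarity; all data here is random.
import numpy as np

def nearest(vec, candidates):
    """Return the index of the most cosine-similar row in `candidates`."""
    sims = candidates @ vec / (np.linalg.norm(candidates, axis=1) * np.linalg.norm(vec) + 1e-9)
    return int(np.argmax(sims)), sims

rng = np.random.default_rng(0)
history_feats = rng.random((6, 16))       # historical aggregate-graph features
history_codebooks = rng.random((6, 16))   # their associated (second) reference codebooks

current_feat = rng.random(16)
idx, _ = nearest(current_feat, history_feats)
first_codebook = history_codebooks[idx]   # first reference codebook for the current request

codebook_pool = rng.random((8, 16))       # preset codebook pool; each entry maps to a strategy
strategy_idx, _ = nearest(first_codebook, codebook_pool)
print("chosen communication scheduling entry:", strategy_idx)
```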
7. A task scheduling device based on a distributed cloud platform, comprising:
a task request acquisition module, configured to acquire a task request sent by a target slave node, wherein the task request comprises graph structural features determined based on each task on the target slave node;
an aggregation module, configured to perform, based on a feature extraction module on a master node of an area where the target slave node is located, feature aggregation processing on each task node in the graph structural features together with the neighborhood information of that task node, to obtain aggregate graph structural features;
a task scheduling policy determining module, configured to input the aggregate graph structural features into a deep reinforcement learning module to obtain a task scheduling policy for the target slave node;
and a scheduling module, configured to schedule and allocate each task on the target slave node according to the task scheduling policy.
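The device of claim 7 mirrors the method of claim 1 as cooperating modules. The sketch below only shows how such modules might be wired together; the class, method, and parameter names are hypothetical and are not taken from the specification.

```python
# Hedged sketch of the module decomposition; the concrete module implementations
# (feature_extractor, drl_module, dispatcher) are assumed callables.
class TaskSchedulingDevice:
    def __init__(self, feature_extractor, drl_module, dispatcher):
        self.feature_extractor = feature_extractor   # backs the aggregation module
        self.drl_module = drl_module                 # backs the policy determining module
        self.dispatcher = dispatcher                 # backs the scheduling module

    def handle_task_request(self, graph_features, adjacency):
        aggregated = self.feature_extractor(graph_features, adjacency)  # aggregation
        policy = self.drl_module(aggregated)                            # policy determination
        return self.dispatcher(policy)                                  # task allocation
```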
8. The task scheduling device based on a distributed cloud platform according to claim 7, wherein the task request comprises channel state information of at least one other slave node connected to the target slave node, and the device further comprises:
a channel state feature determining module, configured to, before the task scheduling policy determining module operates, perform feature extraction on the channel state information based on a perceptron module on the master node of the area where the target slave node is located, to obtain channel state features.
9. A computer device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the task scheduling method based on a distributed cloud platform according to any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the task scheduling method based on a distributed cloud platform according to any one of claims 1 to 6.
CN202410168488.2A 2024-02-06 2024-02-06 Task scheduling method and device based on distributed cloud platform and related equipment Active CN117707797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410168488.2A CN117707797B (en) 2024-02-06 2024-02-06 Task scheduling method and device based on distributed cloud platform and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410168488.2A CN117707797B (en) 2024-02-06 2024-02-06 Task scheduling method and device based on distributed cloud platform and related equipment

Publications (2)

Publication Number Publication Date
CN117707797A (en) 2024-03-15
CN117707797B (en) 2024-05-03

Family

ID=90144730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410168488.2A Active CN117707797B (en) 2024-02-06 2024-02-06 Task scheduling method and device based on distributed cloud platform and related equipment

Country Status (1)

Country Link
CN (1) CN117707797B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080159425A1 (en) * 2006-12-20 2008-07-03 Khojastepour Mohammad A Design of multi-user downlink linear MIMO precoding systems
CN105005570A (en) * 2014-04-23 2015-10-28 国家电网公司 Method and apparatus for mining massive intelligent power consumption data based on cloud computing
CN112148451A (en) * 2020-09-27 2020-12-29 南京大学 Low-delay collaborative self-adaptive CNN inference system and method
CN112486641A (en) * 2020-11-18 2021-03-12 鹏城实验室 Task scheduling method based on graph neural network
US20210081787A1 (en) * 2019-09-12 2021-03-18 Beijing University Of Posts And Telecommunications Method and apparatus for task scheduling based on deep reinforcement learning, and device
CN112817728A (en) * 2021-02-20 2021-05-18 咪咕音乐有限公司 Task scheduling method, network device and storage medium
CN114860398A (en) * 2022-04-21 2022-08-05 郑州大学 Task scheduling method, device and equipment of intelligent cloud platform
CN115309521A (en) * 2022-07-25 2022-11-08 哈尔滨工业大学(深圳) Marine unmanned equipment-oriented deep reinforcement learning task scheduling method and device
CN115480882A (en) * 2021-05-31 2022-12-16 中移雄安信息通信科技有限公司 Distributed edge cloud resource scheduling method and system
CN115794341A (en) * 2022-11-16 2023-03-14 中国平安财产保险股份有限公司 Task scheduling method, device, equipment and storage medium based on artificial intelligence
CN116302467A (en) * 2022-12-09 2023-06-23 中国联合网络通信集团有限公司 Task allocation method, device and storage medium

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080159425A1 (en) * 2006-12-20 2008-07-03 Khojastepour Mohammad A Design of multi-user downlink linear MIMO precoding systems
CN105005570A (en) * 2014-04-23 2015-10-28 国家电网公司 Method and apparatus for mining massive intelligent power consumption data based on cloud computing
US20210081787A1 (en) * 2019-09-12 2021-03-18 Beijing University Of Posts And Telecommunications Method and apparatus for task scheduling based on deep reinforcement learning, and device
CN112148451A (en) * 2020-09-27 2020-12-29 南京大学 Low-delay collaborative self-adaptive CNN inference system and method
CN112486641A (en) * 2020-11-18 2021-03-12 鹏城实验室 Task scheduling method based on graph neural network
CN112817728A (en) * 2021-02-20 2021-05-18 咪咕音乐有限公司 Task scheduling method, network device and storage medium
CN115480882A (en) * 2021-05-31 2022-12-16 中移雄安信息通信科技有限公司 Distributed edge cloud resource scheduling method and system
CN114860398A (en) * 2022-04-21 2022-08-05 郑州大学 Task scheduling method, device and equipment of intelligent cloud platform
CN115309521A (en) * 2022-07-25 2022-11-08 哈尔滨工业大学(深圳) Marine unmanned equipment-oriented deep reinforcement learning task scheduling method and device
CN115794341A (en) * 2022-11-16 2023-03-14 中国平安财产保险股份有限公司 Task scheduling method, device, equipment and storage medium based on artificial intelligence
CN116302467A (en) * 2022-12-09 2023-06-23 中国联合网络通信集团有限公司 Task allocation method, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
朱映映 et al.: "Dynamic task scheduling algorithm for massive multimedia data in cloud systems", 小型微型计算机系统 (Journal of Chinese Computer Systems), No. 04, 15 April 2013 (2013-04-15) *

Also Published As

Publication number Publication date
CN117707797B (en) 2024-05-03

Similar Documents

Publication Publication Date Title
CN107018175B (en) Scheduling method and device of mobile cloud computing platform
CN109783237B (en) Resource allocation method and device
CN108540568B (en) Computing capacity sharing method and intelligent equipment
CN115543577B (en) Covariate-based Kubernetes resource scheduling optimization method, storage medium and device
CN110968366A (en) Task unloading method, device and equipment based on limited MEC resources
Li et al. Resource scheduling based on improved spectral clustering algorithm in edge computing
CN114816738A (en) Method, device and equipment for determining calculation force node and computer readable storage medium
CN111511028A (en) Multi-user resource allocation method, device, system and storage medium
CN116881009A (en) GPU resource scheduling method and device, electronic equipment and readable storage medium
CN114629960A (en) Resource scheduling method, device, system, device, medium, and program product
CN114741200A (en) Data center station-oriented computing resource allocation method and device and electronic equipment
CN116456496B (en) Resource scheduling method, storage medium and electronic equipment
CN112488563A (en) Determination method and device for force calculation parameters
CN117707797B (en) Task scheduling method and device based on distributed cloud platform and related equipment
Guo et al. PARA: Performability‐aware resource allocation on the edges for cloud‐native services
CN116402318A (en) Multi-stage computing power resource distribution method and device for power distribution network and network architecture
CN114003238B (en) Container deployment method, device, equipment and storage medium based on transcoding card
CN115840649A (en) Method and device for allocating partitioned capacity block type virtual resources, storage medium and terminal
CN116185578A (en) Scheduling method of computing task and executing method of computing task
CN115334001A (en) Data resource scheduling method and device based on priority relation
CN113535378A (en) Resource allocation method, storage medium and terminal equipment
CN115361285B (en) Method, device, equipment and medium for realizing off-line service mixed deployment
CN112019368B (en) VNF migration method, VNF migration device and VNF migration storage medium
CN117891618B (en) Resource task processing method and device of artificial intelligent model training platform
CN113535388B (en) Task-oriented service function aggregation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant