CN114416329A - Computing task deployment method and device, electronic equipment and storage medium - Google Patents

Computing task deployment method and device, electronic equipment and storage medium

Info

Publication number
CN114416329A
CN114416329A (application CN202111449936.9A)
Authority
CN
China
Prior art keywords
computing
sub
node
task
computation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111449936.9A
Other languages
Chinese (zh)
Inventor
张岩
曹畅
刘莹
李建飞
张帅
何涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd filed Critical China United Network Communications Group Co Ltd
Priority to CN202111449936.9A priority Critical patent/CN114416329A/en
Publication of CN114416329A publication Critical patent/CN114416329A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5033Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering data affinity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5017Task decomposition

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Hardware Redundancy (AREA)

Abstract

The application provides a computing task deployment method and device, an electronic device, and a storage medium, relating to the technical field of software, for solving the prior-art problem that the computing resources of a single computing node cannot meet the computing requirements of a computing task. The method comprises the following steps: acquiring a computing task; decomposing the computing task into a plurality of sub-computing tasks; determining a plurality of sub-computing task groups from the plurality of sub-computing tasks, where each sub-computing task group comprises at least one sub-computing task of the same computation type; determining a first computing node corresponding to each sub-computing task group, the first computing node being a computing node in a computing network; and deploying each sub-computing task group onto its corresponding first computing node. The method and device are used in the process of deploying computing tasks.

Description

Computing task deployment method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of software technologies, and in particular, to a method and an apparatus for deploying a computing task, an electronic device, and a storage medium.
Background
In current computing networks, after a computing task reaches the network, it is typically computed by the computing node that receives it. However, computing tasks are increasingly complex, and the computing requirements of a task often cannot be met by the receiving node alone. In that case, the edge computing node may send the computing task to other computing nodes, which perform the computation and return the result.
However, in the related art, one computing task is generally performed by a single computing node. As the complexity of computing tasks and the resources they require increase, the resources of a single computing node cannot meet a task's computing requirements, and deployment of the computing task fails.
Disclosure of Invention
The application provides a computing task deployment method and device, an electronic device, and a storage medium, which can solve the prior-art problem that the computing resources of a single computing node cannot meet the computing requirements of a computing task.
In a first aspect, a computing task deployment method is provided, including: acquiring a computing task; decomposing the computing task into a plurality of sub-computing tasks; determining a plurality of sub-computing task groups from the plurality of sub-computing tasks, where each of the sub-computing task groups comprises at least one sub-computing task of the same computation type; determining a first computing node corresponding to each sub-computing task group, the first computing node being a computing node in a computing network; and deploying each sub-computing task group onto its corresponding first computing node.
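The first-aspect steps can be sketched as a short Python routine. This is only an illustrative outline: the callables `decompose`, `pick_node`, and `deploy`, and the `comp_type` attribute, are assumptions introduced here, not interfaces from the patent.

```python
from collections import defaultdict

def deploy_computing_task(task, decompose, pick_node, deploy):
    """Illustrative sketch: decompose a task, group sub-tasks by
    computation type, and deploy each group on a chosen node."""
    sub_tasks = decompose(task)  # iterable of objects with a .comp_type field
    groups = defaultdict(list)
    for st in sub_tasks:
        groups[st.comp_type].append(st)  # one group per computation type
    placement = {}
    for comp_type, group in groups.items():
        node = pick_node(group)          # a first computing node in the network
        deploy(node, group)
        placement[comp_type] = node
    return placement
```

With stub callables, two sub-tasks of type "a" land in one group and are deployed together, matching the grouping rule of the first aspect.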
With reference to the first aspect, in a possible implementation manner, determining the first computing node corresponding to each sub-computing task group includes: determining the association relations among the plurality of sub-computing tasks, where an association relation indicates that two sub-computing tasks are computationally associated; determining the association relations among the plurality of sub-computing task groups according to the association relations among the sub-computing tasks, where the association relation between one sub-computing task group and the other sub-computing task groups comprises the association relations between each sub-computing task in that group and the sub-computing tasks in the other groups; and determining the first computing node corresponding to each sub-computing task group according to the number of other sub-computing task groups associated with each group and the computing resources of each first computing node in the computing network.
With reference to the first aspect, in a possible implementation manner, determining the association relations among the plurality of sub-computing task groups according to the association relations among the plurality of sub-computing tasks includes: determining a first topology according to the association relations among the sub-computing tasks, where each node in the first topology is one of the sub-computing tasks and the connection relations among the nodes represent the association relations among the sub-computing tasks; merging the nodes of the same computation type in the first topology to determine a second topology, where each node in the second topology corresponds to one of the sub-computing task groups and the connection relations among the nodes in the second topology include the connection relations among the nodes in the first topology; and determining the association relations among the sub-computing task groups according to the connection relations among the nodes in the second topology.
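The merge from the first topology into the second topology can be illustrated with plain sets and dictionaries; the function name and data shapes below are hypothetical, chosen only to make the idea concrete.

```python
def merge_by_type(edges, type_of):
    """Illustrative sketch: collapse the first topology (one node per
    sub-computing task) into the second topology (one node per group).
    `edges` is a set of (task, task) connection pairs; `type_of` maps a
    sub-computing task to its computation type. Connections between tasks
    of different types survive as connections between the merged groups;
    connections inside one type vanish when its tasks are merged."""
    merged = set()
    for u, v in edges:
        tu, tv = type_of[u], type_of[v]
        if tu != tv:
            merged.add((min(tu, tv), max(tu, tv)))  # undirected, canonical order
    return merged
```

For example, if tasks t1 and t2 share type A and both feed t3 of type B, the second topology has a single A-B connection.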
With reference to the first aspect, in a possible implementation manner, determining the first computing node corresponding to each sub-computing task group according to the number of other sub-computing task groups associated with each group and the computing resources of each first computing node in the computing network includes: step 1, determining at least one first node in the second topology, a first node being a node connected to the largest number of other nodes in the second topology; step 2, determining a second node from the at least one first node, the second node being the node requiring the most computing resources among the at least one first node; step 3, deleting the second node from the second topology and taking the resulting topology as the second topology; step 4, sorting the first computing nodes by their currently available resources; step 5, deploying the sub-computing task group corresponding to the currently deleted second node on the first computing node with the most currently available resources; and step 6, repeating steps 1 to 5 until all sub-computing task groups are deployed on first computing nodes.
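Steps 1 to 6 amount to a greedy loop, sketched below. The dictionary-based representation (`adjacency`, `demand`, `capacity`) is an assumption made for illustration; the patent does not prescribe data structures.

```python
def greedy_deploy(adjacency, demand, capacity):
    """Illustrative sketch of steps 1-6: repeatedly pick the second-topology
    node with the most connections (ties broken by largest resource demand),
    delete it from the topology, and place its sub-computing task group on
    the first computing node with the most available resources.
    `adjacency`: group -> set of neighbouring groups;
    `demand`: group -> required resources;
    `capacity`: first computing node -> available resources."""
    adjacency = {g: set(n) for g, n in adjacency.items()}  # work on copies
    capacity = dict(capacity)
    placement = {}
    while adjacency:
        # step 1: first nodes = groups with the largest number of connections
        max_deg = max(len(n) for n in adjacency.values())
        first = [g for g, n in adjacency.items() if len(n) == max_deg]
        # step 2: the second node needs the most resources among the first nodes
        second = max(first, key=lambda g: demand[g])
        # step 3: delete the second node from the second topology
        del adjacency[second]
        for n in adjacency.values():
            n.discard(second)
        # steps 4-5: deploy on the node with the most currently available resources
        node = max(capacity, key=capacity.get)
        if capacity[node] < demand[second]:
            raise RuntimeError("deployment failed for group %r" % (second,))
        capacity[node] -= demand[second]
        placement[second] = node
    return placement
```

In a chain A-B-C, group B has the highest degree, so it is deleted and deployed first, onto the richest node.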
With reference to the first aspect, in a possible implementation manner, deploying the sub-computing task group corresponding to the currently deleted second node on the first computing node with the most currently available resources includes: if deployment of that sub-computing task group fails, determining a second computing node, the second computing node being connected to the first computing node with the fewest currently available resources; and taking the second computing node as one of the first computing nodes and repeating steps 4 and 5.
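The fallback of this implementation manner might look like the following sketch. The `standby_of` callback, which yields candidate second computing nodes (and their resources) connected to a given first computing node, is a hypothetical interface invented for illustration.

```python
def deploy_with_fallback(demand, capacity, standby_of):
    """Illustrative sketch: when even the richest first computing node cannot
    host the group, promote a second computing node connected to the first
    computing node with the FEWEST available resources, then retry steps 4-5.
    `capacity`: first computing node -> available resources;
    `standby_of(node)`: iterable of (standby_node, resources) pairs."""
    while True:
        best = max(capacity, key=capacity.get)
        if capacity[best] >= demand:           # steps 4-5 succeed
            capacity[best] -= demand
            return best
        poorest = min(capacity, key=capacity.get)
        promoted = False
        for node, res in standby_of(poorest):
            if node not in capacity:
                capacity[node] = res           # treat it as a first computing node
                promoted = True
                break
        if not promoted:
            raise RuntimeError("deployment failed: no standby node left")
```

If no existing node can host a demand of 5 but the poorest node has a well-resourced neighbour, the neighbour is promoted and receives the group.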
In a second aspect, a computing task deployment device is provided, including an acquisition unit and a processing unit. The acquisition unit is configured to acquire a computing task. The processing unit is configured to: decompose the computing task into a plurality of sub-computing tasks; determine a plurality of sub-computing task groups from the plurality of sub-computing tasks, where each sub-computing task group comprises at least one sub-computing task of the same computation type; determine a first computing node corresponding to each sub-computing task group, the first computing node being a computing node in a computing network; and deploy each sub-computing task group onto its corresponding first computing node.
With reference to the second aspect, in a possible implementation manner, the processing unit is specifically configured to: determine the association relations among the plurality of sub-computing tasks, where an association relation indicates that two sub-computing tasks are computationally associated; determine the association relations among the plurality of sub-computing task groups according to the association relations among the sub-computing tasks, where the association relation between one sub-computing task group and the other sub-computing task groups comprises the association relations between each sub-computing task in that group and the sub-computing tasks in the other groups; and determine the first computing node corresponding to each sub-computing task group according to the number of other sub-computing task groups associated with each group and the computing resources of each first computing node in the computing network.
With reference to the second aspect, in a possible implementation manner, the processing unit is specifically configured to: determine a first topology according to the association relations among the plurality of sub-computing tasks, where each node in the first topology is one of the sub-computing tasks and the connection relations among the nodes represent the association relations among the sub-computing tasks; merge the nodes of the same computation type in the first topology to determine a second topology, where each node in the second topology corresponds to one of the sub-computing task groups and the connection relations among the nodes in the second topology include the connection relations among the nodes in the first topology; and determine the association relations among the sub-computing task groups according to the connection relations among the nodes in the second topology.
With reference to the second aspect, in a possible implementation manner, the processing unit is specifically configured to execute the following steps: step 1, determining at least one first node in the second topology, a first node being a node connected to the largest number of other nodes in the second topology; step 2, determining a second node from the at least one first node, the second node being the node requiring the most computing resources among the at least one first node; step 3, deleting the second node from the second topology and taking the resulting topology as the second topology; step 4, sorting the first computing nodes by their currently available resources; step 5, deploying the sub-computing task group corresponding to the currently deleted second node on the first computing node with the most currently available resources; and step 6, repeating steps 1 to 5 until all sub-computing task groups are deployed on first computing nodes.
With reference to the second aspect, in a possible implementation manner, the processing unit is further configured to: determine a second computing node when deployment of the sub-computing task group corresponding to the currently deleted second node fails, the second computing node being connected to the first computing node with the fewest currently available resources; and take the second computing node as one of the first computing nodes and repeat steps 4 and 5.
In a third aspect, the present application provides an electronic device, comprising a processor and a communication interface. The communication interface is coupled to the processor, and the processor is configured to execute a computer program or instructions to implement the computing task deployment method as described in the first aspect and any possible implementation manner thereof.
In a fourth aspect, the present application provides a computer-readable storage medium having instructions stored therein, which when executed by a processor of an electronic device, enable the electronic device to perform the method of deploying computing tasks as described in the first aspect and any one of the possible implementations of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising computer instructions that, when run on an electronic device, cause the electronic device to perform the computing task deployment method as described in the first aspect and any possible implementation manner thereof.
In a sixth aspect, the present application provides a chip comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute a computer program or instructions to implement the method for deploying computing tasks as described in the first aspect and any one of the possible implementations of the first aspect.
For the technical effects of any implementation manner of the second aspect to the sixth aspect, reference may be made to the technical effects of the corresponding design of the first aspect, which are not described again here.
In the present application, the names of the above-mentioned devices do not limit the devices or functional modules themselves; in actual implementation, these devices or functional modules may appear under other names. As long as the functions of the respective devices or functional modules are similar to those of the present application, they fall within the scope of the claims of the present application and their equivalents.
These and other aspects of the invention will be more readily apparent from the following description.
The scheme at least has the following beneficial effects. After the electronic device acquires a computing task, it divides the computing task into a plurality of sub-computing task groups and deploys them on different computing nodes. In this way, each sub-computing task group occupies only a small share of each computing node's resources, avoiding deployment failures caused by insufficient resources on a single node.
In addition, sub-computing tasks of the same computation type are combined into one sub-computing task group. Since sub-computing tasks of the same computation type generally use the same resources, the tasks in a group can multiplex those resources according to their computation timing, further reducing the total resources the computing task requires. This also avoids the resource waste that occurs when identical or similar sub-computing tasks are deployed on different computing nodes.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a network architecture of an edge computing network according to the present application;
fig. 3 is a schematic flowchart of a computing task deployment method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another method for deploying computing tasks according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another method for deploying computing tasks according to an embodiment of the present application;
fig. 6(a) is a schematic topology structure diagram of a first topology provided in an embodiment of the present application;
fig. 6(b) is a schematic topology structure diagram of a second topology provided in the embodiment of the present application;
fig. 6(c) is a schematic topology structure diagram of a second topology after a second node is deleted according to the embodiment of the present application;
fig. 7(a) is a schematic topology diagram of a computing network provided in an embodiment of the present application;
fig. 7(b) is a schematic topology structure diagram of a first computing node according to an embodiment of the present application;
fig. 7(c) is a schematic diagram of a topology structure between standby nodes according to an embodiment of the present application;
fig. 7(d) is a schematic diagram of a topology structure of a first computing node after the N4 node and the N8 node are taken as the first computing node according to the embodiment of the present application;
FIG. 8 is a schematic diagram of a sub-computation task group deployed by a compute node according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone.
The terms "first" and "second" and the like in the description and drawings of the present application are used for distinguishing different objects or for distinguishing different processes for the same object, and are not used for describing a specific order of the objects.
Furthermore, the terms "including" and "having," and any variations thereof, as referred to in the description of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of these words is intended to present related concepts in a concrete fashion.
In the description of the present application, the meaning of "a plurality" means two or more unless otherwise specified.
In order to implement the method for deploying computing tasks provided in the embodiments of the present application, embodiments of the present application provide an electronic device, which is used for executing the method for deploying computing tasks, where the electronic device may be an electronic device related in the present application or a module in the electronic device; or a chip in the electronic device, or other devices for executing the method for deploying the computing task, which is not limited in this application.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 1, the electronic device 100 includes at least one processor 101, a communication line 102, and at least one communication interface 104, and may also include a memory 103. The processor 101, the memory 103 and the communication interface 104 may be connected via a communication line 102.
The processor 101 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application, such as: one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs).
The communication link 102 may include a path for communicating information between the aforementioned components.
The communication interface 104 is used for communicating with other devices or a communication network, and may use any transceiver or the like, such as ethernet, Radio Access Network (RAN), Wireless Local Area Network (WLAN), and the like.
The memory 103 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), optical disk storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, and the like, such as CD-ROM), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
In a possible design, the memory 103 may exist independently from the processor 101, that is, the memory 103 may be a memory external to the processor 101, in which case, the memory 103 may be connected to the processor 101 through the communication line 102, and is used for storing execution instructions or application program codes, and is controlled by the processor 101 to execute, so as to implement the computing task deployment method provided by the following embodiments of the present application. In yet another possible design, the memory 103 may also be integrated with the processor 101, that is, the memory 103 may be an internal memory of the processor 101, for example, the memory 103 is a cache memory, and may be used for temporarily storing some data and instruction information.
As one implementation, the processor 101 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 1. As another implementation, the electronic device 100 may include multiple processors, such as the processor 101 and the processor 107 in FIG. 1. As yet another implementation, the electronic device 100 may also include an output device 105 and an input device 106.
A computing network (also referred to as a computational power network) refers to a technology for flexibly scheduling computing resources, storage resources and network resources among cloud computing nodes, network nodes and edge cloud computing nodes according to computing requirements of a computing task to complete computing of the computing task.
Current computing networks mainly involve coordinated computing-and-network scheduling on the control plane, converged network awareness on the data plane, and orchestration of computing resources on the management and service planes. A computing network can uniformly coordinate and manage computing, storage, and network resources, and can measure them against a uniform standard. In a computing network, the computing, storage, and network resources of each node can be represented as specific parameters, which are carried in the data messages exchanged between nodes so that each node can inform the others of its resource conditions. Current computing networks also often add the ability to provide intuitive components and services to users, realizing visualization of orchestration, scheduling, and applications through communication among the service layer, the underlying resources, and the network interfaces.
Computing networks typically include mobile edge computing, edge cloud computing, and the like. In current computing networks, computing nodes mainly provide virtual machine resources as computing resources to user devices. When computing, each node usually performs only the tasks deployed on that node, and cooperation between nodes is lacking. Current strategies for scheduling computing tasks and configuring computing resources are difficult to apply jointly and lack resource constraints on the computing nodes, so they are difficult to apply directly to a computing network; as the number of computing tasks grows, the computing resources of the network become insufficient. Moreover, as computing tasks become more abstract (e.g., container-carried function computation), the same computing function may be deployed repeatedly, which wastes computing resources.
An example, as shown in fig. 2, is a schematic network architecture diagram of an edge computing network 20 provided in the present application. As shown in fig. 2, the edge computing network includes: a plurality of edge computing nodes 201, a plurality of network devices 202, a computing network management system 203, and a user device 204.
The user equipment 204 is connected to the edge computing node 201, and sends the computing task to the edge computing node 201.
The edge computing node 201 is configured to receive a computing task from the user equipment, deploy and compute the task, and return the computation result to the user equipment. The edge computing node 201 is connected to a network device 202, through which edge computing nodes 201 communicate with each other and with the computing network management system 203.
The network device 202 is used to implement communication between the edge computing node 201 and other edge computing nodes, as well as communication between the edge computing node 201 and the computing network management system 203.
The computing network management system 203 includes a service abstraction function, a computing resource management function, a computing network integration analysis function, and a control function of a network controller.
In the related art, a computing task is generally deployed as a whole onto one computing node, which computes the task and obtains the result. However, as computing tasks grow more complex and abstract, the resources they require keep increasing, and the resources of a single computing node may not meet those requirements, causing deployment to fail. To solve this problem, the application provides a computing task deployment method in which the computing task is decomposed into a plurality of sub-computing task groups that are deployed on different computing nodes. Each sub-computing task group then needs only a small amount of computing resources to complete its computation, which avoids the situation where a node's resources cannot meet the task's requirements and improves the success rate of computing task deployment.
The computing task deployment scheme provided by the embodiments of the present application can be applied to the electronic device shown in fig. 1. The electronic device in the embodiments of the present application may be an edge computing node shown in fig. 2, the computing network management system, or a newly added independent device, which is not limited in this application.
As shown in fig. 3, the method for deploying computing tasks provided in the embodiment of the present application may be specifically implemented by the following steps S301 to S305:
S301, the electronic equipment acquires a computing task.
In one possible implementation manner, this application is explained mainly by taking the electronic device being a computing network management system as an example.
Specifically, the method comprises the following steps: the user equipment sends a computing task to a computing node (marked as a target computing node) accessed by the user equipment; and after receiving the computing task, the target computing node sends the computing task to a computing network management system through network equipment.
S302, the electronic equipment decomposes the calculation task into a plurality of sub-calculation tasks.
In one possible implementation manner, an electronic device is taken as an example of a computing network management system to explain: after receiving the computing task, the computing network management system calls a computing task template of the computing task, determines computing resources, network resources, storage resources and the like required by the computing task, and determines computing task logic and constraint conditions of each sub-computing task in the computing task. And the computing network management system decomposes the computing task into a plurality of sub-computing tasks according to the computing task logic and the constraint conditions of each sub-computing task.
The computing task template comprises the composition of sub-computing tasks of the computing task, and information such as task logic, constraint conditions and the like among the sub-computing tasks.
It should be noted that, after the electronic device decomposes the computing task into a plurality of sub-computing tasks, the electronic device can determine the association relationships among those sub-computing tasks, as in Example 1 below.
Example 1: the electronic device decomposes a computing task into 4 sub-computing tasks: sub-computing task 1, sub-computing task 2, sub-computing task 3 and sub-computing task 4. Sub-computing tasks 1 and 2 are parallel sub-computing tasks, sub-computing task 3 needs the computation results of sub-computing tasks 1 and 2, and sub-computing task 4 needs the computation result of sub-computing task 3. The electronic device therefore determines that there is no association between sub-computing tasks 1 and 2, that sub-computing tasks 1 and 2 are each associated with sub-computing task 3, and that sub-computing task 3 is associated with sub-computing task 4.
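The dependency structure of Example 1 can be sketched in a few lines; this is an illustrative Python fragment of ours, not part of the patent, and the names such as `sub_task_1` are hypothetical.

```python
# Illustrative sketch of Example 1 (names and representation are ours).
# An edge (a, b) means sub-computing task b consumes the result of a.
dependencies = [
    ("sub_task_1", "sub_task_3"),
    ("sub_task_2", "sub_task_3"),
    ("sub_task_3", "sub_task_4"),
]

def associated(a, b, deps):
    """Two sub-tasks are associated if one directly feeds the other."""
    return (a, b) in deps or (b, a) in deps

# sub_task_1 and sub_task_2 are parallel, so they are not associated.
print(associated("sub_task_1", "sub_task_2", dependencies))  # False
print(associated("sub_task_1", "sub_task_3", dependencies))  # True
```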
S303, the electronic equipment determines a plurality of sub-calculation task groups according to the plurality of sub-calculation tasks.
Each of the plurality of sub-computing task groups includes at least one sub-computing task of a same computing type.
In one possible implementation, the computing network management system determines the computing type of each sub-computing task, and merges the sub-computing tasks of the same computing type into one sub-computing task group to obtain a plurality of sub-computing task groups.
With reference to example 1, the electronic device determines that sub-computing task 1 and sub-computing task 4 are the same type of sub-computing task, and sub-computing task 2 and sub-computing task 3 are the same type of sub-computing task. At this time, the electronic device merges the sub-computation task 1 and the sub-computation task 4 into a sub-computation task group 1, and merges the sub-computation task 2 and the sub-computation task 3 into a sub-computation task group 2.
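The type-based merge of S303 can be sketched as follows; this is a minimal illustration under our own naming, and the patent does not prescribe this data structure.

```python
from collections import defaultdict

def group_by_type(task_types):
    """Merge sub-tasks that share a computation type into one group.
    task_types maps sub-task name -> computation type (illustrative)."""
    groups = defaultdict(list)
    for task, ctype in task_types.items():
        groups[ctype].append(task)
    return dict(groups)

# Example 1: tasks 1 and 4 share a type, as do tasks 2 and 3.
types = {"t1": "A", "t2": "B", "t3": "B", "t4": "A"}
print(group_by_type(types))  # {'A': ['t1', 't4'], 'B': ['t2', 't3']}
```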
S304, the electronic equipment determines first computing nodes corresponding to the sub-computing task groups.
The first computing node is a computing node in the computing network.
In one example, the electronic device determines the computing resources required by each sub-computing task group, as well as the priority of each sub-computing task group. The electronic device preferentially allocates the sub-computing task groups with high priority and large computing resource requirements to the first computing nodes with the most available resources.
In yet another example, the electronic device preferentially allocates the sub-computing task group with the highest priority to the first computing node that is located closer to the user device and whose available resources can meet the computing requirements of the sub-computing task group.
The electronic device may also determine, in other manners, the first computing node corresponding to each sub-computing task group, which is not limited in this application.
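One of the allocation manners above — high-priority, resource-hungry groups go to the nodes with the most available resources — could look like this greedy sketch; all names and figures are illustrative assumptions, not values from the patent.

```python
def assign_groups(groups, nodes):
    """Greedy sketch: sort groups by (priority, required resources)
    descending and nodes by available resources descending, then pair
    them off one-to-one. All fields here are illustrative."""
    ordered_groups = sorted(groups, key=lambda g: (g["priority"], g["required"]), reverse=True)
    ordered_nodes = sorted(nodes, key=lambda n: n["available"], reverse=True)
    return {g["name"]: n["name"] for g, n in zip(ordered_groups, ordered_nodes)}

groups = [
    {"name": "G1", "priority": 2, "required": 4},
    {"name": "G2", "priority": 1, "required": 6},
]
nodes = [{"name": "N1", "available": 8}, {"name": "N2", "available": 16}]
print(assign_groups(groups, nodes))  # {'G1': 'N2', 'G2': 'N1'}
```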
S305, the electronic equipment deploys each sub-computing task group to the corresponding first computing node.
Specifically, the electronic device sends each sub-computing task group to the first computing node determined for it in S304. After receiving the sub-computing task group, the first computing node deploys it, invokes computing resources to compute each sub-computing task in the group, and sends each sub-computing task's result to the corresponding next-hop first computing node according to the computing logic of the computing task, so that the next-hop first computing node computes its own sub-computing task based on that result. When the whole computing task is finished, each first computing node releases the resources used to compute it.
With reference to example 1, the electronic device deploys the sub-computation task group 1 into the first computation node 1, and the electronic device deploys the sub-computation task group 2 into the first computation node 2.
The first computing node 1 computes the sub-computing task 1, determines a computation result of the sub-computing task 1, and sends the computation result of the sub-computing task 1 to the first computing node 2.
The first computing node 2 firstly computes the sub-computing task 2, determines the computing result of the sub-computing task 2, and after the first computing node 2 receives the computing result of the sub-computing task 1, performs the computation of the sub-computing task 3 according to the computing result of the sub-computing task 1 and the computing result of the sub-computing task 2, and determines the computing result of the sub-computing task 3. The first computing node 2 sends the computation results of the sub-computation tasks 3 to the first computing node 1.
And the first computing node 1 computes the sub-computing task 4 according to the computing result of the sub-computing task 3, determines the computing result of the sub-computing task 4 and completes the computing of the computing task.
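The execution flow of Example 1, with results forwarded hop by hop according to the task logic, can be mimicked by a toy scheduler; the functions and values below are invented purely for illustration.

```python
def run(order, depends_on, compute):
    """Toy execution sketch of Example 1's flow: each sub-task runs
    once the results of its dependencies are available. The compute
    functions and dependency map are illustrative assumptions."""
    results = {}
    for t in order:
        inputs = [results[d] for d in depends_on.get(t, [])]
        results[t] = compute[t](*inputs)
    return results

compute = {
    "t1": lambda: 1,
    "t2": lambda: 2,
    "t3": lambda a, b: a + b,   # uses the results of t1 and t2
    "t4": lambda c: c * 10,     # uses the result of t3
}
deps = {"t3": ["t1", "t2"], "t4": ["t3"]}
print(run(["t1", "t2", "t3", "t4"], deps, compute))
# {'t1': 1, 't2': 2, 't3': 3, 't4': 30}
```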
The scheme at least has the following beneficial effects: after the electronic equipment acquires the computing task, the computing task is divided into a plurality of sub-computing task groups, which are deployed on different computing nodes respectively. In this way, each sub-computing task group occupies only a small amount of resources on its computing node, avoiding computing task deployment failures caused by insufficient computing node resources.
In addition, sub-computing tasks of the same computing type are merged into one sub-computing task group. Since sub-computing tasks of the same computing type generally compute with the same resources, the sub-computing tasks in a group can reuse those resources according to their computation sequence, further reducing the amount of resources the computing task requires. This also avoids the resource waste caused by deploying similar sub-computing tasks of a computing task on different computing nodes.
In a possible implementation manner, referring to fig. 3, as shown in fig. 4, the above S304 may be specifically implemented by the following S401 to S403:
S401, the electronic equipment determines the incidence relation among a plurality of sub-computing tasks.
The incidence relation among the sub-computing tasks characterizes which sub-computing tasks are computationally associated with one another.
For example, the association relationship between the sub-calculation tasks is used to represent the sequence between the sub-calculation tasks, or the association between the calculation results between the sub-calculation tasks, and the like, which is not limited in this application.
With reference to Example 1 above, the association relationships among the plurality of sub-computing tasks determined by the electronic device are: the associations between sub-computing tasks 1 and 2 and sub-computing task 3, and the association between sub-computing task 3 and sub-computing task 4.
S402, the electronic equipment determines the incidence relation among the plurality of sub-computing task groups according to the incidence relation among the plurality of sub-computing tasks.
The incidence relation between one sub-computing task group and other sub-computing task groups comprises the following steps: and each sub-computing task in one sub-computing task group is associated with the sub-computing tasks in other sub-computing task groups.
Specifically, after determining the association relationship between the sub-computing tasks, the electronic device merges the sub-computing tasks into a sub-computing task group. The electronic device determines other sub-computing task groups associated with each target sub-computing task in the target sub-computing task group. And the electronic equipment takes the sub-computing task group associated with each target sub-computing task in the target sub-computing task group as the sub-computing task group associated with the target sub-computing task group.
The sub-computation task group associated with the target sub-computation task refers to: there is a sub-computation task group of sub-computation tasks having an association relationship with the target sub-computation task.
In connection with example 1 above, the electronic device determines that there is an association between the sub-computing task group 1 and the sub-computing task group 2.
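Deriving group-level associations from task-level ones, as in S402, can be sketched as follows; the edge and membership data are the Example 1 values under our own illustrative names.

```python
def group_associations(task_edges, task_group):
    """Two groups are associated if any of their member sub-tasks are
    associated. task_group maps each sub-task to its group; both
    structures are illustrative, not the patent's own representation."""
    assoc = set()
    for a, b in task_edges:
        ga, gb = task_group[a], task_group[b]
        if ga != gb:
            assoc.add(frozenset((ga, gb)))
    return assoc

# Example 1: t1/t4 form group G1, t2/t3 form group G2.
edges = [("t1", "t3"), ("t2", "t3"), ("t3", "t4")]
membership = {"t1": "G1", "t4": "G1", "t2": "G2", "t3": "G2"}
# Yields a single association: between G1 and G2.
result = group_associations(edges, membership)
```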
S403, the electronic device determines the first computing node corresponding to each sub-computing task group according to the number of other sub-computing task groups associated with each sub-computing task group and the computing resources of each first computing node in the computing network.
In a possible implementation manner, if the number of sub-computation task groups associated with the target sub-computation task group is larger, it is indicated that the sub-computation task group is an important sub-computation task group, or the sub-computation task group requires more computation resources. At this time, the electronic device may deploy the sub-computation task group on the first computation node with more available resources, so as to ensure that the sub-computation tasks in the target sub-computation task group can be successfully computed.
In a possible implementation manner, the electronic device deploys a sub-computation task group having a start sub-computation task and an end sub-computation task on the computation node receiving the computation task, so that the computation task starts computation at the computation node and completes computation at the node, and a computation result can be directly returned to the user terminal after computation is completed, thereby improving computation efficiency.
The scheme at least has the following beneficial effects: the electronic equipment determines the incidence relation among the sub-calculation task groups according to the incidence relation among the sub-calculation tasks, and determines the first calculation nodes corresponding to the sub-calculation task groups according to the number of the sub-calculation task groups associated with the sub-calculation task groups. Therefore, the electronic equipment can preferentially ensure that more computing resources are distributed to the sub-computing task groups with more association relations, the deployment success rate of the sub-computing tasks is improved, and the computing efficiency of the sub-computing task groups is improved.
With reference to fig. 4, as shown in fig. 5, in a possible implementation manner, the above S402 may be specifically implemented by the following S501 to S503.
S501, the electronic equipment determines a first topology according to the incidence relation among the plurality of sub-computing tasks.
Each node in the first topology is each sub-computation task in a plurality of sub-computation tasks; the connection relationships of the nodes in the first topology are used to characterize the association relationships between the plurality of sub-computing tasks.
In a possible implementation, the first topology is further used for characterizing the computation type of each sub-computation task.
As an example, the structure of the first topology is shown in fig. 6(a): the electronic device decomposes the computing task into sub-computing tasks CP1-CP10; the connection relationships between the sub-computing tasks are L1-L11; and the computing types of the sub-computing tasks are D1-D6.
S502, the electronic equipment merges the nodes with the same calculation type in the first topology, and determines a second topology.
Wherein a node in the second topology corresponds to a sub-compute task in the plurality of sub-compute task groups; the connection relation of each node in the second topology comprises the connection relation of each node in the first topology.
In one possible implementation, the electronic device determines the computation type of each sub-computation task in the first topology, and merges the sub-computation task nodes of the same computation type to obtain the second topology.
For example, the topology structure of the second topology is as shown in fig. 6(b), the electronic device merges the sub-computation tasks CP1-CP10 according to the computation types D1-D6 to obtain 6 computation type sub-computation task groups, and the connection relationship between the sub-computation task groups inherits the connection relationship between the sub-computation tasks in the first topology.
S503, the electronic device determines the association relationship among the plurality of sub-calculation task groups according to the connection relationship of each node in the second topology.
The scheme brings at least one beneficial effect: the electronic equipment can more clearly and intuitively display the sub-computing tasks and the association relation among the sub-computing task groups in a topological form.
As shown in fig. 5, the above S403 may be implemented by the following S504 to S509.
S504, the electronic device determines at least one first node in the current second topology.
The first node is a node which is connected with the maximum number of other nodes in the current second topology.
S505, the electronic equipment determines a second node from at least one first node; the second node is the node with the most needed computing resources in the at least one first node.
In one possible implementation manner, the electronic device sorts the sub-computing task groups according to the number of sub-computing tasks connected to each group and the computing resources each group requires.
The electronic device then deletes the top-ranked sub-computing task groups from the second topology in order, until the nodes remaining in the second topology have no direct connections.
Specifically, with reference to the second topology in fig. 6(b), the electronic device computes a normalized connection count C_i and a required computing resource R_i for each node. The sub-computing task group whose computing type is D_i is denoted D_i(C_i, R_i). The normalized connection count and computing resources determined by the electronic device for each sub-computing task group are: D1(0.2, 0.4), D2(0.5, 0.7), D3(0.5, 0.6), D4(0.3, 0.5), D5(0.2, 0.4), D6(0.3, 0.5).
Sorting by the normalized connection count and computing resources of each sub-computing task group, the electronic device determines the order: D2 = D3 > D4 = D6 > D1 = D5.
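This ordering follows from sorting the D_i(C_i, R_i) pairs in descending order, with the connection count as the primary key and the computing resources as the tiebreaker, e.g.:

```python
# (C_i, R_i) pairs as given in the text; sorting the pairs descending
# reproduces D2, D3 > D4, D6 > D1, D5 (ties keep insertion order).
groups = {
    "D1": (0.2, 0.4), "D2": (0.5, 0.7), "D3": (0.5, 0.6),
    "D4": (0.3, 0.5), "D5": (0.2, 0.4), "D6": (0.3, 0.5),
}
order = sorted(groups, key=lambda d: groups[d], reverse=True)
print(order)  # ['D2', 'D3', 'D4', 'D6', 'D1', 'D5']
```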
S506, the electronic device deletes the second node from the current second topology, and takes the current second topology with the second node deleted as the second topology.
According to this ordering, the second nodes determined by the electronic device in the first iteration are the two nodes D2 and D3. The electronic device deletes the nodes D2 and D3 from the current second topology.
After deletion, four nodes remain in the current second topology: D4, D6, D1 and D5. The two nodes D4 and D6 are connected to each other, while D1 and D5 have no connection relationship with any other node.
The second topology of fig. 6(b) with the second nodes removed after the first iteration is shown in fig. 6(c).
S507, the electronic equipment sorts the first computing nodes according to the current available resource size of the first computing nodes.
The available resources of the computing node include computing resources, storage resources, network resources and the like of the computing node.
The electronic device, according to the size of the current available resource of the first computing node, specifically sorting each first computing node includes: the electronic equipment comprehensively ranks the first computing nodes according to available computing resources, storage resources and network resources of the first computing nodes.
In one example, a topological schematic of a computing network is shown in FIG. 7 (a); the electronic device determines the topology between the first computing nodes as shown in fig. 7 (b).
The available resources of nodes N1, N2, N3 and N9 are R_N1, R_N2, R_N3 and R_N9 respectively. The available resources of the nodes satisfy: R_N2 > R_N1 > R_N9 > R_N3.
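The node ranking of S507 reduces to a sort by available resources; the numeric scores below are illustrative values of ours, chosen only to be consistent with R_N2 > R_N1 > R_N9 > R_N3.

```python
# Illustrative available-resource scores (not values from the patent),
# consistent with the ordering R_N2 > R_N1 > R_N9 > R_N3.
nodes = {"N1": 0.7, "N2": 0.9, "N3": 0.3, "N9": 0.5}
ranked = sorted(nodes, key=nodes.get, reverse=True)
print(ranked)  # ['N2', 'N1', 'N9', 'N3']
```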
S508, the electronic device deploys the sub-computing task group corresponding to the currently deleted second node on the first computing node with the most current available resources.
With reference to figs. 6(b) and 7(b), in the first iteration the electronic device deletes sub-computing task groups D2 and D3 from the second topology and deploys them on node N2, which has the most available resources.
In the second iteration, the electronic device deletes sub-computing task groups D4 and D6 from the second topology and deploys them on node N1, which has the second-most available resources.
At this point, sub-computing task groups D1 and D5, which have no connection relationship, remain in the second topology. The electronic device deploys D1 on node N3 and D5 on node N9.
It should be noted that, before deploying a sub-computing task group on a first computing node, the electronic device may further determine whether that node already hosts a sub-computing task that cannot be deployed on the same computing node as a sub-computing task in the group currently being deployed. If so, the electronic device selects the next computing node in the ordering and deploys the sub-computing task group there. If not, the electronic device deploys the sub-computing task group on that node directly.
The cases of the sub-computation tasks that cannot be deployed in the same computation node include, but are not limited to: the resource types required by the sub-computing tasks are different; the sub-computing tasks are computed in different ways.
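This anti-affinity check can be sketched as follows; the conflict pairs here are hypothetical, since the patent only names the two conflict causes rather than concrete pairs.

```python
def can_deploy(group, node_deployed, conflicts):
    """Anti-affinity sketch: a group cannot land on a node already
    hosting a sub-task it conflicts with (different required resource
    type or computation manner). Conflict pairs are illustrative."""
    return not any(frozenset((group, other)) in conflicts
                   for other in node_deployed)

conflicts = {frozenset(("D1", "D5"))}  # hypothetical conflicting pair
print(can_deploy("D1", {"D5"}, conflicts))  # False
print(can_deploy("D1", {"D4"}, conflicts))  # True
```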
As shown in fig. 8, a schematic diagram of a sub-computation task group deployed by each computation node after the computation node deployment is completed is shown.
S509, the electronic device repeats steps S504 to S508 until all the sub-computing task groups are deployed on first computing nodes.
Based on the scheme, the sub-computing tasks can be merged into the sub-computing task group in a topological form, and the sub-computing task group is deployed to the corresponding computing node.
In a possible implementation manner, the first computing node is a computing node within h hops of a computing node receiving the computing task, and h is a positive integer. In this way, the time delay of the transmission of the sub-computation tasks and the computation task results between the respective computation nodes can be reduced.
In the present application, in order to avoid the problem that deployment fails due to insufficient computing resources when only the first computing node is used, a node that hops away from a computing node that receives a computing task by h + i may also be used as a standby node, and i is a positive integer. An exemplary value of i is 1.
As shown in fig. 7(c), the value of i is 1, which is a schematic diagram of a standby node determined by the electronic device.
In a possible implementation manner, the above S508 may be specifically implemented as: and the electronic equipment determines whether the sub-computing task group corresponding to the currently deleted second node is successfully deployed.
If the deployment is successful, the electronic device returns to execute the above S504.
If the deployment fails, the electronic device determines a second computing node and repeats S507 and S508 with the second computing node included among the first computing nodes. The second computing node is a computing node connected to the first computing node that currently has the least available resources.
That is, the electronic device treats the second computing node as a first computing node, and re-sorts the first computing nodes according to the available resources of each node after this addition.
An example, in conjunction with fig. 7(b) above, the first computing node with the smallest currently available resources is the N3 node, and in conjunction with fig. 7(c), the computing nodes connected by the N3 node are the N4 node and the N8 node. The electronic device determines the N4 node and the N8 node to be second computing nodes.
An example, as shown in fig. 7(d), is a schematic diagram of the topology of the first computing node after the N4 node and the N8 node are taken as the first computing node in fig. 7 (d).
For example, after the sub-computing task group D2 is deployed to the N2 node, the remaining available resources of the N2 node are not sufficient for deploying sub-computing task group D3. At this time, the electronic device adds the standby nodes N4 and N8 to the first computing nodes. The electronic device re-sorts the available resources of the current N1, N2, N3, N4, N8 and N9 nodes and deploys sub-computing task group D3 on the top-ranked node.
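The fallback to standby nodes can be sketched as follows; the resource figures are invented, and a real system would also subtract already-deployed groups' demands from node capacity.

```python
def expand_and_place(group_demand, first_nodes, standby_nodes):
    """Fallback sketch: if no first node can host the group, add the
    standby nodes (h+i hops away) and re-rank by available resources.
    All resource figures are illustrative assumptions."""
    candidates = dict(first_nodes)
    if not any(r >= group_demand for r in candidates.values()):
        candidates.update(standby_nodes)  # e.g. N4 and N8 join the pool
    best = max(candidates, key=candidates.get)
    return best if candidates[best] >= group_demand else None

first = {"N1": 2, "N2": 1, "N3": 1, "N9": 1}
standby = {"N4": 6, "N8": 3}
print(expand_and_place(5, first, standby))  # 'N4'
```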
In a possible implementation manner, when a certain deployed sub-computing task needs to be migrated, the electronic device performs sorting according to available resources of a current first computing node, and deploys the sub-computing task to be migrated on a first computing node ranked the top.
If none of the available resources of the current first computing nodes can meet the computing requirements of the sub-computing task to be migrated, the electronic device selects a computing node from the standby nodes and adds it to the first computing nodes, as described above. The electronic device then sorts the current first computing nodes by available resources and deploys the sub-computing task to be migrated on the first computing node ranked first.
The reason for migration of the sub-computing task includes but is not limited to: and maintaining the computing node and failing the computing node.
It should be noted that, after all the sub-computation task groups are deployed, the electronic device configures the network connection relationship between the first computation nodes, the computation sequence and the forwarding process of the computation result according to the association relationship between the sub-computation tasks, so as to ensure that the computation tasks can be completed normally. After the computation of the computation task is completed, each first computation node releases the resources used for computing the computation task.
It should be noted that the above mainly describes the computing task deployment scheme of this application taking the electronic device as a computing network management system. In practice, the electronic device may also be a computing node; in that case, the computing node does not need to send the computing task to the computing network management system, which reduces the transmission delay of the computing task. When the electronic device is a computing node, the computing node needs the capability to acquire the network topology of the computing network, select neighboring nodes, and determine the computing resources of those neighboring nodes.
It can be seen that the technical solutions provided in the embodiments of the present application are mainly introduced from the perspective of methods. To implement the above functions, it includes hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiment of the present application, the electronic device may be divided into the functional modules according to the method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. Optionally, the division of the modules in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device includes: an acquisition unit 901 and a processing unit 902.
An obtaining unit 901 configured to obtain a calculation task; a processing unit 902 for decomposing a computation task into a plurality of sub-computation tasks; the processing unit 902 is further configured to determine a plurality of sub-computation task groups according to the plurality of sub-computation tasks; each of the plurality of sub-computation task groups comprises at least one sub-computation task of the same computation type; the processing unit 902 is further configured to determine a first computing node corresponding to each sub-computing task group; the first computing node is a first computing node in a computing network; the processing unit 902 is further configured to deploy each sub-computation task group to the corresponding first computation node.
Optionally, the processing unit 902 is specifically configured to: determining an incidence relation among a plurality of sub-computing tasks; the incidence relation among the sub-computing tasks is used for representing the incidence relation among the sub-computing tasks with computing incidence; determining the incidence relation among a plurality of sub-computing task groups according to the incidence relation among a plurality of sub-computing tasks; the incidence relation between one sub-computing task group and other sub-computing task groups comprises the following steps: the incidence relation between each sub-computing task in one sub-computing task group and the sub-computing tasks in other sub-computing task groups; and determining the first computing node corresponding to each sub-computing task group according to the number of other sub-computing task groups associated with each sub-computing task group and the computing resources of each first computing node in the computing network.
Optionally, the processing unit 902 is specifically configured to: determining a first topology according to the incidence relation among a plurality of sub-computing tasks; each node in the first topology is each sub-computation task in a plurality of sub-computation tasks; the connection relation of the nodes in the first topology is used for representing the incidence relation among a plurality of sub-computing tasks; combining the nodes with the same calculation type in the first topology, and determining a second topology; one node in the second topology corresponds to one sub-computing task in the plurality of sub-computing task groups; the connection relation of each node in the second topology comprises the connection relation of each node in the first topology; and determining the association relation among the plurality of sub-computing task groups according to the connection relation of each node in the second topology.
Optionally, the processing unit 902 is specifically configured to execute the following steps: step 1, determining at least one first node in a second topology; the first node is a node which is connected with other nodes in the second topology and has the largest number; step 2, determining a second node from at least one first node; the second node is the node which needs the most computing resources in at least one first node; step 3, deleting the second node from the second topology, and taking the second topology after the second node is deleted as the second topology; step 4, sequencing each first computing node according to the current available resource size of each first computing node; step 5, deploying a sub-computing task group corresponding to the currently deleted second node on the first computing node with the most currently available resources; and step 6, repeatedly executing step 1, step 2, step 3, step 4 and step 5 until all the sub-computing task groups are deployed on the first computing node.
Optionally, the processing unit 902 is further specifically configured to: determining a second computing node under the condition that the deployment of a sub-computing task group corresponding to the currently deleted second node fails; the second computing node is connected with the first computing node with the minimum current available resources; and (5) taking the second computing node as the computing node in the first computing node, and repeatedly executing the step 4 and the step 5.
Optionally, the electronic device may further include a storage module for storing program code and/or data of the electronic device.
The processing unit 902 may be a processor or a controller. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination of computing devices, for example a combination of one or more microprocessors, or of a DSP and a microprocessor. The obtaining unit 901 may be a transceiver circuit or a communication interface. The storage unit may be a memory. When the processing unit 902 is a processor, the obtaining unit 901 is a communication interface, and the storage unit is a memory, the electronic device in this embodiment of the present application may be the electronic device shown in fig. 1.
From the description of the above embodiments, those skilled in the art will clearly understand that the foregoing division of functional modules is merely an example given for convenience and brevity of description. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the network node may be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the system, modules, and network node described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
An embodiment of the present application further provides a computer-readable storage medium storing instructions that, when executed by a computer, cause the computer to perform each step in the method flow shown in the foregoing method embodiments.
An embodiment of the present application further provides a computer program product comprising instructions that, when run on a computer, cause the computer to perform the computing task deployment method in the foregoing method embodiments.
The computer-readable storage medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), registers, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an Application-Specific Integrated Circuit (ASIC). In the embodiments of the invention, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Since the apparatus, the device, the computer-readable storage medium, and the computer program product in the embodiments of the present invention may all be applied to the method described above, for the technical effects they obtain, reference may likewise be made to the foregoing method embodiments, and details are not repeated here.
The above description is only an embodiment of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A computing task deployment method, comprising:
acquiring a calculation task;
decomposing the computing task into a plurality of sub-computing tasks;
determining a plurality of sub-computation task groups according to the plurality of sub-computation tasks; each of the plurality of sub-computation task groups comprises at least one sub-computation task of the same computation type;
determining a first computing node corresponding to each sub-computation task group; wherein the first computing node is a computing node in a computing network;
and deploying each sub-computing task group to the corresponding first computing node.
2. The method of claim 1, wherein determining the first computing node corresponding to each sub-computing task group comprises:
determining an association relation among the plurality of sub-computation tasks; wherein the association relation among the sub-computation tasks represents the relations among those sub-computation tasks that are computationally dependent on one another;
determining an association relation among the plurality of sub-computation task groups according to the association relation among the plurality of sub-computation tasks; wherein the association relation between one sub-computation task group and other sub-computation task groups comprises: the association relations between each sub-computation task in the one sub-computation task group and the sub-computation tasks in the other sub-computation task groups;
and determining the first computing node corresponding to each sub-computing task group according to the number of other sub-computing task groups associated with each sub-computing task group and the computing resources of each first computing node in the computing network.
3. The method of claim 2, wherein the determining the association relation among the plurality of sub-computation task groups according to the association relation among the plurality of sub-computation tasks comprises:
determining a first topology according to the association relation among the plurality of sub-computation tasks; wherein each node in the first topology is one of the plurality of sub-computation tasks, and the connection relations of the nodes in the first topology represent the association relation among the plurality of sub-computation tasks;
combining the nodes of the same computation type in the first topology to determine a second topology; wherein one node in the second topology corresponds to one sub-computation task group in the plurality of sub-computation task groups, and the connection relations of the nodes in the second topology comprise the connection relations of the nodes in the first topology;
and determining the association relation among the plurality of sub-computation task groups according to the connection relations of the nodes in the second topology.
4. The method according to claim 3, wherein the determining, according to the number of other sub-computation task groups associated with each sub-computation task group and the computing resources of each first computing node in the computing network, the first computing node corresponding to each sub-computation task group comprises:
step 1, determining at least one first node in the current second topology; wherein the first node is a node connected to the largest number of other nodes in the current second topology;
step 2, determining a second node from the at least one first node; wherein the second node is the node requiring the most computing resources among the at least one first node;
step 3, deleting the second node from the current second topology, and taking the resulting topology as the current second topology;
step 4, sorting the first computing nodes according to their currently available resources;
step 5, deploying the sub-computation task group corresponding to the currently deleted second node on the first computing node with the most currently available resources;
and step 6, repeating step 1 to step 5 until all the sub-computation task groups are deployed on the first computing nodes.
5. The method of claim 4, wherein deploying the sub-computing task group corresponding to the currently deleted second node on the first computing node with the most currently available resources comprises:
if the deployment of the sub-computing task group corresponding to the currently deleted second node fails, determining a second computing node; the second computing node is connected with the first computing node with the smallest currently available resources;
and taking the second computing node as one of the first computing nodes, and repeating step 4 and step 5.
6. A computing task deployment device, comprising: an acquisition unit and a processing unit;
the acquisition unit is used for acquiring a calculation task;
the processing unit is used for decomposing the computing task into a plurality of sub-computing tasks;
the processing unit is further configured to determine a plurality of sub-computation task groups according to the plurality of sub-computation tasks; each of the plurality of sub-computation task groups comprises at least one sub-computation task of the same computation type;
the processing unit is further configured to determine a first computing node corresponding to each sub-computation task group; wherein the first computing node is a computing node in a computing network;
the processing unit is further configured to deploy the sub-computation task groups to the corresponding first computation nodes.
7. The apparatus according to claim 6, wherein the processing unit is specifically configured to:
determining an association relation among the plurality of sub-computation tasks; wherein the association relation among the sub-computation tasks represents the relations among those sub-computation tasks that are computationally dependent on one another;
determining an association relation among the plurality of sub-computation task groups according to the association relation among the plurality of sub-computation tasks; wherein the association relation between one sub-computation task group and other sub-computation task groups comprises: the association relations between each sub-computation task in the one sub-computation task group and the sub-computation tasks in the other sub-computation task groups;
and determining the first computing node corresponding to each sub-computing task group according to the number of other sub-computing task groups associated with each sub-computing task group and the computing resources of each first computing node in the computing network.
8. The apparatus according to claim 7, wherein the processing unit is specifically configured to:
determining a first topology according to the association relation among the plurality of sub-computation tasks; wherein each node in the first topology is one of the plurality of sub-computation tasks, and the connection relations of the nodes in the first topology represent the association relation among the plurality of sub-computation tasks;
combining the nodes of the same computation type in the first topology to determine a second topology; wherein one node in the second topology corresponds to one sub-computation task group in the plurality of sub-computation task groups, and the connection relations of the nodes in the second topology comprise the connection relations of the nodes in the first topology;
and determining the association relation among the plurality of sub-computation task groups according to the connection relations of the nodes in the second topology.
9. The apparatus according to claim 8, wherein the processing unit is specifically configured to perform the following steps:
step 1, determining at least one first node in the second topology; wherein the first node is a node connected to the largest number of other nodes in the second topology;
step 2, determining a second node from the at least one first node; wherein the second node is the node requiring the most computing resources among the at least one first node;
step 3, deleting the second node from the second topology, and taking the resulting topology as the second topology;
step 4, sorting the first computing nodes according to their currently available resources;
step 5, deploying the sub-computation task group corresponding to the currently deleted second node on the first computing node with the most currently available resources;
and step 6, repeating step 1 to step 5 until all the sub-computation task groups are deployed on the first computing nodes.
10. The apparatus according to claim 9, wherein the processing unit is further configured to:
determining a second computing node under the condition that the deployment of the sub-computing task group corresponding to the currently deleted second node fails; the second computing node is connected with the first computing node with the smallest currently available resources;
and taking the second computing node as one of the first computing nodes, and repeating step 4 and step 5.
11. An electronic device, comprising: a processor and a memory; wherein the memory is configured to store computer-executable instructions, and when the electronic device runs, the processor executes the computer-executable instructions stored in the memory to cause the electronic device to perform the computing task deployment method of any one of claims 1-5.
12. A computer-readable storage medium comprising instructions that, when executed by an electronic device, cause the electronic device to perform the computing task deployment method of any one of claims 1-5.
CN202111449936.9A 2021-11-30 2021-11-30 Computing task deployment method and device, electronic equipment and storage medium Pending CN114416329A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111449936.9A CN114416329A (en) 2021-11-30 2021-11-30 Computing task deployment method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114416329A true CN114416329A (en) 2022-04-29

Family

ID=81266413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111449936.9A Pending CN114416329A (en) 2021-11-30 2021-11-30 Computing task deployment method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114416329A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030120700A1 (en) * 2001-09-11 2003-06-26 Sun Microsystems, Inc. Task grouping in a distributed processing framework system and methods for implementing the same
CN111049900A (en) * 2019-12-11 2020-04-21 中移物联网有限公司 Internet of things flow calculation scheduling method and device and electronic equipment
CN113157430A (en) * 2020-12-14 2021-07-23 浙大城市学院 Low-cost task allocation and service deployment method for mobile group perception system in edge computing environment
CN113391914A (en) * 2020-03-11 2021-09-14 上海商汤智能科技有限公司 Task scheduling method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TUYEN X. TRAN等: "Joint Task Offloading and Resource Allocation for Multi-Server Mobile-Edge Computing Networks", IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 13 November 2018 (2018-11-13) *
杨茂林等: "一种共享资源敏感的实时任务分配算法", 万方, 11 September 2014 (2014-09-11) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114584627A (en) * 2022-05-09 2022-06-03 广州天越通信技术发展有限公司 Middle station dispatching system and method with network monitoring function
CN114584627B (en) * 2022-05-09 2022-09-06 广州天越通信技术发展有限公司 Middle station dispatching system and method with network monitoring function


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination