CN111866043B - Task processing method, device, computing equipment and computer storage medium - Google Patents

Task processing method, device, computing equipment and computer storage medium

Info

Publication number
CN111866043B
CN111866043B CN201910357047.6A CN201910357047A CN111866043B
Authority
CN
China
Prior art keywords
computing
task
processed
cluster
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910357047.6A
Other languages
Chinese (zh)
Other versions
CN111866043A (en)
Inventor
王森
姚朋伟
李秀清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Group Hebei Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Hebei Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Group Hebei Co Ltd
Priority to CN201910357047.6A
Publication of CN111866043A
Application granted
Publication of CN111866043B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 Server selection for load balancing
    • H04L67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0631 Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06311 Scheduling, planning or task assignment for a person or group
    • G06Q10/063114 Status monitoring or status determination for a person or group
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0631 Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06312 Adjustment or analysis of established resource schedule, e.g. resource or task levelling, or dynamic rescheduling
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

Embodiments of the invention relate to the technical field of computer network information security and disclose a task processing method, an apparatus, a computing device, and a computer storage medium. The method comprises the following steps: acquiring a task to be processed; obtaining the surplus computing resources of the computing nodes according to the task to be processed; selecting at least one computing node from the computing nodes as a core computing node according to those surplus computing resources; forming the core computing nodes into at least one computing cluster according to the task to be processed; and selecting one computing cluster from the at least one computing cluster to process the task. In this way, embodiments of the invention allocate resources effectively and thereby improve task-processing efficiency.

Description

Task processing method, device, computing equipment and computer storage medium
Technical Field
The embodiment of the invention relates to the technical field of business support, in particular to a task processing method, a task processing device, computing equipment and a computer storage medium.
Background
At present, weak-password verification is performed in two main ways: repeatedly attempting to log in to an account over a remote connection with candidate passwords, or obtaining a password file and then cracking it. The remote-connection approach of repeated login attempts is slow, risky, and consumes hardware resources.
The password-file approach can split the work into units according to the encryption algorithm and distribute the split units, in a load-balanced manner, to multiple processing cores on one or more cracking servers, each of which executes the corresponding cracking task. Distributed weak-password verification can thus be performed against a weak-password dictionary, overcoming the low weak-password analysis efficiency of the prior art.
However, implementing distributed weak-password verification through load balancing has several drawbacks. When the computation volume is huge, the load balancer becomes a bottleneck; every verification request must pass through the load balancer before reaching a real service node, introducing extra delay. The equipment is expensive, configuration redundancy and flexibility are poor, tasks cannot be distributed dynamically according to the resource consumption of the computing nodes, and the state of the servers and applications cannot be tracked effectively.
Disclosure of Invention
In view of the foregoing, embodiments of the present invention provide a task processing method, apparatus, computing device, and computer storage medium, which overcome or at least partially solve the foregoing problems.
According to an aspect of an embodiment of the present invention, there is provided a task processing method, including: acquiring a task to be processed; obtaining surplus operation resources of the computing nodes according to the task to be processed; selecting at least one computing node from the computing nodes as a core computing node according to the surplus computing resources of the computing nodes; forming at least one core computing node into at least one computing cluster according to the task to be processed; and selecting one computing cluster from the at least one computing cluster to process the task to be processed.
In an alternative manner, obtaining the surplus computing resources of a computing node according to the task to be processed includes: acquiring the weights of the resources required by the task according to the task's type, and calculating the node's surplus computing resources from those weights and the node's remaining resources.
In an alternative manner, selecting at least one computing node from the computing nodes as a core computing node according to their surplus computing resources includes: calculating the number N of required computing nodes according to the formula

N = ceil(λ · Σ_i Q_i / G)

where ceil is the round-up function, λ is a correction factor less than 1, Q_i is the surplus computing resource of a single computing node, and G is the task amount of the task to be processed; and selecting the N computing nodes with the largest surplus computing resources as core computing nodes.
In an alternative manner, forming at least one core computing node into at least one computing cluster according to the task to be processed includes: acquiring, according to the task to be processed and the surplus computing resources of a core computing node, at least one computing node other than the core node, and forming these nodes together with the core computing node into a computing cluster.
In an alternative manner, selecting a computing cluster from the at least one computing cluster to process the task to be processed includes: obtaining the surplus computing resources of all computing nodes in each computing cluster as the cluster's surplus computing resources; and selecting one computing cluster to process the task according to the task to be processed and the clusters' surplus computing resources.
In an optional manner, a computing cluster is selected to process the task according to the task to be processed and the clusters' surplus computing resources, specifically: acquiring the amount of processing tasks already allocated to each computing cluster, and setting a competition function

ξ = Σ_i Q_i / Σ_T M_i

where Σ_T M_i is the amount of processing tasks already allocated to the computing cluster and Q_i is the surplus computing resource of a single computing node; and selecting a computing cluster to process the task according to the competition function value ξ.
In one alternative, the method further comprises: decomposing the task to be processed; distributing the decomposed tasks to the computing nodes in the computing cluster; and having each computing node distribute its decomposed tasks to different containers within the node for processing.
In an alternative manner, the computing node creates a new container according to the decomposed tasks and assigns them to the newly created container for processing.
In an alternative manner, the decomposed tasks are distributed to different containers in the computing node for processing, specifically:
the decomposed task set is T = {t_1, t_2, ..., t_n};
the set of potential task groups of a container a_i within the computing cluster is P_i = {p_i1, p_i2, ..., p_ij, ...};
a dissipation table S_i = {s_i1, s_i2, ..., s_ij, ...} of the container for the potential task set is obtained;
according to the dissipation table S_i, each task is given its minimum dissipation value, forming the minimum-dissipation table E = {e_1, e_2, ..., e_k, ..., e_n};
the task combination benefit weight of a potential task group p_ij is W_ij / s_ij, where W_ij = Σ_{t_k ∈ p_ij} e_k;
and the task group p_ij corresponding to the maximum value of W_ij / s_ij is assigned to container a_i for processing.
According to another aspect of the embodiment of the present invention, there is provided a task processing device including: the device comprises a first acquisition module, a second acquisition module, a first selection module, a forming module and a second selection module. The first acquisition module is used for acquiring the task to be processed. And the second acquisition module is used for acquiring surplus operation resources of the computing node according to the task to be processed. The first selecting module is used for selecting at least one computing node from the computing nodes as a core computing node according to the surplus computing resources of the computing nodes. And the forming module is used for forming at least one core computing node into at least one computing cluster according to the task to be processed. And the second selecting module is used for selecting one computing cluster from the at least one computing cluster to process the task to be processed.
In an optional manner, the second obtaining module is further configured to obtain a weight of a resource required by the task to be processed according to a type of the task to be processed, and calculate to obtain a surplus operation resource of the computing node according to the weight of the resource and a remaining resource of the computing node.
In an alternative manner, the first selection module is further configured to calculate the number N of required computing nodes according to the formula

N = ceil(λ · Σ_i Q_i / G)

where ceil is the round-up function, λ is a correction factor less than 1, Q_i is the surplus computing resource of a single computing node, and G is the task amount of the task to be processed; and to select the N computing nodes with the largest surplus computing resources as core computing nodes.
In an optional manner, the forming module is further configured to acquire, according to the task to be processed and the surplus computing resources of a core computing node, at least one computing node other than the core node, and to form these nodes together with the core computing node into a computing cluster.
In an optional manner, the second selecting module is further configured to obtain the surplus computing resources of all computing nodes in each of the at least one computing cluster as that cluster's surplus computing resources, and to select one computing cluster to process the task according to the task to be processed and the clusters' surplus computing resources.
In an optional manner, selecting a computing cluster for task processing according to the task to be processed and the clusters' surplus computing resources specifically includes: acquiring the amount of processing tasks already allocated to each computing cluster, and setting a competition function

ξ = Σ_i Q_i / Σ_T M_i

where Σ_T M_i is the amount of processing tasks already allocated to the computing cluster and Q_i is the surplus computing resource of a single computing node; and selecting a computing cluster to process the task according to the competition function value ξ.
In an alternative, the apparatus further comprises: the system comprises a decomposition module and an allocation module. The decomposition module is used for decomposing the task to be processed. The allocation module is used for allocating the decomposed tasks to the computing nodes in the computing cluster so that the computing nodes allocate the decomposed tasks to different containers in the computing nodes for processing.
In an alternative manner, the computing node creates a new container according to the decomposed task, and distributes the decomposed task to the created new container for task processing.
In an alternative manner, the decomposed task set is T = {t_1, t_2, ..., t_n}, and the set of potential task groups of a container a_i within the computing cluster is P_i = {p_i1, p_i2, ..., p_ij, ...}. The allocation module is further configured to obtain a dissipation table S_i = {s_i1, s_i2, ..., s_ij, ...} of the container for the potential task set; to give each task its minimum dissipation value according to the dissipation table S_i, forming the minimum-dissipation table E = {e_1, e_2, ..., e_k, ..., e_n}; to compute the task combination benefit weight of each potential task group p_ij as W_ij / s_ij, where W_ij = Σ_{t_k ∈ p_ij} e_k; and to assign the task group p_ij corresponding to the maximum value of W_ij / s_ij to container a_i for processing.
According to yet another aspect of an embodiment of the present invention, there is provided a computing device including: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus; the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute a task processing method.
According to yet another aspect of the present invention, there is provided a computer storage medium having stored therein at least one executable instruction for causing a processor to perform a task processing method.
According to the embodiment of the invention, at least one computing node is selected as a core computing node through the surplus computing resource of the computing node, at least one core computing node is formed into at least one computing cluster according to the task to be processed, and one computing cluster is selected from the at least one computing cluster to process the task to be processed. Therefore, effective allocation of resources is realized, and the task processing efficiency is improved.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention, and may be implemented according to the content of the specification, so that the technical means of the embodiments of the present invention can be more clearly understood, and the following specific embodiments of the present invention are given for clarity and understanding.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 shows an application scenario diagram of a task processing method according to an embodiment of the present invention;
FIG. 2 shows a flow chart of a task processing method provided by an embodiment of the present invention;
FIG. 3 is a flow chart of a task processing method according to another embodiment of the present invention;
fig. 4 is a schematic structural diagram of a task processing device according to an embodiment of the present invention;
FIG. 5 illustrates a schematic diagram of a computing device provided by an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 shows an application scenario diagram of a task processing method according to an embodiment of the present invention. The tasks in the embodiments of the invention can be of various types, such as weak-password verification tasks and user identity verification tasks, and in particular tasks that process large amounts of data and occupy many resources. As shown in Fig. 1, the method is applied to a computing center 10 of a central server. The computing center 10 communicates with a number of computing nodes 20, the devices on which the weak-password verification service resides. The computing center 10 includes a resource management module 101, a performance monitoring module 102, a data management module 103, and a task management module 104. The resource management module 101 manages the information of each computing node 20, such as its access mode, device performance, and usage during idle and busy hours. The performance monitoring module 102 collects the device resources and performance consumption of each computing node 20 in real time and sends the monitoring information to the task management module 104. The data management module 103 allocates tasks to each computing node 20 according to the number of weak-password dictionaries, the amount of resources, the number of accounts, and the computing capability of the devices, and calculates and summarizes the processing result of each task; the tasks allocated to each computing node 20 are issued by the task management module 104. Specifically, the data management module 103 notifies the task management module 104 of the object that is to perform the task, i.e. a specific computing node 20.
The computing node 20 sends the task processing result to the task management module 104, which forwards it to the data management module 103. The task management module 104 issues task scheduling instructions to available devices according to the device conditions, the task scheduling period, and the job content, generating scheduling jobs and task paths and performing weak-password verification. The task management module 104 also receives the monitoring information sent by the performance monitoring module 102, automatically adjusts the task paths and the task execution frequency on each path according to that information, and feeds the execution results back to the data management module 103.
The computing node 20 includes a number of containers 201 ("container" here denotes a computing unit within the computing node); when the computing node 20 receives an issued task, it assigns the task to the containers 201 for processing.
Fig. 2 shows a flowchart of a task processing method according to an embodiment of the present invention, as shown in fig. 2, the method includes the following steps:
step 110: and acquiring a task to be processed.
The task to be processed refers to a weak-password verification task received by the central server. It can be obtained from a security management platform or an identity and access management system, or from a storage unit of the central server.
Step 120: and obtaining surplus operation resources of the computing nodes according to the task to be processed.
The surplus computing resource refers to the remaining callable resources of a computing node, comprising its remaining CPU resources and remaining memory resources. Specifically, the surplus computing resource of a computing node is obtained by weighting its remaining CPU resources and remaining memory resources. The computing center can acquire the weights of the resources required by the task to be processed according to the task's type; the correspondence between weights and task types can be preset and stored in the resource management module of the computing center, or the unit issuing the task can inform the computing center of the weight of each required resource when the task is received. The computing center then computes the node's surplus computing resource as a weighted average over the remaining CPU resources and the remaining memory resources. In this weighted average, the weights of the remaining CPU resources and remaining memory resources depend on the type of the task to be processed: if the task mainly consumes CPU resources, the CPU weight is increased; if it mainly consumes memory resources, the memory weight is increased.
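As a hedged sketch of the weighted-average computation described above (the function name, weights, and node figures are illustrative assumptions, not values from the patent):

```python
def surplus_resource(cpu_free: float, mem_free: float,
                     cpu_weight: float, mem_weight: float) -> float:
    """Weighted average of a node's remaining CPU and memory resources."""
    return (cpu_weight * cpu_free + mem_weight * mem_free) / (cpu_weight + mem_weight)

# A CPU-heavy task type raises the CPU weight; a memory-heavy one raises
# the memory weight, as the description above suggests.
q_cpu_heavy = surplus_resource(cpu_free=0.6, mem_free=0.2,
                               cpu_weight=0.8, mem_weight=0.2)  # 0.52
q_mem_heavy = surplus_resource(cpu_free=0.6, mem_free=0.2,
                               cpu_weight=0.2, mem_weight=0.8)  # 0.28
```

The same node thus reports a larger surplus for a CPU-heavy task than for a memory-heavy one when its free CPU exceeds its free memory.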
Step 130: and selecting at least one computing node from the computing nodes as a core computing node according to the surplus computing resources of the computing nodes.
The computing center selects at least one computing node from the computing nodes as a core computing node according to the nodes' surplus computing resources. The number of core computing nodes depends on the size of the task. Preferably, the predicted surplus computing amounts of all the computing nodes are summed, multiplied by a correction coefficient, and divided by the task amount; the result is rounded up and used as the number of core computing nodes required by the task.
The specific calculation formula is:

N = ceil(λ · Σ_i Q_i / G)

where ceil is the round-up function, λ is a correction factor less than 1, Q_i is the surplus computing resource of a single computing node, and G is the task amount of the task to be processed;
and the N computing nodes with the largest surplus computing resources are selected as core computing nodes.
Considering that resource consumption rises exponentially when software runs, a certain amount of computing power is held in reserve through the correction coefficient; its preferred value range is 0.7 to 0.8. The correction coefficient is multiplied by the surplus computing resources of all computing nodes to obtain the predicted surplus computing amount of the nodes.
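A hedged sketch of the node-count rule described above: sum the nodes' surplus computing resources, apply the correction factor λ (preferably 0.7 to 0.8), divide by the task amount G, and round up. The function name and figures are illustrative, not from the patent.

```python
import math

def core_node_count(surpluses, task_amount, lam=0.75):
    """Number of core nodes N = ceil(lam * sum(Q_i) / G)."""
    return math.ceil(lam * sum(surpluses) / task_amount)

# Three nodes with surpluses 4, 3 and 5 against a task amount of 3:
n = core_node_count(surpluses=[4.0, 3.0, 5.0], task_amount=3.0)  # ceil(3.0) = 3
```

The N nodes with the largest surplus resources would then be picked as core nodes.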
Step S140: and forming at least one core computing node into at least one computing cluster according to the task to be processed.
According to the N core computing nodes selected in the preceding steps, computing clusters are formed around them: each computing cluster consists of one core computing node at its center and several non-core computing nodes. The computing nodes other than the core nodes are acquired according to the resources required by the task to be processed and the surplus computing resources of each core computing node, and these nodes together with the core computing node form a computing cluster. With N core computing nodes selected as described above, the computing center forms all computing nodes into N computing clusters centered on those N nodes.
Step S150: and selecting one computing cluster from the at least one computing cluster to process the task to be processed.
The selection of the computing cluster is determined according to the task to be processed and the surplus computing resources of the computing cluster, wherein the surplus computing resources of the computing cluster are the sum of the surplus computing resources of all computing nodes in the computing cluster.
The computing center is provided with a competition unit. Each core computing node sends the sum of the surplus computing amounts of the cluster in which it is located to the competition unit to participate in the competition; when the competition ends, one of the computing clusters is selected for task processing. The competition mainly considers the surplus computing amount of each cluster and its historical task amount over a period of time, and the competition result is also kept as a sample for each cluster.
Specifically, a competition function is preset in the competition unit:

ξ = Σ_i Q_i / Σ_T M_i

where Σ_T M_i is the amount of processing tasks already allocated to the computing cluster and Q_i is the surplus computing resource of a single computing node; a computing cluster is selected to process the task according to the competition function value ξ.
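A hedged sketch of cluster selection via a competition value: each cluster reports the sum of its nodes' surplus resources and the amount of tasks already allocated to it. The exact form of ξ below (surplus divided by allocated load, with a +1 guard against division by zero) is an assumption consistent with the description, not the patent's exact function; the cluster data are illustrative.

```python
def competition_value(surpluses, allocated):
    """xi for one cluster: total surplus relative to already-assigned load."""
    return sum(surpluses) / (1.0 + sum(allocated))

clusters = {
    "A": ([4.0, 3.0], [5.0]),  # busy cluster: xi = 7/6
    "B": ([4.0, 3.0], [1.0]),  # lightly loaded cluster: xi = 7/2
}
winner = max(clusters, key=lambda name: competition_value(*clusters[name]))  # "B"
```

The cluster with the most spare capacity relative to its existing load wins the competition and receives the task.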
In the invention, core computing nodes are selected according to each node's surplus computing resources; the core and non-core computing nodes form computing clusters, and the clusters are screened through the competition function so that the cluster with the largest remaining computing capacity is selected for task processing. Resources are thus allocated effectively and task-processing efficiency is improved.
Fig. 3 shows a flowchart of a task processing method according to another embodiment of the present invention, and as shown in fig. 3, compared with the previous embodiment, after step S150, the present embodiment further includes the following steps:
step 310: and decomposing the task to be processed.
After the computing center selects one computing cluster to process the task, in order to determine the allocation of the task in the computing cluster, the computing center decomposes the task and allocates the decomposed task to the computing nodes in the computing cluster respectively.
Step 320: and distributing the decomposed tasks to the computing nodes in one computing cluster, so that the computing nodes distribute the decomposed tasks to different containers in the computing nodes for processing.
After the tasks are assigned to the individual computing nodes within the computing cluster, they are executed in the nodes' containers. A computing node comprises at least one container, and the allocation of tasks among the containers within the node is determined by dissipation values; the goal of the allocation is to minimize the dissipation value the node pays to complete the tasks it has received. The dissipation value refers to the total resource consumption of completing an assigned task, specifically the weighted sum of the CPU resources and memory resources consumed.
The allocation of tasks among the containers within a computing node is determined as follows. Assume the decomposed task set is T = {t_1, t_2, ..., t_n}; all tasks in T are eventually distributed for execution in the containers of the computing nodes within the computing cluster. The tasks assigned to a certain computing node form a subset T_i of T. After the tasks in T_i are fully combined, the resulting potential task groups are distributed among the different containers within the node. Assume the set of potential task groups of a container a_i within the node is P_i = {p_i1, p_i2, ..., p_ij, ...}, and the container's dissipation table for the potential task set is S_i = {s_i1, s_i2, ..., s_ij, ...}. According to the dissipation table S_i, each task is given its minimum dissipation value, forming the minimum-dissipation table E = {e_1, e_2, ..., e_k, ..., e_n}. The task combination benefit weight of a potential task group p_ij is W_ij / s_ij, where W_ij = Σ_{t_k ∈ p_ij} e_k, and the task group p_ij corresponding to the maximum value of W_ij / s_ij is assigned to container a_i for processing.
Notably, after a potential task group $p_{ij}$ has been allocated, $p_{ij}$ and its corresponding weight are deleted from the set of potential task groups, and the allocation step is repeated until all task groups have been assigned to their containers and the set of potential task groups is empty.
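A minimal sketch of the greedy allocation loop described above. The data shapes and names are hypothetical, and the per-task minimum dissipation is approximated here by splitting each group's dissipation evenly across its tasks (an assumption; the patent derives it from the dissipation tables):

```python
def allocate_groups(containers):
    """containers: {container_id: {frozenset_of_task_ids: dissipation_value}}.
    Returns {container_id: chosen task group} using the benefit-weight rule W/s."""
    # Minimum dissipation of each task over all groups that include it
    # (even-split approximation of the patent's minimum dissipation table E).
    min_cost = {}
    for groups in containers.values():
        for group, s in groups.items():
            per_task = s / len(group)
            for t in group:
                min_cost[t] = min(min_cost.get(t, per_task), per_task)

    assignment = {}
    assigned = set()
    for cid, groups in containers.items():
        best, best_w = None, -1.0
        for group, s in groups.items():
            if group & assigned:            # skip tasks already placed elsewhere
                continue
            w = sum(min_cost[t] for t in group) / s   # benefit weight W/s
            if w > best_w:
                best, best_w = group, w
        if best is not None:
            assignment[cid] = best
            assigned |= best                # remove the group's tasks from the pool
    return assignment
```

Removing assigned tasks before the next round mirrors the deletion step in the paragraph above.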
In other embodiments, when the task volume is large, the computing node creates new containers for the decomposed tasks and assigns the tasks to the newly created containers for processing. Specifically, when the task volume is large, a waiting queue builds up; based on the current queue state, a container-creation request is issued, the central server creates a new container on the computing cluster through Docker according to the request, and the tasks in the waiting queue are sent to the new container for processing.
Notably, while the new container is processing tasks, the current queue state is monitored continuously. If no pending tasks remain in the queue, the task volume is considered to have returned to normal; the newly created container is then destroyed and its resources are released.
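The scale-up and scale-down behavior described in the two paragraphs above can be sketched as follows. The interface is hypothetical; in the embodiment, creation goes through the central server and Docker:

```python
class ContainerScaler:
    """Creates an extra container while a waiting queue exists, destroys it when empty."""

    def __init__(self, create_container, destroy_container):
        self.create = create_container    # callback: () -> container handle
        self.destroy = destroy_container  # callback: (handle) -> None
        self.extra = None                 # the one extra container, if any

    def on_queue_state(self, waiting_tasks):
        if waiting_tasks and self.extra is None:
            self.extra = self.create()    # queue built up: request a new container
        elif not waiting_tasks and self.extra is not None:
            self.destroy(self.extra)      # queue drained: release resources
            self.extra = None
```

Polling `on_queue_state` with the current queue corresponds to the continuous queue monitoring described above.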
In the embodiments of the invention, the tasks are decomposed and the decomposed tasks are distributed to different containers within the computing node according to each container's dissipation value, which reduces the consumption of computing resources. In addition, when the task volume is large, new containers are created to process the tasks, which improves task processing efficiency.
Fig. 4 shows a schematic structural diagram of an embodiment of a task processing device according to the present invention. As shown in Fig. 4, the task processing device 400 includes: a first obtaining module 410, a second obtaining module 420, a first selecting module 430, a forming module 440, and a second selecting module 450. The first obtaining module 410 is configured to obtain a task to be processed. The second obtaining module 420 is configured to obtain the surplus computing resources of the computing nodes according to the task to be processed. The first selecting module 430 is configured to select at least one computing node as a core computing node according to the surplus computing resources of the computing nodes. The forming module 440 is configured to form at least one computing cluster from the at least one core computing node according to the task to be processed. The second selecting module 450 is configured to select one computing cluster from the at least one computing cluster to process the task to be processed.
In an optional manner, the second obtaining module 420 is further configured to obtain the weights of the resources required by the task to be processed according to the type of the task, and to calculate the surplus computing resources of each computing node from the resource weights and the node's remaining resources.
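One plausible reading of this type-dependent weighting (the weight values per task type are assumptions for the sketch) is:

```python
# Assumed resource weights per task type: how CPU- vs memory-bound the demand is.
TASK_WEIGHTS = {"compute": (0.8, 0.2), "io": (0.3, 0.7)}

def surplus_resource(task_type, free_cpu, free_mem):
    """Surplus computing resource of a node, as seen by a task of the given type."""
    w_cpu, w_mem = TASK_WEIGHTS[task_type]
    return w_cpu * free_cpu + w_mem * free_mem
```

Under this reading, the same node can present different surplus values to different task types.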
In an alternative manner, the first selecting module 430 is further configured to calculate, according to the formula
$$N=\mathrm{ceil}\!\left(\lambda\cdot\frac{G}{Q_i}\right),$$
the number $N$ of required computing nodes, where ceil is the round-up function, $\lambda$ is a correction factor less than 1, $G$ is the task volume of the task to be processed, and $Q_i$ is the surplus computing resource of a single computing node; and to select the $N$ computing nodes with the largest surplus computing resources as core computing nodes.
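The patent computes the number N of required core nodes from the task volume G, a correction factor λ < 1, and the nodes' surplus resources Q_i (the exact formula is rendered as an image in the source). A sketch under the assumption that N = ceil(λ·G / Q̄), with Q̄ the average surplus per node:

```python
import math

def select_core_nodes(task_volume, nodes, lam=0.9):
    """nodes: {node_id: surplus_resource}. Returns the N nodes with most surplus."""
    avg_surplus = sum(nodes.values()) / len(nodes)  # assumption: average the Q_i
    n = math.ceil(lam * task_volume / avg_surplus)
    ranked = sorted(nodes, key=nodes.get, reverse=True)
    return ranked[:n]
```

Both the use of the average and the default `lam=0.9` are placeholders, not values from the patent.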
In an alternative manner, the forming module 440 is further configured to obtain, according to the task to be processed and the surplus computing resources of one core computing node, at least one computing node other than the core computing node; the obtained computing nodes and the core computing node form a computing cluster.
In an optional manner, the second selecting module 450 is further configured to obtain the surplus computing resources of all computing nodes in the at least one computing cluster as the cluster's surplus computing resources, and to select one computing cluster for task processing according to the task to be processed and the clusters' surplus computing resources.
In an optional manner, selecting one computing cluster for task processing according to the task to be processed and the surplus computing resources of the computing clusters specifically includes: acquiring the amount of processing tasks already allocated to each computing cluster, and setting the competition function
$$\xi=\sum_i Q_i-\sum_T M_i,$$
where $\sum_T M_i$ is the amount of processing tasks already allocated to a computing cluster and $Q_i$ is the surplus computing resource of a single computing node; and selecting a computing cluster for task processing according to the competition-function value $\xi$.
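The competition function ξ is rendered as an image in the source; assuming it measures a cluster's remaining capacity as total node surplus minus already-allocated task volume (an interpretation consistent with the summary that the cluster with the largest residual capacity is chosen), cluster selection reduces to:

```python
def pick_cluster(clusters):
    """clusters: {cluster_id: (node_surpluses, allocated_task_volumes)}.
    Returns the cluster with the largest competition value xi."""
    def xi(cid):
        surpluses, allocated = clusters[cid]
        return sum(surpluses) - sum(allocated)  # remaining capacity
    return max(clusters, key=xi)
```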
In an alternative manner, the apparatus further includes a decomposition module 460 and an allocation module 470. The decomposition module 460 is configured to decompose the task to be processed. The allocation module 470 is configured to allocate the decomposed tasks to the computing nodes in the selected computing cluster, so that each computing node allocates its decomposed tasks to different containers within the node for processing.
In an alternative manner, the computing node creates new containers according to the decomposed tasks, and distributes the decomposed tasks to the newly created containers for processing.
In an alternative manner, the decomposed task set is $T=\{t_1,t_2,\dots,t_n\}$, and the set of potential task groups of each container $a_i$ within the computing cluster is $P_i=\{p_{i1},p_{i2},\dots,p_{ij},\dots\}$. The allocation module is further configured to obtain the container's dissipation table $S_i=\{s_{i1},s_{i2},\dots,s_{ij},\dots\}$ for the potential task groups; to obtain from the dissipation table $S_i$ the minimum dissipation value of each task, forming the minimum dissipation table $E=\{e_1,e_2,\dots,e_k,\dots,e_n\}$; to compute the task-combination benefit weight $W_{ij}/s_{ij}$ of each potential task group $p_{ij}$, where
$$W_{ij}=\sum_{t_k\in p_{ij}} e_k;$$
and to assign the task group $p_{ij}$ whose value of $W_{ij}/s_{ij}$ is maximal to container $a_i$ for processing.
In the invention, the second obtaining module 420 obtains the surplus computing resources of each computing node, the first selecting module 430 selects the core computing nodes, and the forming module 440 forms computing clusters from the core and non-core computing nodes; the clusters are then screened with the competition function so that the cluster with the largest remaining computing capacity handles the task. This realizes effective allocation of resources and improves task processing efficiency.
Embodiments of the present invention provide a computer program product comprising a computer program stored on a computer storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the steps of any of the method embodiments described above.
Embodiments of the present invention provide a non-transitory computer storage medium storing at least one executable instruction for performing the steps of any of the method embodiments described above.
FIG. 5 illustrates a schematic diagram of one embodiment of a computing device, and embodiments of the invention are not limited to a particular implementation of the computing device.
As shown in fig. 5, the computing device may include: a processor 502, a communication interface (Communications Interface) 504, a memory 506, and a communication bus 508.
Wherein: processor 502, communication interface 504, and memory 506 communicate with each other via communication bus 508. A communication interface 504 for communicating with network elements of other devices, such as clients or other servers. The processor 502 is configured to execute the program 510, and may specifically perform the relevant steps described above for any of the method embodiments.
In particular, program 510 may include program code including computer-operating instructions.
The processor 502 may be a central processing unit CPU, or a specific integrated circuit ASIC (Application Specific Integrated Circuit), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included by the computing device may be the same type of processor, such as one or more CPUs; but may also be different types of processors such as one or more CPUs and one or more ASICs.
A memory 506 for storing a program 510. Memory 506 may comprise high-speed RAM memory or may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 510 may be specifically operable to cause the processor 502 to:
acquiring a task to be processed;
obtaining surplus operation resources of the computing nodes according to the task to be processed;
selecting at least one computing node from the computing nodes as a core computing node according to the surplus computing resources of the computing nodes;
forming at least one core computing node into at least one computing cluster according to the task to be processed;
and selecting one computing cluster from the at least one computing cluster to process the task to be processed.
In an alternative, the program 510 further causes the processor 502 to:
and acquiring the weight of the resource required by the task to be processed according to the type of the task to be processed, and calculating the surplus operation resource of the calculation node according to the weight of the resource and the residual resource of the calculation node.
In an alternative, the program 510 further causes the processor 502 to:
according to the formula:
$$N=\mathrm{ceil}\!\left(\lambda\cdot\frac{G}{Q_i}\right)$$
calculate the number $N$ of required computing nodes, wherein ceil is the round-up function, $\lambda$ is a correction factor less than 1, $G$ is the task volume of the task to be processed, and $Q_i$ is the surplus computing resource of a single computing node;
and select the $N$ computing nodes with the largest surplus computing resources as core computing nodes.
In an alternative, the program 510 further causes the processor 502 to:
and acquiring at least one computing node other than the core computing node according to the task to be processed and the surplus computing resources of the core computing node, the acquired computing nodes and the core computing node forming a computing cluster.
In an alternative, the program 510 further causes the processor 502 to:
obtaining surplus operation resources of all computing nodes in the at least one computing cluster as the surplus operation resources of the computing cluster;
and selecting one computing cluster to process the task according to the task to be processed and the computing cluster surplus operation resource.
In an alternative, the program 510 further causes the processor 502 to:
acquiring the task quantity of the distributed processing tasks of all the computing clusters, and setting a competition function:
$$\xi=\sum_i Q_i-\sum_T M_i$$
wherein $\sum_T M_i$ is the amount of processing tasks already allocated to a computing cluster, and $Q_i$ is the surplus computing resource of a single computing node;
and select a computing cluster for task processing according to the competition-function value $\xi$.
In an alternative, the program 510 further causes the processor 502 to:
decomposing the task to be processed;
and distributing the decomposed tasks to the computing nodes in the computing cluster, so that the computing nodes distribute the decomposed tasks to different containers in the computing nodes for processing.
In an alternative manner, the computing node creates new containers according to the decomposed tasks, and distributes the decomposed tasks to the newly created containers for processing.
In one alternative, the program 510 further causes the processor 502 to:
the decomposed task set is $T=\{t_1,t_2,\dots,t_n\}$;
the set of potential task groups of each container $a_i$ within the computing cluster is $P_i=\{p_{i1},p_{i2},\dots,p_{ij},\dots\}$;
obtain the container's dissipation table $S_i=\{s_{i1},s_{i2},\dots,s_{ij},\dots\}$ for the potential task groups;
obtain from the dissipation table $S_i$ the minimum dissipation value of each task, forming the minimum dissipation table $E=\{e_1,e_2,\dots,e_k,\dots,e_n\}$;
compute the task-combination benefit weight $W_{ij}/s_{ij}$ of each potential task group $p_{ij}$, where
$$W_{ij}=\sum_{t_k\in p_{ij}} e_k;$$
and assign the task group $p_{ij}$ whose value of $W_{ij}/s_{ij}$ is maximal to container $a_i$ for processing.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The required structure for a construction of such a system is apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specifically stated.

Claims (12)

1. A method of task processing, comprising:
acquiring a task to be processed;
obtaining surplus operation resources of the computing nodes according to the task to be processed;
selecting at least one computing node from the computing nodes as a core computing node according to the surplus computing resources of the computing nodes;
forming a corresponding computing cluster by taking the core computing node as a center according to the task to be processed to obtain at least one computing cluster;
and selecting one computing cluster from the at least one computing cluster to process the task to be processed.
2. The task processing method according to claim 1, wherein the obtaining the surplus operation resources of the computing node according to the task to be processed includes:
and acquiring the weight of the resource required by the task to be processed according to the type of the task to be processed, and calculating the surplus operation resource of the calculation node according to the weight of the resource and the residual resource of the calculation node.
3. The task processing method according to claim 1, wherein the selecting at least one computing node from the computing nodes as a core computing node according to the spare computing resources of the computing nodes includes:
according to the formula:
$$N=\mathrm{ceil}\!\left(\lambda\cdot\frac{G}{Q_i}\right)$$
calculating the number $N$ of required computing nodes, wherein ceil is the round-up function, $\lambda$ is a correction factor less than 1, $G$ is the task volume of the task to be processed, and $Q_i$ is the surplus computing resource of a single computing node;
and selecting the $N$ computing nodes with the largest surplus computing resources as core computing nodes.
4. A task processing method according to claim 3, wherein said forming at least one core computing node into at least one computing cluster according to the task to be processed comprises:
and acquiring at least one computing node other than the core computing node according to the task to be processed and the surplus computing resources of the core computing node, the acquired computing nodes and the core computing node forming a computing cluster.
5. The task processing method according to claim 1 or 4, wherein selecting one computing cluster from the at least one computing cluster to process the task to be processed comprises:
obtaining surplus operation resources of all computing nodes in the at least one computing cluster as the surplus operation resources of the computing cluster;
and selecting one computing cluster to process the task according to the task to be processed and the computing cluster surplus operation resource.
6. The method for processing tasks according to claim 5, wherein selecting a computing cluster for processing tasks according to the task to be processed and the computing cluster surplus operation resource comprises:
acquiring the task quantity of the distributed processing tasks of all the computing clusters, and setting a competition function:
$$\xi=\sum_i Q_i-\sum_T M_i$$
wherein $\sum_T M_i$ is the amount of processing tasks already allocated to a computing cluster, and $Q_i$ is the surplus computing resource of a single computing node;
and selecting a computing cluster for task processing according to the competition-function value $\xi$.
7. A task processing method according to claim 1 or 6, characterized in that the method further comprises:
decomposing the task to be processed;
distributing the decomposed tasks to computing nodes in the computing cluster;
the computing node distributes the decomposed tasks to different containers in the computing node for processing.
8. The task processing method according to claim 7, characterized in that the method further comprises:
and the computing node creates a new container according to the decomposed task, and distributes the decomposed task to the created new container for task processing.
9. The task processing method according to claim 8, wherein allocating the decomposed tasks to different containers in the computing node for processing specifically comprises:
the decomposed task set is $T=\{t_1,t_2,\dots,t_n\}$;
the set of potential task groups of each container $a_i$ within the computing cluster is $P_i=\{p_{i1},p_{i2},\dots,p_{ij},\dots\}$;
obtaining the container's dissipation table $S_i=\{s_{i1},s_{i2},\dots,s_{ij},\dots\}$ for the potential task groups;
obtaining from the dissipation table $S_i$ the minimum dissipation value of each task, forming the minimum dissipation table $E=\{e_1,e_2,\dots,e_k,\dots,e_n\}$;
computing the task-combination benefit weight $W_{ij}/s_{ij}$ of each potential task group $p_{ij}$, where
$$W_{ij}=\sum_{t_k\in p_{ij}} e_k;$$
and assigning the task group $p_{ij}$ whose value of $W_{ij}/s_{ij}$ is maximal to container $a_i$ for processing.
10. A task processing device, comprising:
a first acquisition module: the method is used for acquiring a task to be processed;
and a second acquisition module: the surplus operation resource is used for acquiring the computing node according to the task to be processed;
the first selection module: the method comprises the steps of selecting at least one computing node from the computing nodes as a core computing node according to the surplus computing resources of the computing nodes;
and (3) forming a module: the core computing nodes are used for forming corresponding computing clusters by taking the core computing nodes as the center according to the task to be processed, so that at least one computing cluster is obtained;
the second selecting module: and the processing module is used for selecting one computing cluster from the at least one computing cluster to process the task to be processed.
11. A computing device, comprising: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is configured to hold at least one executable instruction that causes the processor to perform the method of any one of claims 1-9.
12. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform the method of any one of claims 1-9.
CN201910357047.6A 2019-04-29 2019-04-29 Task processing method, device, computing equipment and computer storage medium Active CN111866043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910357047.6A CN111866043B (en) 2019-04-29 2019-04-29 Task processing method, device, computing equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN111866043A CN111866043A (en) 2020-10-30
CN111866043B true CN111866043B (en) 2023-04-28

Family

ID=72966381


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012003007A1 (en) * 2010-06-29 2012-01-05 Exxonmobil Upstream Research Company Method and system for parallel simulation models
CN103701629A (en) * 2013-11-27 2014-04-02 北京神州泰岳软件股份有限公司 Weak password analysis method and system
CN105808346A (en) * 2014-12-30 2016-07-27 华为技术有限公司 Task scheduling method and device
CN108924214A (en) * 2018-06-27 2018-11-30 中国建设银行股份有限公司 A kind of load-balancing method of computing cluster, apparatus and system
CN109408236A (en) * 2018-10-22 2019-03-01 福建南威软件有限公司 A kind of task load equalization methods of ETL on cluster

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9887937B2 (en) * 2014-07-15 2018-02-06 Cohesity, Inc. Distributed fair allocation of shared resources to constituents of a cluster


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Management of distributed resource allocations in multi-cluster environments";Ewnetu Bayuh Lakew del;《2012 IEEE 31st International Performance Computing and Communications Conference (IPCCC)》;20130110;全文 *
"基于负载均衡度的云计算任务调度算法";叶波;《东北电力大学学报》;20190228;全文 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant