CN111240848A - Task allocation processing method and system - Google Patents

Task allocation processing method and system

Info

Publication number
CN111240848A
Authority
CN
China
Prior art keywords
task
devices
processes
target
running
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010092802.5A
Other languages
Chinese (zh)
Inventor
彭远权
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010092802.5A priority Critical patent/CN111240848A/en
Publication of CN111240848A publication Critical patent/CN111240848A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5066 Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Retry When Errors Occur (AREA)

Abstract

The application discloses a task allocation processing method and system. The method includes: when the allocation processing condition of a target task is satisfied, a target device divides the target task into subtasks run by at least two processes; the target device allocates the running of the at least two processes to at least two devices; the at least two devices run the allocated processes; and when any one of the at least two devices encounters an exception while running its allocated processes, a device among the at least two devices that has not encountered the exception hosts and runs the not-yet-run processes of the abnormal device. With the technical solution provided by the application, the stability and disaster tolerance capability of the whole service system can be greatly improved.

Description

Task allocation processing method and system
Technical Field
The present application relates to the field of internet communications technologies, and in particular, to a method and a system for task allocation processing.
Background
With the development of internet communication technology, the internet has become widely used in people's daily learning, work and life, and a large number of matters that used to be handled offline are now handled online. As services keep growing, an internet service system needs to process more and more transactions, which poses a great challenge to the processing performance of the devices in the system.
In some existing service systems, corresponding tasks are created for a large number of transactions, the tasks to be executed are assigned to devices in advance, and a serial mode is adopted during task processing: one task is processed at a time, and the next task is started only after the current task is completed. In this existing scheme, if the computation amount of a certain task is large, the CPU and memory utilization of the whole device easily becomes too high and the device may go down. For example, in a fund brokerage service system, a fund transaction is often treated as one task for the large number of brokered funds, and a device is designated to execute the task; for instance, the daily revenue posting of a fund is treated as one task. When a fund has many clients, the device processing the daily revenue posting task of that fund suffers excessively high CPU and memory utilization, which affects the operation of the whole device, and the stability and disaster tolerance capability of the whole service system are poor. Therefore, there is a need for a more reliable or efficient solution.
Disclosure of Invention
The application provides a task allocation processing method and system, which can greatly improve the stability and disaster tolerance capability of the whole service system.
In one aspect, the present application provides a task allocation processing method, where the method includes:
when the allocation processing condition of a target task is satisfied, a target device divides the target task into subtasks run by at least two processes;
the target device allocates the running of the at least two processes to at least two devices;
the at least two devices run the allocated processes;
when any one of the at least two devices encounters an exception while running its allocated processes, a device among the at least two devices that has not encountered the exception hosts and runs the not-yet-run processes of the abnormal device.
Another aspect provides a task allocation processing system, including:
the target device is used for, when the allocation processing condition of a target task is satisfied, dividing the target task into subtasks run by at least two processes, and allocating the running of the at least two processes to at least two devices;
the at least two devices are used for running the allocated processes; and when any one of the at least two devices encounters an exception while running its allocated processes, a device among the at least two devices that has not encountered the exception is used for hosting and running the not-yet-run processes of the abnormal device.
Another aspect provides a task allocation processing apparatus, including:
a target task dividing module, configured to divide a target task into subtasks run by at least two processes when the allocation processing condition of the target task is satisfied;
a process allocation module, configured to allocate the running of the at least two processes to at least two devices, where the at least two devices include the target device;
a process running module, configured to run the allocated processes;
and a hosted running module, configured to, when a device other than the local device among the at least two devices encounters an exception while running its allocated processes, host and run the not-yet-run processes of the abnormal device.
Another aspect provides a task allocation processing device, which includes a processor and a memory, where at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the task allocation processing method as described above.
Another aspect provides a computer-readable storage medium, in which at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the task allocation processing method as described above.
The task allocation processing method and the task allocation processing system have the following technical effects:
the method and the device split the target task into the subtasks operated by at least two processes, and allocate the operation of the at least two processes to at least two devices for execution, so that the influence of overhigh utilization rate of CPU and memory resources of the devices on the whole devices is avoided; when any equipment is abnormal, the equipment which is not abnormal can host the non-running process of the equipment which is abnormal in running, the success rate of task execution is effectively guaranteed, and the stability and disaster tolerance capability of the whole business system for dealing with a large number of tasks are greatly improved.
Drawings
In order to more clearly illustrate the technical solutions and advantages of the embodiments of the present application or the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic diagram of a task allocation processing system provided in an embodiment of the present application;
fig. 2 is an alternative structural diagram of the distributed system 200 applied to the blockchain system according to the embodiment of the present application;
fig. 3 is a schematic flowchart of a task allocation processing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart illustrating a target device dividing a target task into subtasks executed by at least two processes according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an allocation operation provided by an embodiment of the present application;
FIG. 6 is a schematic flow chart of another allocation operation provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of another allocation operation provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a task assignment process provided in an embodiment of the present application;
FIG. 9 is a schematic diagram of another task assignment process provided by embodiments of the present application;
FIG. 10 is a schematic flow chart of a task allocation processing system according to an embodiment of the present application;
fig. 11 is a schematic block structure diagram of a task allocation processing device according to an embodiment of the present application;
fig. 12 is a block diagram of a hardware structure of a server of a task allocation processing method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, fig. 1 is a schematic diagram of a task allocation processing system according to an embodiment of the present application, and as shown in fig. 1, the system at least includes a device 01, a device 02, and a database 03.
In this embodiment, the device 01 and the device 02 may be configured to perform task allocation and task processing. Specifically, the device 01 and the device 02 may be independent physical servers, a server cluster or a distributed system formed by a plurality of physical servers, or cloud servers that provide basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), and big data and artificial intelligence platforms.
In other embodiments, devices 01 and 02 may also include terminal devices of the type of smartphones, desktop computers, tablet computers, laptops, smart speakers, digital assistants, Augmented Reality (AR)/Virtual Reality (VR) devices, smart wearable devices, and so forth.
In this embodiment, the database 03 may be configured to store a task file, a task table (the task table may be used to record an allocation state of a task), and a process table (the process table may be used to record an operation state of each process of the task), which are required to process the task; in particular, the database 03 may include, but is not limited to, MySQL (relational database management system), MongoDB (database based on distributed file storage), and the like.
In the embodiment of the present specification, the device 01 and the device 02 may be deployed in the same city, and preferably may be deployed in different locations.
In addition, it should be noted that fig. 1 is only an example; in practical applications, task allocation and task processing may be performed by different devices, for example, at least two devices are used to allocate tasks and at least two other devices are used to process tasks.
Further, the task allocation processing system according to the embodiment of the present application may be a distributed system formed by connecting a client, a plurality of nodes (any form of computing devices in an access network, such as servers and user terminals) through a network communication form.
Taking a distributed system as a blockchain system as an example, referring to fig. 2, fig. 2 is an optional structural schematic diagram of the distributed system 200 applied to the blockchain system provided in this embodiment of the present application. The system is formed by a plurality of nodes (computing devices in any form in an access network, such as servers and user terminals) and clients; a peer-to-peer (P2P, Peer to Peer) network is formed between the nodes, and the P2P protocol is an application-layer protocol running on top of the Transmission Control Protocol (TCP). In a distributed system, any machine, such as a server or a terminal, can join and become a node; a node comprises a hardware layer, a middle layer, an operating system layer, and an application layer.
Referring to the functions of each node in the blockchain system shown in fig. 2, the functions involved include:
1) routing, a basic function that a node has, is used to support communication between nodes.
Besides the routing function, the node may also have the following functions:
2) the application, which is deployed in the blockchain and implements specific services according to actual service requirements; it records data related to the implemented functions to form record data, carries a digital signature in the record data to indicate the source of the record data, and sends the record data to other nodes in the blockchain system, so that the other nodes add the record data to a temporary block when the source and integrity of the record data are verified successfully.
For example, the services implemented by the application include:
2.1) a wallet, which provides the function of transacting electronic money, including initiating a transaction (i.e., sending the transaction record of the current transaction to other nodes in the blockchain system; after the other nodes verify it successfully, the record data of the transaction is stored in a temporary block of the blockchain as a response confirming that the transaction is valid); of course, the wallet also supports querying the electronic money remaining at an electronic money address;
2.2) a shared ledger, which provides functions such as storing, querying, and modifying account data; the record data of operations on the account data is sent to other nodes in the blockchain system, and after the other nodes verify its validity, the record data is stored in a temporary block as a response acknowledging that the account data is valid, and a confirmation may also be sent to the node that initiated the operation;
2.3) smart contracts, computerized agreements that can enforce the terms of a contract, implemented by code deployed on the shared ledger and executed when certain conditions are met, used to complete automated transactions according to actual service requirements, for example querying the logistics status of goods purchased by a buyer, or transferring the buyer's electronic money to the merchant's address after the buyer signs for the goods; of course, smart contracts are not limited to contracts for executing transactions and may also execute contracts that process received information.
3) the blockchain, which comprises a series of blocks connected to each other in the chronological order in which they were generated; new blocks cannot be removed once added to the blockchain, and the blocks record the record data submitted by the nodes in the blockchain system.
A task allocation processing method according to the present application is described below. Fig. 3 is a schematic flow chart of a task allocation processing method according to an embodiment of the present application. The present specification provides the method operation steps as shown in the embodiment or the flow chart, but more or fewer operation steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution. When an actual system or server product executes the method, the steps may be executed sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) according to the embodiment or the figures. Specifically, as shown in fig. 3, the method may include:
S301: when the allocation processing condition of the target task is satisfied, the target device divides the target task into subtasks run by at least two processes.
In practical applications, at least two devices for distributing tasks may be included, so that when an abnormal failure occurs in a device, the task distribution can still be realized by other devices. Specifically, the target device may include a device that performs assignment of the target task.
In a specific embodiment, as shown in fig. 4, the dividing, by the target device, the target task into subtasks run by at least two processes may include:
s3011: the target equipment acquires a task file of the target task;
s3013: the target device determines a subfile size threshold or a subfile quantity threshold;
s3015: the target equipment divides the task file into at least two subfiles based on the subfile size threshold or the subfile quantity threshold, wherein each subfile corresponds to one subtask;
s3017: and the target equipment creates a process for the subtask corresponding to each subfile.
In this embodiment of the present specification, the subfile size threshold and the subfile number threshold may be preset in combination with the size of the task file in the actual service, device performance, and the like, so as to ensure that the data processing amount of the split subtasks is relatively balanced. Further, in this embodiment of the present specification, the subfiles of the subtasks in the database may be striped to avoid disk conflicts when the subfiles are accessed.
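As a rough illustration of this splitting step (S3011 to S3017), the Python sketch below splits a task file into balanced subfiles using a line-count threshold; the function names, the line-based strategy, and the Subtask structure are illustrative assumptions rather than part of the patent.

```python
import math
from dataclasses import dataclass
from pathlib import Path
from typing import List

@dataclass
class Subtask:
    index: int      # position of the subtask within the target task
    subfile: Path   # subfile that the corresponding process will read

def split_task_file(task_file: Path, out_dir: Path,
                    max_lines_per_subfile: int = 100_000) -> List[Subtask]:
    """Split the task file into subfiles so that the data processing amount
    of the resulting subtasks is roughly balanced (a line-count threshold
    stands in for the subfile size / subfile number threshold)."""
    lines = task_file.read_text(encoding="utf-8").splitlines(keepends=True)
    n_subfiles = max(1, math.ceil(len(lines) / max_lines_per_subfile))
    out_dir.mkdir(parents=True, exist_ok=True)

    subtasks = []
    for i in range(n_subfiles):
        chunk = lines[i * max_lines_per_subfile:(i + 1) * max_lines_per_subfile]
        subfile = out_dir / f"{task_file.stem}.part{i}"
        subfile.write_text("".join(chunk), encoding="utf-8")
        subtasks.append(Subtask(index=i, subfile=subfile))
    return subtasks  # one process is then created per subtask
```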
In practical applications, when a target task is an independent task, that is, a task that does not depend on the completion of other tasks (for example, in a fund service system, for the single-day revenue posting task of a certain fund, the revenue posting of each user does not depend on the completion of the revenue posting of other users), the allocation processing condition is satisfied as long as the target task has not yet been allocated.
In other scenarios, some tasks are dependent, that is, a task has a pre-task. Specifically, for example, in a fund service, a fund corresponds to 2 tasks: one is asset reconciliation and the other is revenue posting. Because the revenue posting task depends on the asset reconciliation task, the two tasks have an execution order, and the revenue posting task can only be allocated after the asset reconciliation task has completed. Accordingly, the asset reconciliation task is the pre-task of the revenue posting task, and the allocation processing condition of the revenue posting task may be that the revenue posting task has not been allocated and the corresponding asset reconciliation task has completed.
In this embodiment of the specification, when the target task is a task that has a pre-task, before the target device divides the target task into subtasks run by at least two processes, the method may further include:
1) the target device confirms the completion status of the pre-task based on the process table of the pre-task;
2) when the pre-task has completed, the target device determines that the allocation processing condition of the target task is satisfied.
In the embodiment of the present specification, in order to ensure that each device knows whether a task in the service system has been allocated and whether it has completed, a task table and a process table may be maintained. Specifically, the task table may be used to record the allocation status of a task; accordingly, when a task has been allocated, its allocation status may be allocated, and when a task has not been allocated, its allocation status may be unallocated. A device may perform allocation processing on a task whose allocation status is unallocated.
Specifically, the process table may be used to record the running state of each process of a task; accordingly, when a process has finished, its running state may be completed, and when a process has not finished, its running state may be incomplete.
Further, when the running states of all processes of a task are completed, it may be determined that the task has completed.
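The task table and process table lookups described above can be pictured with the small Python sketch below; the in-memory dictionaries, the field values ("allocated", "completed"), and the helper names are assumptions standing in for the real database tables.

```python
from typing import Dict, Optional

# Illustrative in-memory stand-ins for the task table and the process table.
task_table: Dict[str, Dict[str, str]] = {
    "asset_reconciliation": {"status": "allocated"},
    "revenue_posting": {"status": "unallocated"},
}
process_table: Dict[str, Dict[str, str]] = {
    "asset_reconciliation": {"process1": "completed", "process2": "completed"},
    "revenue_posting": {},
}

def task_completed(task_id: str) -> bool:
    """A task is completed once the running state of all its processes is completed."""
    states = process_table.get(task_id, {})
    return bool(states) and all(s == "completed" for s in states.values())

def allocation_condition_met(task_id: str, pre_task_id: Optional[str] = None) -> bool:
    """A task may be allocated if it is still unallocated and, when it has a
    pre-task, that pre-task has completed (cf. the revenue posting example)."""
    if task_table[task_id]["status"] != "unallocated":
        return False
    return pre_task_id is None or task_completed(pre_task_id)

# The revenue posting task depends on the asset reconciliation task.
print(allocation_condition_met("revenue_posting", "asset_reconciliation"))  # True
```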
S303: the target device allocates execution of the at least two processes to at least two devices.
In some scenarios, the at least two processes corresponding to one task may be independent of each other. For example, for the revenue posting task of a certain fund on a certain day, each process corresponds to the revenue posting subtask of a portion of the fund's users, and because the revenue posting of each user accesses files that are independent of each other, the corresponding processes are also independent of each other. Accordingly, the at least two processes of the task may be split and allocated to different devices to run separately. As shown in fig. 5, for example, the revenue posting task of the fund on a certain day corresponds to 6 processes, and the at least two devices include a device A and a device B; 3 processes (process 1, process 2, process 3) of the revenue posting task may be allocated to device A to run, and the other 3 processes (process 4, process 5, and process 6) may be allocated to device B to run. Further, when processes are allocated to the corresponding devices, the files in the database required to run the allocated processes may also be allocated to the devices. In conjunction with fig. 5, files 1, 2, and 3 required to run processes 1, 2, and 3 (the allocated processes) may be allocated to device A, and files 4, 5, and 6 required to run processes 4, 5, and 6 (the allocated processes) may be allocated to device B.
In a specific embodiment, when the at least two processes corresponding to a task need to be split and allocated to at least two devices to run, considering that different devices may differ in performance, the process allocation may be performed in combination with device performance. Specifically, the target device allocating the running of the at least two processes to the at least two devices may include:
1) the target device obtains the process allocation weight of each of the at least two devices;
2) the target device allocates the running of the at least two processes to the at least two devices based on the process allocation weight of each device.
In this embodiment of the present specification, the process allocation weight may be set in combination with device performance; specifically, the process allocation weight of a device with better performance is greater than that of a device with worse performance.
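One possible weighted allocation is sketched below; the quota-based distribution and all names are assumptions, since the patent does not prescribe a specific algorithm, only that higher-weight (better-performing) devices receive more processes.

```python
from typing import Dict, List

def allocate_by_weight(processes: List[str],
                       weights: Dict[str, float]) -> Dict[str, List[str]]:
    """Assign processes to devices roughly in proportion to their process
    allocation weights, so better-performing devices run more processes."""
    total = sum(weights.values())
    devices = list(weights)
    assignment: Dict[str, List[str]] = {d: [] for d in devices}

    # Each device gets a quota proportional to its weight.
    quotas = {d: round(len(processes) * weights[d] / total) for d in devices}
    it = iter(processes)
    for d in devices:
        for _ in range(quotas[d]):
            p = next(it, None)
            if p is None:
                break
            assignment[d].append(p)
    # Any leftover caused by rounding goes to the highest-weight device.
    for p in it:
        assignment[max(devices, key=weights.get)].append(p)
    return assignment

# Example: 6 revenue posting processes, device A twice as capable as device B.
print(allocate_by_weight([f"process{i}" for i in range(1, 7)],
                         {"device_A": 2.0, "device_B": 1.0}))
```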
In other embodiments, the at least two processes corresponding to one task may each be allocated in full to each of the at least two devices; accordingly, the target device allocating the running of the at least two processes to at least two devices may include: the target device allocates the running of all of the at least two processes to each of the at least two devices.
Specifically, for example, in the scenario of the revenue posting task of a certain fund on a certain day, the 6 processes corresponding to the task may be allocated to both device A and device B. Since device A and device B correspond to the same task, device A and device B may attempt to preempt a process lock before the 6 processes are run; the device that preempts the process lock runs the 6 processes, and the other device serves as a standby device, which ensures that the task continues to be executed when the device holding the process lock encounters an exception.
In some scenarios, for example a task of generating the revenue statistics file of a fund, even if the task is split into subtasks run by at least two processes, the database is under heavy pressure because different processes need to access the same file. Accordingly, for such tasks in which the processes access the same data, all the processes corresponding to one task are generally allocated to each of the at least two devices. Correspondingly, by means of a process lock, one device runs all the processes of the task and the other devices serve as standby devices, which ensures that the task continues to be executed when the device holding the process lock encounters an exception.
As shown in fig. 6, in a specific embodiment, it is assumed that the revenue statistics file task of the fund corresponds to 3 processes and the at least two devices include device A and device B; accordingly, the 3 processes (process 1, process 2, and process 3) may be allocated to both device A and device B. Further, in conjunction with fig. 6, file 1, file 2, and file 3 required to run process 1, process 2, and process 3 (the allocated processes) may be allocated to device A and device B, respectively.
In this embodiment, when the devices that allocate the task are the same as the devices that process the task, the at least two devices may include the target device.
Further, after the target device allocates the running of the at least two processes to the at least two devices, the method may further include: the target device updates the allocation status of the target task in the task allocation table to allocated, so that other devices know that the target task has been allocated and repeated allocation is avoided.
S305: at least two devices run the assigned processes.
Specifically, after a device has been allocated processes to run, when an allocated process is an independent process, that is, the files that the process needs to access do not conflict with those of other processes, the device may directly run the allocated process. As shown in fig. 5, device A may start running process 1, process 2, and process 3 in combination with file 1, file 2, and file 3, respectively; accordingly, device B may start running process 4, process 5, and process 6 in combination with file 4, file 5, and file 6, respectively.
In other embodiments, when the files that the allocated processes need to access conflict with the files that the processes on other devices need to access (the accessed files have portions in common), the running of the allocated processes by the at least two devices may include:
1) the at least two devices preempt a process lock;
2) when any one of the at least two devices preempts the process lock, the device that preempts the process lock runs the at least two processes.
Specifically, as shown in fig. 6, assume that device A preempts the process lock; accordingly, device A starts to run process 1, process 2, and process 3 of the task in combination with file 1, file 2, and file 3, and device B is in a waiting state.
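The process lock preemption can be pictured as an atomic check-and-set on shared state; in the sketch below a thread lock over an in-memory dictionary stands in for the shared record (for example a database row) that the devices would actually preempt, and all names are illustrative assumptions.

```python
import threading
from typing import Dict, Optional

class ProcessLock:
    """Minimal stand-in for a shared process lock: the first device that
    preempts the lock for a task runs its processes; the rest stand by."""
    def __init__(self) -> None:
        self._guard = threading.Lock()
        self._owners: Dict[str, str] = {}  # task_id -> owning device_id

    def try_preempt(self, task_id: str, device_id: str) -> bool:
        with self._guard:  # atomic check-and-set
            if task_id not in self._owners:
                self._owners[task_id] = device_id
                return True
            return self._owners[task_id] == device_id

    def owner(self, task_id: str) -> Optional[str]:
        with self._guard:
            return self._owners.get(task_id)

lock = ProcessLock()
for device in ("device_A", "device_B"):
    if lock.try_preempt("revenue_stats_task", device):
        print(device, "preempted the process lock and runs processes 1-3")
    else:
        print(device, "is in the waiting state (standby)")
```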
In some embodiments, in order to better ensure device stability, a threshold on the number of running processes may be set for each device, and before a device preempts the lock, it may determine whether to do so in combination with this threshold. Accordingly, before the at least two devices preempt the process lock, as shown in fig. 7, the method further includes:
1) the at least two devices respectively determine the number of locally running processes and the number of allocated processes;
2) the at least two devices respectively calculate the sum of the number of locally running processes and the number of allocated processes;
3) when the sum is less than or equal to a preset process threshold, the at least two devices preempt the process lock.
In this embodiment of the present specification, the preset process threshold may be set in combination with the process running capability of each device as determined by its own performance. In some embodiments, the process running capability of a device may be characterized by an upper limit on the number of runnable processes; generally, exceeding this upper limit may cause exceptions such as the device going down. In a specific embodiment, the preset process threshold may be slightly smaller than, or equal to, the upper limit on the number of runnable processes.
In the embodiment of the present specification, before the process lock is preempted, the number of running processes, the number of allocated processes, and the preset process threshold that characterizes the process running capability of the device are combined to ensure that the device that preempts the process lock has the capability to run the allocated processes, thereby better ensuring the stability and disaster tolerance capability of the device.
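The capacity check before lock preemption amounts to a single comparison, as in the sketch below; the threshold value and function name are illustrative assumptions.

```python
def may_preempt_lock(running_count: int,
                     allocated_count: int,
                     preset_process_threshold: int) -> bool:
    """A device preempts the process lock only if running the newly allocated
    processes would not exceed its preset process threshold."""
    return running_count + allocated_count <= preset_process_threshold

# Example: 4 processes already running, 3 allocated, threshold of 8.
print(may_preempt_lock(4, 3, 8))  # True  -> preempt the process lock
print(may_preempt_lock(6, 3, 8))  # False -> do not preempt
```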
S307: when any one of the at least two devices encounters an exception while running its allocated processes, a device among the at least two devices that has not encountered the exception hosts and runs the not-yet-run processes of the abnormal device.
In this embodiment of the present specification, in order to ensure that a device can accurately determine whether another device has encountered an exception while running a process, when the at least two devices run the allocated processes, the at least two devices may monitor the running states of the processes in the process table of the target task and determine, based on those process states, whether any device has encountered an exception while running its allocated processes.
In this embodiment, the processes allocated to the same device for one task may be arranged in sequence according to their running order, and the running duration of a process can generally be estimated from the task amount of the subtask corresponding to the process. Accordingly, when a device that has not encountered an exception observes that a process run by another device is still in the incomplete state (the initial running state of a process in the process table is incomplete) after a certain time has elapsed since the previous process completed (for example, a preset multiple of the estimated running duration of the process, generally greater than 1 and less than or equal to 2), it may determine that the device running that process has encountered an exception. Accordingly, as shown in fig. 8 and in conjunction with the embodiment shown in fig. 5, assume that device A encounters an exception while running process 2; the device that has not encountered the exception (device B) may host and run the not-yet-run processes (process 2 and process 3) of the abnormal device (device A). Specifically, the device without the exception may run the hosted processes after its local processes have finished. In practical applications, because process 2 and process 3 were originally allocated to device A, device B does not hold files 2 and 3 required to run process 2 and process 3; accordingly, when device B hosts and runs processes 2 and 3, it can first obtain files 2 and 3 required to run them.
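To make the detection-and-hosting step concrete, here is a hedged Python sketch: a device treats a peer as abnormal when one of the peer's processes stays incomplete for longer than a preset multiple of its estimated running time, and then takes over the peer's not-yet-run processes. The field names, the 1.5 multiple, and the data structures are illustrative assumptions.

```python
import time
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ProcessRecord:
    device: str                # device the process was allocated to
    estimated_runtime: float   # seconds, estimated from the subtask size
    state: str = "incomplete"  # set to "completed" when the process finishes
    started_at: float = field(default_factory=time.time)

def detect_abnormal_devices(process_table: Dict[str, ProcessRecord],
                            multiple: float = 1.5) -> List[str]:
    """Devices whose processes are still incomplete well past their estimated
    running time (the multiple is greater than 1 and at most 2 here)."""
    now = time.time()
    return sorted({
        rec.device for rec in process_table.values()
        if rec.state == "incomplete"
        and now - rec.started_at > multiple * rec.estimated_runtime
    })

def host_unrun_processes(process_table: Dict[str, ProcessRecord],
                         abnormal_device: str, hosting_device: str) -> List[str]:
    """Reassign the abnormal device's not-yet-run processes to the hosting
    device (which would also fetch the subfiles those processes need)."""
    hosted = []
    for name, rec in process_table.items():
        if rec.device == abnormal_device and rec.state == "incomplete":
            rec.device = hosting_device
            hosted.append(name)
    return hosted
```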
In other embodiments, as shown in fig. 9 and in conjunction with the embodiment shown in fig. 6, assume that device A encounters an exception while running process 2; correspondingly, the device that has not encountered the exception (device B) may host and run the not-yet-run processes (process 2 and process 3) of the abnormal device (device A). Since device B was also allocated the task initially, it may directly start running process 2 and process 3 in combination with file 2 and file 3.
In other embodiments, in order to ensure that a device without an exception can still operate normally after hosting the not-yet-run processes of the abnormal device, before hosting those processes, the device without the exception (there may be one or more such devices) may determine, in combination with the number of locally running processes, the number of processes to be hosted, and its preset process threshold, whether it has the capability to run the hosted processes (this determination may refer to the above step of determining whether a device has the capability to run the allocated processes, and details are not repeated here).
In the embodiment of the present specification, since a process corresponds to a subtask after task splitting, the subfile of the subtask that the process accesses at run time is obtained by splitting; in order to ensure the consistency of the split subfiles on different devices, the split subfiles can be verified. Specifically, the verification of the split subfiles may be implemented based on a checksum algorithm such as MD5 (Message-Digest Algorithm 5), but is not limited thereto.
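Consistency of a split subfile across devices can be checked with a digest such as MD5; the sketch below uses Python's standard hashlib with chunked reading, and the function names are illustrative.

```python
import hashlib
from pathlib import Path

def md5_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file, reading it in chunks to bound memory."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def subfile_consistent(local_copy: Path, expected_md5: str) -> bool:
    """A subfile is consistent if its digest matches the digest recorded
    when the target device split the task file."""
    return md5_of_file(local_copy) == expected_md5
```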
In other embodiments, the method may further comprise:
after any one of the at least two processes finishes running, the device running that process updates the running state of the process in the process table of the target task to completed.
In this embodiment of the present description, when the program code corresponding to the scheme by which a device processes a large number of tasks is integrated into a specific device service flow, the independently written function code may be automatically woven into the appropriate position of the flow by means of an Aspect-Oriented Programming (AOP) mechanism.
According to the technical solution provided in the embodiments of the present specification, the target task is split into subtasks run by at least two processes, and the running of the at least two processes is allocated to at least two devices for execution, thereby avoiding the impact on a whole device caused by excessively high CPU (Central Processing Unit) and memory utilization; when any device encounters an exception, a device that has not encountered the exception can host and run the not-yet-run processes of the abnormal device, which effectively guarantees the success rate of task execution and greatly improves the stability and disaster tolerance capability of the whole service system when dealing with a large number of tasks.
An embodiment of the present application further provides a task allocation processing system, as shown in fig. 10, the system includes:
the target device 1010, which may be configured to, when the allocation processing condition of a target task is satisfied, divide the target task into subtasks run by at least two processes, and allocate the running of the at least two processes to at least two devices;
the at least two devices 1020, which may be configured to run the allocated processes; and when any one of the at least two devices encounters an exception while running its allocated processes, a device among the at least two devices that has not encountered the exception is configured to host and run the not-yet-run processes of the abnormal device.
In some embodiments, the at least two devices comprise the target device.
In some embodiments, the dividing, by the target device, the target task into subtasks run by at least two processes specifically includes:
the target equipment acquires a task file of the target task; determining a subfile size threshold or a subfile number threshold; splitting the task file into at least two subfiles based on the subfile size threshold or the subfile quantity threshold, wherein each subfile corresponds to one subtask; and creating a process for the subtask corresponding to each subfile.
In some embodiments, the allocating, by the target device, the running of the at least two processes to at least two devices specifically includes:
the target device obtains the process distribution weight of each device in the at least two devices; and assigning the runs of the at least two processes to the at least two devices based on the process assignment weight for each device.
In some embodiments, the allocating, by the target device, the running of the at least two processes to at least two devices specifically includes:
the target device allocates the running of all of the at least two processes to each of the at least two devices;
correspondingly, the running of the allocated processes by the at least two devices specifically includes:
the at least two devices preempt a process lock;
and when any one of the at least two devices preempts the process lock, the device that preempts the process lock runs the at least two processes.
In some embodiments, before preempting the process lock, the at least two devices are further configured to respectively determine the number of locally running processes and the number of allocated processes;
to respectively calculate the sum of the number of locally running processes and the number of allocated processes;
and to preempt the process lock when the sum is less than or equal to a preset process threshold.
In some embodiments, after the target device allocates the running of the at least two processes to at least two devices, the target device is further configured to update the allocation status of the target task in the task allocation table to allocated.
In some embodiments, the at least two devices are further configured to update the running state of a process in the process table of the target task to completed after that process finishes running.
In some embodiments, the at least two devices are further configured to monitor the running states of the processes in the process table of the target task when the at least two devices run the allocated processes;
and to determine, based on those process states, whether any device has encountered an exception while running its allocated processes.
In some embodiments, when the target task is a task that has a pre-task, before the target device divides the target task into subtasks run by at least two processes, the target device is further configured to confirm the completion status of the pre-task based on the process table of the pre-task;
and to determine that the allocation processing condition of the target task is satisfied when the pre-task has completed.
The system embodiment and the method embodiment described above are based on the same application concept.
In some embodiments, when, as in the foregoing embodiments, the at least two devices include the target device and the target device is a device that has not encountered an exception, as shown in fig. 11, an embodiment of the present application further provides a task allocation processing apparatus, where the apparatus includes:
a target task dividing module 1110, configured to divide a target task into subtasks run by at least two processes when the allocation processing condition of the target task is satisfied;
a process allocation module 1120, which may be configured to allocate the running of the at least two processes to at least two devices, where the at least two devices include the target device;
a process running module 1130, which may be configured to run the allocated processes;
and a hosted running module 1140, which may be configured to, when a device other than the local device among the at least two devices encounters an exception while running its allocated processes, host and run the not-yet-run processes of the abnormal device.
The apparatus embodiment and the method embodiment are based on the same application concept.
The embodiment of the present application provides a task allocation processing device, where the task allocation processing device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or a set of instructions, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the task allocation processing method provided in the above method embodiment.
The memory may be used to store software programs and modules, and the processor may execute various functional applications and data processing by operating the software programs and modules stored in the memory. The memory can mainly comprise a program storage area and a data storage area, wherein the program storage area can store an operating system, application programs needed by functions and the like; the storage data area may store data created according to use of the apparatus, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory may also include a memory controller to provide the processor access to the memory.
The method provided by the embodiment of the present application may be executed in a mobile terminal, a computer terminal, a server, or a similar computing device. Taking execution on a server as an example, fig. 12 is a block diagram of the hardware structure of a server for the task allocation processing method provided by an embodiment of the present application. As shown in fig. 12, the server 1200 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 1210 (the processor 1210 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 1230 for storing data, and one or more storage media 1220 (e.g., one or more mass storage devices) for storing applications 1223 or data 1222. The memory 1230 and the storage medium 1220 may be transient storage or persistent storage. The program stored in the storage medium 1220 may include one or more modules, and each module may include a series of instruction operations on the server. Further, the central processing unit 1210 may be configured to communicate with the storage medium 1220 and execute, on the server 1200, the series of instruction operations in the storage medium 1220. The server 1200 may also include one or more power supplies 1260, one or more wired or wireless network interfaces 1250, one or more input/output interfaces 1240, and/or one or more operating systems 1221, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.
The input/output interface 1240 may be used to receive or transmit data via a network. The specific example of the network described above may include a wireless network provided by a communication provider of the server 1200. In one example, the input/output Interface 1240 includes a Network Interface Controller (NIC) that may be coupled to other Network devices via a base station to communicate with the internet. In one example, the input/output interface 1240 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
It will be understood by those skilled in the art that the structure shown in fig. 12 is only an illustration and is not intended to limit the structure of the electronic device. For example, server 1200 may also include more or fewer components than shown in FIG. 12, or have a different configuration than shown in FIG. 12.
Embodiments of the present application further provide a storage medium, where the storage medium may be disposed in a device to store at least one instruction related to implementing a method for task allocation processing in the method embodiments, or at least one program, where the at least one instruction or the at least one program is loaded and executed by the processor to implement the method for task allocation processing provided in the method embodiments.
Alternatively, in this embodiment, the storage medium may be located in at least one network server of a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
As can be seen from the above embodiments of the task allocation processing method, system, apparatus, server, or storage medium provided in the present application, the target task is split into subtasks run by at least two processes, and the running of the at least two processes is allocated to at least two devices for execution, which avoids the impact on a whole device caused by excessively high CPU and memory utilization; when any device encounters an exception, a device that has not encountered the exception can host and run the not-yet-run processes of the abnormal device, which effectively guarantees the success rate of task execution and greatly improves the stability and disaster tolerance capability of the whole service system when dealing with a large number of tasks.
It should be noted that: the sequence of the embodiments of the present application is only for description, and does not represent the advantages and disadvantages of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system, device, server, and storage medium embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to some descriptions of the method embodiments for relevant points.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware to implement the above embodiments, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A task allocation processing method, characterized in that the method comprises:
when the allocation processing condition of a target task is satisfied, a target device divides the target task into subtasks run by at least two processes;
the target device allocates the running of the at least two processes to at least two devices;
the at least two devices run the allocated processes;
when any one of the at least two devices encounters an exception while running its allocated processes, a device among the at least two devices that has not encountered the exception hosts and runs the not-yet-run processes of the abnormal device.
2. The method of claim 1, wherein the target device dividing the target task into subtasks that are run by at least two processes comprises:
the target equipment acquires a task file of the target task;
the target device determines a subfile size threshold or a subfile number threshold;
the target device divides the task file into at least two subfiles based on the subfile size threshold or the subfile quantity threshold, wherein each subfile corresponds to one subtask;
and the target equipment creates a process for the subtask corresponding to each subfile.
3. The method of claim 1, wherein the target device allocating the execution of the at least two processes to at least two devices comprises:
the target device obtains the process distribution weight of each device in the at least two devices;
the target device allocates runs of the at least two processes to the at least two devices based on the process allocation weights of each device.
4. The method of claim 1, wherein the target device allocating the execution of the at least two processes to at least two devices comprises:
the target device allocates the running of all of the at least two processes to each of the at least two devices;
correspondingly, the running of the allocated processes by the at least two devices includes:
the at least two devices preempt a process lock;
and when any one of the at least two devices preempts the process lock, the device that preempts the process lock runs the at least two processes.
5. The method of claim 4, wherein prior to the at least two devices preempting the process lock, the method further comprises:
the at least two devices respectively determine the number of locally running processes and the number of allocated processes;
the at least two devices respectively calculate the sum of the number of locally running processes and the number of allocated processes;
and when the sum is less than or equal to a preset process threshold, the at least two devices preempt the process lock.
6. The method of claim 1, wherein after the target device allocates the execution of the at least two processes to at least two devices, the method further comprises:
and the target device updates the allocation status of the target task in the task allocation table to allocated.
7. The method of claim 1, further comprising:
after any process in the at least two processes is finished running, the equipment running the process updates the running state of the process in the process table of the target task to be finished.
8. The method of claim 7, further comprising:
when the at least two devices run the allocated processes, the at least two devices monitor the running states of the processes in the process table of the target task;
and the at least two devices determine whether any one device is abnormal when running the distributed process based on the process state.
9. The method according to claim 1, wherein when the target task is a task that has a pre-task, before the target device divides the target task into subtasks run by at least two processes, the method further comprises:
the target device confirms the completion status of the pre-task based on the process table of the pre-task;
and when the pre-task has completed, the target device determines that the allocation processing condition of the target task is satisfied.
10. A task allocation processing system, characterized in that the system comprises:
the target device is used for, when the allocation processing condition of the target task is satisfied, dividing the target task into subtasks run by at least two processes, and allocating the running of the at least two processes to at least two devices;
the at least two devices are used for running the allocated processes; and when any one of the at least two devices encounters an exception while running its allocated processes, a device among the at least two devices that has not encountered the exception is used for hosting and running the not-yet-run processes of the abnormal device.
CN202010092802.5A 2020-02-14 2020-02-14 Task allocation processing method and system Pending CN111240848A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010092802.5A CN111240848A (en) 2020-02-14 2020-02-14 Task allocation processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010092802.5A CN111240848A (en) 2020-02-14 2020-02-14 Task allocation processing method and system

Publications (1)

Publication Number Publication Date
CN111240848A true CN111240848A (en) 2020-06-05

Family

ID=70871972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010092802.5A Pending CN111240848A (en) 2020-02-14 2020-02-14 Task allocation processing method and system

Country Status (1)

Country Link
CN (1) CN111240848A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112084094A (en) * 2020-09-16 2020-12-15 北京自如信息科技有限公司 Multi-server resource monitoring method and device and computer equipment
CN113114731A (en) * 2021-03-19 2021-07-13 北京达佳互联信息技术有限公司 Task processing method, device, server and storage medium
US20210319281A1 (en) * 2020-04-13 2021-10-14 Motorola Mobility Llc Subtask Assignment for an Artificial Intelligence Task

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109582459A (en) * 2017-09-29 2019-04-05 阿里巴巴集团控股有限公司 The method and device that the trustship process of application is migrated
CN109788325A (en) * 2018-12-28 2019-05-21 网宿科技股份有限公司 Video task distribution method and server
CN110113387A (en) * 2019-04-17 2019-08-09 深圳前海微众银行股份有限公司 A kind of processing method based on distributed batch processing system, apparatus and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109582459A (en) * 2017-09-29 2019-04-05 阿里巴巴集团控股有限公司 The method and device that the trustship process of application is migrated
CN109788325A (en) * 2018-12-28 2019-05-21 网宿科技股份有限公司 Video task distribution method and server
CN110113387A (en) * 2019-04-17 2019-08-09 深圳前海微众银行股份有限公司 A kind of processing method based on distributed batch processing system, apparatus and system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210319281A1 (en) * 2020-04-13 2021-10-14 Motorola Mobility Llc Subtask Assignment for an Artificial Intelligence Task
CN112084094A (en) * 2020-09-16 2020-12-15 北京自如信息科技有限公司 Multi-server resource monitoring method and device and computer equipment
CN113114731A (en) * 2021-03-19 2021-07-13 北京达佳互联信息技术有限公司 Task processing method, device, server and storage medium
CN113114731B (en) * 2021-03-19 2023-03-14 北京达佳互联信息技术有限公司 Task processing method, device, server and storage medium

Similar Documents

Publication Publication Date Title
US10831545B2 (en) Efficient queueing and scheduling of backups in a multi-tenant cloud computing environment
US10798016B2 (en) Policy-based scaling of network resources
US10713088B2 (en) Event-driven scheduling using directed acyclic graphs
US11604665B2 (en) Multi-tiered-application distribution to resource-provider hosts by an automated resource-exchange system
CN110417558B (en) Signature verification method and device, storage medium and electronic device
US10819776B2 (en) Automated resource-price calibration and recalibration by an automated resource-exchange system
US9596302B2 (en) Migrating applications between networks
JP2021515293A (en) Computer implementation of service management for blockchain network infrastructure, systems, computer programs, and blockchain networks
US8566447B2 (en) Virtual service switch
CN111240848A (en) Task allocation processing method and system
US10402227B1 (en) Task-level optimization with compute environments
US11665105B2 (en) Policy-based resource-exchange life-cycle in an automated resource-exchange system
US20200250006A1 (en) Container management
US11502972B2 (en) Capacity optimization in an automated resource-exchange system
CN108337109A (en) A kind of resource allocation methods and device and resource allocation system
Gutierrez-Garcia et al. Agent-based cloud bag-of-tasks execution
CN106412030B (en) A kind of selection storage resource method, apparatus and system
CN113206877A (en) Session keeping method and device
US11194629B2 (en) Handling expiration of resources allocated by a resource manager running a data integration job
Leite et al. Dohko: an autonomic system for provision, configuration, and management of inter-cloud environments based on a software product line engineering method
CN107454137B (en) Method, device and equipment for on-line business on-demand service
US20230034835A1 (en) Parallel Processing in Cloud
Miranda et al. Dynamic communication-aware scheduling with uncertainty of workflow applications in clouds
US9998395B1 (en) Workload capsule generation and processing
US11048554B1 (en) Correlated volume placement in a distributed block storage service

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40024344

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200605

RJ01 Rejection of invention patent application after publication