CN110569118A - Task scheduling method and device, electronic equipment and storage medium

Task scheduling method and device, electronic equipment and storage medium

Info

Publication number
CN110569118A
CN110569118A (application CN201910807973.9A)
Authority
CN
China
Prior art keywords
subtask
task
main body
preprocessing
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910807973.9A
Other languages
Chinese (zh)
Inventor
王龙飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beike Technology Co Ltd
Original Assignee
Beike Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beike Technology Co Ltd filed Critical Beike Technology Co Ltd
Priority to CN201910807973.9A
Publication of CN110569118A
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5017Task decomposition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a task scheduling method and device, an electronic device, and a storage medium, and relates to computer technology. The scheme comprises: acquiring at least one task and decomposing each task into a preprocessing subtask and a main body subtask; executing the preprocessing subtasks in parallel in a preprocessing thread; determining, in a main task thread, a main body subtask to be processed according to a preset task processing sequence; judging whether the preprocessing subtask belonging to the same task as the main body subtask to be processed has finished executing and whether the previous main body subtask in the task processing sequence has finished executing; and executing the main body subtask to be processed when both have finished. The method and device reduce the total time consumed in executing the tasks, improve the convenience of task execution and maintenance, and ensure the execution order of the tasks.

Description

Task scheduling method and device, electronic equipment and storage medium
Technical Field
The present application relates to computer technology, and in particular to a task scheduling method and apparatus, an electronic device, and a storage medium.
Background
In existing computer systems, a variety of tasks need to be performed, including external server interaction, user interface interaction, data storage, and data processing.
In the prior art, the tasks are either implemented in the same thread and executed linearly, so that the total time consumed in executing them is long and the tasks are difficult to execute and maintain, or each task is executed in an asynchronous thread. However, most tasks have dependencies in their execution order, and executing each task in an asynchronous thread makes that order difficult to guarantee.
Disclosure of Invention
In view of the above, a main objective of the present application is to provide a task scheduling method that reduces the total time consumed in executing tasks, improves the convenience of task execution and maintenance, and ensures the execution order of the tasks.
To achieve this objective, the technical scheme provided by the application is as follows:
In a first aspect, an embodiment of the present application provides a task scheduling method, including:
acquiring at least one task, and decomposing each task into a preprocessing subtask and a main body subtask;
executing each preprocessing subtask in parallel in a preprocessing thread;
determining, in a main task thread, a main body subtask to be processed according to a preset task processing sequence; judging whether the preprocessing subtask belonging to the same task as the main body subtask to be processed has finished executing; judging whether the previous main body subtask in the task processing sequence has finished executing; and executing the main body subtask to be processed when the preprocessing subtask of the same task and the previous main body subtask have both finished executing.
In one possible embodiment, the step of executing each preprocessing subtask in parallel in a preprocessing thread includes:
starting execution of each preprocessing subtask in the task processing order.
In a possible implementation, the step of executing the main body subtask to be processed includes:
judging whether the main body subtask to be processed needs to interact with a user interface, and executing it in the main thread when it does;
and executing the main body subtask to be processed in the main task thread when it does not need to interact with the user interface.
In one possible embodiment, the step of executing each preprocessing subtask in parallel in a preprocessing thread includes:
judging whether the preprocessing subtask needs to interact with an external server, and, when it does, starting an asynchronous request thread and interacting with the external server in that request thread.
In a second aspect, based on the same design concept, an embodiment of the present application further provides a task scheduling apparatus, including:
a task decomposition module, configured to acquire at least one task and decompose each task into a preprocessing subtask and a main body subtask;
a preprocessing module, configured to execute each preprocessing subtask in parallel in a preprocessing thread;
a main task module, configured to determine, in a main task thread, a main body subtask to be processed according to a preset task processing sequence, judge whether the preprocessing subtask belonging to the same task as the main body subtask to be processed has finished executing, judge whether the previous main body subtask in the task processing sequence has finished executing, and execute the main body subtask to be processed when the preprocessing subtask of the same task and the previous main body subtask have both finished executing.
In a possible implementation, the preprocessing module is further configured to:
start execution of each preprocessing subtask in the task processing order.
In a possible implementation, the main task module is further configured to:
judge whether the main body subtask to be processed needs to interact with a user interface, and execute it in the main thread when it does;
and execute the main body subtask to be processed in the main task thread when it does not need to interact with the user interface.
In a possible implementation, the preprocessing module is further configured to:
judge whether the preprocessing subtask needs to interact with an external server, and, when it does, start an asynchronous request thread and interact with the external server in that request thread.
In a third aspect, an embodiment of the present application further provides a computer-readable storage medium. The specific scheme is as follows:
a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the first aspect or of any one of its possible embodiments.
In a fourth aspect, an embodiment of the present application further provides an electronic device. The specific scheme is as follows:
an electronic device comprising the computer-readable storage medium described above and further comprising a processor that can execute the computer instructions stored on the computer-readable storage medium.
In summary, the present application provides a task scheduling method, a task scheduling apparatus, an electronic device, and a storage medium. Each task is decomposed into a preprocessing subtask and a main body subtask, which are executed by a preprocessing thread and a main task thread respectively; decomposing the tasks makes them easier to maintain separately, and the preset task processing sequence guarantees the execution order of the main body subtasks, improving the convenience of task execution and maintenance. Furthermore, the preprocessing subtasks are executed in parallel, so that each task's preprocessing completes as quickly as possible in preparation for its main body subtask, which effectively reduces the total time consumed in executing the tasks. A main body subtask to be processed is executed only when the preprocessing subtask of the same task and the previous main body subtask have both finished executing, which ensures the execution order of the tasks.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a task scheduling method according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a task scheduling method according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a task scheduling device according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements explicitly listed, but may include other steps or elements not explicitly listed or inherent to such process, method, article, or apparatus.
The core inventive points of the application are as follows. Each task is decomposed into a preprocessing subtask and a main body subtask, and a preprocessing thread and a main task thread execute them respectively, so that the decomposed tasks are convenient to maintain separately. In the main task thread, the main body subtasks are executed sequentially according to a preset task processing sequence, which ensures the execution order of the tasks. Execution of the preprocessing subtasks and the main body subtasks is relatively independent between the preprocessing thread and the main task thread. Specifically, the preprocessing subtasks are executed in parallel, so that each task's preprocessing completes as quickly as possible in preparation for its main body subtask, which effectively reduces the total time consumed in executing the tasks. A main body subtask to be processed is executed only when the preprocessing subtask of the same task and the previous main body subtask have both finished executing, which ensures the execution order of the tasks.
In order to make the objects, technical solutions, and advantages of the present application clearer, the technical solutions are described in detail below with specific embodiments. Several of the following embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some of them.
Fig. 1 is a schematic flowchart of a task scheduling method provided in an embodiment of the present application. As shown in Fig. 1, the embodiment mainly includes:
S101: at least one task is acquired, and each task is decomposed into a preprocessing subtask and a main body subtask.
Here, a task refers to a basic unit of work in computing. At least one task is acquired, such as external server interaction, user interface interaction, data storage, or data processing, and each task is decomposed. According to its execution process, a task is decomposed into a preprocessing subtask and a main body subtask. The preprocessing subtask mainly performs operations such as acquiring interaction data from an external server and preparing task data for the main body subtask; it prepares for the execution of the task. The main body subtask performs the main operation of the task and realizes its function.
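By way of illustration only, the decomposition described above could be represented in code roughly as follows. This is a minimal Java sketch and is not part of the patent; the class and member names (Task, preprocessSubtask, mainBodySubtask, preprocessDone) are assumptions introduced here for clarity.

```java
import java.util.concurrent.CompletableFuture;

/** Illustrative decomposition of a task into a preprocessing part and a main body part. */
final class Task {
    final String name;
    final Runnable preprocessSubtask;    // prepares data, e.g. fetches it from an external server
    final Runnable mainBodySubtask;      // performs the main operation that realizes the task's function
    /** Completed by the preprocessing thread once the preprocessing subtask has finished. */
    final CompletableFuture<Void> preprocessDone = new CompletableFuture<>();

    Task(String name, Runnable preprocessSubtask, Runnable mainBodySubtask) {
        this.name = name;
        this.preprocessSubtask = preprocessSubtask;
        this.mainBodySubtask = mainBodySubtask;
    }
}
```

The preprocessDone future is one possible way for the preprocessing thread to signal completion to the main task thread, in the spirit of the cross-thread message described below for Fig. 2.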
S102: each preprocessing subtask is executed in parallel in a preprocessing thread.
The preprocessing subtasks are executed in the preprocessing thread.
When the tasks are decomposed, the preprocessing subtasks of the different tasks have no dependency on one another in execution order. The next preprocessing subtask in the task processing sequence can therefore be started without waiting for the previous one to finish, and the preprocessing subtasks in the preprocessing thread are executed in parallel.
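Purely as a sketch (the patent does not prescribe an implementation), step S102 could be realized with a worker pool that starts every preprocessing subtask in the task processing order without waiting for earlier ones to finish; the Task type and its preprocessDone future are the hypothetical names introduced in the sketch above.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class PreprocessingRunner {
    // Worker pool standing in for the preprocessing thread; the pool size is an arbitrary choice here.
    private final ExecutorService preprocessPool = Executors.newFixedThreadPool(4);

    /** Starts every preprocessing subtask in task-processing order without waiting for earlier ones to finish. */
    void runAll(List<Task> tasksInOrder) {
        for (Task task : tasksInOrder) {
            preprocessPool.submit(() -> {
                try {
                    task.preprocessSubtask.run();          // e.g. fetch or prepare data for the main body subtask
                    task.preprocessDone.complete(null);    // cross-thread signal to the main task thread
                } catch (Throwable t) {
                    task.preprocessDone.completeExceptionally(t);
                }
            });
        }
        preprocessPool.shutdown();   // let the worker threads exit once all preprocessing subtasks finish
    }
}
```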
S103: determining a main body subtask to be processed in a main task thread according to a preset task processing sequence, judging whether the pre-processing subtask belonging to the same task as the main body subtask to be processed is executed completely, judging whether a previous main body subtask in the task processing sequence is executed completely, and executing the main body subtask to be processed when the pre-processing subtask belonging to the same task is executed completely and the previous main body subtask is executed completely.
The preset task processing sequence is determined by the function to be realized and by the execution-order relations among the tasks. The main body subtasks are executed in the main task thread, and each main body subtask starts execution in the task processing order; the main body subtask to be processed is therefore determined in that order in the main task thread. Because a main body subtask can be executed normally only after its preprocessing subtask has prepared for it, and because the main body subtasks depend on one another in execution order, the main body subtask to be processed is executed only when the preprocessing subtask of the same task and the previous main body subtask have both finished executing.
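Again as an illustrative sketch under the same assumed Task type, the gating of step S103 could be written as below: the main task thread waits on the same task's preprocessing subtask, while the dependency on the previous main body subtask is satisfied by the strictly sequential loop.

```java
import java.util.List;

final class MainTaskRunner {
    /**
     * Runs the main body subtasks on a dedicated main task thread in the preset task processing order.
     * A main body subtask starts only after (a) the preprocessing subtask of the same task and
     * (b) the previous main body subtask in the order have both finished.
     */
    void runAll(List<Task> tasksInOrder) {
        Thread mainTaskThread = new Thread(() -> {
            for (Task task : tasksInOrder) {
                task.preprocessDone.join();   // condition (a): preprocessing of the same task has finished
                task.mainBodySubtask.run();   // condition (b) holds because this loop is strictly sequential
            }
        }, "main-task-thread");
        mainTaskThread.start();
    }
}
```

Keeping the main body subtasks in a single loop is one simple way to make the "previous main body subtask finished" condition implicit, mirroring the per-task life cycles of Fig. 2.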
In one possible embodiment, the task processing order is determined in units of tasks, so the order used by the preprocessing thread is the same as the order used by the main task thread. When the preprocessing subtasks are executed in the preprocessing thread, each preprocessing subtask starts execution in the task processing order.
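Tying the earlier sketches together, both sides could simply share the same ordered list of tasks. The task names and printed messages below are hypothetical and only illustrate that the preprocessing thread and the main task thread use one and the same order.

```java
import java.util.List;

final class SchedulerDemo {
    public static void main(String[] args) {
        // Hypothetical tasks; the preset task processing order is simply the order of this list.
        Task save = new Task("save",
                () -> System.out.println("prepare data to be saved"),
                () -> System.out.println("write the data to storage"));
        Task show = new Task("show",
                () -> System.out.println("fetch data for display"),
                () -> System.out.println("update the view"));
        List<Task> tasksInOrder = List.of(save, show);

        new PreprocessingRunner().runAll(tasksInOrder);   // preprocessing thread: parallel, same order
        new MainTaskRunner().runAll(tasksInOrder);        // main task thread: sequential, same order
    }
}
```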
Fig. 2 is a schematic structural diagram of a task scheduling method according to an embodiment of the present application. Each horizontal line represents the life cycle of a task; each task is divided into a preprocessing subtask and a main body subtask, and the task is completed after the two are executed in sequence. The thread in the first column is the preprocessing thread, in which the preprocessing subtasks are executed in the task processing order. When a preprocessing subtask has finished executing, its completed state is sent across threads, in the form of a message or instruction, to the main body subtask in the main task thread that belongs to the same task.
Further, the preprocessing thread and the main task thread are asynchronous threads. The preprocessing subtasks in the preprocessing thread are started sequentially in the task processing order, but a preprocessing subtask can be started without waiting for the previous one to finish. After each preprocessing subtask starts, whether it needs to interact with an external server is judged; when it does, an asynchronous request thread is started, and the interaction with the external server takes place in that request thread. Because the most time-consuming operation, interacting with the external server, is executed in the asynchronous request thread, the preprocessing subtasks can run in parallel in most cases. Since preprocessing operations such as the interaction with the external server are carried out in advance in the asynchronous preprocessing thread, the main body subtasks do not need to wait for the corresponding external server, which greatly improves the execution efficiency of the tasks.
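The offloading of server interaction onto an asynchronous request thread could be sketched as follows; fetchFromServer() and prepareLocally() are illustrative placeholders, and the cached thread pool merely stands in for the request threads mentioned above, not an implementation prescribed by the patent.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class ServerBackedPreprocess {
    // Pool standing in for the asynchronous request threads; not a name used by the patent.
    private static final ExecutorService REQUEST_POOL = Executors.newCachedThreadPool();

    /**
     * Runs one preprocessing subtask: when it needs the external server, the interaction is moved
     * onto an asynchronous request thread so that the preprocessing thread is not blocked by it.
     */
    static CompletableFuture<Void> run(boolean needsExternalServer) {
        if (needsExternalServer) {
            return CompletableFuture.runAsync(ServerBackedPreprocess::fetchFromServer, REQUEST_POOL);
        }
        prepareLocally();
        return CompletableFuture.completedFuture(null);
    }

    private static void fetchFromServer() { /* e.g. issue a network request and cache the response */ }
    private static void prepareLocally()  { /* e.g. read and transform locally available data */ }
}
```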
A main body subtask is executed when the preprocessing subtask of the same task and the previous main body subtask have both finished executing, which also makes cross-thread operation of the main body subtasks easier. For a task with a user interface, in order to prevent the interface from frequently freezing because of the interaction, whether the main body subtask to be processed needs to interact with the user interface is judged when it is executed: when it does, it is executed in the main thread; when it does not, it is executed in the main task thread.
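Dispatching a main body subtask to the user-interface thread only when it actually touches the interface could be sketched as below. The uiThreadExecutor is an assumed abstraction (on Android it could be backed by a Handler on the main looper), since the patent does not name a platform.

```java
import java.util.concurrent.Executor;

final class MainBodyDispatcher {
    private final Executor uiThreadExecutor;   // assumed to post work onto the main (UI) thread

    MainBodyDispatcher(Executor uiThreadExecutor) {
        this.uiThreadExecutor = uiThreadExecutor;
    }

    /** Runs a main body subtask on the UI thread only when it needs to interact with the user interface. */
    void run(Runnable mainBodySubtask, boolean interactsWithUi) {
        if (interactsWithUi) {
            uiThreadExecutor.execute(mainBodySubtask);   // hand off to the main (UI) thread
        } else {
            mainBodySubtask.run();                       // stay on the main task thread
        }
    }
}
```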
To implement the task scheduling method of the embodiment of the application, the program code needs to be decoupled: the boundaries of the code logic are divided by a task framework, and each task corresponds to a relatively independent piece of task code, which makes the code easier to maintain.
Based on the same design concept, the embodiments of the application also provide a task scheduling apparatus, an electronic device, and a storage medium.
As shown in Fig. 3, a task scheduling apparatus 300 according to an embodiment of the present application includes:
a task decomposition module 301, configured to acquire at least one task and decompose each task into a preprocessing subtask and a main body subtask;
a preprocessing module 302, configured to execute each preprocessing subtask in parallel in a preprocessing thread;
a main task module 303, configured to determine, in a main task thread, a main body subtask to be processed according to a preset task processing sequence, judge whether the preprocessing subtask belonging to the same task as the main body subtask to be processed has finished executing, judge whether the previous main body subtask in the task processing sequence has finished executing, and execute the main body subtask to be processed when the preprocessing subtask of the same task and the previous main body subtask have both finished executing.
In a possible implementation, the preprocessing module 302 is further configured to:
start execution of each preprocessing subtask in the task processing order.
In a possible implementation, the main task module 303 is further configured to:
judge whether the main body subtask to be processed needs to interact with a user interface, and execute it in the main thread when it does;
and execute the main body subtask to be processed in the main task thread when it does not need to interact with the user interface.
In a possible implementation, the preprocessing module 302 is further configured to:
judge whether the preprocessing subtask needs to interact with an external server, and, when it does, start an asynchronous request thread and interact with the external server in that request thread.
Embodiments of the present application further provide a computer-readable storage medium that stores instructions which, when executed by a processor, cause the processor to perform the steps of any one of the task scheduling methods provided in the embodiments of the present application. In practical applications, the computer-readable storage medium may be included in the apparatus/device/system described in the above embodiments, or may exist alone without being assembled into that apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement the steps of any one of the task scheduling methods provided by the embodiments of the present application.
According to the embodiments disclosed herein, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example and without limitation: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing, without limiting the scope of the present disclosure. In the embodiments disclosed herein, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The method steps described herein may also be implemented in hardware, for example logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, and embedded microcontrollers, in addition to data processing programs. Hardware capable of implementing the methods described herein may also form part of the present application.
The embodiments of the present application further provide an electronic device, which may be a computer or a server and in which the task scheduling apparatus provided in the embodiments of the present application may be integrated. Fig. 4 shows an electronic device 400 provided in the present embodiment.
The electronic device may include a processor 401 with one or more processing cores and one or more computer-readable storage media 402. The electronic device may further include a power supply 403 and an input-output unit 404. Those skilled in the art will appreciate that the structure shown in Fig. 4 does not limit the electronic device, which may include more or fewer components than illustrated, combine some components, or arrange the components differently.
Wherein:
The processor 401 is the control portion of the electronic device. It connects the various components through various interfaces and lines and executes the steps of any one of the task scheduling methods provided by the embodiments of the present application by running the software program stored in the computer-readable storage medium 402.
The computer-readable storage medium 402 is used for storing the software program, i.e., a program implementing any one of the task scheduling methods provided by the embodiments of the present application.
The processor 401 performs various functional applications and data processing by executing the software programs stored in the computer-readable storage medium 402. The computer-readable storage medium 402 may include a program storage area and a data storage area: the program storage area may store an operating system, an application program required for at least one function, and the like, while the data storage area may store data used according to the needs of the electronic device. Further, the computer-readable storage medium 402 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the computer-readable storage medium 402 may also include a memory controller to provide the processor 401 with access to the computer-readable storage medium 402.
The electronic device further comprises a power supply 403 for supplying power to the various components. Preferably, the power supply 403 is logically connected to the processor 401 through a power management system, so that charging, discharging, and power-consumption management are handled by the power management system. The power supply 403 may also include one or more DC or AC power sources, recharging systems, power-failure detection circuitry, power converters or inverters, power status indicators, and the like.
The electronic device may also include an input-output unit 404, which may be used, for example, to receive entered numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control, and to display various graphical user interfaces, composed of graphics, text, icons, video, or any combination thereof, that present information entered by or provided to the user.
The flowchart and block diagrams in the figures of the present application illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments disclosed herein. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It will be appreciated by a person skilled in the art that various combinations and/or sub-combinations of the features described in the various embodiments and/or claims of the present application are possible, even if such combinations are not explicitly described in the present application. In particular, the features recited in the various embodiments and/or claims of the present application may be combined and/or coupled in various ways, all of which fall within the scope of the present disclosure, without departing from the spirit and teachings of the present application.
The principle and implementation of the present application have been explained above with specific embodiments; the description of the embodiments is intended only to help in understanding the method and its core idea, not to limit the present application. It will be appreciated by those skilled in the art that changes may be made to these embodiments without departing from the principles, spirit, and scope of the application, and that all such modifications, equivalents, and improvements falling within the scope of the application are intended to be protected by the claims.

Claims (10)

1. A method for task scheduling, comprising:
Acquiring at least one task, and decomposing each task into a preprocessing subtask and a main body subtask;
Executing each preprocessing subtask in parallel in a preprocessing thread;
Determining a main body subtask to be processed in a main task thread according to a preset task processing sequence, judging whether the pre-processing subtask belonging to the same task as the main body subtask to be processed is executed completely, judging whether a previous main body subtask in the task processing sequence is executed completely, and executing the main body subtask to be processed when the pre-processing subtask belonging to the same task is executed completely and the previous main body subtask is executed completely.
2. The method of claim 1, wherein the step of executing each preprocessing subtask in parallel in a preprocessing thread comprises:
and starting to execute each preprocessing subtask in the task processing sequence.
3. The method of claim 1, wherein the step of executing the main body subtask to be processed comprises:
Judging whether the main body subtask to be processed needs to interact with a user interface or not, and executing the main body subtask to be processed in a main thread when the main body subtask to be processed needs to interact with the user interface;
and when the main body subtask to be processed does not need to interact with the user interface, executing the main body subtask to be processed in the main task thread.
4. The method of claim 1, wherein the step of executing each preprocessing subtask in parallel in a preprocessing thread comprises:
and judging whether the preprocessing subtask needs to interact with an external server, starting an asynchronous request thread when the preprocessing subtask needs to interact with the external server, and interacting with the external server in the request thread.
5. A task scheduling apparatus, comprising:
The task decomposition module is used for acquiring at least one task and decomposing each task into a preprocessing subtask and a main body subtask;
The preprocessing module is used for executing each preprocessing subtask in parallel in a preprocessing thread;
The main task module is used for determining a main body subtask to be processed in a main task thread according to a preset task processing sequence, judging whether the pre-processing subtask belonging to the same task as the main body subtask to be processed is executed completely, judging whether a previous main body subtask in the task processing sequence is executed completely, and executing the main body subtask to be processed when the pre-processing subtask belonging to the same task is executed completely and the previous main body subtask is executed completely.
6. The apparatus of claim 5, wherein the preprocessing module is further configured to:
And starting to execute each preprocessing subtask in the task processing sequence.
7. The apparatus of claim 5, wherein the main task module is further configured to:
judging whether the main body subtask to be processed needs to interact with a user interface or not, and executing the main body subtask to be processed in a main thread when the main body subtask to be processed needs to interact with the user interface;
And when the main body subtask to be processed does not need to interact with the user interface, executing the main body subtask to be processed in the main task thread.
8. The apparatus of claim 5, wherein the preprocessing module is further configured to:
And judging whether the preprocessing subtask needs to interact with an external server, starting an asynchronous request thread when the preprocessing subtask needs to interact with the external server, and interacting with the external server in the request thread.
9. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 4.
10. An electronic device comprising the computer-readable storage medium of claim 9, further comprising a processor that can execute the computer instructions stored on the computer-readable storage medium.
CN201910807973.9A 2019-08-29 2019-08-29 Task scheduling method and device, electronic equipment and storage medium Pending CN110569118A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910807973.9A CN110569118A (en) 2019-08-29 2019-08-29 Task scheduling method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910807973.9A CN110569118A (en) 2019-08-29 2019-08-29 Task scheduling method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110569118A true CN110569118A (en) 2019-12-13

Family

ID=68776686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910807973.9A Pending CN110569118A (en) 2019-08-29 2019-08-29 Task scheduling method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110569118A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2014068950A1 (en) * 2012-10-31 2016-09-08 日本電気株式会社 Data processing system, data processing method and program
CN105589748A (en) * 2014-10-22 2016-05-18 阿里巴巴集团控股有限公司 Service request processing method and apparatus
WO2016078008A1 (en) * 2014-11-19 2016-05-26 华为技术有限公司 Method and apparatus for scheduling data flow task
CN108170526A (en) * 2017-12-06 2018-06-15 北京像素软件科技股份有限公司 Load capacity optimization method, device, server and readable storage medium storing program for executing
CN109669752A (en) * 2018-12-19 2019-04-23 北京达佳互联信息技术有限公司 A kind of interface method for drafting, device and mobile terminal
CN109901926A (en) * 2019-01-25 2019-06-18 平安科技(深圳)有限公司 Method, server and storage medium based on big data behavior scheduling application task
CN110134499A (en) * 2019-03-29 2019-08-16 新智云数据服务有限公司 Method for scheduling task, task scheduling system, storage medium and computer equipment

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
DANIEL CASINI, "Analyzing Parallel Real-Time Tasks Implemented with Thread Pools", Proceedings of the 56th Annual Design Automation Conference 2019 *
YINGCHI MAO, "Hierarchical model-based associate tasks scheduling with the deadline constraints in the cloud", 2015 IEEE International Conference on Information and Automation *
WU XIAOLING, Huazhong University of Science and Technology Press *
XUE HAILONG, "Performance Analysis of the Asynchronous Programming Model of Android Applications", Journal of Frontiers of Computer Science and Technology *
QINGDAO AGRICULTURAL UNIVERSITY, "Android Programming and Practice", 31 July 2019 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111324443A (en) * 2020-03-04 2020-06-23 广东南方数码科技股份有限公司 Data processing method and device, electronic equipment and storage medium
CN111324443B (en) * 2020-03-04 2024-04-05 广东南方数码科技股份有限公司 Data processing method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
WO2013145105A1 (en) Sequence-program debugging assistance apparatus
CN110851246A (en) Batch task processing method, device and system and storage medium
CN111061981A (en) Page management method and device, storage medium and electronic equipment
KR101458028B1 (en) Apparatus and method for parallel processing
CN109840149B (en) Task scheduling method, device, equipment and storage medium
CN114610294B (en) Concurrent computation control method and device for efficiency indexes of simulation experiment and computer equipment
US8612991B2 (en) Dynamic critical-path recalculation facility
CN101980147B (en) Multithreaded processor and instruction execution and synchronization method thereof
CN104899093A (en) Data processing method, data processing device and data processing system
KR20220084792A (en) Service platform system for generating workflow and workflow generating method
CN110569118A (en) Task scheduling method and device, electronic equipment and storage medium
US20200151033A1 (en) Programmable controller, management device, and control system
JP5195408B2 (en) Multi-core system
WO2017148508A1 (en) Multi-phase high performance business process management engine
CN105068861A (en) Transaction execution method and device
US20080077925A1 (en) Fault Tolerant System for Execution of Parallel Jobs
CN112835692B (en) Log message driven task method, system, storage medium and equipment
CN113010290A (en) Task management method, device, equipment and storage medium
CN110968412B (en) Task execution method, system and storage medium
CN109298988A (en) A kind of acquisition methods and relevant apparatus of cluster instance state
JP5277847B2 (en) Work management device, work management program
Limoncelli Automation Should Be Like Iron Man, Not Ultron: The "Leftover Principle" Requires Increasingly More Highly-skilled Humans.
CN102073528B (en) Method for obtaining dynamic update time point of conventional operation system
CN110689922A (en) Method and system for GC content analysis of automatic parallelization knockout strategy
US9201707B2 (en) Distributed system, device, method, and program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20191213