CN110955503A - Task scheduling method and device - Google Patents


Info

Publication number
CN110955503A
Authority
CN
China
Prior art keywords
code data
task
tasks
coroutine
execution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811130901.7A
Other languages
Chinese (zh)
Other versions
CN110955503B (en)
Inventor
李晓峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Maker Works Technology Co ltd
Original Assignee
Shenzhen Maker Works Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Maker Works Technology Co ltd filed Critical Shenzhen Maker Works Technology Co ltd
Priority to CN201811130901.7A priority Critical patent/CN110955503B/en
Publication of CN110955503A publication Critical patent/CN110955503A/en
Application granted granted Critical
Publication of CN110955503B publication Critical patent/CN110955503B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/40 Transformation of program code
    • G06F 8/41 Compilation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The disclosure provides a task scheduling method and device, comprising the following steps: obtaining coroutine code data obtained by program syntax conversion of multi-thread code data, wherein the multi-thread code data is used for multi-thread parallel execution of a plurality of tasks; adding the plurality of tasks to a scheduler; and controlling, by the scheduler, the switching execution of the plurality of tasks in a single thread according to the coroutine code data. Because the multi-thread code data is converted by program syntax conversion into coroutine code data, and the scheduler schedules and controls the switching execution of the plurality of tasks in the single thread according to the coroutine code data, the multi-thread code data can run in a single thread, which overcomes the prior-art difficulty of running general-purpose programming languages in a browser.

Description

Task scheduling method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a task scheduling method and apparatus.
Background
Browsers have become an important programming platform. In the prior art, code data written in the JavaScript language can run directly in a browser, so code data written in other programming languages can run in the browser after being compiled into JavaScript code data.
However, JavaScript code data executes in a single thread in the browser. If code written in another programming language does not support running in a single thread, such as multi-threaded code data written in Python, then even after compilation into JavaScript the multi-threaded code data cannot run in the browser. Thus multi-threaded code data written in Python cannot run in a browser. Code data written in many common programming languages such as C, C++, and Python is multi-threaded code data, so running those languages in the browser is limited.
It follows that the problem of how to run multi-threaded code data in a single thread remains to be solved.
Disclosure of Invention
In order to solve the problems in the related art, the present disclosure provides a task scheduling method and apparatus.
A task scheduling method comprises the following steps:
obtaining coroutine code data obtained by program syntax conversion of multi-thread code data, wherein the multi-thread code data is used for multi-thread parallel execution of a plurality of tasks;
adding the plurality of tasks to a scheduler;
and controlling, by the scheduler, the switching execution of the plurality of tasks in a single thread according to the coroutine code data.
A task scheduling apparatus comprising:
an acquisition module configured to perform: obtaining coroutine code data obtained by program syntax conversion of multi-thread code data, wherein the multi-thread code data is used for multi-thread parallel execution of a plurality of tasks;
an add module configured to perform: adding the plurality of tasks to a scheduler;
a scheduling control module configured to perform: controlling, by the scheduler, the switching execution of the plurality of tasks in a single thread according to the coroutine code data.
In an exemplary embodiment, the multi-thread code data includes a task function for multi-threading parallel execution of a plurality of tasks, the task scheduling apparatus further includes:
a static analysis module configured to perform: performing static analysis on the multi-thread code data, and determining the type corresponding to each task function in the multi-thread code data through the static analysis;
a coroutine keyword addition module configured to perform: adding corresponding coroutine keywords in the task functions according to the type corresponding to each task function to obtain coroutine code data; when the coroutine code data runs in the single thread, the scheduler switches the tasks corresponding to the task functions among different task queues in the single thread through the coroutine keywords to realize the switching execution of the tasks.
In an exemplary embodiment, the task queue includes an execution queue for storing a currently executed task, a blocking queue for storing a blocked task, and a preparation queue for storing a task to be executed, and the scheduling control module includes:
a blocking determination unit configured to perform: judging, according to the coroutine keywords, whether the task in the execution queue is blocked while the coroutine code data runs in the single thread;
a schedule switching unit configured to perform: if the task in the execution queue is blocked, the blocked task in the execution queue is suspended to be executed, the blocked task is moved to the blocking queue through the scheduler, and the task in the preparation queue is moved to the execution queue to execute the task moved to the execution queue.
In an exemplary embodiment, the apparatus further comprises:
a first monitoring module configured to perform: monitoring tasks located in the blocking queue;
a first transfer module configured to perform: if it is monitored that the blocking condition causing a task in the blocking queue to be blocked has been eliminated, moving the task from the blocking queue to the preparation queue through the scheduler to wait for its execution to be resumed.
In an exemplary embodiment, the apparatus further comprises:
an assignment module configured to perform: according to the initial execution sequence of the tasks in the coroutine code data, allocating the task initially executed in the tasks to the execution queue, and allocating other tasks in the tasks to the preparation queue.
In an exemplary embodiment, the scheduling control module further includes:
a monitoring unit configured to perform: monitoring the execution state of the tasks in the execution queue;
a transfer unit configured to perform: and if the execution of the tasks in the execution queue is monitored to be completed, removing the tasks which are completed in the execution from the execution queue through the scheduler, and moving the tasks in the preparation queue to the execution queue to execute the tasks moved to the execution queue.
In an exemplary embodiment, the apparatus further comprises:
a compilation module configured to perform: compiling the coroutine code data to meet the programming-language requirement that the single thread's running environment imposes on the code data it runs;
the scheduling control module includes:
a scheduling control unit configured to perform: and scheduling and controlling the switching execution of the tasks in the single thread of the running environment according to the compiled coroutine code data through the scheduler.
A task scheduling apparatus comprising:
a processor; and
a memory having computer readable instructions stored thereon which, when executed by the processor, implement a task scheduling method as described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of task scheduling as described above.
In the technical scheme of the present disclosure, the multi-thread code data is subjected to program syntax conversion to obtain the coroutine code data, and the scheduler schedules and controls the switching execution of the plurality of tasks in a single thread according to the coroutine code data, so that the multi-thread code data can run in a single thread, which overcomes the prior-art difficulty of running general-purpose programming languages in a browser.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic illustration of an implementation environment to which the present disclosure relates;
FIG. 2 is a block diagram illustrating a server in accordance with an exemplary embodiment;
FIG. 3 is a flowchart illustrating a method of task scheduling in accordance with an exemplary embodiment;
FIG. 4 is a flowchart of the steps preceding step S110 in the embodiment illustrated in FIG. 3;
FIG. 5 is a flowchart of step S150 of the corresponding embodiment of FIG. 3;
FIG. 6 is a flowchart illustrating steps subsequent to step S151 of the corresponding embodiment of FIG. 5, in accordance with an exemplary embodiment;
FIG. 7 is a flowchart illustrating step S150 of the corresponding embodiment of FIG. 3, according to another exemplary embodiment;
FIG. 8 is a flowchart illustrating a task scheduling method in accordance with another exemplary embodiment;
FIG. 9 is a block diagram illustrating a task scheduler in accordance with an exemplary embodiment;
FIG. 10 is a block diagram illustrating a task scheduling device according to another exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
FIG. 1 is a schematic diagram illustrating an implementation environment to which the present disclosure relates, according to an example implementation. The implementation environment includes: a server 200 and at least one terminal 100 (only two shown in fig. 1).
The terminal 100 may be a portable computer, a desktop computer, or another electronic device capable of running an application client, such as a smartphone. The server 200 is the server corresponding to the client running on the terminal 100 and exchanges data with that application client to provide it with corresponding services. For example, code data is written on the client of the terminal; following the technical scheme of the present disclosure, the server performs the switched execution of the multiple tasks in that code data and returns the running result to the terminal 100, or feeds the running process back to the terminal 100 while it runs, and the terminal 100 displays it to the user.
The terminal 100 and the server 200 establish a network connection to communicate; the association between the two may be through network hardware and/or through a protocol-level data connection.
FIG. 2 is a block diagram illustrating a server 200 in accordance with an example embodiment, the server 200 may be used to implement task scheduling in accordance with the methods of the present disclosure.
It should be noted that the server is only an example adapted to the present disclosure and should not be considered to limit its scope in any way; nor should the server be interpreted as needing to rely on, or having to include, one or more components of the exemplary server 200 shown in fig. 2.
The hardware structure of the server may differ greatly with configuration and performance. As shown in fig. 2, the server 200 includes: a power supply 210, an interface 230, at least one memory 250, and at least one central processing unit (CPU) 270.
The power supply 210 is used to provide operating voltage for each hardware device on the server 200.
The interface 230 includes at least one wired or wireless network interface 231, at least one serial-to-parallel conversion interface 233, at least one input/output interface 235, and at least one USB interface 237, etc. for communicating with external devices.
The memory 250 serves as a carrier for resource storage and may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like; the resources stored on it include an operating system 251, applications 253, and data 255, and the storage may be transient or permanent. The operating system 251 manages and controls the hardware devices and the applications 253 on the server 200 so that the central processing unit 270 can compute and process the mass data 255, and may be Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like. An application 253 is a computer program that performs at least one specific task on top of the operating system 251 and may include at least one module (not shown in fig. 2), each of which may contain a series of computer-readable instructions for the server 200. The data 255 may be code data stored on a disk, and so on.
The central processing unit 270 may include one or more processors and communicates with the memory 250 through a bus for computing and processing the mass data 255 in the memory 250.
As detailed above, a server 200 to which the present disclosure applies accomplishes the task scheduling method by having the central processing unit 270 read the series of computer-readable instructions stored in the memory 250.
In an exemplary embodiment, the server 200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital signal processors, digital signal processing devices, programmable logic devices, field programmable gate arrays, controllers, microcontrollers, microprocessors or other electronic components for performing the methods described below. Thus, implementation of the invention is not limited to any specific hardware circuitry, software, or combination of both.
FIG. 3 is a flowchart illustrating a method of task scheduling in accordance with an exemplary embodiment. The task scheduling method is used in the implementation environment shown in fig. 1. As shown in fig. 3, the task scheduling method includes:
step S110 is to obtain the multithread code data, which is used for multithread parallel execution of multiple tasks, and perform program syntax conversion to obtain the multithread code data.
Before the detailed discussion, the distinctions among the three manners will be described by executing a plurality of tasks in the coroutine, multi-thread, and single-thread manners respectively:
For example, suppose the tasks to be executed are task A and task B, and in the ideal case both are pure arithmetic operations, so that problems of contention and data sharing do not arise.
When the two tasks are executed in the multi-thread manner, thread A executes task A and thread B executes task B, so task A and task B execute in parallel.
When the two tasks are executed in the single-thread manner, they run in the execution order set in the code data; for example, task A is executed first and then task B, so during execution task B can only start after task A completes.
When the two tasks are executed in the coroutine manner, task A may likewise be executed first and then task B; but if task A blocks, its execution can be suspended, execution switches to task B first, and execution of task A is resumed later. This can also be viewed as switching between coroutines: coroutine A executes task A and coroutine B executes task B, but at any moment only the task in one coroutine is executing.
As can be seen from the above, in multithreading, multiple tasks may be executed in parallel (i.e., multiple tasks may be executed at the same time). In a single thread, a plurality of tasks are executed in sequence, that is, only one task can be executed at a time, and the next task can be executed only after the previous task is completed. In the coroutine, only one task is executed at the same time, but the execution sequence between the tasks can be switched.
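The single-thread and coroutine manners above can be illustrated with a minimal runnable sketch. This is not the disclosure's implementation; it uses plain Python generators as coroutines, with `yield` as the cooperative switch point, and a `log` list (an invented name) to make the interleaving visible.

```python
# A minimal sketch of two execution manners for tasks A and B, using
# plain Python generators as coroutines. Each task appends progress
# markers so the interleaving (or lack of it) is visible.

log = []

def task(name, steps):
    """A cooperative task: yields after each step to let another task run."""
    for i in range(steps):
        log.append(f"{name}{i}")
        yield  # cooperative switch point (analogous to a coroutine keyword)

def run_single_thread(tasks):
    """Single-thread manner: each task runs to completion before the next."""
    for t in tasks:
        for _ in t:
            pass

def run_coroutine(tasks):
    """Coroutine manner: only one task runs at any moment, but execution
    switches between tasks at every yield point."""
    pending = list(tasks)
    while pending:
        for t in pending[:]:
            try:
                next(t)
            except StopIteration:
                pending.remove(t)

run_coroutine([task("A", 2), task("B", 2)])
print(log)  # tasks A and B interleave: ['A0', 'B0', 'A1', 'B1']
```

Running `run_single_thread` on the same two tasks instead yields A0, A1, B0, B1: task B only starts after task A completes, exactly the single-thread order described above.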
It should be noted that switching is also possible between threads. For example, if two tasks C and D need to be executed in thread I while thread II has no task to process for the moment, then during execution a task in thread I, say task C, may be switched to thread II; that is, thread I executes task D and thread II executes task C. Scheduling among threads is performed by the operating system, whereas scheduling of tasks in coroutines is performed by the code data itself, for example through coroutine keywords such as yield in the coroutine code data, which yield the task. The cost of switching between threads is therefore greater than the cost of a coroutine switch, and thread switching is slower than coroutine switching.
The technical scheme of the present disclosure realizes the switching execution of multiple tasks in multi-thread code data in a single thread by switching in a coroutine manner, that is, the code data itself realizes the switching.
Multi-thread code data refers to code data that runs in the multi-thread manner. Whether code data is multi-thread code data is constrained by its programming language: if the function library of a programming language provides a thread library for multi-thread execution, supporting multi-thread running of code data, then code written using that thread library is multi-thread code data, and it can run in the multi-thread manner in an environment that supports running that programming language. Programming languages such as C, C++, and Python support running in the multi-thread manner; the specific programming language of the multi-thread code data is not limited here.
Coroutine code data refers to code data that runs in the coroutine manner: when it runs, the plurality of tasks to be executed in it are executed by switching, but only one task executes at any moment. Similarly, coroutine code data is constrained by the programming language, i.e., by whether the language provides coroutine-related functionality, such as the coroutine keywords mentioned below, by which code data is executed in the coroutine manner; code data written using such coroutine-related functionality is coroutine code data. In the technical scheme of the present disclosure, the multi-thread code data is converted into coroutine code data by performing static analysis on the multi-thread code data and adding corresponding coroutine keywords to the task functions.
The manner in which code data written in different programming languages executes (i.e., the single-thread, multi-thread, and coroutine manners above) is constrained on one hand by the actual running environment of the code data, i.e., whether that environment supports the corresponding execution manner, and on the other hand by the programming language, i.e., whether the language provides function libraries and interfaces for that execution manner. For example, code data written in JavaScript can only run in the single-thread manner when running in a browser, while programming languages such as Python and C++ support both the multi-thread and the coroutine manner: code data written with the multi-thread-related functions Python provides runs in the multi-thread manner, and code data written with the coroutine-related functions Python provides runs in the coroutine manner.
Of course, after the multi-thread code data is subjected to program syntax conversion to obtain the coroutine code data, the tasks executed when the coroutine code data runs are also the multiple tasks to be executed by the corresponding multi-thread code data.
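As an illustration of this correspondence, the following hypothetical before/after pair writes the same two tasks with Python's thread library and, as they might look after program syntax conversion, with the async/await coroutine keywords. The task bodies and names are invented for illustration, not taken from the disclosure.

```python
# Hypothetical before/after pair: the same two tasks written as
# multi-thread code data (threading) and as coroutine code data
# (async/await), both completing tasks A and B.
import asyncio
import threading

results = []

# --- multi-thread code data: the tasks run in parallel threads ---
def thread_task(name):
    results.append(name)

def run_threaded():
    threads = [threading.Thread(target=thread_task, args=(n,)) for n in ("A", "B")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# --- coroutine code data: the same tasks, switched within one thread ---
async def coroutine_task(name):
    await asyncio.sleep(0)  # added coroutine switch point (cf. the await keyword)
    results.append(name)

def run_coroutines():
    async def main():
        await asyncio.gather(coroutine_task("A"), coroutine_task("B"))
    asyncio.run(main())

run_threaded()
run_coroutines()
print(sorted(results))  # both variants complete tasks A and B: ['A', 'A', 'B', 'B']
```

Both variants execute the same plurality of tasks; only the execution manner differs.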
In a specific embodiment, the multithreading code data may be written on a client of the terminal, and the server obtains the multithreading code data from the terminal and performs program syntax conversion on the multithreading code data to obtain the coroutine code data, so that the server directly executes the obtained coroutine code data according to the task scheduling method disclosed by the present disclosure. Of course, the coroutine code data obtained by program syntax conversion through the multi-thread code data can also be directly stored in the server, so that the server can directly extract the coroutine code data to execute according to the task scheduling method disclosed by the invention.
In step S130, a plurality of tasks are added to the scheduler.
The scheduler is used for scheduling the code data to execute a plurality of tasks in the code data in a coroutine mode, namely the scheduler is a coroutine scheduler supporting the execution of a plurality of tasks in a coroutine mode. In a programming language, if a coroutine-related function is provided by the programming language for a user to write coroutine code data, a coroutine scheduler is also provided in the programming language that performs coroutine execution of a plurality of tasks in the coroutine code data.
Here, by adding the plurality of tasks to the scheduler, that is, by adding the task functions corresponding to the plurality of tasks to the coroutine scheduler, the scheduler can schedule the plurality of tasks according to step S150.
In an embodiment, if the programming language used by the multi-thread code data does not support writing coroutine code data, the multi-thread code data needs to be compiled into another language that does support writing coroutine code data, that is, compiled into multi-thread code data of another programming language, and program syntax conversion is then performed on the compiled multi-thread code data to obtain the coroutine code data. Thus, in step S130, the plurality of tasks to be executed in the multi-thread code data are added to the coroutine scheduler provided by the coroutine-supporting programming language.
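The following is a sketch of step S130 under the assumption that Python's asyncio event loop stands in for the coroutine scheduler: task functions for the several tasks are added to the scheduler, which then drives their switched execution within one thread. The task body and `await asyncio.sleep(0)` switch point are illustrative assumptions, not the disclosure's implementation.

```python
# Sketch of "adding the plurality of tasks to a scheduler": asyncio's
# event loop plays the role of the coroutine scheduler, and each task
# function is registered with it before execution begins.
import asyncio

order = []

async def task_fn(name):
    order.append(f"{name}-start")
    await asyncio.sleep(0)  # switch point: the scheduler may run another task
    order.append(f"{name}-end")

async def main():
    # step S130: add the task functions for the plurality of tasks
    # to the coroutine scheduler
    tasks = [asyncio.ensure_future(task_fn(n)) for n in ("A", "B", "C")]
    await asyncio.gather(*tasks)

asyncio.run(main())
print(order)  # → ['A-start', 'B-start', 'C-start', 'A-end', 'B-end', 'C-end']
```

The output shows the switched execution within one thread: each task yields at its switch point, so all three start before any of them finishes, yet only one runs at any moment.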
Step S150: control, by the scheduler, the switching execution of the plurality of tasks in the single thread according to the coroutine code data.
That is, in step S150, although the coroutine code data is running in the single thread, during the execution, the scheduler may perform scheduling according to the coroutine code data, so as to implement the switching execution of the plurality of tasks in the single thread. The switching execution of the plurality of tasks in step S150 is described in detail below.
In the technical scheme of the present disclosure, the multi-thread code data is subjected to program syntax conversion to obtain the coroutine code data, and the scheduler schedules and controls the switching execution of the plurality of tasks in a single thread according to the coroutine code data, so that the multi-thread code data can run in a single thread, which overcomes the prior-art difficulty of running general-purpose programming languages in a browser.
The present disclosure can be applied to programming software, particularly to programming software with a browser as a platform. Therefore, after the multi-thread code data is written according to the general programming language, the multi-thread code data can be operated in a single thread by using the technical scheme of the disclosure, for example, the multi-thread code data is subjected to program syntax transformation to obtain the co-program code data, and the co-program code data is compiled into the JavaScript language, so that the operation in the browser is realized.
In an exemplary embodiment, the multi-thread code data includes task functions for the multi-thread parallel execution of the plurality of tasks, and as shown in fig. 4, the method further includes, before step S110:
Step S011: perform static analysis on the multi-thread code data, and determine through the static analysis the type corresponding to each task function in the multi-thread code data.
Static analysis refers to that under the condition that the multi-thread code data is not operated, the multi-thread code data is scanned through the technologies of lexical analysis, syntax analysis, control flow, data flow and the like, so that the type corresponding to each task function in the multi-thread code data is determined.
Static analysis of the multi-thread code data may be performed with static analysis tools such as FindBugs, PMD, Checkstyle, BlueMorpho, Klocwork, LDRA Testbed, HP Fortify, Parasoft C/C++test, and the like, which are not specifically limited here. Of course, a static analysis tool has requirements on the programming language of the code data; for example, the Parasoft C/C++test tool supports static analysis of C and C++ code data, so code data written in Python cannot be statically analyzed with it. The static analysis tool used therefore depends on the programming language of the multi-thread code data.
While the code data runs, executing a task means executing the task function corresponding to that task; a task function may of course comprise one or more sub-functions, which together constitute the task function.
Whether the task corresponding to a given task function may block can be judged through static analysis. A task may block because its task function executes with a delay, for example a task function containing a time.sleep() delay call, or because the task function needs to read data from, or write data to, another device or a file through an I/O interface, so that the task takes a long time to execute; such a task function is a blocking function.
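A hedged sketch of such a static analysis using Python's standard `ast` module: the source is scanned without being run, and task functions that call a known delaying function are flagged. The table of blocking calls and the sample source are assumptions for illustration, not part of the disclosure.

```python
# Sketch of the static-analysis step: parse (unexecuted) source with
# the ast module and flag task functions that call a known blocking or
# delaying function such as time.sleep.
import ast

BLOCKING_CALLS = {("time", "sleep")}  # hypothetical table of blocking functions

def blocking_functions(source):
    """Return names of functions whose body calls a known blocking function."""
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for call in ast.walk(node):
                if (isinstance(call, ast.Call)
                        and isinstance(call.func, ast.Attribute)
                        and isinstance(call.func.value, ast.Name)
                        and (call.func.value.id, call.func.attr) in BLOCKING_CALLS):
                    flagged.append(node.name)
                    break
    return flagged

multithread_source = """
import time

def task_a():
    time.sleep(1)   # delayed execution -> blocking task function

def task_b():
    return 1 + 1    # pure computation -> non-blocking
"""

print(blocking_functions(multithread_source))  # → ['task_a']
```

Only `task_a` is flagged, since its body contains a call from the blocking-function table; `task_b` performs pure computation and is left alone.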
Step S013: add the corresponding coroutine keyword to each task function according to its type, obtaining the coroutine code data. While the coroutine code data runs in the single thread, the scheduler switches the tasks corresponding to the task functions among different task queues in the single thread through the coroutine keywords, realizing the switching execution of the plurality of tasks.
Which coroutine keyword is added depends on the programming language and on the type of the task function; for example, the Python language supports yield, await, and the like, and yield or await is added according to the type of the corresponding task function, which is not specifically limited here.
For example, in multi-thread code data the call time.sleep(1000) delays execution of the calling thread, so time.sleep is a delayed-execution function and the task function containing it is a delayed-execution function (the number 1000 in parentheses indicates the delay time). After static analysis determines that the task function containing this call is a delayed-execution function, the corresponding coroutine keyword await is added, obtaining coroutine code data: await time.sleep(1000). When the coroutine code data runs to this task function, the task corresponding to it is considered a task that may block, so the task is transferred to another task queue and another task is executed.
For another example, if static analysis determines that the function block() in the multi-thread code data is a blocking function, the corresponding coroutine keyword yield is added to it, obtaining coroutine code data: yield block(). If another function call_block() calls the blocking function block(), then call_block() is likewise regarded as a blocking function and the corresponding coroutine keyword yield is added, obtaining coroutine code data: yield call_block(). Thus, during execution of the coroutine code data, when a coroutine keyword is reached, the task corresponding to the task function containing it is considered blocked: execution of that task is suspended and another task is switched in and executed.
Different programming languages support different coroutine keywords; for example, the Python language supports the yield and await keywords. Likewise, different types of task functions receive different coroutine keywords: as described above, await is added for delayed-execution functions and yield for blocking functions. Of course, a given programming language may have other types of task functions and corresponding coroutine keywords that can be attached to them, which are not specifically limited here. The foregoing is merely an illustrative example and is not to be construed as limiting the scope of the disclosure.
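As a concrete illustration (a minimal Python sketch, not the patent's implementation; the function names fetch and quick are hypothetical), a delayed-execution function can be rewritten with the await coroutine keyword so that, within a single thread, another task runs while it waits:

```python
import asyncio

# Hypothetical multi-threaded original (before conversion):
#
#     import time
#     def fetch():
#         time.sleep(0.01)   # delayed-execution: blocks the calling thread
#         return "fetched"
#
# After static analysis classifies fetch() as a delayed-execution
# function, the await coroutine keyword is inserted:

async def fetch():
    await asyncio.sleep(0.01)  # suspends fetch(); control returns to the scheduler
    return "fetched"

async def quick():
    return "quick ran while fetch was suspended"

async def main():
    # Both tasks run in one thread; fetch() yields control at await,
    # so quick() executes during the delay.
    return await asyncio.gather(fetch(), quick())

results = asyncio.run(main())
print(results)
```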
In an exemplary embodiment, the task queue includes an execution queue for storing a currently executed task, a blocking queue for storing a blocked task, and a preparation queue for storing a task to be executed, and as shown in fig. 5, step S150 includes:
step S151, during the process of operating the coroutine code data in the single thread, determining whether the task in the execution queue is blocked according to the coroutine keyword.
In step S153, if the task in the execution queue is blocked, the execution of the task blocked in the execution queue is suspended, the blocked task is moved to the blocking queue by the scheduler, and the task in the preparation queue is moved to the execution queue to execute the task moved to the execution queue.
During the running of the coroutine code data, when a task corresponding to a task function containing a coroutine keyword is reached, that task is considered blocked. The scheduler therefore moves it from the execution queue to the blocking queue, moves a task waiting in the preparation queue to the execution queue, and executes the task moved to the execution queue, thereby realizing the switching execution of the plurality of tasks.
The preparation queue may hold one or more tasks. In one embodiment, tasks are extracted from the preparation queue and transferred to the execution queue in the order in which they were put into the preparation queue, i.e., first-in first-out. In another embodiment, tasks are extracted according to their priorities: the task with the highest priority is extracted and transferred to the execution queue so that it is executed first.
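The queue movements of steps S151 and S153 can be sketched as follows (illustrative Python; the class and attribute names are assumptions, not the patent's data structures). First-in first-out extraction from the preparation queue is shown:

```python
from collections import deque

class Scheduler:
    """Minimal sketch of the three task queues described above."""
    def __init__(self):
        self.execution = deque()    # currently executed task
        self.blocking = deque()     # blocked tasks
        self.preparation = deque()  # tasks waiting to be executed

    def block_current(self):
        # Suspend the blocked task: move it out of the execution queue...
        task = self.execution.popleft()
        self.blocking.append(task)
        # ...and promote the next prepared task (first-in first-out order).
        if self.preparation:
            self.execution.append(self.preparation.popleft())
        return task

sched = Scheduler()
sched.execution.append("task_a")
sched.preparation.extend(["task_b", "task_c"])
blocked = sched.block_current()
print(blocked, list(sched.execution))
```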
In an exemplary embodiment, as shown in fig. 6, after step S151, the method further includes:
step S161, monitoring the task in the blocking queue.
In step S162, if it is detected that the blocking condition that caused a task in the blocking queue to block has been eliminated, the task is moved from the blocking queue to the preparation queue by the scheduler to wait for its execution to resume.
After a blocked task has been moved to the blocking queue, once the condition that caused it to block is removed, the task is moved from the blocking queue to the preparation queue to wait for its execution to resume. A task may be blocked because it calls another task during its execution: the scheduler then moves the calling task to the blocking queue and the called task to the execution queue, where the called task is executed. When the called task completes, or an exception occurs, the condition that blocked the calling task is eliminated, so the calling task is moved from the blocking queue to the preparation queue to wait for its execution to resume.
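This recovery path can be sketched as follows (illustrative Python; the helper name is hypothetical): when the blocking condition clears, the task moves from the blocking queue to the preparation queue:

```python
from collections import deque

blocking = deque(["caller_task"])  # blocked while its called sub-task runs
preparation = deque()

def on_blocking_condition_cleared(task):
    # The sub-task completed (or raised an exception), so the blocking
    # condition is gone: move the task back to the preparation queue
    # to wait for its execution to resume.
    blocking.remove(task)
    preparation.append(task)

on_blocking_condition_cleared("caller_task")
print(list(blocking), list(preparation))
```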
The following takes a task function containing the yield coroutine keyword as an example of how the corresponding task is switched between different task queues:
While the coroutine code data runs in the single thread, a function containing the yield coroutine keyword executes as a generator that can be paused: when the yield keyword is reached, execution is suspended, and the value of the expression following the yield keyword is returned to the generator's caller. The yield expression actually returns an iterator result object with two attributes: value and done, where value is the result of evaluating the expression at the yield keyword, and done indicates whether the generator function has run to completion.
During execution, once a function containing the yield coroutine keyword suspends at that keyword, the scheduler moves the task corresponding to the function's task function to the blocking queue, and executes the task that was moved from the preparation queue to the execution queue.
When the generator's next() is called, the suspended function resumes execution. Calling the generator's next() means that the condition causing the task to block has been eliminated, so the blocked task is moved out of the blocking queue and, via the preparation queue, back to the execution queue. Of course, whether the generator's next() is called depends on whether the condition causing the task to block has been removed.
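In Python the same suspend/resume cycle looks as follows (note that the value/done attributes mentioned above describe JavaScript's iterator protocol; Python's next() returns the yielded value directly and signals completion with StopIteration):

```python
def blocked_task():
    # yield suspends the function and hands the yielded value to the
    # caller -- here standing in for the scheduler.
    sub_result = yield "suspended"
    yield f"resumed with {sub_result}"

gen = blocked_task()
first = next(gen)        # runs up to the first yield, then suspends
second = gen.send("ok")  # resuming: the blocking condition has cleared
print(first, second)
```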
In an exemplary embodiment, as shown in fig. 7, step S150 includes:
step S251, monitoring the execution state of the tasks located in the execution queue.
In step S252, if it is detected that a task in the execution queue has finished executing, the finished task is removed from the execution queue by the scheduler, and a task in the preparation queue is moved to the execution queue so that the task moved to the execution queue is executed.
Tasks in the execution queue that have finished executing are removed from the execution queue, and the other tasks waiting in the preparation queue are then executed.
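This cleanup step can be sketched in the same style (illustrative Python; the names are assumptions):

```python
from collections import deque

execution = deque(["finished_task"])
preparation = deque(["next_task"])

def on_task_complete():
    # Remove the finished task from the execution queue and promote
    # the next waiting task from the preparation queue.
    done = execution.popleft()
    if preparation:
        execution.append(preparation.popleft())
    return done

done = on_task_complete()
print(done, list(execution))
```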
In an exemplary embodiment, before step S151, the method further includes:
according to the initial execution order of the tasks in the coroutine code data, the task to be executed first among the plurality of tasks is allocated to the execution queue, and the other tasks are allocated to the preparation queue.
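A sketch of this initial distribution (illustrative Python; task names are hypothetical):

```python
from collections import deque

tasks = ["t1", "t2", "t3"]     # in initial execution order
execution, preparation = deque(), deque()
execution.append(tasks[0])      # the first task is executed immediately
preparation.extend(tasks[1:])   # the remaining tasks wait to be executed
print(list(execution), list(preparation))
```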
In an exemplary embodiment, as shown in fig. 8, before step S150, the method further includes:
and compiling the coroutine code data to meet the program language requirement of the running environment of the single thread on the running code data.
The step S150 includes:
and step S250, controlling the switching execution of a plurality of tasks in the single thread of the running environment by the scheduler according to the compiled coroutine code data.
For example, as mentioned above, in order to execute code data in a browser, the code data needs to be compiled into JavaScript code data. Therefore, for the coroutine code data to meet the programming language requirement of its actual running environment, the coroutine code data needs to be compiled so that it can run in the actual single-threaded environment.
By compiling the coroutine code data, code data written in various programming languages can be run in the running environment, which gives the method a wide range of application.
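As a toy illustration of such a compilation pass (purely hypothetical; a real transpiler targeting, say, JavaScript is far more involved, and the names below are assumptions, not the patent's implementation):

```python
def compile_for_environment(coroutine_source: str) -> str:
    # Map a Python-style awaited sleep onto a (hypothetical) runtime API
    # of the target single-threaded environment.
    return coroutine_source.replace("asyncio.sleep", "runtime.sleep")

compiled = compile_for_environment("await asyncio.sleep(1000)")
print(compiled)
```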
The following is an embodiment of the apparatus of the present disclosure, which may be used to execute an embodiment of the task scheduling method executed by the server 200 of the present disclosure. For details not disclosed in the embodiments of the device of the present disclosure, please refer to the embodiments of the task scheduling method of the present disclosure.
Fig. 9 is a block diagram illustrating a task scheduling device according to an exemplary embodiment, which may be used in the server 200 of the implementation environment shown in fig. 1 to perform all or part of the steps of the task scheduling method shown in any one of the above method embodiments. As shown in fig. 9, the task scheduling apparatus includes:
an acquisition module 110 configured to perform: acquiring coroutine code data obtained by program syntax conversion of multi-thread code data, wherein the multi-thread code data is used for multi-thread parallel execution of a plurality of tasks.

An adding module 130, connected to the acquisition module 110, configured to perform: adding the plurality of tasks to the scheduler.

A scheduling control module 150, connected to the adding module 130, configured to perform: controlling, by the scheduler, the switching execution of the plurality of tasks in the single thread according to the coroutine code data.
The implementation processes of the functions and actions of each module in the above device are specifically described in the implementation processes of the corresponding steps in the above task scheduling method, and are not described herein again.
In an exemplary embodiment, the multithreaded code data includes a task function for multithreaded parallel execution of a plurality of tasks, and the task scheduling apparatus further includes:
a static analysis module configured to perform: and performing static analysis on the multi-thread code data, and determining the type corresponding to each task function in the multi-thread code data through the static analysis.
A coroutine keyword addition module configured to perform: adding corresponding coroutine keywords in the task functions according to the type corresponding to each task function to obtain coroutine code data; and the scheduler switches tasks corresponding to the task functions among different task queues in the single thread through the coroutine keywords in the process of running the coroutine code data in the single thread, so that the switching execution of a plurality of tasks is realized.
After the static analysis module and the coroutine keyword adding module execute the corresponding operations, the obtaining module 110 may directly obtain the corresponding coroutine code data.
In an exemplary embodiment, the task queue includes an execution queue for storing a currently executed task, a blocking queue for storing a blocked task, and a preparation queue for storing a task to be executed, and the scheduling control module includes:
a blocking determination unit configured to perform: judging, during the running of the coroutine code data in the single thread, whether the task in the execution queue is blocked according to the coroutine keyword.
A schedule switching unit configured to perform: if the task in the execution queue is blocked, the blocked task in the execution queue is suspended from being executed, the blocked task is moved to the blocking queue by the scheduler, and the task in the preparation queue is moved to the execution queue to be executed.
In an exemplary embodiment, the task scheduling apparatus further includes:
a first monitoring module configured to perform: tasks located in the blocking queue are monitored.
A first transfer module configured to perform: if the blocking condition causing the task in the blocking queue to be blocked is monitored to be eliminated, the task in the blocking queue is moved to the preparation queue by the scheduler to wait for the blocked task to resume execution.
In an exemplary embodiment, the task scheduling apparatus further includes:
an assignment module configured to perform: according to the initial execution sequence of the tasks in the coroutine code data, distributing the tasks which are initially executed in the tasks to an execution queue, and distributing other tasks in the tasks to a preparation queue.
In an exemplary embodiment, the scheduling control module further includes:
a monitoring unit configured to perform: and monitoring the execution state of the tasks in the execution queue.
A transfer unit configured to perform: if the execution of the tasks in the execution queue is monitored to be completed, the tasks which are completed in the execution are removed from the execution queue through the scheduler, and the tasks which are positioned in the preparation queue are moved to the execution queue to execute the tasks which are moved to the execution queue.
In an exemplary embodiment, the task scheduling apparatus further includes:
a compilation module configured to perform: and compiling the coroutine code data to meet the program language requirement of the running environment of the single thread on the running code data.
The scheduling control module includes:
a scheduling control unit configured to perform: and scheduling and controlling the switching execution of a plurality of tasks in the single thread of the operating environment through a scheduler according to the compiled coroutine code data.
The implementation processes of the functions and actions of each module in the above device are specifically described in the implementation processes of the corresponding steps in the above task scheduling method, and are not described herein again.
It is understood that these modules may be implemented in hardware, software, or a combination of both. When implemented in hardware, these modules may be implemented as one or more hardware modules, such as one or more application specific integrated circuits. When implemented in software, the modules may be implemented as one or more computer programs executing on one or more processors, such as programs stored in memory 250 for execution by central processor 270 of FIG. 2.
The present disclosure also provides a task scheduling apparatus, as shown in fig. 10, which can be used in the server 200 of the implementation environment shown in fig. 1, where the task scheduling apparatus 1000 includes:
a processor 1001; and
the memory 1002, the memory 1002 stores thereon computer readable instructions, which when executed by the processor 1001, implement the task scheduling method in any of the above task scheduling method embodiments. The processor 1001 reads computer readable instructions from the memory 1002 via the bus/communication line 1003 during execution.
The specific manner in which the processor of the apparatus performs the operations in this embodiment has been described in detail in the embodiment related to the task scheduling method, and will not be elaborated here.
Optionally, the present disclosure also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the task scheduling method in any of the above task scheduling method embodiments. The computer-readable storage medium is, for example, the memory 250 storing a computer program executable by the central processor 270 of the server 200 to implement the task scheduling method described above.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. A method for task scheduling, comprising:
obtaining coroutine code data obtained by program syntax conversion of multi-thread code data, wherein the multi-thread code data is used for multi-thread parallel execution of a plurality of tasks;
adding the plurality of tasks to a scheduler;
and controlling the switching execution of the plurality of tasks in a single thread according to the coroutine code data scheduling by the scheduler.
2. The method of claim 1, wherein the multi-thread code data comprises a task function for multi-threading execution of a plurality of tasks in parallel, and wherein the obtaining of coroutine code data obtained by program syntax conversion of the multi-thread code data further comprises:
performing static analysis on the multi-thread code data, and determining the type corresponding to each task function in the multi-thread code data through the static analysis;
adding corresponding coroutine keywords in the task functions according to the type corresponding to each task function to obtain coroutine code data; when the coroutine code data runs in the single thread, the scheduler switches the tasks corresponding to the task functions among different task queues in the single thread through the coroutine keywords to realize the switching execution of the tasks.
3. The method of claim 2, wherein the task queues include an execution queue for storing currently executed tasks, a blocking queue for storing blocked tasks, and a preparation queue for storing tasks to be executed, and wherein controlling, by the scheduler, the switching execution of the plurality of tasks in the single thread according to the coroutine code data scheduling comprises:
judging whether the tasks in the execution queue are blocked or not according to the coroutine keywords in the process that the coroutine code data runs in the single thread;
if the task in the execution queue is blocked, the blocked task in the execution queue is suspended to be executed, the blocked task is moved to the blocking queue through the scheduler, and the task in the preparation queue is moved to the execution queue to execute the task moved to the execution queue.
4. The method according to claim 3, wherein after determining whether the task in the execution queue is blocked according to the coroutine keyword during the running of the coroutine code data in the single thread, the method further comprises:
monitoring tasks located in the blocking queue;
if the blocking condition causing the task in the blocking queue to be blocked is monitored to be eliminated, the task in the blocking queue is moved to the preparation queue by the scheduler to wait for the blocked task to be resumed.
5. The method according to claim 3, wherein before determining whether the task in the execution queue is blocked according to the coroutine keyword during the running of the coroutine code data in the single thread, the method further comprises:
according to the initial execution sequence of the tasks in the coroutine code data, allocating the task initially executed in the tasks to the execution queue, and allocating other tasks in the tasks to the preparation queue.
6. The method of claim 2, wherein said controlling, by the scheduler, the switching execution of the plurality of tasks in a single thread according to the coroutine code data schedule further comprises:
monitoring the execution state of the tasks in the execution queue;
and if the execution of the tasks in the execution queue is monitored to be completed, removing the tasks which are completed in the execution from the execution queue through the scheduler, and moving the tasks in the preparation queue to the execution queue to execute the tasks moved to the execution queue.
7. The method of claim 1, wherein prior to said controlling, by said scheduler, the switching execution of the plurality of tasks in a single thread according to the coroutine code data schedule, further comprising:
compiling the coroutine code data to meet the program language requirement of the running environment of the single thread on the running code data;
the scheduling, by the scheduler, the switching execution of the plurality of tasks in the single thread according to the coroutine code data includes:
and scheduling and controlling the switching execution of the tasks in the single thread of the running environment according to the compiled coroutine code data through the scheduler.
8. A task scheduling apparatus, comprising:
an acquisition module configured to perform: obtaining coroutine code data obtained by program syntax conversion of multi-thread code data, wherein the multi-thread code data is used for multi-thread parallel execution of a plurality of tasks;
an add module configured to perform: adding the plurality of tasks to a scheduler;
a scheduling control module configured to perform: and controlling the switching execution of the plurality of tasks in a single thread according to the coroutine code data scheduling by the scheduler.
9. A task scheduling apparatus, comprising:
a processor; and
a memory having computer readable instructions stored thereon which, when executed by the processor, implement the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN201811130901.7A 2018-09-27 2018-09-27 Task scheduling method and device Active CN110955503B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811130901.7A CN110955503B (en) 2018-09-27 2018-09-27 Task scheduling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811130901.7A CN110955503B (en) 2018-09-27 2018-09-27 Task scheduling method and device

Publications (2)

Publication Number Publication Date
CN110955503A true CN110955503A (en) 2020-04-03
CN110955503B CN110955503B (en) 2023-06-27

Family

ID=69967908

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811130901.7A Active CN110955503B (en) 2018-09-27 2018-09-27 Task scheduling method and device

Country Status (1)

Country Link
CN (1) CN110955503B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112612615A (en) * 2020-12-28 2021-04-06 中孚安全技术有限公司 Data processing method and system based on multithreading memory allocation and context scheduling
CN112860401A (en) * 2021-02-10 2021-05-28 北京百度网讯科技有限公司 Task scheduling method and device, electronic equipment and storage medium
CN112988355A (en) * 2021-03-31 2021-06-18 深圳市优必选科技股份有限公司 Program task scheduling method and device, terminal equipment and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005022384A1 (en) * 2003-08-28 2005-03-10 Mips Technologies, Inc. Apparatus, method, and instruction for initiation of concurrent instruction streams in a multithreading microprocessor
WO2005022381A2 (en) * 2003-08-28 2005-03-10 Mips Technologies, Inc. Integrated mechanism for suspension and deallocation of computational threads of execution in a processor
CN1842769A (en) * 2003-08-28 2006-10-04 美普思科技有限公司 Instruction for initiation of concurrent instruction streams in a multithreading microprocessor
US7206843B1 (en) * 2000-04-21 2007-04-17 Sun Microsystems, Inc. Thread-safe portable management interface
CN104142858A (en) * 2013-11-29 2014-11-12 腾讯科技(深圳)有限公司 Blocked task scheduling method and device
CN104199730A (en) * 2014-08-29 2014-12-10 浪潮集团有限公司 Single-thread multi-task processing method based on synchronous I/O multiplexing mechanism


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
余志勇, 刘光斌, 许化龙: "Multithreaded application design for distributed measurement and control systems" *
杨胜哲; 于俊清; 唐九飞: "Dynamic scheduling and optimization of dataflow programs" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112612615A (en) * 2020-12-28 2021-04-06 中孚安全技术有限公司 Data processing method and system based on multithreading memory allocation and context scheduling
CN112860401A (en) * 2021-02-10 2021-05-28 北京百度网讯科技有限公司 Task scheduling method and device, electronic equipment and storage medium
CN112860401B (en) * 2021-02-10 2023-07-25 北京百度网讯科技有限公司 Task scheduling method, device, electronic equipment and storage medium
CN112988355A (en) * 2021-03-31 2021-06-18 深圳市优必选科技股份有限公司 Program task scheduling method and device, terminal equipment and readable storage medium
CN112988355B (en) * 2021-03-31 2023-12-15 深圳市优必选科技股份有限公司 Program task scheduling method and device, terminal equipment and readable storage medium

Also Published As

Publication number Publication date
CN110955503B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
US9772879B2 (en) System and method for isolating I/O execution via compiler and OS support
US9501319B2 (en) Method and apparatus for scheduling blocking tasks
US8261284B2 (en) Fast context switching using virtual cpus
US10402223B1 (en) Scheduling hardware resources for offloading functions in a heterogeneous computing system
US20110219373A1 (en) Virtual machine management apparatus and virtualization method for virtualization-supporting terminal platform
CN110955503B (en) Task scheduling method and device
Rossi et al. Preemption of the partial reconfiguration process to enable real-time computing with FPGAs
US10031773B2 (en) Method to communicate task context information and device therefor
US20170116030A1 (en) Low latency scheduling on simultaneous multi-threading cores
WO2015032311A1 (en) Code generation method, compiler, scheduling method, apparatus and scheduling system
US20220414052A1 (en) Multi-Core Processor, Multi-Core Processor Processing Method, and Related Device
US11182318B2 (en) Processor and interrupt controller
US20230127112A1 (en) Sub-idle thread priority class
US8762126B2 (en) Analyzing simulated operation of a computer
US20230096015A1 (en) Method, electronic deviice, and computer program product for task scheduling
Ciobanu The Events Priority in the nMPRA and Consumption of Resources Analysis on the FPGA.
CN112470125B (en) Interrupt processing method, computer system, and storage medium
Pereira et al. Co-designed FreeRTOS deployed on FPGA
US20060100986A1 (en) Task switching
US9015720B2 (en) Efficient state transition among multiple programs on multi-threaded processors by executing cache priming program
US20140298352A1 (en) Computer with plurality of processors sharing process queue, and process dispatch processing method
Gaitan Enhanced interrupt response time in the nMPRA based on embedded real time microcontrollers
US10565036B1 (en) Method of synchronizing host and coprocessor operations via FIFO communication
JP2004234643A (en) Process scheduling device, process scheduling method, program for process scheduling, and storage medium recorded with program for process scheduling
US10140150B2 (en) Thread diversion awaiting log call return

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant