CN114064225A - Self-adaptive scheduling method, device, computer storage medium and system - Google Patents

Self-adaptive scheduling method, device, computer storage medium and system Download PDF

Info

Publication number
CN114064225A
Authority
CN
China
Prior art keywords
task
integrated module
executors
processed
executor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010761677.2A
Other languages
Chinese (zh)
Inventor
张悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Suzhou Software Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202010761677.2A priority Critical patent/CN114064225A/en
Publication of CN114064225A publication Critical patent/CN114064225A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the application provides an adaptive scheduling method, apparatus, computer storage medium and system. The method includes: acquiring a task to be processed from a task list; sequentially judging, according to the task to be processed, whether a plurality of task executors included in an integrated module are in an idle state; when the plurality of task executors are all in a non-idle state, expanding the number of task executors in the integrated module; and scheduling a newly added task executor from the expanded integrated module and controlling the newly added task executor to execute the task to be processed. In this way, the integrated module can adaptively adjust the number of task executors according to the service scale in the task list, thereby providing a dynamically scalable continuous integration environment and improving concurrent task processing capacity; the drawback that the number of task executors must remain fixed after the continuous integration system is started is avoided, and the problem of fixed continuous integration processing capacity is solved.

Description

Self-adaptive scheduling method, device, computer storage medium and system
Technical Field
The present application relates to the field of computer software development technologies, and in particular, to a method, an apparatus, a computer storage medium, and a system for adaptive scheduling.
Background
Jenkins is an open-source continuous integration system through which large-scale compiling, testing and release of projects and code can be carried out, bringing great convenience to software development teams. Jenkins provides an open, easy-to-use software platform that makes continuous integration of software possible.
In actual use, the Jenkins system generally adopts a master-slave mode to implement the integration environment. In the master-slave mode, the master node is the control node of the Jenkins system, and the slave node is a task node of the Jenkins system that executes specific tasks, which is equivalent to a task executor of Jenkins. The slave node may include a plurality of slaves to achieve parallel task processing, and the number of slaves may be increased or decreased according to requirements. Although this structure can achieve a certain degree of parallel processing, it has the following disadvantage: the continuous integration environment is relatively fixed, and in the face of different continuous integration task requirements, each node must be configured individually. Therefore, there is still a need for a continuous integration environment that can scale adaptively to cope with dynamic service scales.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present application provide an adaptive scheduling method, apparatus, computer storage medium and system, which can adaptively adjust the number of task executors according to the service scale, thereby providing a dynamically scalable continuous integration environment.
The technical scheme of the application is realized as follows:
in a first aspect, an embodiment of the present application provides an adaptive scheduling method, where the method includes:
acquiring a task to be processed in a task list;
sequentially judging whether a plurality of task executors included in the integrated module are in an idle state or not according to the tasks to be processed;
when the plurality of task executors are all in a non-idle state, the number of the task executors in the integrated module is expanded;
and scheduling a newly added task executor from the expanded integrated module, and controlling the newly added task executor to execute the task to be processed.
In a second aspect, an embodiment of the present application provides an adaptive scheduling apparatus, where the adaptive scheduling apparatus includes an acquisition unit, a judging unit, an expansion unit, and a scheduling unit; wherein,
the acquisition unit is configured to acquire the tasks to be processed in the task list;
the judging unit is configured to sequentially judge whether a plurality of task executors included in the integrated module are in an idle state or not according to the tasks to be processed;
the expansion unit is configured to expand the number of task executors in the integrated module when the plurality of task executors are in a non-idle state;
and the scheduling unit is configured to schedule a newly added task executor from the expanded integrated module and control the newly added task executor to execute the to-be-processed task.
In a third aspect, an embodiment of the present application provides an adaptive scheduler, including a memory and a processor; wherein,
the memory for storing a computer program operable on the processor;
the processor is adapted to perform the steps of the method according to the first aspect when running the computer program.
In a fourth aspect, embodiments of the present application provide a computer storage medium storing an adaptive scheduling program, which when executed by at least one processor implements the steps of the method according to the first aspect.
In a fifth aspect, the present application provides a system, which includes at least the adaptive scheduling apparatus according to the second aspect or the third aspect.
The embodiment of the application provides an adaptive scheduling method, apparatus, computer storage medium and system. The method includes: acquiring a task to be processed from a task list; sequentially judging, according to the task to be processed, whether a plurality of task executors included in an integrated module are in an idle state; when the plurality of task executors are all in a non-idle state, expanding the number of task executors in the integrated module; and scheduling a newly added task executor from the expanded integrated module and controlling the newly added task executor to execute the task to be processed. In this way, the integrated module can adaptively adjust the number of task executors according to the service scale in the task list, thereby providing a dynamically scalable continuous integration environment and improving concurrent task processing capacity; the drawback that the number of task executors must remain fixed after the continuous integration system is started is avoided, and the problem of fixed continuous integration processing capacity is solved; moreover, no additional cluster management tool is needed to manage the plurality of task executors, which simplifies the connection process and reduces the cost of connection management.
Drawings
Fig. 1 is a schematic structural diagram of an integrated system in a master-slave mode in a related art scheme;
fig. 2 is a schematic structural diagram of an integrated system in another master-slave mode provided in the related art;
fig. 3 is a schematic flowchart of an adaptive scheduling method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an integrated system according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another adaptive scheduling method according to an embodiment of the present application;
fig. 6 is a schematic flowchart of another adaptive scheduling method according to an embodiment of the present application;
fig. 7 is a schematic flowchart of another adaptive scheduling method according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a self-adaptive scheduling apparatus according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of another adaptive scheduling apparatus provided in an embodiment of the present application;
fig. 10 is an example of a specific hardware structure of an adaptive scheduling apparatus provided in an embodiment of the present application;
fig. 11 is a schematic structural diagram of a system according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Jenkins is an open-source continuous integration system through which large-scale compiling, testing and release of projects and code can be performed, bringing great convenience to software development teams. Jenkins provides an open, easy-to-use software platform that makes continuous integration of software possible.
In the development process on the Jenkins platform, a master node and a slave node are generally combined to complete the corresponding tasks, where the master node is mainly responsible for control work, such as managing the slave node and distributing tasks to it. In practical use, when facing large-scale continuous integration requirements, a continuous integration structure with a single master node, or with a single master node and a single attached slave (that is, one master corresponding to only one slave), can hardly meet the service requirements, so the structure of the continuous integration environment generally becomes distributed and clustered.
Referring to fig. 1, which shows a schematic structural diagram of an integrated system in a master-slave mode provided in a related technical solution: as shown in fig. 1, when the integration environment is implemented in the master-slave mode, the master node is the control node of the Jenkins system, the slave node is the task node of the Jenkins system, and the number of slaves in the slave node may be increased or decreased according to requirements. In addition, some developers choose to use Docker instead of the slave node; referring to fig. 2, which shows a schematic structural diagram of an integrated system in another master-slave mode provided in a related technical solution, as shown in fig. 2, the master node still serves as the control node, the slave node is replaced with Docker containers, and each Docker container is connected to the master through JNLP or SSH, so that stable environment isolation can be achieved and different continuous integration tasks can be executed in parallel. However, these integrated systems all have the following two disadvantages:
(1) the continuous integration environment is relatively fixed; for a fixed integrated system, after configuration is finished and the system is started, the number of slaves cannot be increased or decreased during operation; therefore, in the face of different continuous integration task requirements, each node must be configured separately;
(2) once a failure or error occurs in a slave node of this structure, the slave nodes need to be checked one by one and repaired manually, which wastes a lot of time.
Docker is an open-source container application engine that enables developers to package an application and its dependency packages into a portable image and then publish the image to any popular environment; it can also implement virtualization. Docker containers use the sandbox mechanism entirely and have no interfaces between each other. The application scenarios of Docker mainly include the following four categories: (1) automatic packaging and publishing of web applications; (2) automated testing and continuous integration and release; (3) deploying and adjusting databases or other background applications in a service-oriented environment; (4) building a separate PaaS environment by compiling from scratch or extending existing platforms such as OpenShift or Cloud Foundry, both of which are open platforms for open-source developers.
Based on this, an embodiment of the application provides an adaptive scheduling method, which includes: acquiring a task to be processed from a task list; sequentially judging, according to the task to be processed, whether a plurality of task executors included in an integrated module are in an idle state; when the plurality of task executors are all in a non-idle state, expanding the number of task executors in the integrated module; and scheduling a newly added task executor from the expanded integrated module and controlling the newly added task executor to execute the task to be processed. In this way, the number of task executors in the integrated module can be adaptively adjusted according to the service scale in the task list, thereby providing a dynamically scalable continuous integration environment and improving concurrent task processing capacity; the drawback that the number of task executors must remain fixed after the continuous integration system is started is avoided, and the problem of fixed continuous integration processing capacity is solved; moreover, no additional cluster management tool is needed to manage the plurality of task executors, which simplifies the connection process and reduces the cost of connection management.
In an embodiment of the present application, referring to fig. 3, a flowchart of an adaptive scheduling method provided in an embodiment of the present application is shown, and as shown in fig. 3, the method may include:
S101: acquiring a task to be processed in a task list;
it should be noted that, in order to meet large-scale business requirements, the structure of the continuously integrated system is distributed and clustered. In a persistent integration system in a master-slave mode, there are a master node for completing a control task and a slave node for completing a specific service. In this embodiment, each specific slave is containerized to obtain a task executor, and the whole of these multiple task executors is containerized to an integrated module. That is, in this embodiment, the original separate slave is replaced with the task executor, but compared with the master node, the master node is directly connected and managed by the integrated module instead of the specific multiple task executors, so the control task of the master node is not more complicated, but the multiple task executors can more efficiently complete the assigned task through the adaptive scheduling method. Meanwhile, because the plurality of task executors finish scheduling by the self-adaptive scheduling method of the embodiment, a complex cluster management tool is not needed, and the problem of combination of the cluster management tool and a master node is not needed to be considered, so that the complexity of the system is reduced.
It should be noted that the persistent integration environment exists for the purpose of completing assigned tasks, and information for recording these tasks forms a task list. For a task list, the tasks therein are ordered. Generally, the tasks in the task list are sorted according to the received time, that is, the task received first is arranged at the front of the task list, and the task received later is arranged at the rear of the task list. Specifically, the received tasks are assigned when written into the task list, so as to identify and sort the tasks.
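For illustration only, the following minimal Python sketch shows one possible way to keep such an arrival-ordered task list in which every received task is assigned an identifying value on insertion; the class and field names are assumptions and are not prescribed by the embodiment.

```python
# Illustrative sketch (not part of the original disclosure): a first-in-first-out
# task list that assigns each received task an identifying value when it is written in.
from collections import deque
from dataclasses import dataclass, field
from itertools import count

_task_ids = count(0)  # monotonically increasing identifiers for received tasks

@dataclass
class Task:
    payload: str                                        # e.g. a build/test job description
    task_id: int = field(default_factory=lambda: next(_task_ids))

class TaskList:
    """Tasks received earlier sit nearer the front of the list."""
    def __init__(self):
        self._queue = deque()

    def append(self, payload: str) -> Task:
        task = Task(payload)                             # assigned a value (task_id) on insertion
        self._queue.append(task)
        return task

    def has_pending(self) -> bool:
        return bool(self._queue)

    def next_task(self) -> Task:
        return self._queue.popleft()                     # taken out in arrival order
```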
S102: sequentially judging whether a plurality of task executors included in the integrated module are in an idle state or not according to the tasks to be processed;
It should be noted that, according to the task to be processed, it is necessary to judge in sequence whether the plurality of task executors included in the integrated module are in an idle state. Here, "in sequence" means in a preset order, most commonly in ascending order of the task executors' numbers, where the numbers are generally generated when the task executors are created.
It should also be noted that "in sequence" implies that, after each task executor is judged, it must be considered whether the judgment result satisfies the condition for the next step; if so, the next step is performed instead of continuing to judge the next task executor.
Further, in some embodiments, the task executor comprises a Docker container.
It should be noted that Docker is an application engine of an open-source container, so that a developer can package an application and a dependency package into a portable image, and then release the image into any popular environment, and can also implement virtualization. The Docker containers use the sandbox mechanism entirely and do not have any interfaces between each other. The application scenarios of Docker mainly include the following four categories: (1) automatic packaging and releasing of Web application; (2) automatic testing and continuous integration and release; (3) deploying and adjusting databases or other background applications in a service-type environment; (4) an individual PaaS environment is built by compiling or extending existing OpenShift or Cloud foundation platforms from scratch.
In this embodiment, Docker provides a sandbox environment for continuous integration, thereby avoiding manual configuration and repair of the environment; Docker also realizes stable environment isolation, so that different continuous integration tasks can run truly in parallel in different Docker containers.
It should be noted that, in this embodiment, the integrated module includes a plurality of task executors, that is, a plurality of Docker containers, and each Docker container is used for executing specific service requirements. The integrated module is effectively equivalent to the single directly attached slave node in the prior art, and may therefore also be called a Slave Docker, inside which a plurality of independent task executors, namely Docker0, Docker1, and so on, are arranged.
Referring to fig. 4, which shows a schematic structural diagram of an integrated system in an embodiment of the present application: as shown in fig. 4, the adaptive scheduling system at least includes a master host (corresponding to the master node) and a Slave Docker (i.e., the integrated module). The master node is implemented by the master host, that is, an ordinary physical host running Jenkins, and may also be a containerized master with a Jenkins + Docker structure. The Slave Docker is directly connected to the master host, which means the master host does not need to manage each task executor individually, so the connection operation is simplified and the cost of connection management is reduced.
In addition, the Slave Docker generally occupies one physical host exclusively and is itself Docker containerized; meanwhile, the Slave Docker internally comprises a plurality of independent, Docker-containerized task executors, namely Docker0, Docker1, and so on.
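As a hypothetical sketch of what starting one containerized task executor could look like, the following uses the Python "docker" SDK; the image name, environment variables and connection method are assumptions for illustration and are not prescribed by this embodiment.

```python
# Hypothetical sketch: launching one task-executor container with the Python docker SDK.
# The JNLP agent image and its environment variables are assumptions, not part of the patent.
import docker

def start_executor(index: int, jenkins_url: str, agent_secret: str):
    client = docker.from_env()
    return client.containers.run(
        "jenkins/inbound-agent",                 # assumed agent image
        name=f"docker{index}",                   # Docker0, Docker1, ...
        detach=True,
        environment={
            "JENKINS_URL": jenkins_url,
            "JENKINS_SECRET": agent_secret,
            "JENKINS_AGENT_NAME": f"docker{index}",
        },
    )
```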
Further, in some embodiments, before step S101, the method may further include:
initializing the integrated module such that the integrated module includes at least one task executor.
It should be noted that, for the continuous integration system, the amount of tasks it will receive cannot be determined when it is first set up. If many task executors are started at setup time, processing capacity is wasted and extra hardware is consumed when task demand is low; if only a few task executors are started, processing capacity will be insufficient when task demand is high, and tasks cannot be completed in time. The adaptive scheduling method in this embodiment ensures that the number of task executors can be adaptively increased or decreased relative to the task demand during operation, that is, the number of task executors in the integrated module changes; therefore, the integrated module may initially be initialized with only one task executor, and the number of task executors is then adjusted according to the service scale, which both reduces the waste of processing capacity and increases the maximum number of concurrently processed tasks. Of course, when the integrated module is initialized, another number of task executors, such as two or three, may also be used. Meanwhile, when the integrated module is initialized, two parameters need to be set, namely a query variable and the number of task executors, so that the task executors can be managed subsequently. Generally, the query variable is represented by i, which also represents the serial number of a task executor in the integrated module; after initialization its value is 0. The number of task executors is represented by n, and i is an integer greater than or equal to 0 and less than n.
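A minimal Python sketch of this initialization might look as follows; the class, attribute and method names are assumptions, and a plain dictionary stands in for a Docker container.

```python
# Minimal sketch (assumed names): initializing the integrated module with one task
# executor, a query variable i = 0 and an executor count n = 1.
class IntegratedModule:
    def __init__(self, initial_executors: int = 1):
        self.executors = []
        self.i = 0                                # query variable / executor serial number
        self.n = 0                                # number of task executors
        for _ in range(initial_executors):
            self.add_executor()

    def add_executor(self) -> dict:
        # In the embodiments each executor is a Docker container; a dict with an
        # idle flag stands in for it here.
        executor = {"index": self.n, "idle": True, "task": None}
        self.executors.append(executor)
        self.n += 1                               # expanding the module increments n
        return executor

module = IntegratedModule()                       # after initialization: Docker0 only, i = 0, n = 1
```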
S103: when the plurality of task executors are all in a non-idle state, the number of the task executors in the integrated module is expanded;
It should be noted that, if all task executors in the integrated module are in a non-idle state, the current number of task executors is not enough to meet the existing task demand, so the integrated module needs to be expanded, that is, the number of task executors needs to be increased. Meanwhile, because Docker containers use a sandbox mechanism and are completely independent, creating a new Docker container is an operation that is easy to complete.
Further, in some embodiments, step S103 may specifically include:
establishing a new task executor in the integrated module, and adding one to the recorded number of task executors in the integrated module, so as to expand the number of task executors included in the integrated module.
It should be noted that, after the newly added task executor is established in the integrated module, the recorded number of task executors in the integrated module needs to be updated; otherwise, the newly added task executor cannot be managed correctly afterwards.
S104: and scheduling a newly added task executor from the expanded integrated module, and controlling the newly added task executor to execute the task to be processed.
It should be noted that, since the newly added task executor is necessarily in an idle state, it can be scheduled from the integrated module as soon as it has been established, and the task to be processed is assigned to it for completion, which ends the task assignment.
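Under the assumptions of the IntegratedModule sketch above, steps S101 to S104 could be sketched as follows: the executors are checked in number order, the module is expanded by one executor when all are busy, and the pending task is handed to the newly added executor. Function and field names are illustrative only.

```python
# Sketch of S101-S104, assuming the IntegratedModule sketch shown earlier.
def dispatch(module: "IntegratedModule", task) -> dict:
    for executor in module.executors:             # S102: judge the executors in sequence
        if executor["idle"]:
            executor["idle"], executor["task"] = False, task
            return executor                       # an idle executor takes over the task
    new_executor = module.add_executor()          # S103: all busy -> expand by one executor
    new_executor["idle"] = False                  # S104: schedule the newly added executor
    new_executor["task"] = task
    return new_executor
```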
In practical use, although some developers choose to replace the slave node with Docker, as shown in fig. 2, in that case only the slaves are replaced with Docker containers. As a result, the plurality of slaves originally controlled by the master node become a plurality of Docker containers; these Docker containers are not integrated, the complexity of master-node control increases, and a separate cluster management tool such as Swarm or Kubernetes is still required. In addition, in the prior art the topology of the Docker containers has also been changed, for example by containerizing the master node; continuous integration tasks can still be completed after such a conversion, but dynamic expansion and contraction cannot be realized.
These containerized Jenkins continuous integration systems do not, in essence, optimize scheduling for the continuous integration requirements of large-scale projects and code, nor do they adopt a specific algorithm to improve the efficiency of continuous-integration parallel processing. Specifically, they have the following defects:
(1) an additional cluster management tool, such as Swarm or Kubernetes, is required to manage the clustered slave nodes, which adds extra management resources and deployment costs;
(2) the number of slave nodes is preset and fixed, so the processing capacity is relatively fixed; if too many containers are started to process the continuous integration tasks relative to the concurrent task demand, resources are wasted to a certain extent;
(3) if too few containers are started, they cannot sufficiently improve the efficiency of parallel processing in the face of larger-scale task requirements, and the processing capacity is insufficient.
In the present application, each specific task node of the continuous integration system is Docker containerized to form a task executor, and the task executors as a whole are containerized again to form a Slave Docker node; that is, the Slave Docker node is a distributed, re-containerized arrangement of Docker0, Docker1, and so on. Therefore, the master node is still connected to a single Slave Docker node, which simplifies the connection operation, reduces the cost of connection management, and removes the need for an additional cluster management tool. The Slave Docker node performs internal Docker scheduling through the adaptive algorithm described in this embodiment, adjusting the number of task executors according to the perceived service scale. The concurrent processing capacity of the continuous integration platform is thereby maximized, and the scheme has the following notable advantages:
(1) compared with the relatively fixed processing capacity of containerized continuous integration, the present application provides dynamic processing capacity; compared with existing methods in the industry, this mechanism allows the respective processing steps to be invoked flexibly and efficiently;
(2) maximum adaptive processing efficiency according to the processing demand is achieved through an efficient scheduling algorithm;
(3) because the number of task executors is adjusted flexibly according to the processing demand and task executors in the idle state can be closed, when the Slave Docker node fails, the fault only needs to be located and repaired among the Docker containers that are currently open, instead of checking all Docker containers one by one, which improves fault-repair efficiency.
This embodiment provides an adaptive scheduling method, which includes: acquiring a task to be processed from a task list; sequentially judging, according to the task to be processed, whether a plurality of task executors included in an integrated module are in an idle state; when the plurality of task executors are all in a non-idle state, expanding the number of task executors in the integrated module; and scheduling a newly added task executor from the expanded integrated module and controlling the newly added task executor to execute the task to be processed. In this way, the integrated module can adaptively adjust the number of task executors according to the service scale in the task list, thereby providing a dynamically scalable continuous integration environment and improving concurrent task processing capacity; the drawback that the number of task executors must remain fixed after the continuous integration system is started is avoided, and the problem of fixed continuous integration processing capacity is solved; moreover, no additional cluster management tool is needed to manage the plurality of task executors, which simplifies the connection process and reduces the cost of connection management.
In another embodiment of the present application, please refer to fig. 5, which shows a flowchart of another adaptive scheduling method provided in the embodiment of the present application, where the adaptive scheduling method may include:
S201: judging whether the ith task executor in the integrated module is in an idle state or not;
In the present embodiment, the determination as to whether a task executor is in the idle state is performed "in sequence" based on the executors' numbers; i is the number identifying a task executor, i is an integer greater than or equal to 0 and smaller than n, and n represents the number of task executors.
It should be further noted that, by judging the task executors in the integrated module one by one, this step corresponds to the notion of "in sequence" in the previous embodiment. When the ith task executor is in a non-idle state, it has already been called, and the next task executor needs to be judged.
Thus, for step S201, if the determination result is no, indicating that the ith task executor in the integration module is in a non-idle state, step S202 may be executed; if the determination result is yes, it indicates that the ith task executor in the integration module is in an idle state, then steps S203 and S204 may be executed.
S202: when the ith task executor in the integrated module is in a non-idle state, setting i to i + 1, and returning to step S201;
Here, if the ith task executor is in a non-idle state, the query variable is incremented by one, and the process returns to step S201 to query the next task executor.
S203: when the ith task executor in the integrated module is in an idle state, calling the ith task executor from the integrated module;
here, if the ith task executor is in an idle state, the ith task executor may be called to execute the task to be processed.
S204: controlling the ith task executor to execute the task to be processed, and ending the judgment of whether the task executors in the integrated module that have not yet been judged are in an idle state.
It should be noted that, if there is a task executor in the idle state in the integrated module, the current number of task executors is sufficient to meet the existing task demand, and the task to be processed can be assigned directly to that idle task executor without increasing the number of task executors. At this point the current task to be processed has been assigned, so there is no need to continue judging whether the remaining task executors are in an idle state, and the current flow can end.
The embodiment of the application provides an adaptive scheduling method and elaborates its specific implementation. With this embodiment, the integrated module can adaptively adjust the number of task executors according to the service scale in the task list, thereby providing a dynamically scalable continuous integration environment and improving concurrent task processing capacity; the drawback that the number of task executors must remain fixed after the continuous integration system is started is avoided, and the problem of fixed continuous integration processing capacity is solved; moreover, no additional cluster management tool is needed to manage the plurality of task executors, which simplifies the connection process and reduces the cost of connection management.
In another embodiment of the present application, referring to fig. 6, a method for adaptive scheduling provided in an embodiment of the present application is shown, and as shown in fig. 6, the method includes:
S301: judging whether a task to be processed exists in the task list or not;
Here, with respect to step S301, if the determination result is no, step S302 is performed; if the determination result is yes, step S304 is performed.
It should be noted that, for the task list, there may or may not be a task to be processed, and different handling is required in the two situations. In addition, step S301 is generally designed as a timed task, to prevent the flow from being started too frequently and causing excessive processing pressure.
S302: judging whether the Nth task executor in the integrated module is in an idle state or not;
Here, with respect to step S302, if the determination result is yes, step S303 is performed.
In step S302, the generation time of the Nth task executor is later than the generation time of the task executors in the integrated module other than the Nth task executor, where N is a positive integer greater than 1;
It should be noted that, when considering whether a task executor needs to be closed, what is mainly considered is whether the task executor with the latest creation time needs to be closed; generally, the task executor with the latest creation time is determined according to its preset number, that is, the generation time of the Nth task executor is later than that of the task executors in the integrated module other than the Nth task executor.
As for why only the task executor with the latest creation time is considered, the reasons are as follows: the task executors all have numbers for identification, and generally, owing to the characteristics of the programming language, the intervals between the numbers are fixed. For example, four task executors may be numbered 1, 2, 3 and 4; when the numbers are used, only an increment of 1 is needed to find the next target, so interruptions in the numbering rarely occur. Therefore, when considering the reduction of task executors, in order to reduce complexity, only the working state of the last task executor is detected, which avoids the problem of interrupted numbering. For example, among task executors with preset numbers 1, 2, 3 and 4, only the task executor numbered 4 is taken as the target task executor, and its working state is then obtained to determine whether it needs to be closed. If the working state of the task executor numbered 2 were also judged and that executor were closed, an interruption in the numbering would occur, and the program might crash.
It should be noted that whether to close the Nth task executor is determined by judging the working state of the Nth task executor. If the Nth task executor is not in the idle state, the flow can end directly without closing it.
S303: closing the Nth task executor, and decrementing the recorded number of task executors by one, so as to reduce the number of task executors included in the integrated module;
It should be noted that, if the Nth task executor is in the idle state, it can be closed and the number of task executors reduced by one, so that the number of task executors in the integrated module corresponds in real time to the current service demand.
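A sketch of this scale-down branch, again under the assumptions of the IntegratedModule sketch shown earlier, might look as follows; keeping at least one executor alive is an added assumption and not stated by the embodiment.

```python
# Sketch of S302/S303, assuming the IntegratedModule sketch shown earlier: when the
# task list is empty, only the most recently created executor is checked and closed if idle.
def shrink_if_idle(module: "IntegratedModule") -> bool:
    if module.n <= 1:                     # assumption: keep at least the initial executor
        return False
    last = module.executors[-1]           # the Nth (most recently created) executor
    if last["idle"]:
        module.executors.pop()            # "close" it; with Docker this would stop
        module.n -= 1                     # and remove the corresponding container
        return True
    return False
```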
S304: acquiring the task to be processed from the task list;
It should be noted that, if there is a task to be processed, the task to be processed can be acquired from the task list, so that a suitable task executor can subsequently be controlled to execute it.
S305: sequentially judging whether a plurality of task executors included in the integrated module are in an idle state or not according to the tasks to be processed;
it should be noted that after the task to be processed is obtained, the states of the task executors in the integrated module are judged one by one to determine whether the number of the task executors needs to be expanded.
S306: when the plurality of task executors are all in a non-idle state, the number of the task executors in the integrated module is expanded;
it should be noted that, when the plurality of task executors are all in the non-idle state, it is indicated that the number of the existing task executors is not enough to meet the task requirement, and therefore the number of the task executors in the integrated module needs to be expanded.
S307: scheduling a newly added task executor from the expanded integrated module, and controlling the newly added task executor to execute the task to be processed.
It should be noted that the to-be-processed task may be executed by using a newly added task executor, and the current adaptive scheduling process is also ended.
Thus, by the adaptive scheduling method described in this embodiment, an adaptively scalable continuous integration scheme is implemented: a continuous integration structure is designed in which the number of Docker containers is dynamically scaled according to the service scale, giving dynamic processing capacity, and a continuous integration structural topology using this dynamic scaling mechanism is provided. The following disadvantages of existing continuous integration environments are thereby overcome:
(1) the ordinary node structure of the distributed master-slave topology cannot realize isolation of the continuous integration environment, and when the node environment of a single slave is damaged, node recovery still has to be carried out manually, which affects the overall execution efficiency of the adaptive scheduling task;
(2) in the conventional continuous integration system shown in fig. 2, although each Docker container simply implements a slave function node with stable environment isolation, an additional cluster management tool has to be added, which increases the extra management cost of the clustered slave nodes, and the following drawbacks exist: (a) either many containers are preset and started to process the adaptive scheduling tasks, in which case too many containers may be started relative to the concurrent task demand, causing a certain waste of resources; (b) or fewer containers are started, in which case parallel processing efficiency cannot be fully improved in the face of large-scale task requirements;
(3) although environment isolation is realized through a Docker-containerized continuous integration system, coordination with a dynamic scheduling tool and a corresponding algorithmic scheduling mechanism are lacking;
(4) the existing continuous integration system lacks the capability of dynamically adjusting the number of slave nodes, that is, there is no mechanism by which the number of Docker containers changes dynamically according to the task volume.
The embodiment of the application provides an adaptive scheduling method and elaborates its specific implementation. With this embodiment, the integrated module can adaptively adjust the number of task executors according to the service scale in the task list, thereby providing a dynamically scalable continuous integration environment and improving concurrent task processing capacity; the drawback that the number of task executors must remain fixed after the continuous integration system is started is avoided, and the problem of fixed continuous integration processing capacity is solved; moreover, no additional cluster management tool is needed to manage the plurality of task executors, which simplifies the connection process and reduces the cost of connection management.
In yet another embodiment of the present application, referring to fig. 7, a flowchart of another adaptive scheduling method provided in the embodiment of the present application is shown, and as shown in fig. 7, the method includes:
S401: initializing a task queue list, initializing a preset Docker0, and setting an iteration variable i to 0 and n to 1;
It should be noted that, when the system is started, the task queue list, the task executor and the iteration variables need to be initialized. The task queue list is used to record the tasks to be allocated; Docker0 is the task executor preset at startup; and the value of n corresponds to the number of task executors in the system, so when the preset Docker0 is initialized, n is initialized to 1, while i, the variable used to query the executors in turn when scheduling tasks, is initialized to 0. That is to say, in this embodiment the integrated service has only one Docker node at the beginning, and there is no need to initialize too many preset nodes; meanwhile, the number of Docker nodes can be dynamically increased and decreased along with the volume of integration services, so as to maximize the concurrent processing capacity of the adaptive scheduling system.
It should be further noted that this embodiment does not require that there be only one preset Docker node at initialization; presetting multiple Docker nodes at initialization also falls within the protection scope of this application. For example, if 2 Docker nodes are preset, S401 may include:
initializing the task queue list, initializing the preset Docker0 and the preset Docker1, and setting the iteration variable i to 0 and n to 2.
S402: judging whether a new task exists;
Here, with respect to step S402, if the determination result is yes, step S403 is performed; if the determination result is no, step S409 is performed;
It should be noted that the determination of whether there is a new task may be set up as a timed task, or a monitoring mechanism may be adopted; that is, either it is checked at fixed intervals whether there is a new task, or new-task events are monitored.
S403: inserting the new task into a task queue list and assigning a value;
it should be noted that, after a new task is detected, the new task is inserted into the task queue list, and the insertion is generally performed in a first-come-last order, and the purpose of the assignment is to identify and refer to a different new task.
S404: taking out the next task to be processed in the list queue;
it should be noted that the to-be-processed tasks are generally head-of-list tasks of the list queue, that is, the to-be-processed tasks are sequentially fetched according to a certain order.
It should be noted that step S404 is performed continuously, and after the task to be processed is allocated, the next task to be processed needs to be taken out again until all tasks in the list queue are allocated after step S404 is executed each time.
S405: judging whether Docker(0+i) is idle;
Here, with respect to step S405, if the determination result is yes, step S406 is performed; if the determination result is no, step S407 is performed;
At this point, the states of the plurality of Docker containers are judged in sequence according to their numbers, to determine whether the number of task executors needs to be expanded.
S406: allocating the task to be processed to Docker(0+i), and resetting i to 0;
It should be noted that, once a Docker container in the idle state is found, the task to be processed can be allocated to it, and the allocation of this task ends.
Since step S404 is performed continuously, step S406 represents only the end of the present flow as a whole.
S407: setting i to i + 1, and judging whether Docker(0+i) exists;
Here, with respect to step S407, if the determination result is yes, step S405 is executed; if the determination result is no, step S408 is executed;
It should be noted that, if the current Docker(0+i) is not in the idle state, the working state of the next Docker container is judged; however, the current Docker(0+i) may be the last Docker container in the system, in which case the next one does not exist, so before judging the working state of the next Docker container it must first be determined whether it exists.
S408: creating a new Docker(0+i), allocating the task to be processed to it, and resetting i to 0;
It should be noted that, if the next Docker container does not exist, the current number of Docker containers is not enough to cope with the current task volume, and the number of Docker containers needs to be increased. Specifically, a new Docker container is created and the task to be processed is allocated to it; it is implied here that n also needs to be increased by 1. Similarly, S408 represents the end of the current flow.
It should be noted that, in step S406 and step S408, after the task to be processed is allocated to the task executor to be completed, the next task to be processed needs to be taken out from the task queue list to continue allocation until there is no task to be processed in the task queue list.
S409: judging whether Docker n-1 is idle or not;
here, for step S409, if the determination result is yes, step S410 may be performed; if the judgment result is negative, returning to execute the step S402;
it should be noted that, if there is no task to be processed, whether to reduce the number of dockers needs to be considered, and specifically, whether to close Docker n-1 needs to be considered.
S410: the Docker n-1 is turned off, n is made equal to n-1, and the process returns to step S402.
It should be noted that, if Docker n-1 is in the idle state, in order to avoid the waste of the processing amount, it may be turned off, and the process returns to step S402.
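The following Python sketch ties the flow of fig. 7 (S401-S410) together under stated assumptions: plain dictionaries stand in for Docker containers, `fetch_new_tasks` is a placeholder for task detection, and freeing executors after task completion is omitted for brevity; none of these names come from the patent.

```python
# Illustrative sketch of the S401-S410 flow; dicts stand in for Docker containers.
import time
from collections import deque

def run_scheduler(poll_interval: float = 1.0):
    task_queue = deque()                           # S401: initialize the task queue list
    dockers = [{"idle": True}]                     # S401: preset Docker0
    i, n = 0, 1

    def fetch_new_tasks():
        return []                                  # placeholder for new-task detection (S402)

    while True:
        new_tasks = fetch_new_tasks()              # S402: is there a new task?
        if new_tasks:
            task_queue.extend(new_tasks)           # S403: insert into the queue and assign values
            while task_queue:
                task = task_queue.popleft()        # S404: take out the next pending task
                i = 0
                while True:
                    if dockers[i]["idle"]:         # S405: is Docker(0+i) idle?
                        dockers[i]["idle"] = False # S406: allocate the task, reset i
                        i = 0
                        break
                    i += 1                         # S407: advance; does Docker(0+i) exist?
                    if i >= n:
                        dockers.append({"idle": False})  # S408: create it, allocate, n += 1
                        n += 1
                        i = 0
                        break
        else:
            if n > 0 and dockers[n - 1]["idle"]:   # S409: is Docker n-1 idle?
                dockers.pop()                      # S410: close it and set n = n - 1
                n -= 1
        time.sleep(poll_interval)                  # S402 is typically run as a timed check
```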
The embodiment of the application provides an adaptive scheduling method and elaborates its specific implementation. With this embodiment, the integrated module can adaptively adjust the number of task executors according to the service scale in the task list, thereby providing a dynamically scalable continuous integration environment and improving concurrent task processing capacity; the drawback that the number of task executors must remain fixed after the continuous integration system is started is avoided, and the problem of fixed continuous integration processing capacity is solved; moreover, no additional cluster management tool is needed to manage the plurality of task executors, which simplifies the connection process and reduces the cost of connection management.
In yet another embodiment of the present application, referring to fig. 8, which shows a schematic structural diagram of the composition of an adaptive scheduling apparatus 50 provided in an embodiment of the present application: as shown in fig. 8, the adaptive scheduling apparatus 50 includes an obtaining unit 501, a judging unit 502, an expansion unit 503, and a scheduling unit 504; wherein,
an obtaining unit 501 configured to obtain a task to be processed in a task list;
a judging unit 502 configured to sequentially judge whether a plurality of task executors included in an integrated module are in an idle state according to the task to be processed;
an expansion unit 503, configured to expand the number of task executors in the integrated module when the plurality of task executors are all in a non-idle state;
the scheduling unit 504 is configured to schedule a newly added task executor from the extended integrated module, and control the newly added task executor to execute the to-be-processed task.
In the above embodiment, the obtaining unit 501 is specifically configured to determine whether a task to be processed exists in a task list; and if the task list has the task to be processed, acquiring the task to be processed from the task list.
Referring to fig. 9, which shows a schematic structural diagram of another adaptive scheduling apparatus 50 provided in the embodiment of the present application: as shown in fig. 9, on the basis of the foregoing embodiment, the adaptive scheduling apparatus 50 further includes a reducing unit 505, configured to determine, if there is no task to be processed in the task list, whether the Nth task executor in the integrated module is in an idle state, where the generation time of the Nth task executor is later than that of the task executors in the integrated module other than the Nth task executor, and N is a positive integer greater than 1; and, if the Nth task executor in the integrated module is in an idle state, to close the Nth task executor and decrement the recorded number of task executors by one, so as to reduce the number of task executors included in the integrated module.
In the above embodiment, the expansion unit 503 is specifically configured to establish a new task executor in the integrated module and add one to the recorded number of task executors, so as to expand the number of task executors included in the integrated module.
As shown in fig. 9, on the basis of the foregoing embodiment, the scheduling unit 504 is further configured to call the ith task executor from the integrated module if the ith task executor in the integrated module is in an idle state, where i represents the number of the task executor and is an integer greater than or equal to 0; and to control the ith task executor to execute the task to be processed and end the judgment of whether the task executors in the integrated module that have not yet been judged are in an idle state.
In the above embodiment, the adaptive scheduling apparatus 50 further includes an initialization unit 506 configured to initialize the integrated module, so that the integrated module includes at least one task executor.
In the above embodiment, the task executor comprises a Docker container.
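For illustration only, the four units described above could be sketched as methods of a single Python class, reusing the hypothetical IntegratedModule and TaskList sketches shown earlier; the class and method names are assumptions, and the judging unit is reduced here to a single all-busy check.

```python
# Illustrative sketch only: the units of the adaptive scheduling apparatus as methods
# of one class, assuming the IntegratedModule and TaskList sketches shown earlier.
class AdaptiveScheduler:
    def __init__(self, module: "IntegratedModule", task_list: "TaskList"):
        self.module, self.task_list = module, task_list

    def obtain(self):                        # obtaining unit 501
        return self.task_list.next_task() if self.task_list.has_pending() else None

    def all_busy(self) -> bool:              # judging unit 502 (simplified stand-in)
        return all(not e["idle"] for e in self.module.executors)

    def expand(self) -> dict:                # expansion unit 503
        return self.module.add_executor()

    def schedule(self, executor: dict, task):  # scheduling unit 504
        executor["idle"], executor["task"] = False, task
```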
It is understood that in this embodiment, a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc., and may also be a module, or may also be non-modular. Moreover, each component in the embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
Based on this understanding, the technical solution of this embodiment, in essence or in the part that contributes to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method of this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Accordingly, the present embodiments provide a computer storage medium storing an adaptive scheduling program that, when executed by at least one processor, implements the steps of the method of any of the preceding embodiments.
Based on the composition of the adaptive scheduling apparatus 50 and the computer storage medium described above, referring to fig. 10, it shows an example of a specific hardware structure of the adaptive scheduling apparatus 50 provided in the embodiment of the present application, which may include: a communication interface 601, a memory 602, and a processor 603; the various components are coupled together by a bus system 604. It is understood that the bus system 604 is used to enable connection and communication among these components. In addition to a data bus, the bus system 604 includes a power bus, a control bus, and a status signal bus; for clarity of illustration, however, the various buses are all labeled as the bus system 604 in fig. 10. The communication interface 601 is used for receiving and sending signals while receiving information from and sending information to other external network elements;
a memory 602 for storing a computer program capable of running on the processor 603;
a processor 603 for, when running the computer program, performing:
acquiring a task to be processed in a task list;
sequentially judging whether a plurality of task executors included in the integrated module are in an idle state or not according to the tasks to be processed;
when the plurality of task executors are all in a non-idle state, the number of the task executors in the integrated module is expanded;
and scheduling a newly added task executor from the expanded integrated module, and controlling the newly added task executor to execute the task to be processed.
It will be appreciated that the memory 602 in this embodiment can be either volatile memory or non-volatile memory, or can include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory can be a Random Access Memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 602 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
And the processor 603 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 603. The Processor 603 may be a general purpose Processor, a Digital Signal Processor (DSP), an APPlication Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 602, and the processor 603 reads the information in the memory 602, and performs the steps of the above method in combination with the hardware thereof.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, as another embodiment, the processor 603 is further configured to perform the steps of the method of any of the previous embodiments when running the computer program.
Based on the above-mentioned composition of the adaptive scheduling apparatus 50 and the example of its hardware structure, refer to fig. 11, which shows a schematic diagram of the composition structure of a system 70 provided in an embodiment of the present application. As shown in fig. 11, the system 70 includes at least the adaptive scheduling apparatus 50 of any of the previous embodiments. The system can adaptively adjust the number of task executors in the integrated module according to the service scale in the task list, thereby providing a dynamically scalable continuous integration environment and improving the concurrent task processing capability; it avoids the limitation that the number of task executors must remain fixed after the continuous integration system is started, thereby solving the problem of rigid continuous integration processing capacity; and it avoids introducing an additional cluster management tool to manage the plurality of task executors, simplifying the connection process and reducing the management cost.
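As a companion to the earlier sketch, the following hedged Python fragment illustrates only the scale-down direction of the adaptive adjustment described for the system 70; it continues the earlier sketch (the hypothetical IntegratedModule, TaskExecutor and List definitions from that sketch are assumed) and is not the actual implementation.

def shrink_if_idle(module: IntegratedModule, task_list: List[str]) -> bool:
    """When no task is pending, close the most recently created executor
    if it is idle, so the pool shrinks back to the service scale.
    Returns True if an executor was removed."""
    if task_list:
        return False                      # pending work: do not shrink
    if not module.executors:
        return False                      # nothing to shrink
    newest = module.executors[-1]         # the Nth (most recently created) executor
    if newest.busy:
        return False                      # still working: keep it
    module.executors.pop()                # close it and decrement the count
    print(f"executor {newest.index} closed; pool size is now {len(module.executors)}")
    return True

In a container-based deployment, for example where each task executor comprises a Docker container as in one embodiment, the pop would additionally stop and remove the corresponding container.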
The above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application.
It should be noted that, in the present application, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only of specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. An adaptive scheduling method, the method comprising:
acquiring a task to be processed in a task list;
sequentially judging whether a plurality of task executors included in the integrated module are in an idle state or not according to the tasks to be processed;
when the plurality of task executors are all in a non-idle state, the number of the task executors in the integrated module is expanded;
and scheduling a newly added task executor from the expanded integrated module, and controlling the newly added task executor to execute the task to be processed.
2. The method of claim 1, wherein the obtaining the task to be processed in the task list comprises:
judging whether a task to be processed exists in the task list or not;
and if the task list has the task to be processed, acquiring the task to be processed from the task list.
3. The method of claim 2, wherein after determining whether there are pending tasks in the task list, the method further comprises:
if the task list does not have the task to be processed, judging whether an Nth task executor in the integrated module is in an idle state; the generation time of the Nth task executor is later than that of task executors except the Nth task executor in the integrated module, and N is a positive integer larger than 1;
and if the Nth task executor in the integrated module is in an idle state, closing the Nth task executor and decrementing the number of the task executors in the integrated module by one, so as to shrink the number of the task executors included in the integrated module.
4. The adaptive scheduling method of claim 1, wherein the expanding the number of task executors included in the integrated module comprises:
and establishing new task executors in the integrated module, and adding one to the number of the task executors in the integrated module so as to expand the number of the task executors included in the integrated module.
5. The adaptive scheduling method according to claim 1, wherein the sequentially determining whether the plurality of task executors included in the integrated module are in an idle state comprises:
if the ith task executor in the integrated module is in an idle state, calling the ith task executor from the integrated module; wherein i represents the serial number of the task executor and is an integer greater than or equal to 0;
and controlling the ith task executor to execute the task to be processed, and ending the idle-state judgment for the task executors in the integrated module that have not yet been subjected to the idle-state judgment.
6. The adaptive scheduling method according to any one of claims 1 to 5, wherein before acquiring the task to be processed in the task list, the method further comprises:
initializing the integrated module such that the integrated module includes at least one task executor.
7. The adaptive scheduling method of claim 6, wherein the task executor comprises a Docker container.
8. An adaptive scheduling device, characterized by comprising an acquisition unit, a judgment unit, an expansion unit and a scheduling unit; wherein,
the acquisition unit is configured to acquire the tasks to be processed in the task list;
the judging unit is configured to sequentially judge whether a plurality of task executors included in the integrated module are in an idle state according to the tasks to be processed;
the expansion unit is configured to expand the number of task executors in the integrated module when the plurality of task executors are all in a non-idle state;
and the scheduling unit is configured to schedule a newly added task executor from the expanded integrated module and control the newly added task executor to execute the to-be-processed task.
9. An adaptive scheduling apparatus, comprising a memory and a processor; wherein,
the memory for storing a computer program operable on the processor;
the processor, when executing the computer program, is adapted to perform the steps of the method of any of claims 1 to 7.
10. A computer storage medium storing an adaptive scheduling program, which when executed by at least one processor implements the steps of the method of any one of claims 1 to 7.
11. A system characterized in that it comprises at least an adaptive scheduling device according to claim 8 or 9.
CN202010761677.2A 2020-07-31 2020-07-31 Self-adaptive scheduling method, device, computer storage medium and system Pending CN114064225A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010761677.2A CN114064225A (en) 2020-07-31 2020-07-31 Self-adaptive scheduling method, device, computer storage medium and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010761677.2A CN114064225A (en) 2020-07-31 2020-07-31 Self-adaptive scheduling method, device, computer storage medium and system

Publications (1)

Publication Number Publication Date
CN114064225A true CN114064225A (en) 2022-02-18

Family

ID=80227769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010761677.2A Pending CN114064225A (en) 2020-07-31 2020-07-31 Self-adaptive scheduling method, device, computer storage medium and system

Country Status (1)

Country Link
CN (1) CN114064225A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116647530A (en) * 2023-06-06 2023-08-25 深圳花儿绽放网络科技股份有限公司 Automatic execution system for instant messaging task
CN116647530B (en) * 2023-06-06 2024-09-10 深圳花儿绽放网络科技股份有限公司 Automatic execution system for instant messaging task

Similar Documents

Publication Publication Date Title
CN103593242A (en) Resource sharing control system based on Yarn frame
CN110401700B (en) Model loading method and system, control node and execution node
CN103186404B (en) System firmware update method and the server system using the method
Ahmadinia et al. Task scheduling for heterogeneous reconfigurable computers
CN113127150A (en) Rapid deployment method and device of cloud native system, electronic equipment and storage medium
US20220188138A1 (en) Saving and restoring pre-provisioned virtual machine states
CN113190282A (en) Android operating environment construction method and device
CN104932933A (en) Spin lock acquisition method and apparatus
CN116521209B (en) Upgrading method and device of operating system, storage medium and electronic equipment
CN113760543A (en) Resource management method and device, electronic equipment and computer readable storage medium
CN114064225A (en) Self-adaptive scheduling method, device, computer storage medium and system
CN115033337A (en) Virtual machine memory migration method, device, equipment and storage medium
US20140325516A1 (en) Device for accelerating the execution of a c system simulation
EP2541404B1 (en) Technique for task sequence execution
CN115469912B (en) Heterogeneous real-time information processing system design method
CN104281587A (en) Connection establishing method and device
WO2022237419A1 (en) Task execution method and apparatus, and storage medium
US8666521B2 (en) Method for operating an automation system
WO2018188959A1 (en) Method and apparatus for managing events in a network that adopts event-driven programming framework
US10783291B2 (en) Hybrid performance of electronic design automation (EDA) procedures with delayed acquisition of remote resources
CN112817717A (en) Scheduling method and device for timing task
CN112527760A (en) Data storage method, device, server and medium
CN110737533A (en) task scheduling method and device, electronic equipment and storage medium
US20100223596A1 (en) Data processing device and method
CN115858134B (en) Method and device for controlling multitasking resources of solid state disk

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination