CN115061792A - Task processing method and device, electronic equipment and storage medium - Google Patents

Task processing method and device, electronic equipment and storage medium

Info

Publication number
CN115061792A
Authority
CN
China
Prior art keywords
task
executed
execution unit
sending
task execution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210662879.0A
Other languages
Chinese (zh)
Inventor
陈筑
王濡瑶
叶伟伟
解继刚
王建
王越
李扬
李杨
时斌
王彦博
桂宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Dalian Power Plant of Huaneng International Power Co Ltd
Original Assignee
Central South University
Dalian Power Plant of Huaneng International Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University, Dalian Power Plant of Huaneng International Power Co Ltd filed Critical Central South University
Priority to CN202210662879.0A
Publication of CN115061792A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 11/00: Error detection; error correction; monitoring
    • G06F 11/30: Monitoring
    • G06F 11/32: Monitoring with visual or acoustical indication of the functioning of the machine
    • G06F 11/324: Display of status information
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/20: Design optimisation, verification or simulation


Abstract

The invention provides a task processing method comprising the following steps: acquiring a task to be executed, wherein the task to be executed is a timing task packed from a trained model; sending the task to be executed to the corresponding task execution unit according to a preset rule, so that the task execution unit executes the task to be executed and obtains a task result; and sending out alarm information when the task result meets a preset alarm condition. Because trained models are packed into timing tasks, model creation and model deployment can be separated, which reduces the coupling between modeling and deployed early warning. The timing tasks are sent to the corresponding task execution units according to the preset rule for execution and the task results are obtained; when a task result meets the preset alarm condition, alarm information is sent out. Hardware resources can thus be fully utilized and the requirements on operating equipment are reduced. Deploying a model requires no modeling knowledge, so ordinary workers can operate the system, which improves the efficiency of model deployment while saving labor cost.

Description

Task processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of task management technologies, and in particular, to a task processing method and apparatus, an electronic device, and a storage medium.
Background
In recent years, as the degree of informatization of thermal power plants has continually improved, digital instrumentation has replaced traditional mechanical instrumentation, and systems such as DCS, SIS and ERP have become widespread in power plants. These systems record the operation of plant equipment and the actions of operators, which has positive guiding significance for unit operation, fault diagnosis and condition monitoring. The thermal power industry involves a large number of operating parameters, and many models must be deployed to capture the complex relationships and degrees of association among variables. Model performance degrades rapidly over time, so the models must be updated manually and periodically, which makes model management labor-intensive; running many models in parallel also places extremely high demands on the hardware of the operating platform. Moreover, because model creation and model deployment are highly coupled, deploying a model usually requires considerable modeling knowledge, so model deployment efficiency is low.
Disclosure of Invention
The embodiment of the invention provides a task processing method, aiming at solving the problem that existing model deployment requires considerable modeling knowledge and is therefore inefficient. Trained models are packed into timing tasks, so model creation and deployment can be separated and the coupling between modeling and deployed early warning is reduced. The timing tasks are sent to the corresponding task execution units according to preset rules for execution and task results are obtained; when a task result meets a preset alarm condition, alarm information is sent out. Hardware resources can be fully utilized and the requirements on operating equipment are reduced. Deploying a model requires no modeling knowledge, so ordinary workers can operate the system, which improves the efficiency of model deployment while saving labor cost.
In a first aspect, an embodiment of the present invention provides a task processing method, including the following steps:
acquiring a task to be executed, wherein the task to be executed is a timing task packed from a trained model;
sending the task to be executed to a corresponding task execution unit according to a preset rule so that the task execution unit executes the task to be executed to obtain a task result;
and sending out alarm information when the task result meets a preset alarm condition.
Optionally, before the step of obtaining the task to be executed, the method further includes:
acquiring the structure and parameters of the trained model, and storing the structure and parameters of the trained model as a structural file;
writing condition information for task execution into the structural file, and packing the structural file into the timing task, wherein the condition information includes a time interval.
Optionally, the step of sending the task to be executed to the corresponding task execution unit according to a preset rule includes:
judging whether the task to be executed designates a task execution unit;
if it does, sending the task to be executed to the corresponding designated task execution unit;
and if it does not, sending the task to be executed to a task execution unit whose resource occupancy rate is lower than a first threshold.
Optionally, the step of sending the task to be executed to the corresponding task execution unit according to the preset rule further includes:
and if the resource occupancy rate of the specified task execution unit corresponding to the current task to be executed is higher than a second threshold value, skipping the sending of the current task to be executed, and sending the current task to be executed to the corresponding specified task execution unit when detecting that the resource occupancy rate of the specified task execution unit corresponding to the current task to be executed is lower than the second threshold value.
Optionally, the step of sending the task to be executed to the corresponding task execution unit according to the preset rule further includes:
and when detecting that the resource occupancy rates of all the task execution units are higher than a third threshold value, stopping the distribution of the tasks to be executed, and restarting the distribution of the tasks to be executed when adding a new task execution unit or the task execution unit with the resource occupancy rate lower than the third threshold value exists.
Optionally, the step of sending the task to be executed to the corresponding task execution unit according to the preset rule further includes:
when a crashed task execution unit is detected, restarting the crashed task execution unit;
if the restart is successful, sending the corresponding task to be executed to the task execution unit after the restart is successful;
and if the restart fails, sending the corresponding task to be executed to other task execution units.
Optionally, when the task result meets a preset alarm condition, the step of sending an alarm message includes:
comparing the task result with real data in a preset memory database to obtain a comparison result;
and sending alarm information when the comparison result meets a preset alarm condition.
In a second aspect, an embodiment of the present invention provides a task processing device, where the task processing device includes:
the first acquisition module is used for acquiring a task to be executed, wherein the task to be executed is a timing task packed from a trained model;
the sending module is used for sending the task to be executed to a corresponding task execution unit according to a preset rule so that the task execution unit executes the task to be executed to obtain a task result;
and the alarm module is used for sending out alarm information when the task result meets the preset alarm condition.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the task processing method provided by the embodiment of the invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the task processing method provided by the embodiment of the present invention.
In the embodiment of the invention, a task to be executed is acquired, wherein the task to be executed is a timing task packed from a trained model; the task to be executed is sent to the corresponding task execution unit according to a preset rule, so that the task execution unit executes it and obtains a task result; and alarm information is sent out when the task result meets a preset alarm condition. Because trained models are packed into timing tasks, model creation and model deployment can be separated, which reduces the coupling between modeling and deployed early warning. The timing tasks are sent to the corresponding task execution units according to the preset rules for execution and the task results are obtained; when a task result meets the preset alarm condition, alarm information is sent out. Hardware resources can thus be fully utilized and the requirements on operating equipment are reduced. Deploying a model requires no modeling knowledge, so ordinary workers can operate the system, which improves the efficiency of model deployment while saving labor cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is an architecture diagram of a task processing system according to an embodiment of the present invention;
FIG. 2 is a flowchart of a task processing method according to an embodiment of the present invention;
FIG. 3 is a block diagram of a task processing device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is an architecture diagram of a task processing system according to an embodiment of the present invention. As shown in fig. 1, the task processing system includes: a task publisher, a task scheduler, task middleware, a memory database, task execution units, and task result storage. In the embodiment of the present invention there are multiple task execution units, which share the memory database and the task middleware.
The task processing system can be a distributed queue management system deployed for automated modeling of a thermal power plant.
The task publisher can be a web application; it packs the trained model into a timing task through the relevant interface of the task scheduler and stores the timing task in the task middleware. The timing task includes a time interval and an expected result. In a possible embodiment, the timing task may further include a designated task execution unit for executing it.
The task scheduler is an independent resident process; note that only one such process can be started per host or per service. By reading the configuration file, the task scheduler periodically sends the timing tasks that are due for execution to the task middleware, so that the task middleware distributes them according to the configuration file. The task scheduler also monitors the running state of the task execution units, in real time or periodically.
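As an illustration only, the scheduler's periodic due-check might look like the following sketch. The task fields (`last_run`, `interval`) and the function name are assumptions made for illustration, not names from the patent.

```python
import time

def due_tasks(tasks, now=None):
    """Return the timing tasks whose time interval has elapsed.

    Hypothetical sketch of the scheduler's periodic due-check; the
    `last_run`/`interval` field names are assumed for illustration.
    """
    now = time.time() if now is None else now
    return [task for task in tasks if now - task["last_run"] >= task["interval"]]
```

Tasks returned by a check like this would then be written to the task middleware for distribution.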
The task middleware comprises a task scheduling queue; after receiving a timing task issued by the task publisher, the task middleware stores it in the task scheduling queue and distributes it to a task execution unit for execution. The task middleware can be deployed on an independent host and transmit information over the local area network.
The task execution unit is the processing unit that actually executes tasks. It monitors the task middleware in real time, and after the task middleware sends a timing task, it obtains from that task the trained model to be used for prediction. Task execution units can read real-time data from the shared memory database for prediction. Running models are managed in prefork mode: several sub-processes are forked when the unit starts and then wait for tasks to be dispatched, which avoids the overhead of repeatedly creating and destroying processes. Task execution units are independent, do not interfere with one another, and can be deployed across multiple hosts simultaneously.
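The prefork behaviour described above can be sketched with Python's standard `multiprocessing.Pool`, which likewise forks its worker sub-processes once and reuses them for every dispatched task. The `run_model` function here is a stand-in for running a pickled model, not code from the patent.

```python
from multiprocessing import Pool

def run_model(task_id):
    # Stand-in for loading a trained model and producing one prediction.
    return task_id * task_id

def prefork_execute(task_ids, workers=2):
    """Fork the worker sub-processes once, then dispatch every task to
    them, avoiding the cost of repeatedly creating and destroying
    processes, which is the essence of the prefork mode above."""
    with Pool(processes=workers) as pool:
        return pool.map(run_model, task_ids)
```

In a real execution unit the pool would stay alive for the unit's lifetime rather than being torn down after one batch.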
The task result storage stores the task results of the timing tasks; a task result is the prediction produced by the trained model in a timing task. After issuing a timing task, the publisher only needs the task execution unit to run the trained model in order to obtain the corresponding task result and real-time predicted value; the publisher does not need to concern itself with the deployment of the trained model or with process management.
The memory database imports real-time data from the SIS system of the thermal power plant and can provide the task execution units with real-time data for every module of the plant, improving the efficiency of the execution units.
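A minimal in-process stand-in for such a shared memory database follows; in practice this role is typically played by an in-memory store such as Redis, and the tag names below are invented for illustration.

```python
class MemoryDatabase:
    """Toy stand-in for the shared memory database that mirrors
    real-time SIS data for the task execution units."""

    def __init__(self):
        self._tags = {}

    def write(self, tag, value):
        self._tags[tag] = value  # import a real-time data point

    def read(self, tag):
        return self._tags[tag]   # serve it to a task execution unit
```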
In the embodiment of the invention, trained models are packed into timing tasks, so model creation and deployment can be separated and the coupling between modeling and deployed early warning is reduced. The timing tasks are sent to the corresponding task execution units according to the preset rules for execution and the task results are obtained; when a task result meets the preset alarm condition, alarm information is sent out. Hardware resources can be fully utilized and the requirements on operating equipment are reduced. Deploying a model requires no modeling knowledge, so ordinary workers can operate the system, which improves the efficiency of model deployment while saving labor cost.
Specifically, referring to fig. 2, fig. 2 is a flowchart of a task processing method according to an embodiment of the present invention. As shown in fig. 2, the task processing method may include the following steps:
201. Acquiring the task to be executed.
In the embodiment of the invention, the task to be executed is a timing task packed from a trained model. A trained model is a model that has been created, trained and tuned, and it can be deployed and used directly.
The trained model includes the model structure and parameters. The trained models are packed into timing tasks so that they can be started at fixed times to process real-time data of the thermal power plant and predict its operating condition. Deploying the trained models in the form of timing tasks also reduces the coupling between the modeling process and the deployment process of a model, so that ordinary workers can carry out the deployment work.
Specifically, the modeling part of a model can be handed to professional staff with relevant modeling knowledge, while the deployment part can be handed to ordinary workers without such knowledge; the modeling process and the deployment process of the model are thereby decoupled.
202. Sending the task to be executed to the corresponding task execution unit according to a preset rule, so that the task execution unit executes the task to be executed and obtains a task result.
In the embodiment of the present invention, the preset rule may send the task according to a designated task execution unit, or according to the resource occupancy rates of the task execution units.
Specifically, when the task to be executed names a designated task execution unit, it is sent to that designated unit; when it does not, it is sent to a task execution unit with a lower resource occupancy rate.
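This dispatch rule can be sketched as follows; the dictionary layout and names are assumptions for illustration, not structures from the patent.

```python
def choose_unit(task, units):
    """Pick a task execution unit for a task: honour a designated unit
    if the task names one, otherwise take the unit with the lowest
    resource occupancy rate.  `units` maps unit name -> occupancy."""
    designated = task.get("designated_unit")
    if designated is not None:
        return designated
    return min(units, key=lambda name: units[name])
```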
After a task execution unit receives the corresponding task to be executed, it obtains the associated timing task and loads it into resources, such as memory and processes, that were applied for in advance, thereby completing the deployment of the trained model.
The task execution unit is the processing unit that actually executes tasks. It monitors the task middleware in real time, and after the task middleware sends a timing task, it obtains from that task the trained model to be used for prediction. Task execution units can read real-time data from the shared memory database for prediction. Running models are managed in prefork mode: several sub-processes are forked when the unit starts and then wait for tasks to be dispatched, which avoids the overhead of repeatedly creating and destroying processes. Task execution units are independent, do not interfere with one another, and can be deployed across multiple hosts simultaneously.
203. Sending out alarm information when the task result meets the preset alarm condition.
In the embodiment of the present invention, the preset alarm condition may be whether the task result is abnormal.
Specifically, after the task execution unit executes the corresponding timing task and obtains the task result, the task result is compared against the preset alarm condition. If the task result meets the preset alarm condition, the task result is abnormal, indicating abnormal operation of the thermal power plant, and alarm information is sent to the task publisher or to the relevant personnel; if the task result does not meet the preset alarm condition, the task result is normal and the thermal power plant is operating normally.
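As a hedged sketch, the comparison step might be implemented as a relative-deviation check; the 5% tolerance is an illustrative assumption, not a value given in the patent.

```python
def alarm_triggered(task_result, real_value, tolerance=0.05):
    """Compare the model's predicted task result with the real-time
    value from the memory database; signal an alarm when the relative
    deviation exceeds the tolerance (the preset alarm condition)."""
    deviation = abs(task_result - real_value) / abs(real_value)
    return deviation > tolerance
```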
In the embodiment of the invention, a task to be executed is acquired, wherein the task to be executed is a timing task packed from a trained model; the task to be executed is sent to the corresponding task execution unit according to a preset rule, so that the task execution unit executes it and obtains a task result; and alarm information is sent out when the task result meets a preset alarm condition. Because trained models are packed into timing tasks, model creation and model deployment can be separated, which reduces the coupling between modeling and deployed early warning. The timing tasks are sent to the corresponding task execution units according to the preset rules for execution and the task results are obtained; when a task result meets the preset alarm condition, alarm information is sent out. Hardware resources can thus be fully utilized and the requirements on operating equipment are reduced. Deploying a model requires no modeling knowledge, so ordinary workers can operate the system, which improves the efficiency of model deployment while saving labor cost.
Optionally, before the step of acquiring the task to be executed, the structure and parameters of the trained model may be acquired and stored as a structural file; condition information for task execution is written into the structural file, and the structural file is packed into the timing task, wherein the condition information includes a time interval.
In an embodiment of the present invention, the trained model may be an algorithm model for big-data analysis of a thermal power plant. Preferably, the trained model is a trained prediction model, which predicts possible future time-series results from input time-series data.
The condition information may include a time interval and an expected result, and may further include a designated task execution unit.
Specifically, the structure, parameters, and the like of the trained model may be saved as a structural file by the task publisher, and the structural file may be a file in a pickle format. The task publisher can write the time interval, the expected result and the designated task execution unit into the structural file to obtain the timing task.
After the timing task is obtained, the task publisher can write the timing task into the task middleware.
More specifically, the task publisher is a web application through which the user publishes tasks. Using the web application, the user saves the structure, parameters and the like of the trained model as a file in pickle format, then writes into it the time interval and expected result of task execution and the designated task execution unit, thereby obtaining the timing task. The timing task is then serialized and written into the shared task middleware.
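The publisher's packing step can be sketched with the standard `pickle` module; the dictionary keys below are assumed names for illustration, not the patent's actual file layout.

```python
import pickle

def pack_timing_task(model, interval_s, expected, designated_unit=None):
    """Bundle a trained model's structure/parameters together with its
    execution conditions, serialised so the result can be written to
    the shared task middleware."""
    task = {
        "model": model,                      # structure and parameters
        "interval": interval_s,              # time interval between runs
        "expected": expected,                # expected result for alarming
        "designated_unit": designated_unit,  # optional designated unit
    }
    return pickle.dumps(task)

def unpack_timing_task(blob):
    # What a task execution unit would do after receiving the task.
    return pickle.loads(blob)
```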
Further, the task middleware may include a task scheduling queue; after receiving a timing task issued by the task publisher, the task middleware stores it in the task scheduling queue and distributes it to a task execution unit for execution. The task middleware can be deployed on an independent host and transmit information over the local area network.
By saving the structure and parameters of the trained model as a structural file and writing in the time interval, the expected result and the designated task execution unit, a timing task containing the trained model is obtained and used as the task to be executed. The deployment of the trained model is thus separated from the modeling process: when deployment is needed, the corresponding task execution unit is called upon to provide hardware resources and execute the timing task. Hardware resources can be fully utilized, the requirements on operating equipment are reduced, deploying a model requires no modeling knowledge so ordinary workers can operate the system, and the efficiency of model deployment is improved while labor cost is saved.
Optionally, in the step of sending the task to be executed to the corresponding task execution unit according to the preset rule, it may be determined whether the task to be executed designates a task execution unit; if it does, the task is sent to the corresponding designated task execution unit; if it does not, the task is sent to a task execution unit whose resource occupancy rate is lower than a first threshold.
In the embodiment of the present invention, the first threshold may be determined from the memory and CPU resources the current task to be executed requires. From these requirements, the resource occupancy a that the current task would impose on a task execution unit can be calculated, where a is a value between 0 and 1. The first threshold is 1 - n·a, where n is a value between 1.0 and 1.2. If the current resource occupancy rate b of a task execution unit is smaller than the first threshold 1 - n·a, that task execution unit has sufficient resources to execute the task to be executed.
Specifically, the task scheduler obtains the tasks to be executed that were written into the task middleware by reading the configuration file. If a task designates a task execution unit, the scheduler assigns it to that designated unit. If it does not, the scheduler assigns a task execution unit with sufficient memory and CPU resources, judged against what the task requires; when several units have sufficient resources, the machine with the lowest occupancy is preferred.
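A sketch of the first-threshold selection follows, with n = 1.1 chosen arbitrarily from the stated 1.0-1.2 range; function and variable names are assumptions for illustration.

```python
def first_threshold(a, n=1.1):
    """First threshold 1 - n*a, where a (between 0 and 1) is the share
    of a unit's resources the current task needs and n is between 1.0
    and 1.2 (n = 1.1 is an illustrative choice)."""
    return 1 - n * a

def eligible_units(occupancy, a, n=1.1):
    """Units whose current occupancy b is below the first threshold,
    lowest-occupancy first, mirroring the scheduler's preference for
    the least-loaded machine."""
    limit = first_threshold(a, n)
    return sorted((u for u, b in occupancy.items() if b < limit),
                  key=lambda u: occupancy[u])
```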
Sending the task to be executed to the corresponding task execution unit according to the preset rule allows a better-suited task execution unit to be matched, improving task efficiency.
Optionally, in the step of sending the task to be executed to the corresponding task execution unit according to the preset rule, if the resource occupancy rate of the designated task execution unit for the current task is higher than a second threshold, sending of the current task may be skipped; when the resource occupancy rate of that designated unit is detected to have fallen below the second threshold, the current task is sent to it.
In the embodiment of the present invention, the second threshold may likewise be determined from the memory and CPU resources the current task requires, giving the resource occupancy a that the task would impose on a task execution unit. The second threshold may be 1 - m·a, where m is a value between 1.0 and 1.1; with m = 1.05, for example, the threshold is 1 - 1.05a. If the current resource occupancy rate b of the designated task execution unit is smaller than the second threshold, the unit has sufficient resources to execute the task to be executed.
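The second-threshold skip rule can be sketched as follows, with m = 1.05 matching the 1 - 1.05a example in the text; the names are assumptions for illustration.

```python
def dispatch_or_skip(occupancy_b, a, m=1.05):
    """Second-threshold rule for a designated unit: dispatch the current
    task only if the unit's occupancy b is below 1 - m*a; otherwise skip
    it (leave it queued) and retry once occupancy has fallen below the
    threshold."""
    return "dispatch" if occupancy_b < 1 - m * a else "skip"
```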
When sending tasks to be executed according to the preset rule, the task scheduler can skip a task whose designated execution unit currently has a high resource occupancy rate and move on to the next task; once the occupancy rate of the designated unit has dropped, the skipped task is matched to it. This prevents a task execution unit from crashing because the tasks it executes occupy too many resources.
Optionally, in the step of sending the task to be executed to the corresponding task execution unit according to the preset rule, when the resource occupancy rates of all task execution units are detected to be higher than a third threshold, allocation of tasks to be executed is stopped; allocation restarts when a new task execution unit is added or a task execution unit with a resource occupancy rate lower than the third threshold appears.
In the embodiment of the present invention, the third threshold may be determined according to the memory and CPU resources required by the current task to be executed. The fraction a of a task execution unit's resources that the current task requires can be calculated from this memory and CPU demand, and the third threshold may be set to 1-a; if the current resource occupancy rate b of a task execution unit is higher than the third threshold 1-a, the remaining resources of that task execution unit are insufficient to execute the task to be executed.
In a possible embodiment, the third threshold may be 0.8: when the resource occupancy rate of a given task execution unit is higher than 80%, task allocation to that unit is stopped, and when the resource occupancy rates of all task execution units are higher than 80%, allocation of tasks to be executed is stopped altogether.
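A minimal sketch of this allocation decision under an assumed third threshold of 0.8 (names are hypothetical): an empty result means every unit is over the cap, so allocation pauses until a unit frees up or a new unit joins.

```python
def units_available(occupancies: list[float], third_threshold: float = 0.8) -> list[int]:
    """Return the indices of units still eligible for task allocation.

    A unit is eligible while its occupancy rate is below the third
    threshold; an empty list means allocation should stop entirely.
    """
    return [i for i, b in enumerate(occupancies) if b < third_threshold]

print(units_available([0.95, 0.6, 0.9]))   # only unit 1 is eligible
print(units_available([0.95, 0.85, 0.9]))  # nobody eligible: pause allocation
```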
In a possible embodiment, the new task execution unit is correlated with the task execution unit for which allocation was stopped: if the designated task execution unit of a task to be executed is a unit for which allocation has been stopped, the new task execution unit becomes the designated task execution unit of that task. The file corresponding to the task to be executed therefore does not change, and the designated task execution unit recorded in the timed task does not need to be modified.
When the task scheduler sends tasks to be executed to the corresponding task execution units according to the preset rule, it can stop allocation while the resource occupancy rate of the designated task execution unit for the current task is too high, which prevents the task execution unit from crashing because the tasks it executes occupy too many resources.
Optionally, in the step of sending the task to be executed to the corresponding task execution unit according to the preset rule, when it is detected that the crashed task execution unit exists, the crashed task execution unit may be restarted; if the restart is successful, sending the corresponding task to be executed to the task execution unit after the restart is successful; and if the restart fails, sending the corresponding task to be executed to other task execution units.
In the embodiment of the invention, restarting a crashed task execution unit in time helps ensure the timeliness of task processing. If the restart fails, the tasks to be executed that correspond to the crashed task execution unit are sent to other task execution units.
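The restart-or-redistribute behavior can be sketched as follows; `restart` is a hypothetical callback standing in for the scheduler's restart instruction, and round-robin redistribution is an assumption (the patent does not specify how the failed unit's tasks are spread over the others):

```python
def handle_crash(unit, pending_tasks, other_units, restart):
    """Restart a crashed unit, or redistribute its tasks if the restart fails.

    Returns a mapping {unit_id: [tasks]} describing where each of the
    crashed unit's pending tasks should be re-sent.
    """
    if restart(unit):
        # Restart succeeded: the unit keeps its own tasks.
        return {unit: list(pending_tasks)}
    # Restart failed: spread the tasks over the other units (round-robin here).
    assignment = {u: [] for u in other_units}
    for i, task in enumerate(pending_tasks):
        assignment[other_units[i % len(other_units)]].append(task)
    return assignment

print(handle_crash("u1", ["t1", "t2", "t3"], ["u2", "u3"], restart=lambda u: False))
```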
Optionally, in the step of sending the warning information when the task result meets the preset warning condition, the task result may be compared with real data in a preset memory database to obtain a comparison result; and when the comparison result meets the preset alarm condition, sending alarm information.
In the embodiment of the invention, the memory database imports real-time data from the SIS (supervisory information system) of the thermal power plant and can provide real-time data of each module of the plant to the task execution units; after a task execution unit executes its corresponding task to be executed, it writes the obtained task result into the task result storage.
The task result storage stores the task results of the timing tasks; a task result is the value predicted by the trained model in the timing task. The task result can be prediction data: the task scheduler periodically compares the real data in the memory database with the prediction data in the task result storage on the relevant indexes, and if the prediction data of a task result exceeds the threshold set by the user, it sends alarm information to the task publisher through the websocket protocol.
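A sketch of this periodic comparison, assuming the "relevant index" is the absolute deviation between predicted and real values (the patent leaves the index user-defined); sending the alarm over websocket is omitted:

```python
def check_alarms(predictions: dict, real_values: dict, threshold: float) -> list:
    """Compare predicted and real values per task; return alarm records
    for every task whose absolute deviation exceeds the user-set threshold."""
    alarms = []
    for task_id, pred in predictions.items():
        real = real_values.get(task_id)
        if real is not None and abs(pred - real) > threshold:
            alarms.append({"task": task_id, "predicted": pred, "real": real})
    return alarms

# Task "b" deviates by 2.0 from the SIS reading, exceeding a threshold of 1.0:
print(check_alarms({"a": 10.0, "b": 5.0}, {"a": 10.2, "b": 7.0}, threshold=1.0))
```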
After issuing the timing task, the publisher only needs the task execution unit to run the trained model to obtain the corresponding task result, i.e. a real-time predicted value; the publisher does not need to attend to the deployment of the trained model or the management of its process, and an alarm is raised in time whenever the prediction data exceeds the user-set threshold.
Optionally, an embodiment of the present invention further provides a task processing method, which specifically includes the following steps:
step one, the web application packs the model prediction into a timing task and then sends the timing task to the task middleware.
The web application program saves the structure, parameters, and the like of the prediction model as a file in the pickle format; it then processes this pickle file by writing into it the time interval for executing the task, the expected result, and the designated task execution unit for the task, thereby turning it into a timing task, and finally serializes the processed file into the shared task middleware.
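A minimal sketch of packing a timing task in the pickle format; the field names are illustrative (not from the patent), and the serialized bytes are returned rather than written into the task middleware:

```python
import pickle

def pack_timing_task(model, interval_s: int, expected, unit_id=None) -> bytes:
    """Serialize the trained model together with its execution metadata:
    the execution interval, the expected result, and the (optional)
    designated task execution unit."""
    return pickle.dumps({
        "model": model,
        "interval_s": interval_s,
        "expected": expected,
        "unit_id": unit_id,
    })

# The model stands in for a real trained predictor here.
task_bytes = pack_timing_task({"weights": [0.1, 0.2]}, interval_s=60,
                              expected="load_forecast")
task = pickle.loads(task_bytes)  # what a task execution unit would recover
```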
And step two, the task scheduler periodically sends timing tasks according to the configuration files and monitors the operation condition of each execution unit.
The task scheduler acquires the timing tasks written into the task middleware by reading the configuration file. The task scheduler assigns each timing task according to whether it designates a task execution unit: if the timing task designates an execution unit, it is sent to that designated task execution unit; otherwise it is preferentially assigned, according to the memory and CPU resources the task requires, to the machine with the lower memory and CPU occupancy rate.

When the CPU or memory of a designated task execution unit cannot support the resource consumption of the running timing task, the task scheduler skips the task, allocates the next timing task, and periodically checks whether the designated task execution unit can now meet the resource consumption of the timing task. When the CPU or memory occupancy of all task execution units reaches more than 80%, the task scheduler stops task allocation and sends alarm information through the websocket protocol; it periodically checks the running condition of the task execution units, and when a new task execution unit is added or an available task execution unit appears, it allocates tasks to the available units.

When a task execution unit crashes, the task scheduler sends it a restart instruction; if the restart succeeds, the unit's timing tasks are sent again, and if the restart fails, the unit's tasks are distributed to other task execution units. When the task scheduler needs to delete or stop a task, the timing task is removed from the task middleware.
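The scheduler's dispatch rules in step two can be condensed into a sketch like the following (an 80% cap and m = 1.05 are assumed from the examples above; all names are hypothetical). `None` means the task is skipped for now, either because its designated unit lacks headroom or because every unit is over the cap:

```python
def dispatch(task: dict, occupancies: dict, m: float = 1.05):
    """Pick a unit id for a timing task per the scheduling rules:
    honour a designated unit if it has headroom, otherwise pick the
    least-loaded unit; return None when the task must wait."""
    if all(b > 0.8 for b in occupancies.values()):
        return None  # all units above the 80% cap: pause allocation
    designated = task.get("unit_id")
    threshold = 1.0 - m * task["needs"]  # the second threshold 1 - m*a
    if designated is not None:
        # Skip this task until the designated unit has headroom again.
        return designated if occupancies[designated] < threshold else None
    # No designated unit: prefer the machine with the lowest occupancy.
    return min(occupancies, key=occupancies.get)

print(dispatch({"needs": 0.2, "unit_id": "u1"}, {"u1": 0.5, "u2": 0.9}))  # u1
print(dispatch({"needs": 0.1}, {"u1": 0.5, "u2": 0.3}))                    # u2
```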
And step three, the task execution unit receives the task instruction sent by the task scheduler and executes the task instruction at regular time.
After the task execution unit receives the task instruction sent by the task scheduler, it obtains the corresponding timing task from the task middleware. The task execution unit checks the timing task obtained from the task middleware: if it is empty, execution of the timing task is stopped; if it is not empty, the timing task is loaded into resources such as memory and processes that the task execution unit has applied for in advance. The task execution unit then executes the task periodically according to the task's time interval. When a task execution unit crashes or disconnects from the task scheduler, it stops executing the task, restarts itself, and attempts to reconnect to the task scheduler. After the task is executed, the task result is written into the task result storage.
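A sketch of the execution-unit loop in step three, with `fetch_task` and `write_result` as hypothetical stand-ins for the task middleware and the task result storage; a fixed number of cycles replaces the unit's open-ended loop:

```python
import time

def run_timing_task(fetch_task, write_result, cycles: int = 3) -> int:
    """Fetch the timing task from middleware; stop if it is empty,
    otherwise run the model at its interval and write each result
    into the task result storage. Returns the number of executions."""
    task = fetch_task()
    if task is None:
        return 0  # empty timing task: stop execution
    runs = 0
    for _ in range(cycles):  # a real unit would loop until stopped or crashed
        result = task["model"](task.get("inputs"))
        write_result(result)                     # task result storage
        runs += 1
        time.sleep(task.get("interval_s", 0))    # wait out the task interval
    return runs

results = []
n = run_timing_task(lambda: {"model": lambda x: 42, "interval_s": 0},
                    results.append)
```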
And step four, the task scheduler periodically compares the real data in the memory database with the prediction data in the task result storage on the relevant indexes, and sends alarm information through the websocket if the relevant indexes of a task exceed the threshold set by the user. The relevant indexes can be set by the user.
And repeating the second step to the fourth step.
In the embodiment of the invention, packing trained models into timing tasks separates model creation from model deployment and reduces the coupling between modeling and the deployment of early warning. The timing tasks are sent to the corresponding task execution units according to the preset rules for execution, the task results are obtained, and alarm information is sent when a task result meets the preset alarm condition. Hardware resources can thus be fully utilized and the requirements on operating equipment reduced; deploying a model requires no modeling knowledge, so ordinary workers can operate it, which improves the efficiency of model deployment while saving labor cost.
It should be noted that the task processing method provided by the embodiment of the present invention can be applied to devices such as smart phones, computers, servers, and the like.
Optionally, referring to fig. 3, fig. 3 is a schematic structural diagram of a task processing device according to an embodiment of the present invention, and as shown in fig. 3, the device includes:
a first obtaining module 301, configured to obtain a task to be executed, where the task to be executed is a timing task packed by a trained model;
a sending module 302, configured to send the task to be executed to a corresponding task execution unit according to a preset rule, so that the task execution unit executes the task to be executed to obtain a task result;
and the alarm module 303 is configured to send alarm information when the task result meets a preset alarm condition.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring the structure and the parameters of the trained model, and the structure and the parameters of the trained model are stored as a structural file;
and the packaging module is used for writing condition information executed by the task into the structural file and packaging the condition information into the timing task, wherein the condition information comprises a time interval.
Optionally, the sending module 302 is further configured to determine whether the task to be executed specifies a task execution unit; if the task to be executed is appointed, sending the task to be executed to a corresponding appointed task execution unit; and if not, sending the task to be executed to a task execution unit with the resource occupancy rate lower than a first threshold value.
Optionally, the sending module 302 is further configured to skip sending of the current task to be executed if the resource occupancy rate of the specified task execution unit corresponding to the current task to be executed is higher than a second threshold, and send the current task to be executed to the corresponding specified task execution unit when it is detected that the resource occupancy rate of the specified task execution unit corresponding to the current task to be executed is lower than the second threshold.
Optionally, the sending module 302 is further configured to stop the allocation of the to-be-executed task when it is detected that the resource occupancy rates of all the task execution units are higher than a third threshold, and restart the allocation of the to-be-executed task when a new task execution unit is added or there is a task execution unit whose resource occupancy rate is lower than the third threshold.
Optionally, the sending module 302 is further configured to restart the crashed task execution unit when it is detected that the crashed task execution unit exists; if the restart is successful, sending the corresponding task to be executed to the task execution unit after the restart is successful; and if the restart fails, sending the corresponding task to be executed to other task execution units.
Optionally, the alarm module 303 is further configured to compare the task result with real data in a preset memory database to obtain a comparison result; and sending alarm information when the comparison result meets a preset alarm condition.
The task processing device provided by the embodiment of the present invention may be applied to devices such as a smart phone, a computer, and a server that can perform task processing.
The task processing device provided by the embodiment of the invention can realize each process realized by the task processing method in the method embodiment, and can achieve the same beneficial effect. To avoid repetition, further description is omitted here.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, as shown in fig. 4, including: memory 402, processor 401 and a computer program of a task processing method stored on the memory 402 and executable on the processor 401, wherein:
the processor 401 is configured to call the computer program stored in the memory 402, and execute the following steps:
acquiring a task to be executed, wherein the task to be executed is a timing task packed by a trained model;
sending the task to be executed to a corresponding task execution unit according to a preset rule so that the task execution unit executes the task to be executed to obtain a task result;
and sending out alarm information when the task result meets a preset alarm condition.
Optionally, before the step of acquiring the task to be executed, the method executed by the processor 401 further includes:
acquiring the structure and parameters of the trained model, and storing the structure and parameters of the trained model as a structural file;
writing condition information of task execution in the structural file, and packaging the condition information into the timing task, wherein the condition information comprises a time interval.
Optionally, the step, executed by the processor 401, of sending the task to be executed to the corresponding task execution unit according to the preset rule includes:
judging whether the task to be executed designates a task execution unit or not;
if the task to be executed is appointed, sending the task to be executed to a corresponding appointed task execution unit;
and if not, sending the task to be executed to a task execution unit with the resource occupancy rate lower than a first threshold value.
Optionally, the step, executed by the processor 401, of sending the task to be executed to the corresponding task execution unit according to the preset rule further includes:
and if the resource occupancy rate of the specified task execution unit corresponding to the current task to be executed is higher than a second threshold value, skipping the sending of the current task to be executed, and sending the current task to be executed to the corresponding specified task execution unit when detecting that the resource occupancy rate of the specified task execution unit corresponding to the current task to be executed is lower than the second threshold value.
Optionally, the step, executed by the processor 401, of sending the task to be executed to the corresponding task execution unit according to the preset rule further includes:
and when detecting that the resource occupancy rates of all the task execution units are higher than a third threshold value, stopping the distribution of the tasks to be executed, and restarting the distribution of the tasks to be executed when adding a new task execution unit or the task execution unit with the resource occupancy rate lower than the third threshold value exists.
Optionally, the step, executed by the processor 401, of sending the task to be executed to the corresponding task execution unit according to the preset rule further includes:
when detecting that the crashed task execution unit exists, restarting the crashed task execution unit;
if the restart is successful, sending the corresponding task to be executed to the task execution unit after the restart is successful;
and if the restart fails, sending the corresponding task to be executed to other task execution units.
Optionally, the step, executed by the processor 401, of sending the warning information when the task result meets a preset warning condition includes:
comparing the task result with real data in a preset memory database to obtain a comparison result;
and sending alarm information when the comparison result meets a preset alarm condition.
The electronic device provided by the embodiment of the invention can be applied to devices such as smart phones, computers, servers and the like which can perform task processing.
The electronic equipment provided by the embodiment of the invention can realize each process realized by the task processing method in the method embodiment, and can achieve the same beneficial effect. To avoid repetition, further description is omitted here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the task processing method or the application-side task processing method provided in the embodiment of the present invention, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by hardware that is instructed by a computer program, and the program may be stored in a computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure is only the preferred embodiments of the present invention and of course cannot be taken as limiting the scope of rights of the invention; equivalent changes made according to the appended claims therefore still fall within the scope covered by the invention.

Claims (10)

1. A task processing method, comprising the steps of:
acquiring a task to be executed, wherein the task to be executed is a timing task packed by a trained model;
sending the task to be executed to a corresponding task execution unit according to a preset rule so that the task execution unit executes the task to be executed to obtain a task result;
and sending out alarm information when the task result meets a preset alarm condition.
2. The method of claim 1, wherein prior to the step of obtaining the task to be performed, the method further comprises:
acquiring the structure and parameters of the trained model, and storing the structure and parameters of the trained model as a structural file;
writing condition information of task execution in the structural file, and packaging the condition information into the timing task, wherein the condition information comprises a time interval.
3. The method of claim 2, wherein the step of sending the task to be executed to the corresponding task execution unit according to a preset rule comprises:
judging whether the task to be executed designates a task execution unit or not;
if the task to be executed is appointed, sending the task to be executed to a corresponding appointed task execution unit;
and if not, sending the task to be executed to a task execution unit with the resource occupancy rate lower than a first threshold value.
4. The method according to claim 3, wherein the step of sending the task to be executed to the corresponding task execution unit according to the preset rule further comprises:
and if the resource occupancy rate of the specified task execution unit corresponding to the current task to be executed is higher than a second threshold value, skipping the sending of the current task to be executed, and sending the current task to be executed to the corresponding specified task execution unit when detecting that the resource occupancy rate of the specified task execution unit corresponding to the current task to be executed is lower than the second threshold value.
5. The method of claim 4, wherein the step of sending the task to be executed to the corresponding task execution unit according to the preset rule further comprises:
and when detecting that the resource occupancy rates of all the task execution units are higher than a third threshold value, stopping the distribution of the tasks to be executed, and restarting the distribution of the tasks to be executed when adding a new task execution unit or the task execution unit with the resource occupancy rate lower than the third threshold value exists.
6. The method of claim 5, wherein the step of sending the task to be executed to the corresponding task execution unit according to the preset rule further comprises:
when detecting that the crashed task execution unit exists, restarting the crashed task execution unit;
if the restart is successful, sending the corresponding task to be executed to the task execution unit after the restart is successful;
and if the restart fails, sending the corresponding task to be executed to other task execution units.
7. The method of claim 6, wherein the step of sending an alarm message when the task result satisfies a preset alarm condition comprises:
comparing the task result with real data in a preset memory database to obtain a comparison result;
and sending alarm information when the comparison result meets a preset alarm condition.
8. A task processing apparatus, characterized in that the apparatus comprises:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a task to be executed, and the task to be executed is a timing task packed by a trained model;
the sending module is used for sending the task to be executed to a corresponding task execution unit according to a preset rule so that the task execution unit executes the task to be executed to obtain a task result;
and the alarm module is used for sending out alarm information when the task result meets the preset alarm condition.
9. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, the processor implementing the steps in the task processing method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when being executed by a processor, implements the steps in the task processing method according to any one of claims 1 to 7.
CN202210662879.0A 2022-06-13 2022-06-13 Task processing method and device, electronic equipment and storage medium Pending CN115061792A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210662879.0A CN115061792A (en) 2022-06-13 2022-06-13 Task processing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115061792A true CN115061792A (en) 2022-09-16

Family

ID=83199601



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination