CN117591267B - Task processing method, device, medium and system - Google Patents


Info

Publication number
CN117591267B
Authority
CN
China
Prior art keywords
module
task
persistent memory
processor
modules
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410066711.2A
Other languages
Chinese (zh)
Other versions
CN117591267A (en)
Inventor
杨钧
刘铁军
董培强
Current Assignee
Suzhou Metabrain Intelligent Technology Co Ltd
Original Assignee
Suzhou Metabrain Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Metabrain Intelligent Technology Co Ltd
Priority to CN202410066711.2A
Publication of CN117591267A
Application granted
Publication of CN117591267B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The present application discloses a task processing method, device, medium and system in the field of computer technology. The method stores module start information and module end information completely and persistently; after the system restarts, the processing progress of the target task is restored from the module running information and the relationship graph held in persistent memory, so that the task continues to be processed without repetition, achieving power-failure consistency and crash consistency.

Description

Task processing method, device, medium and system
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a task processing method, device, medium, and system.
Background
When program code runs, the code and its related data must first be loaded into non-persistent memory and then processed by the CPU (Central Processing Unit). However, non-persistent memory and the CPU's registers and caches are not persistent: their data is lost on power failure. The system hard disk, in turn, is too slow to save this data when power is lost suddenly. The lost data includes the program's runtime context, so the running progress is hard to record and cannot be recovered after a power failure; the program must be restarted from the beginning, which wastes resources.
Therefore, how to restore a program's running progress after the system is powered down is a problem that those skilled in the art need to solve.
Disclosure of Invention
In view of the foregoing, an object of the present application is to provide a task processing method, device, medium and system that restore the program running progress after the system is powered down. The specific scheme is as follows:
in a first aspect, the present application provides a task processing method applied to a system including a processor, a non-persistent memory, a persistent memory, and a hard disk;
the method comprises the following steps:
the processor reads a target task from the persistent memory to the non-persistent memory; the target task is segmented into a plurality of task modules, and the plurality of task modules construct a relation graph;
the processor operates the task modules in the non-persistent memory according to the relation diagram, and stores the module starting information and the module ending information of each task module into the persistent memory so as to restore the processing progress of the target task according to the module starting information, the module ending information and the relation diagram in the persistent memory after the system is restarted;
and after the operation of the task modules is finished, the processor determines a task output result of the target task and stores the task output result to the hard disk.
Optionally, the system is connected with a task generating end, and the task generating end determines single-module scheduling constraint according to actual running characteristics of the system and single-module expected characteristics; and under the single-module scheduling constraint, dividing the target task into a plurality of task modules, and constructing a relation diagram based on the plurality of task modules.
Optionally, the task generating end determines a single-module scheduling constraint according to an actual running characteristic of the system and a single-module expected characteristic, including:
the task generating end obtains the clock frequency of a processor, the access speed of the persistent memory, the single instruction length and the expected operation duration of a single module;
the task generating end takes the clock frequency of the processor, the access speed and the single instruction length as actual running characteristics of the system, and takes the single module expected running time as the single module expected characteristics;
the task generating end calculates the single-module operation time length according to the single-instruction length, the single-module expected operation time length and the processor clock frequency;
the task generating end calculates the single module preparation stage duration according to the single instruction length, the single module expected operation duration, the processor clock frequency, the access speed and the preset input data length;
the task generating end calculates the duration of the ending stage of the single module according to the access speed and the preset output data length;
and the task generating end determines the sum of the single module operation time length, the single module preparation stage time length and the single module ending stage time length as the single module scheduling constraint.
Optionally, the task generating end calculates a single-module operation duration according to the single instruction length, the single-module expected operation duration and the processor clock frequency, including:
the task generating end takes the product of the single instruction length, the single module expected operation duration and the processor clock frequency as the single module operation duration.
Optionally, the task generating end calculates a single module preparation stage duration according to the single instruction length, the single module expected operation duration, the processor clock frequency, the access speed and a preset input data length, including:
the task generating end calculates the ratio of the clock frequency of the processor to the access speed;
the task generating end takes the product of the ratio, the single instruction length and the single module expected operation duration as a first result;
the task generating end takes the ratio of the preset input data length to the access speed as a second result;
and the task generating end takes the sum of the first result and the second result as the single module preparation stage duration.
Optionally, the task generating end calculates a duration of a single module ending stage according to the access speed and a preset output data length, including:
and the task generating end takes the ratio of the preset output data length to the access speed as the duration of the single module ending stage.
Optionally, the task generating end segments the target task into a plurality of task modules under the single-module scheduling constraint, including:
the task generating end transversely cuts the functions which are not related to each other in the target task into a plurality of subtasks;
the task generating end longitudinally cuts each subtask into a plurality of nodes with sequence;
and the task generating end transversely cuts and/or longitudinally cuts each node to obtain a plurality of task modules with the total operation duration not exceeding the single-module scheduling constraint.
Optionally, the task generating end constructs a relationship graph based on the task modules, including:
the task generating end determines the relation between different task modules;
the task generating end associates different task modules according to the relation and marks the input data type and the output data type of each task module to obtain the relation diagram.
Optionally, the processor runs the plurality of task modules in the non-persistent memory according to the relationship graph, including:
the processor determines a first module group capable of running in the same time period according to the relation diagram, and makes all task modules in the first module group run simultaneously in the same time period under the constraint of single-module scheduling; and/or determining a second module group with a sequential execution order according to the relation diagram, and sequentially operating each task module in the second module group according to the sequential execution order under the constraint of single module scheduling.
Optionally, before storing the module start information and the module end information of each task module in the persistent memory, the processor further includes:
when each task module starts to run, the processor records corresponding module starting information in the non-persistent memory;
when each task module finishes running, the processor records corresponding module finishing information in the non-persistent memory;
the processor aggregates module start information and module end information for each task module in the non-persistent memory.
Optionally, the processor stores module start information and module end information of each task module to the persistent memory, including:
the processor allocates a corresponding storage area for each task module in the persistent memory according to the size of each task module;
the processor stores the module start information and the module end information of each task module to corresponding storage areas in the persistent memory.
Optionally, the processor restores the processing progress of the target task according to the module starting information, the module ending information and the relationship diagram in the persistent memory, including:
the processor reads module starting information and module ending information of all modules from the persistent memory;
the processor compares the read information with the relation diagram in the persistent memory to determine a target task module which finishes running when the system is restarted;
and the processor determines and restores the processing progress of the target task according to the position of the target task module in the relation diagram.
Optionally, the method further comprises:
and the processor takes an un-operated task module behind the target task module as a starting point, and continues to operate the un-operated task module in the target task.
Optionally, the method further comprises:
before any task module starts to run, the processor allocates a corresponding running area for the task module in the non-persistent memory according to the size of the task module;
correspondingly, the method further comprises the steps of: and after the operation of any task module is finished, the processor releases the operation area occupied by the task module in the non-persistent memory.
Optionally, the target task is any one of task queues stored in the hard disk;
correspondingly, the method further comprises the steps of:
the processor reads the task queue from the hard disk to the persistent memory;
accordingly, the processor reads a target task from the persistent memory to the non-persistent memory, comprising:
the processor reads the target task from a task queue in the persistent memory and records a reading time stamp;
the processor loads the target task to the non-persistent memory and records a loading time stamp.
Optionally, after the processor stores the task output result to the hard disk, the method further includes:
the processor uninstalls the target task from the non-persistent memory and records an uninstalling timestamp;
the processor gathers the reading time stamp, the loading time stamp and the unloading time stamp to a queue log;
the processor stores the queue log to the persistent memory.
In a third aspect, the present application provides an electronic device, including:
a memory for storing a computer program;
and a processor for executing the computer program to implement the task processing method disclosed above.
In a fourth aspect, the present application provides a readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the task processing method disclosed above.
In a fifth aspect, the present application provides a task processing system, comprising: processor, non-persistent memory, persistent memory and hard disk;
the persistent memory is used for: storing a target task;
the processor is configured to: reading the target task from the persistent memory to the non-persistent memory; the target task is segmented into a plurality of task modules, and the plurality of task modules construct a relation graph; operating the task modules in the non-persistent memory according to the relation diagram;
the persistent memory is further used for: storing module starting information and module ending information of each task module;
the processor is further configured to: after the system is restarted, restoring the processing progress of the target task according to the module starting information, the module ending information and the relation diagram in the persistent memory; after the operation of the task modules is finished, determining a task output result of the target task;
the hard disk is used for: and storing a task output result of the target task.
Optionally, the method further comprises: the task generation end is used for: determining single-module scheduling constraint according to actual operation characteristics of the system and single-module expected characteristics; and under the single-module scheduling constraint, dividing the target task into a plurality of task modules, and constructing a relation diagram based on the plurality of task modules.
Therefore, the technical effects of the present application are as follows. The processor reads a target task, already segmented into a plurality of task modules, from the persistent memory into the non-persistent memory, runs the task modules in the non-persistent memory according to the relationship graph, and stores the module start information and module end information of each task module into the persistent memory. The module running information (i.e., the module start information and module end information) is thus stored completely and persistently. After the system restarts, the processing progress of the target task is restored from the module running information and the relationship graph in the persistent memory. As a result, whether the system loses power unexpectedly, is powered down manually by an operator, or its program crashes, the task processing progress can be determined and restored from the module running information in the persistent memory, so the task continues without repeated processing, achieving power-failure consistency and crash consistency. After all task modules have finished running, the processor further determines the task output result of the target task and stores it to the hard disk.
Correspondingly, the task processing device, medium and system provided by the present application have the same technical effects.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present application; a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a task processing method disclosed in the present application;
FIG. 2 is a schematic diagram of a task queue disclosed herein;
FIG. 3 is a schematic diagram of the preparation, run and end phases of a module disclosed herein;
FIG. 4 is a schematic representation of a first relationship disclosed herein;
FIG. 5 is a schematic representation of a second relationship disclosed herein;
FIG. 6 is a schematic representation of a third relationship disclosed herein;
FIG. 7 is a schematic diagram of a log of the present disclosure;
FIG. 8 is a schematic diagram of a mapping relationship between persistent memory and non-persistent memory for a task disclosed in the present application;
FIG. 9 is a schematic diagram of a task recovery process disclosed herein;
FIG. 10 is a schematic diagram of a module flow process disclosed herein;
FIG. 11 is a schematic diagram of a task processing device disclosed herein;
FIG. 12 is a block diagram of a server provided herein;
fig. 13 is a diagram of a terminal structure provided in the present application;
FIG. 14 is a schematic diagram of a task processing system provided herein;
fig. 15 is a flowchart of another task processing method provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without undue burden fall within the scope of the present disclosure.
At present, when program code runs, the code and its related data are first loaded into non-persistent memory and then processed by the CPU. However, non-persistent memory and the CPU's registers and caches are not persistent, and their data is lost on power failure. The system hard disk is too slow to save this data when power is lost suddenly. The lost data includes the program's runtime context, so the running progress is hard to record and cannot be recovered after a power failure; the program must be restarted, which wastes resources. The task processing scheme of this application can therefore restore the running progress after a system power failure, so that the program task continues to run without repeated processing, achieving power-failure consistency and crash consistency.
Referring to fig. 1, an embodiment of the present application discloses a task processing method, including:
s101, determining single-module scheduling constraint according to actual operation characteristics of the system and single-module expected characteristics.
In this embodiment, the system's actual running characteristics may be represented by the processor clock frequency, the access speed of the persistent memory, and the single instruction length; the single-module expected characteristics may be represented by the single-module expected run duration and the single-module expected instruction count. Let the processor clock frequency be CPU_Hz, the access speed of the persistent memory be speed, the single instruction length be len, and the single-module expected run duration be tm. Then the single-module expected instruction count is tm × CPU_Hz; the single-module run duration is len × tm × CPU_Hz; the preparation-stage duration is len × tm × CPU_Hz / speed + (module input data length) / speed; and the end-stage duration is the module output data length divided by speed. It is assumed here that the access speed of the persistent memory is lower than that of the system's non-persistent memory.
It should be noted that, from the start operation to the end operation, one task module needs to go through the preparation phase, the operation phase and the end phase, so the total operation duration of one task module is equal to the sum of the preparation phase duration, the operation phase duration and the end phase duration.
In one example, determining a single module scheduling constraint based on a system actual operating characteristic and a single module desired characteristic includes: acquiring the clock frequency of a processor, the access speed of a persistent memory, the single instruction length and the expected operation duration of a single module; taking the clock frequency, the access speed and the single instruction length of a processor as actual operation characteristics of a system, and taking the expected operation duration of the single module as the expected characteristics of the single module; calculating the single-module operation time length (namely the operation stage time length) according to the single instruction length, the single-module expected operation time length and the processor clock frequency; calculating the preparation stage duration of the single module according to the single instruction length, the expected operation duration of the single module, the clock frequency of the processor, the access speed and the preset input data length; calculating the duration of the ending stage of the single module according to the access speed and the preset output data length; and determining the sum of the single-module operation time length, the single-module preparation stage time length and the single-module ending stage time length as a single-module scheduling constraint.
The method for calculating the single-module operation time length according to the single-instruction length, the single-module expected operation time length and the processor clock frequency comprises the following steps: and taking the product of the single instruction length, the single module expected running time and the clock frequency of the processor as the single module running time.
The method for calculating the single-module preparation-stage duration from the single instruction length, the single-module expected run duration, the processor clock frequency, the access speed and the preset input data length comprises the following steps: calculating the ratio of the processor clock frequency to the access speed; taking the product of this ratio, the single instruction length and the single-module expected run duration as a first result; taking the ratio of the preset input data length to the access speed as a second result; and taking the sum of the first result and the second result as the single-module preparation-stage duration.
According to the access speed and the preset output data length, calculating the end stage duration of the single module comprises the following steps: and taking the ratio of the preset output data length to the access speed as the duration of the ending stage of the single module.
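The stage formulas above can be sketched as follows (an illustrative Python reconstruction; the function and parameter names are our own, and the run-duration term follows the product formula stated in this application):

```python
def single_module_constraint(cpu_hz, speed, instr_len, tm, in_len, out_len):
    """Sum of the run-stage, preparation-stage and end-stage durations.

    cpu_hz    - processor clock frequency (CPU_Hz)
    speed     - persistent-memory access speed
    instr_len - single instruction length (len)
    tm        - single-module expected run duration
    in_len    - preset input data length
    out_len   - preset output data length
    """
    t_run = instr_len * tm * cpu_hz                               # run stage
    t_prep = (cpu_hz / speed) * instr_len * tm + in_len / speed   # preparation stage
    t_end = out_len / speed                                       # end stage
    return t_run + t_prep + t_end
```

For example, with cpu_hz=2, speed=4, instr_len=3, tm=5, in_len=8 and out_len=12 (arbitrary illustrative values), the three terms are 30, 9.5 and 3, so the constraint evaluates to 42.5.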
S102, under the constraint of single-module scheduling, dividing a target task into a plurality of task modules, and constructing a relation diagram based on the plurality of task modules.
Wherein the total duration of operation of each task module does not exceed a single module scheduling constraint. The single-module scheduling constraint is a control period for the system to control the operation of one task module.
Task splitting uses two cuts: a horizontal cut and a vertical cut. The horizontal cut separates functions of the task that are independent of each other; since they do not depend on one another, they can execute concurrently. The vertical cut slices one independent function into ordered pieces; these pieces have dependency relations and must execute serially. At this stage a piece cannot yet be called a module. A vertically cut function may in turn allow a horizontal cut; likewise, a horizontally cut function may allow a vertical cut. Cutting continues until the total run duration of every task module does not exceed the single-module scheduling constraint. In one example, splitting the target task into a plurality of task modules under the single-module scheduling constraint includes: horizontally cutting the mutually unrelated functions of the target task into a plurality of subtasks; vertically cutting each subtask into a plurality of ordered nodes; and horizontally and/or vertically cutting each node to obtain a plurality of task modules whose total run duration does not exceed the single-module scheduling constraint.
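The cutting procedure can be sketched as follows (a hypothetical representation: a function is a list of ordered nodes, a node is a list of atomic step durations, and an oversized node is halved until its estimated duration fits the constraint):

```python
def split_node(node, constraint):
    """Cut one ordered node (a list of atomic step durations) until each
    piece's total estimated duration fits the single-module constraint."""
    if sum(node) <= constraint or len(node) == 1:
        return [node]
    mid = len(node) // 2  # vertical cut: step order is preserved across pieces
    return split_node(node[:mid], constraint) + split_node(node[mid:], constraint)

def split_task(functions, constraint):
    """Horizontal cut: 'functions' are mutually independent subtasks.
    Vertical cut: each function is an ordered list of nodes; every node
    is cut further until it fits the constraint."""
    return [[piece for node in func for piece in split_node(node, constraint)]
            for func in functions]
```

With a constraint of 2, a node of four unit-duration steps is cut into two modules of two steps each, while a single indivisible step longer than the constraint is left as one module.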
In one example, building a relationship graph based on a plurality of task modules includes: determining the relation between different task modules; and associating different task modules according to the relation, and marking the input data type and the output data type of each task module to obtain a relation diagram.
And S103, operating a plurality of task modules according to the single-module scheduling constraint and the relation diagram, and storing the operation information of each task module into a persistent memory so as to restore the processing progress of the target task according to the operation information in the persistent memory after the system is restarted.
The operation of a plurality of task modules according to the single-module scheduling constraint and the relation diagram is that the plurality of task modules can be operated in parallel or in series. In one example, running multiple task modules in accordance with a single module scheduling constraint and relationship graph includes: determining a first module group capable of running in the same time period according to the relation diagram, and enabling all task modules in the first module group to run simultaneously in the same time period under the single-module scheduling constraint; and/or determining a second module group with a sequential execution order according to the relation diagram, and sequentially operating each task module in the second module group according to the sequential execution order under the single-module scheduling constraint.
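The two group types can be sketched as follows (illustrative only; the group runners and the zero-argument module callables are our own assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def run_first_group(modules):
    """First module group: the relationship graph shows no mutual
    dependencies, so all modules may run simultaneously in one period."""
    with ThreadPoolExecutor(max_workers=len(modules)) as pool:
        return list(pool.map(lambda run: run(), modules))

def run_second_group(modules):
    """Second module group: a fixed execution order, so each module
    runs only after its predecessor has ended."""
    return [run() for run in modules]
```

Results come back in module order in both cases, since `ThreadPoolExecutor.map` preserves the order of its inputs.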
In this embodiment, the running information of each task module may be recorded in a log manner. In one example, before storing the operation information of each task module in the persistent memory, the method further includes: recording module start information (i.e., start log) when each task module starts to run; the module starting information comprises: module mark, start information length and input description information; recording module end information (i.e., end log) when each task module ends operation; the module end information includes: module mark, end information length and output description information; and summarizing the module starting information and the module ending information of each task module to obtain corresponding operation information.
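The start and end logs can be sketched as records of the following shape (the field names are our own; storing the record length lets a reader detect an entry truncated by a power failure, which is our assumption about its purpose):

```python
import json

def make_module_log(module_id, kind, description):
    """Build a start log (kind='start', description describes the input)
    or an end log (kind='end', description describes the output) for one
    task module, carrying the module mark and the record length."""
    body = {"module": module_id, "kind": kind, "desc": description}
    return {"length": len(json.dumps(body)), **body}
```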
In one embodiment, storing the operational information of each task module to persistent memory includes: distributing a corresponding storage area for each task module in the persistent memory according to the size of each task module; and storing the operation information of each task module into a corresponding storage area in the persistent memory.
The method for recovering the processing progress of the target task according to the running information in the persistent memory comprises the following steps: reading the operation information of all modules from the persistent memory; comparing the read information with the relation diagram to determine a target task module which finishes running when the system is restarted; and determining and recovering the processing progress of the target task according to the position of the target task module in the relation diagram. After recovering the processing progress of the target task according to the running information in the persistent memory, the method further comprises the following steps: and continuously operating the non-operated task modules in the target task by taking the non-operated task modules behind the target task module as starting points so as to continuously operate the target task without repetition.
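The recovery comparison can be sketched as follows (the log shape matches the sketch above, and a serial topological order stands in for the relationship graph; names are our own):

```python
def finished_modules(logs):
    """A module counts as finished only when both its start log and its
    end log reached persistent memory before the interruption."""
    started = {e["module"] for e in logs if e["kind"] == "start"}
    ended = {e["module"] for e in logs if e["kind"] == "end"}
    return started & ended

def resume_point(order, logs):
    """Compare the persisted logs against the module order from the
    relationship graph and return the first module that has not
    finished; running resumes there (None if everything finished)."""
    done = finished_modules(logs)
    for module in order:
        if module not in done:
            return module
    return None
```

A module that was interrupted mid-run has a start log but no end log, so it is treated as unfinished and is re-run from its own start, while every module before it is skipped.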
In one example, the method further includes: before any task module starts running, allocating a corresponding running area for the task module in the non-persistent memory according to the size of the task module; and correspondingly, after any task module finishes running, releasing the running area occupied by the task module in the non-persistent memory.
It should be noted that the target task is any task in a task queue. Accordingly, before the target task is split into the plurality of task modules under the single-module scheduling constraint, the method further includes: reading the target task from the task queue and recording a read timestamp; and loading the target task into the non-persistent memory and recording a load timestamp. After the plurality of task modules finish running, the target task is unloaded from the non-persistent memory and an unload timestamp is recorded; the read timestamp, load timestamp, and unload timestamp are gathered into a queue log; and the queue log is stored in the persistent memory.
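As a minimal sketch of the queue log described above (in the patent the log lives in persistent memory; here an ordinary list stands in for it, and all names are assumptions):

```python
import time

queue_log = []  # flushed to persistent memory in the real system

def record(event, task_id):
    """Append one timestamped event for a task to the queue log."""
    queue_log.append({"task": task_id, "event": event, "ts": time.time()})

record("read", "task-1")    # read from the task queue
record("load", "task-1")    # loaded into non-persistent memory
# ... all task modules of task-1 run here ...
record("unload", "task-1")  # unloaded after the modules finish running
```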
Therefore, in this embodiment the single-module scheduling constraint can be determined according to the actual running characteristics of the system and the expected characteristics of a single module, and the target task can be split into a plurality of task modules under that constraint, so that the total running duration of each task module does not exceed the single-module scheduling constraint. This achieves fine-grained division of the task, with a division scheme that fits both the characteristics of the task and the running rules of the system. The plurality of task modules are then run according to the single-module scheduling constraint and a relation graph constructed from those modules, and the running information of each task module is stored into the persistent memory, so the module running information is completely and permanently preserved. After the system restarts, the processing progress of the target task is recovered from the module running information in the persistent memory. As a result, even after a power failure, crash, or similar event, the task processing progress can be determined and recovered from the module running information in the persistent memory, and the running state can be restored after power-on so that the task continues without repeated processing, thereby achieving power-failure consistency and crash consistency.
Based on the characteristics of persistent memory, the persistent memory serves as the central area for task operation, while the processor and the non-persistent memory serve as resources supporting task operation; data related to the task's running progress is stored in the persistent memory, is not lost on power failure, and its safety and integrity are thereby protected. Therefore, the present application adds a persistent memory between the non-persistent memory and the hard disk: tasks to be run are stored on the hard disk, and when a task needs to run, it is loaded into the persistent memory and the non-persistent memory and then dispatched to the CPU for processing. Data moves from the hard disk to the persistent memory at task granularity, from the persistent memory to the non-persistent memory at module granularity, and from the non-persistent memory to the CPU at instruction granularity. During task operation, the running context data is stored in the persistent memory, so even if the system suddenly loses power, the running progress is not lost.
Specifically, a task is divided into a plurality of modules (i.e., the plurality of task modules), where a module is the basic execution unit within a task. When a task needs to run, it is loaded from the hard disk into the persistent memory; the modules are then loaded into the non-persistent memory according to the dependency relationships among the modules in the task, CPU resources are allocated to the modules, and the running information of each module is recorded. After a sudden power outage, all that is lost is the execution result of the module that was running at the time (whose running information had not yet been recorded). After a system crash, the task is recovered to the state before that module ran, according to the module running information in the persistent memory.
In order to effectively manage a plurality of tasks to be loaded, the tasks are organized into a task queue. Referring to fig. 2, a task queue includes a plurality of tasks and corresponds to a queue log; each task may be split into multiple modules, and the modules may form a graph. The task queue is a list of all tasks currently running or waiting to run, and each task in the queue corresponds to specific description information. The queue log records the load and unload time of each task: whenever a task is loaded into or unloaded from the persistent memory, the corresponding event is recorded in the queue log.
In one example, a task to be executed is first loaded from the hard disk into the persistent memory. Each node in the task list points to a task entity, and the task entity corresponds to a space in the persistent memory that describes the task according to the task's definition rules.
It should be noted that the operation of a module is divided into a preparation phase, a run phase, and an end phase. Referring to fig. 3, these three phases of different modules may overlap in time. The sum of the durations of the preparation phase, run phase, and end phase equals the single-module scheduling constraint, and none of the three phases exceeds one scheduling interval; the scheduling interval is configurable and is typically a multiple of the 10 ms system clock tick. That is, the time occupied by each of the preparation phase, end phase, and module run phase is less than or equal to the scheduling interval; the preparation, run, and end phases of different modules may proceed simultaneously within a scheduling interval, and each of the three phases requires computer resources to be allocated. The execution of different modules is thus pipelined.
Specifically, the following quantities are considered when determining the scheduling interval: the clock interrupt period (ti); the expected module run duration (tm); the preparation-phase duration (tp); and the end-phase duration (tt). If max{tm, tp, tt} % ti == 0, i.e., the maximum of tm, tp, and tt is evenly divisible by ti, then the scheduling interval t equals that maximum divided by ti, expressed as t = max{tm, tp, tt} / ti. If max{tm, tp, tt} % ti > 0, i.e., the remainder of dividing the maximum of tm, tp, and tt by ti is greater than zero, then the scheduling interval t equals the integer part of that quotient plus one, expressed as t = ⌊max{tm, tp, tt} / ti⌋ + 1. In other words, t is the ceiling of max{tm, tp, tt} / ti, measured in clock-interrupt periods.
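The two cases above amount to a ceiling division of the longest phase duration by the clock-interrupt period. A minimal sketch (function and parameter names are assumptions):

```python
def scheduling_interval(ti, tm, tp, tt):
    """Smallest number of clock-interrupt periods (ti) that covers the longest
    of the expected run (tm), preparation (tp), and end (tt) durations:
    t = max/ti when evenly divisible, otherwise floor(max/ti) + 1."""
    longest = max(tm, tp, tt)
    return longest // ti if longest % ti == 0 else longest // ti + 1

t_even = scheduling_interval(10, 30, 20, 25)  # 30 % 10 == 0, so t = 30 / 10 = 3
t_odd = scheduling_interval(10, 32, 20, 25)   # 32 % 10 > 0, so t = 3 + 1 = 4
```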
Each task can be split into a plurality of modules; each module has corresponding input data objects and output data objects, and the running information of all modules forms the task's task log. The logical relationships among the modules, input data objects, and output data objects are described by the relation graph, and the processor may schedule each module to run in the persistent and non-persistent memory according to that graph. Referring to fig. 4, the same module may have multiple input data objects and multiple output data objects. For example, the input data objects of module A are data object A, data object B, and data object C, and its two output data objects are data object D and data object E. Different modules are associated through shared data objects: as can be seen from fig. 4, data objects D and E are the input parameters of module B, and the output results of module B are data objects F and G, where data object F is an input parameter of module C and data object G is neither an input parameter nor an output result of any module. The arrows represent the execution order of the modules. The relation graph thus describes the modules and the logical relationships between modules and data objects.
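The module/data-object relationships of fig. 4 can be represented as a small adjacency structure. The module and object names follow the figure; everything else (dictionary layout, function name) is an assumption:

```python
# Each module maps to its input and output data objects; two modules are
# linked when one module's output is another module's input (fig. 4).
relation_graph = {
    "A": {"in": ["a", "b", "c"], "out": ["d", "e"]},
    "B": {"in": ["d", "e"],      "out": ["f", "g"]},
    "C": {"in": ["f"],           "out": []},
}

def successors(module):
    """Modules that consume any output of `module` (the arrows in fig. 4)."""
    outs = set(relation_graph[module]["out"])
    return [m for m, io in relation_graph.items() if outs & set(io["in"])]
```

Object "g" appears in no module's input list, matching the figure's note that data object G is not consumed by any module.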
In one example, the system allocates hardware resources such as the processor, persistent memory, and non-persistent memory at module granularity, for use by tasks that are running or about to run. Before a module runs, its execution code and input data objects must be loaded into the non-persistent memory; after the module finishes executing, its output results are stored back into the persistent memory and the module's run log is recorded, at which point the module is regarded as having finished. After a module finishes, the non-persistent memory, CPU, and other resources it occupied are released. A module is a piece of executable program code; a data type is a data structure; a module's input data objects and output data objects are defined in terms of the corresponding data types. A module is identified by a module ID, and a data object by a data object ID. A module comprises a function that is an atomic operation with respect to the persistent memory: the module writes its execution result back to the persistent memory only after all of its execution has completed. The input and output data objects defined by a module are of predefined data types.
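The module lifecycle just described (load code and inputs into non-persistent memory, execute, write the result back to persistent memory, release resources) can be sketched as follows; the dictionary layout and all names are assumptions:

```python
def run_module(module_id, persistent, volatile):
    """Atomic with respect to persistent memory: the result is written back
    only after the module has fully completed."""
    volatile["code"] = persistent["code"][module_id]          # load execution code
    volatile["inputs"] = [persistent["objects"][o]
                          for o in persistent["inputs"][module_id]]
    result = volatile["code"](*volatile["inputs"])            # execute the module
    persistent["objects"][persistent["output"][module_id]] = result  # write back
    volatile.clear()                                          # release non-persistent resources
    return result

# A toy "module" that adds its two input data objects.
pm = {"code": {"A": lambda x, y: x + y},
      "inputs": {"A": ["x", "y"]},
      "output": {"A": "z"},
      "objects": {"x": 2, "y": 3}}
res = run_module("A", pm, {})
```

If power is lost before the write-back line executes, `persistent["objects"]` is untouched, which is what lets recovery treat the module as never having run.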
It should be noted that the initialization module and the exit module control when the task starts and ends running, and between these two modules lies at least one task module acting as an execution module, i.e., a module that processes data. Referring to fig. 5, the initialization module, execution modules, and exit module execute serially. Different execution modules may execute concurrently, as shown in fig. 6, where module A and module B execute concurrently. Concurrently executing modules A and B may access the same data object, so concurrent access to data objects must be supported. Modules A and B can start running only after the initialization module has finished, and must finish before the exit module executes. An execution module may also use a data object as a judgment condition: as shown in fig. 6, the execution of modules A and B uses the value of data object a as the condition; if a equals 0, module A starts executing, and if a is not equal to 0, module B starts executing.
The task log records the task's running process at module granularity. The log format consists of two parts: a log header and log content. The log header, which is the basis of log parsing, includes a log tag, a log type, and the total log length. The log tag identifies the beginning of a log. The log types are the module start log and the module end log. The total log length is the amount of persistent-memory space occupied by the log, including the header. The size of the log header is fixed, so the length of the log content equals the total log length minus the header length. The log content includes two parts: the module ID and the data object content. The corresponding module can be found in the task description from the module ID; the module definition is obtained from the module, and from the module definition the data types of all input parameters and output results are known. Fig. 7 is a schematic of the log. A complete module run log consists of a module start log and a module end log.
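The header/content layout just described could be serialized as below. The magic tag value, field widths, and endianness are assumptions; only the structure (a fixed-size header of tag, type, and total length, followed by module ID and data-object content) comes from the text:

```python
import struct

LOG_TAG = 0xA5A5                 # assumed magic value marking the start of a log
TYPE_START, TYPE_END = 1, 2      # the two log types: module start / module end
HEADER = struct.Struct("<HHI")   # tag, type, total length (fixed-size header)

def encode_log(log_type, module_id, payload: bytes) -> bytes:
    content = struct.pack("<I", module_id) + payload
    total = HEADER.size + len(content)   # total length includes the header
    return HEADER.pack(LOG_TAG, log_type, total) + content

def decode_log(buf: bytes):
    tag, log_type, total = HEADER.unpack_from(buf)
    assert tag == LOG_TAG
    content = buf[HEADER.size:total]     # content length = total - header length
    (module_id,) = struct.unpack_from("<I", content)
    return log_type, module_id, content[4:]

rec = encode_log(TYPE_START, 7, b"input: obj-D")
```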
The mapping of a task between persistent and non-persistent memory is shown in fig. 8. Referring to fig. 8, the system loads the task's relation graph, module definitions, and data types into the non-persistent memory, where the task runs. The relation graph, module definitions, and data types are required to run the modules, so they must be loaded before any module runs; they are loaded into the task's read-only code region. Execution of the modules then begins: the modules are loaded in turn according to the relation graph, together with the data types of each module's input parameters and output results. Which modules and data objects to load is determined by referring to the module definitions and data types. Running a module generates temporary data, so memory space is dynamically allocated; this data is stored in the non-persistent memory, and the memory resources are released after the module finishes executing. When a module is loaded, a module start log is recorded first; after the module finishes running, a module end log is recorded; finally, the recorded log information is flushed to the persistent memory.
Because the data stored in persistent memory survives power failure, when the system is powered on again or the system program recovers after a crash, the pre-power-failure state can be restored at module granularity according to the task logs in the persistent memory. Modules that had not finished executing before the power failure are regarded as not executed and may be scheduled to execute again. By analyzing the task log, a problematic module can be located precisely. The task recovery flow is shown in fig. 9. A task corresponds to a process, and a module corresponds to a thread within that process. The execution of a module is an atomic operation with respect to the persistent memory, and the module start log and module end log together are the complete record of the module's run. The running flow of a module is shown in fig. 10.
This embodiment can thus achieve power-down consistency and crash consistency based on the persistent memory, with consistency at module granularity; after a restart, the program need not be reloaded from the hard disk, and its state is recovered directly from the persistent memory.
A task processing device provided in the embodiments of the present application is described below; the task processing device described below and the other embodiments described herein may be cross-referenced.
Referring to fig. 11, an embodiment of the present application discloses a task processing device, including:
the determining module is used for determining single-module scheduling constraint according to the actual running characteristics of the system and the single-module expected characteristics;
the segmentation module is used for segmenting the target task into a plurality of task modules under the single-module scheduling constraint, and constructing a relationship graph based on the plurality of task modules; wherein the total running duration of each task module does not exceed the single-module scheduling constraint;
and the processing module is used for operating the task modules according to the single-module scheduling constraint and the relation diagram, and storing the operation information of each task module into the persistent memory so as to restore the processing progress of the target task according to the operation information in the persistent memory after the system is restarted.
In one example, the determination module is specifically configured to: acquiring the clock frequency of a processor, the access speed of a persistent memory, the single instruction length and the expected operation duration of a single module; taking the clock frequency, the access speed and the single instruction length of a processor as actual operation characteristics of a system, and taking the expected operation duration of the single module as the expected characteristics of the single module; calculating the single-module operation time according to the single instruction length, the single-module expected operation time and the processor clock frequency; calculating the preparation stage duration of the single module according to the single instruction length, the expected operation duration of the single module, the clock frequency of the processor, the access speed and the preset input data length; calculating the duration of the ending stage of the single module according to the access speed and the preset output data length; and determining the sum of the single-module operation time length, the single-module preparation stage time length and the single-module ending stage time length as a single-module scheduling constraint.
In one example, the determination module is specifically configured to: and taking the product of the single instruction length, the single module expected running time and the clock frequency of the processor as the single module running time.
In one example, the determination module is specifically configured to: calculate the ratio of the processor clock frequency to the access speed; take the product of that ratio, the single instruction length, and the single-module expected run duration as a first result; take the ratio of the preset input data length to the access speed as a second result; and take the sum of the first result and the second result as the single-module preparation-stage duration.
In one example, the determination module is specifically configured to: and taking the ratio of the preset output data length to the access speed as the duration of the ending stage of the single module.
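Combining the computations described for the determination module above, a sketch of the single-module scheduling constraint follows the formulas as stated in the text; the function and parameter names are assumptions:

```python
def single_module_constraint(clock_freq, access_speed, instr_len,
                             expected_dur, input_len, output_len):
    # Run duration: product of the single instruction length, the single-module
    # expected run duration, and the processor clock frequency (as stated).
    tm = instr_len * expected_dur * clock_freq
    # Preparation stage: (clock frequency / access speed) * instruction length
    # * expected run duration, plus preset input data length / access speed.
    tp = (clock_freq / access_speed) * instr_len * expected_dur \
         + input_len / access_speed
    # End stage: preset output data length / access speed.
    tt = output_len / access_speed
    # The constraint is the sum of the three stage durations.
    return tm + tp + tt

c = single_module_constraint(clock_freq=2, access_speed=4, instr_len=1,
                             expected_dur=3, input_len=8, output_len=4)
```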
In one example, the segmentation module is specifically configured to: transversely cutting the functions which are not related to each other in the target task into a plurality of subtasks; longitudinally cutting each subtask into a plurality of nodes with sequence; and performing transverse cutting and/or longitudinal cutting on each node to obtain a plurality of task modules with the total running duration not exceeding the single-module scheduling constraint.
In one example, the segmentation module is specifically configured to: determining the relation between different task modules; and associating different task modules according to the relation, and marking the input data type and the output data type of each task module to obtain a relation diagram.
In one example, the processing module is specifically configured to: determining a first module group capable of running in the same time period according to the relation diagram, and enabling all task modules in the first module group to run simultaneously in the same time period under the single-module scheduling constraint; and/or determining a second module group with a sequential execution order according to the relation diagram, and sequentially operating each task module in the second module group according to the sequential execution order under the single-module scheduling constraint.
In one example, further comprising: the recording module is used for recording the starting information of the modules when each task module starts to operate before the operation information of each task module is stored in the persistent memory; the module starting information comprises: module mark, start information length and input description information; when each task module finishes running, recording module finishing information; the module end information includes: module mark, end information length and output description information; and summarizing the module starting information and the module ending information of each task module to obtain corresponding operation information.
In one example, the processing module is specifically configured to: allocate a corresponding storage area in the persistent memory for each task module according to the size of each task module; and store the running information of each task module into its corresponding storage area in the persistent memory.
In one example, the processing module is specifically configured to: read the running information of all modules from the persistent memory; compare the read information with the relation graph to determine the target task module that had finished running before the system restarted; and determine and recover the processing progress of the target task according to the position of the target task module in the relation graph.
In one example, the device further includes: a continue-running module, configured to continue running the not-yet-run task modules in the target task after the processing progress of the target task is recovered according to the running information in the persistent memory, taking the first not-yet-run task module after the target task module as the starting point.
In one example, the device further includes: an allocation module, configured to allocate a corresponding running area for a task module in the non-persistent memory according to the size of the task module before that task module starts running; and correspondingly, a release module, configured to release the running area occupied by the task module in the non-persistent memory after the task module finishes running.
In one example, the target task is any task in a task queue. Correspondingly, before the target task is split into a plurality of task modules under the single-module scheduling constraint, the target task is read from the task queue and a read timestamp is recorded; and the target task is loaded into the non-persistent memory and a load timestamp is recorded.
In one example, the device further includes: a logging module, configured to unload the target task from the non-persistent memory after the plurality of task modules finish running and record an unload timestamp; gather the read timestamp, load timestamp, and unload timestamp into a queue log; and store the queue log in the persistent memory.
The more specific working process of each module and unit in this embodiment may refer to the corresponding content disclosed in the foregoing embodiment, and will not be described herein.
Thus, this embodiment provides a task processing device that can recover the running progress of a program after the system loses power, so that the task continues to run without repeated processing, achieving power-down consistency and crash consistency.
An electronic device provided in an embodiment of the present application is described below, and an electronic device described below may refer to other embodiments described herein.
The embodiment of the application discloses electronic equipment, which comprises:
a memory for storing a computer program;
and a processor for executing the computer program to implement the method disclosed in any of the above embodiments.
Further, the embodiment of the application also provides electronic equipment. The electronic device may be a server as shown in fig. 12 or a terminal as shown in fig. 13. Fig. 12 and 13 are each a block diagram of an electronic device according to an exemplary embodiment, and the contents of the drawings should not be construed as limiting the scope of use of the present application in any way.
Fig. 12 is a schematic structural diagram of a server according to an embodiment of the present application. The server specifically may include: at least one processor, at least one memory, a power supply, a communication interface, an input-output interface, and a communication bus. The memory is used for storing a computer program, and the computer program is loaded and executed by the processor to realize relevant steps in task processing disclosed in any of the foregoing embodiments.
In this embodiment, the power supply is configured to provide a working voltage for each hardware device on the server; the communication interface can create a data transmission channel between the server and external equipment, and the communication protocol to be followed by the communication interface is any communication protocol applicable to the technical scheme of the application, and is not particularly limited herein; the input/output interface is used for acquiring external input data or outputting data to the external, and the specific interface type can be selected according to the specific application requirement, and is not limited in detail herein.
In addition, the memory can be used as a carrier for storing resources, such as non-persistent memory, read-only memory, random access memory, magnetic disk, hard disk or optical disk, and the like, and the resources stored on the memory comprise an operating system, computer programs, data and the like, and the storage mode can be transient storage or permanent storage.
The operating system manages and controls the hardware devices and computer programs on the server, enabling the processor to operate on and process the data in the memory; it may be Windows Server, Netware, Unix, Linux, or the like. In addition to the computer program that performs the task processing method disclosed in any of the foregoing embodiments, the stored computer programs may include programs used for other specific tasks. Besides data such as application update information, the data may include information such as the application's developer.
Fig. 13 is a schematic structural diagram of a terminal according to an embodiment of the present application, where the terminal may specifically include, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like.
Generally, the terminal in this embodiment includes: a processor and a memory.
The processor may include one or more processing cores, such as a 4-core or 8-core processor. The processor may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor may also include a main processor and a coprocessor: the main processor, also called the CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor may integrate a GPU (Graphics Processing Unit) responsible for rendering the content to be displayed on the display screen. In some embodiments, the processor may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory may include one or more computer-readable storage media, which may be non-transitory. The memory may include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or hard disks. In this embodiment, the memory at least stores a computer program which, when loaded and executed by the processor, implements the relevant steps of the task processing method executed on the terminal side disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory may include an operating system, data, and the like, stored either transiently or permanently. The operating system may include Windows, Unix, Linux, and the like. The data may include, but is not limited to, application update information.
In some embodiments, the terminal may further include a display screen, an input-output interface, a communication interface, a sensor, a power supply, and a communication bus.
Those skilled in the art will appreciate that the structure shown in fig. 13 is not limiting of the terminal and may include more or fewer components than shown.
A readable storage medium provided in embodiments of the present application is described below, and the readable storage medium described below may be referred to with respect to other embodiments described herein.
A readable storage medium stores a computer program which, when executed by a processor, implements the task processing method disclosed in the foregoing embodiments. The readable storage medium is a computer-readable storage medium and may serve as a carrier for storing resources, such as persistent memory, non-persistent memory, read-only memory, random access memory, a magnetic disk, a hard disk, or an optical disk; the stored resources include an operating system, a computer program, data, and the like, and the storage may be transient or permanent.
A task processing system provided in the embodiments of the present application is described below, and a task processing system described below and other embodiments described herein may be referred to with each other.
The embodiment of the application provides a task processing system, including: a processor, a persistent memory, a non-persistent memory, and a hard disk. The persistent memory is configured to store a target task. The processor is configured to: read the target task from the persistent memory into the non-persistent memory, the target task being split into a plurality of task modules that form a relation graph; and run the plurality of task modules in the non-persistent memory according to the relation graph. The persistent memory is further configured to store the module start information and module end information of each task module. The processor is further configured to: after the system restarts, recover the processing progress of the target task according to the module start information, module end information, and relation graph in the persistent memory; and after the task modules finish running, determine the task output result of the target task. The hard disk is configured to store the task output result of the target task.
Wherein the system further comprises: the task generation end is used for: determining single-module scheduling constraint according to actual operation characteristics of the system and single-module expected characteristics; under the constraint of single-module scheduling, the target task is segmented into a plurality of task modules, and a relation graph is constructed based on the plurality of task modules.
The present embodiment also provides another task processing system, including: processor, persistent memory and non-persistent memory; the persistent memory is used for storing target tasks; the non-persistent memory is used for loading the target task; the processor is used for determining single-module scheduling constraint according to the actual running characteristics of the system and the single-module expected characteristics; under the constraint of single-module scheduling, dividing a target task into a plurality of task modules in a non-persistent memory, and constructing a relationship diagram based on the plurality of task modules; wherein the total running duration of each task module does not exceed the single-module scheduling constraint; and operating the task modules according to the single-module scheduling constraint and the relation diagram, and storing the operation information of each task module into the persistent memory so as to restore the processing progress of the target task according to the operation information in the persistent memory after the system is restarted. The processor is also configured to perform the related methods provided by other embodiments.
In one example, the system further comprises: a hard disk; the hard disk is used for storing a task queue; the target task is any one of the task queues. The persistent memory is used for acquiring a target task from the hard disk.
In one example, the system provided by the present embodiment includes: hard disk, processor, persistent memory and non-persistent memory, see fig. 14 in particular.
Referring to fig. 15, a task processing method suitable for the above system includes:
s1501, the processor reads the target task from the persistent memory to the non-persistent memory.
The target task is segmented into a plurality of task modules, and the plurality of task modules construct a relation graph. From the processor's point of view there is no distinction between accessing the persistent memory and accessing the non-persistent memory, and data stored in the two can be copied and moved directly between them. Accessing the hard disk generally requires the processor to go through the system bus, whereas accessing the persistent memory and the non-persistent memory does not; therefore the processor reads the target task from the persistent memory into the non-persistent memory with higher reading efficiency.
S1502, the processor runs a plurality of task modules in the non-persistent memory according to the relation diagram.
And S1503, the processor stores the module starting information and the module ending information of each task module into a persistent memory.
S1504, after the system is restarted, the processor reads module starting information, module ending information and a relation diagram from the persistent memory.
S1505, the processor restores the processing progress of the target task according to the module starting information, the module ending information and the relation diagram in the persistent memory.
In one example, a processor runs a plurality of task modules in a non-persistent memory according to a relationship graph, comprising: the processor determines a first module group capable of running in the same time period according to the relation diagram, and makes all task modules in the first module group run simultaneously in the same time period under the constraint of single module scheduling; and/or determining a second module group with a sequential execution order according to the relation diagram, and sequentially operating each task module in the second module group according to the sequential execution order under the single-module scheduling constraint.
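As a hedged sketch (not the patented implementation), the grouping just described can be modeled by levelling the relation graph topologically: modules in the same level have no dependencies on one another and form a "first module group" that may run in the same time period, while successive levels form a "second module group" run in execution order. All names below are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def schedule_levels(deps):
    """deps: module -> set of prerequisite modules (the relation graph).
    Returns levels; modules within one level may run simultaneously."""
    remaining = {m: set(d) for m, d in deps.items()}
    levels = []
    while remaining:
        ready = [m for m, d in remaining.items() if not d]
        if not ready:
            raise ValueError("cycle in relation graph")
        levels.append(ready)
        for m in ready:
            del remaining[m]
        for d in remaining.values():
            d.difference_update(ready)
    return levels

def run_modules(deps, run_fn):
    # Modules in one level (first module group) run in the same time period;
    # successive levels (second module group) run in execution order.
    for level in schedule_levels(deps):
        with ThreadPoolExecutor(max_workers=len(level)) as pool:
            list(pool.map(run_fn, sorted(level)))
```

Levels fall out of the dependency sets, so the parallel and sequential cases of the text are handled by the same loop.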
It should be noted that, before the processor stores the module starting information and module ending information of each task module into the persistent memory, the method further includes: when each task module starts to run, the processor records the corresponding module starting information in the non-persistent memory; when each task module finishes running, the processor records the corresponding module ending information in the non-persistent memory; the processor then aggregates the module starting information and module ending information of each task module in the non-persistent memory.
Further, the processor stores module start information and module end information of each task module to a persistent memory, including: the processor allocates a corresponding storage area for each task module in the persistent memory according to the size of each task module; the processor stores the module start information and the module end information of each task module to corresponding storage areas in the persistent memory.
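A minimal sketch of the size-based allocation just described, assuming a simple bump-pointer layout over a flat persistent-memory address range (the region representation is an assumption, not the patent's own format):

```python
def allocate_regions(sizes, base=0):
    """Assign each task module a contiguous storage region proportional to
    its size; a region is the half-open interval [offset, offset + size)."""
    regions, offset = {}, base
    for module, size in sizes.items():
        regions[module] = (offset, offset + size)
        offset += size
    return regions
```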
In this embodiment, the specific process of restoring the processing progress of the target task by the processor includes: the processor reads module starting information and module ending information of all the modules from the persistent memory; the processor compares the read information with the relation diagram in the persistent memory to determine a target task module which finishes running when the system is restarted; and the processor determines and restores the processing progress of the target task according to the position of the target task module in the relation diagram.
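The recovery comparison can be sketched as follows, assuming the persisted records reduce to a per-module set of "start"/"end" events and the relation graph flattens to an execution order (both representations are assumptions for illustration):

```python
def recover_progress(records, order):
    """records: persisted events per module, e.g. {"M1": {"start", "end"}};
    order: module execution order derived from the relation graph.
    Returns (modules that finished running, first module to resume from)."""
    completed = [m for m in order if records.get(m) == {"start", "end"}]
    # Resume from the first module that did not run to completion;
    # a module with only "start" was interrupted mid-run and must rerun.
    for m in order:
        if records.get(m, set()) != {"start", "end"}:
            return completed, m
    return completed, None  # everything finished before the restart
```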
S1506, after the operation of the task modules is finished, the processor determines the task output result of the target task.
S1507, the processor stores the task output result to the hard disk.
It should be noted that the processor then takes the first un-run task module after the target task module as a starting point and continues to run the remaining un-run task modules of the target task. Before any task module starts to run, the processor allocates a corresponding running area for the task module in the non-persistent memory according to the size of the task module; correspondingly, the method further includes: after the operation of any task module is finished, the processor releases the running area occupied by the task module in the non-persistent memory.
In one embodiment, the target task is any one of a task queue stored in the hard disk; correspondingly, the method further comprises the steps of: the processor reads the task queue from the hard disk to the persistent memory; accordingly, the processor reads the target task from persistent memory to non-persistent memory, comprising: the processor reads the target task from the task queue in the persistent memory and records the reading time stamp; the processor loads the target task into the non-persistent memory and records the loading time stamp.
After the processor stores the task output result to the hard disk, the method further comprises: the processor uninstalls the target task from the non-persistent memory and records an uninstalling timestamp; the processor gathers the reading time stamp, the loading time stamp and the unloading time stamp to a queue log; the processor stores the queue log to persistent memory.
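A hedged sketch of the queue log: the reading, loading and unloading timestamps are gathered per task in ordinary memory and then flushed to a persistent store, here stood in for by a dict (the class and method names are invented for illustration):

```python
import time

class QueueLog:
    """Per-task lifecycle log gathering read/load/unload timestamps."""
    def __init__(self):
        self.entries = {}

    def stamp(self, task, event):
        # record the current time for one lifecycle event of one task
        self.entries.setdefault(task, {})[event] = time.monotonic()

    def flush(self, persistent_store):
        # stand-in for writing the aggregated log to persistent memory
        persistent_store.update(self.entries)
        return persistent_store
```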
In one example, the system is connected with a task generating end, and the task generating end determines single-module scheduling constraint according to actual operation characteristics of the system and single-module expected characteristics; under the constraint of single-module scheduling, the target task is segmented into a plurality of task modules, and a relation graph is constructed based on the plurality of task modules.
The task generating end determines single-module scheduling constraint according to the actual running characteristics of the system and the single-module expected characteristics, and comprises the following steps: the task generating end obtains the clock frequency of a processor, the access speed of a persistent memory, the length of a single instruction and the expected operation time of a single module; the task generating end takes the clock frequency, the access speed and the single instruction length of the processor as actual running characteristics of the system, and takes the expected running time of the single module as expected characteristics of the single module; the task generating end calculates the single-module operation time according to the single-instruction length, the single-module expected operation time and the processor clock frequency; the task generating end calculates the preparation stage duration of the single module according to the single instruction length, the expected operation duration of the single module, the clock frequency of the processor, the access speed and the preset input data length; the task generating end calculates the duration of the ending stage of the single module according to the access speed and the preset output data length; the task generating end determines the sum of the single-module operation time length, the single-module preparation stage time length and the single-module ending stage time length as a single-module scheduling constraint.
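Taking the text's arithmetic at face value (units as stated), the constraint is the sum of the three stage durations. The second preparation-stage term is assumed here to be the preset input data length divided by the access speed, mirroring the ending-stage formula:

```python
def single_module_constraint(instr_len, expected_run, clock_freq,
                             access_speed, in_len, out_len):
    """Single-module scheduling constraint as described:
    operation duration + preparation duration + ending duration."""
    run = instr_len * expected_run * clock_freq          # operation stage
    ratio = clock_freq / access_speed
    prep = ratio * instr_len * expected_run + in_len / access_speed
    end = out_len / access_speed                         # ending stage
    return run + prep + end
```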
In one example, the task generating end calculates a single-module operation duration according to a single instruction length, a single-module expected operation duration and a processor clock frequency, including: the task generating end takes the product of the single instruction length, the single module expected operation time length and the clock frequency of the processor as the single module operation time length.
In one example, the task generating end calculates the single-module preparation stage duration according to the single instruction length, the single-module expected operation duration, the processor clock frequency, the access speed and the preset input data length, comprising: the task generating end calculates the ratio of the processor clock frequency to the access speed; the task generating end takes the product of the ratio, the single instruction length and the single-module expected operation duration as a first result; the task generating end takes the ratio of the preset input data length to the access speed as a second result; and the task generating end takes the sum of the first result and the second result as the single-module preparation stage duration.
In an example, the task generating end calculates a duration of a single module ending phase according to an access speed and a preset output data length, including: the task generating end takes the ratio of the preset output data length to the access speed as the duration of the ending stage of the single module.
Further, the task generating end divides the target task into a plurality of task modules under the single-module scheduling constraint, including: the task generating end transversely cuts the functions which are not related to each other in the target task into a plurality of subtasks; the task generating end longitudinally cuts each subtask into a plurality of nodes with sequence; and the task generating end performs transverse cutting and/or longitudinal cutting on each node to obtain a plurality of task modules with the total operation duration not exceeding the single-module scheduling constraint.
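The three-step split above can be sketched as follows, modeling a task as a list of mutually unrelated subtasks (horizontal cut), each subtask as a list of ordered nodes (vertical cut), and a node as a list of atomic steps. The halving cut and the length-based cost are stand-ins for the real module-cost measure:

```python
def split_node(node, constraint, cost):
    # keep cutting an ordered node in half until each piece fits the constraint
    if cost(node) <= constraint or len(node) <= 1:
        return [node]
    mid = len(node) // 2
    return (split_node(node[:mid], constraint, cost)
            + split_node(node[mid:], constraint, cost))

def split_task(task, constraint, cost=len):
    """task: list of subtasks; each subtask: list of ordered nodes.
    Returns task modules whose cost does not exceed the constraint."""
    modules = []
    for subtask in task:          # horizontal: mutually unrelated functions
        for node in subtask:      # vertical: ordered nodes of one function
            modules.extend(split_node(node, constraint, cost))
    return modules
```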
In one example, the task generating end builds a relationship graph based on a plurality of task modules, including: the task generating end determines the relation between different task modules; the task generating end associates different task modules according to the relation and marks the input data type and the output data type of each task module to obtain a relation diagram. And then, the task generating end stores each task of the divided modules and the corresponding relation diagram into the hard disk in a task queue form.
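A minimal sketch of such a typed relation graph, with hypothetical field names (`in`, `out`, `next`) standing in for whatever representation the task generating end actually uses:

```python
def build_relation_graph(modules, edges):
    """modules: {name: (input_type, output_type)};
    edges: (upstream, downstream) pairs giving the relations.
    Annotates each module with its I/O types and its successors."""
    graph = {name: {"in": io[0], "out": io[1], "next": []}
             for name, io in modules.items()}
    for up, down in edges:
        graph[up]["next"].append(down)
    return graph
```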
It can be seen that this embodiment enables the module starting information and module ending information to be stored completely and persistently; after the system is restarted, the processing progress of the target task is restored from the module running information and the relation graph in the persistent memory, so that processing of the task continues without repeated processing, thereby achieving power-loss consistency and crash consistency.
In this specification, each embodiment is described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may be disposed in Random Access Memory (RAM), Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of readable storage medium known in the art.
The principles and embodiments of the present application are described herein with specific examples; the above examples are provided only to assist in understanding the method of the present application and its core ideas. Meanwhile, those skilled in the art may make changes to the specific embodiments and the scope of application in accordance with the ideas of the present application; in view of the above, the contents of this description should not be construed as limiting the present application.

Claims (18)

1. A task processing method, which is characterized by being applied to a system comprising a processor, a non-persistent memory, a persistent memory and a hard disk;
The method comprises the following steps:
the processor reads a target task from the persistent memory to the non-persistent memory;
the operation of one task module is divided into a preparation stage, an operation stage and an ending stage, and the sum of the time of the preparation stage, the operation stage and the ending stage is equal to a single-module scheduling constraint; under the single-module scheduling constraint, dividing the target task into a plurality of task modules, and determining the relation among different task modules; associating different task modules according to the relation, and marking the input data type and the output data type of each task module to obtain a relation diagram; the task segmentation mode comprises transverse cutting and longitudinal cutting, wherein the transverse cutting is to divide functions which are not related to each other in the task, and the longitudinal cutting is to cut an independent function according to a sequence;
the processor operates the task modules in the non-persistent memory according to the relation diagram, and stores the module starting information and the module ending information of each task module into the persistent memory so as to read the module starting information and the module ending information of all the modules from the persistent memory after the system is restarted; comparing the read information with the relation diagram in the persistent memory to determine a target task module which finishes running when the system is restarted; determining and recovering the processing progress of the target task according to the position of the target task module in the relation diagram;
And after the operation of the task modules is finished, the processor determines a task output result of the target task and stores the task output result to the hard disk.
2. The method according to claim 1, wherein the system is connected with a task generating end, and the task generating end determines single-module scheduling constraints according to actual operation characteristics of the system and single-module expected characteristics.
3. The method according to claim 2, wherein the task generating side determines a single-module scheduling constraint according to the actual operating characteristics of the system and the single-module expected characteristics, including:
the task generating end obtains the clock frequency of a processor, the access speed of the persistent memory, the single instruction length and the expected operation duration of a single module;
the task generating end takes the clock frequency of the processor, the access speed and the single instruction length as actual running characteristics of the system, and takes the single module expected running time as the single module expected characteristics;
the task generating end calculates the single-module operation time length according to the single-instruction length, the single-module expected operation time length and the processor clock frequency;
The task generating end calculates the single module preparation stage duration according to the single instruction length, the single module expected operation duration, the processor clock frequency, the access speed and the preset input data length;
the task generating end calculates the duration of the ending stage of the single module according to the access speed and the preset output data length;
and the task generating end determines the sum of the single module operation time length, the single module preparation stage time length and the single module ending stage time length as the single module scheduling constraint.
4. The method of claim 3, wherein the task generating side calculates a single-module operation duration according to the single instruction length, the single-module expected operation duration, and the processor clock frequency, including:
the task generating end takes the product of the single instruction length, the single module expected operation duration and the processor clock frequency as the single module operation duration.
5. The method of claim 3, wherein the task generating side calculates a single module preparation phase duration according to the single instruction length, the single module expected operation duration, the processor clock frequency, the access speed, and a preset input data length, and the method comprises:
The task generating end calculates the ratio of the clock frequency of the processor to the access speed;
the task generating end takes the product of the ratio, the single instruction length and the single module expected operation duration as a first result;
the task generating end takes the ratio of the preset input data length to the access speed as a second result;
and the task generating end takes the sum of the first result and the second result as the single module preparation stage duration.
6. The method of claim 3, wherein the task generating side calculates a single module end period according to the access speed and a preset output data length, including:
and the task generating end takes the ratio of the preset output data length to the access speed as the duration of the single module ending stage.
7. The method according to claim 2, wherein the task generating end segments the target task into a plurality of task modules under the single-module scheduling constraint, including:
the task generating end transversely cuts the functions which are not related to each other in the target task into a plurality of subtasks;
the task generating end longitudinally cuts each subtask into a plurality of nodes with sequence;
And the task generating end transversely cuts and/or longitudinally cuts each node to obtain a plurality of task modules with the total operation duration not exceeding the single-module scheduling constraint.
8. The method of claim 1, wherein the processor running the plurality of task modules in the non-persistent memory according to the relationship graph comprises:
the processor determines a first module group capable of running in the same time period according to the relation diagram, and makes all task modules in the first module group run simultaneously in the same time period under the constraint of single-module scheduling; and/or determining a second module group with a sequential execution order according to the relation diagram, and sequentially operating each task module in the second module group according to the sequential execution order under the constraint of single module scheduling.
9. The method of claim 1, wherein before the processor stores the module start information and the module end information of each task module to the persistent memory, the method further comprises:
when each task module starts to run, the processor records corresponding module starting information in the non-persistent memory;
When each task module finishes running, the processor records corresponding module finishing information in the non-persistent memory;
the processor aggregates module start information and module end information for each task module in the non-persistent memory.
10. The method of claim 1, wherein the processor storing module start information and module end information for each task module to the persistent memory comprises:
the processor allocates a corresponding storage area for each task module in the persistent memory according to the size of each task module;
the processor stores the module start information and the module end information of each task module to corresponding storage areas in the persistent memory.
11. The method as recited in claim 10, further comprising:
and the processor takes an un-operated task module behind the target task module as a starting point, and continues to operate the un-operated task module in the target task.
12. The method according to any one of claims 1 to 11, further comprising:
before any task module starts to run, the processor allocates a corresponding running area for the task module in the non-persistent memory according to the size of the task module;
Correspondingly, the method further comprises the steps of: and after the operation of any task module is finished, the processor releases the operation area occupied by the task module in the non-persistent memory.
13. The method according to any one of claims 1 to 11, wherein the target task is any one of a task queue stored in the hard disk;
correspondingly, the method further comprises the steps of:
the processor reads the task queue from the hard disk to the persistent memory;
accordingly, the processor reads a target task from the persistent memory to the non-persistent memory, comprising:
the processor reads the target task from a task queue in the persistent memory and records a reading time stamp;
the processor loads the target task to the non-persistent memory and records a loading time stamp.
14. The method of claim 13, wherein after the processor stores the task output result to the hard disk, further comprising:
the processor uninstalls the target task from the non-persistent memory and records an uninstalling timestamp;
the processor gathers the reading time stamp, the loading time stamp and the unloading time stamp to a queue log;
The processor stores the queue log to the persistent memory.
15. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the method of any one of claims 1 to 14.
16. A readable storage medium for storing a computer program, wherein the computer program when executed by a processor implements the method of any one of claims 1 to 14.
17. A task processing system, comprising: processor, non-persistent memory, persistent memory and hard disk;
the persistent memory is used for: storing a target task;
the processor is configured to: reading the target task from the persistent memory to the non-persistent memory; the operation of one task module is divided into a preparation stage, an operation stage and an ending stage, and the sum of the time of the preparation stage, the operation stage and the ending stage is equal to a single-module scheduling constraint; under the single-module scheduling constraint, dividing the target task into a plurality of task modules, and determining the relation among different task modules; associating different task modules according to the relation, and marking the input data type and the output data type of each task module to obtain a relation diagram; the task segmentation mode comprises transverse cutting and longitudinal cutting, wherein the transverse cutting is to divide functions which are not related to each other in the task, and the longitudinal cutting is to cut an independent function according to a sequence; operating the task modules in the non-persistent memory according to the relation diagram;
The persistent memory is used for: storing module starting information and module ending information of each task module;
the processor is further configured to: reading module starting information and module ending information of all modules from the persistent memory; comparing the read information with the relation diagram in the persistent memory to determine a target task module which finishes running when the system is restarted; determining and recovering the processing progress of the target task according to the position of the target task module in the relation diagram; after the operation of the task modules is finished, determining a task output result of the target task;
the hard disk is used for: and storing a task output result of the target task.
18. The system of claim 17, further comprising: the task generation end is used for: and determining single-module scheduling constraint according to the actual operation characteristics of the system and the single-module expected characteristics.
CN202410066711.2A 2024-01-17 2024-01-17 Task processing method, device, medium and system Active CN117591267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410066711.2A CN117591267B (en) 2024-01-17 2024-01-17 Task processing method, device, medium and system

Publications (2)

Publication Number Publication Date
CN117591267A CN117591267A (en) 2024-02-23
CN117591267B true CN117591267B (en) 2024-04-05

Family

ID=89920440

Country Status (1)

Country Link
CN (1) CN117591267B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102165426A (en) * 2008-09-26 2011-08-24 微软公司 Memory management techniques selectively using mitigations to reduce errors
WO2015180493A1 (en) * 2014-05-30 2015-12-03 华为技术有限公司 Method, apparatus, and system for processing data storage
CN116755625A (en) * 2023-06-21 2023-09-15 陕西浪潮英信科技有限公司 Data processing method, device, equipment and readable storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant