CN113918296A - Model training task scheduling execution method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113918296A
CN113918296A
Authority
CN
China
Prior art keywords
task
model
training
initial model
initiated
Prior art date
Legal status
Granted
Application number
CN202111181731.7A
Other languages
Chinese (zh)
Other versions
CN113918296B (en)
Inventor
李怀志
Current Assignee
Ping An International Smart City Technology Co Ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An International Smart City Technology Co Ltd
Priority to CN202111181731.7A
Publication of CN113918296A
Application granted
Publication of CN113918296B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5038 Allocation of resources to service a request, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 2209/484 Indexing scheme relating to G06F 9/48: Precedence
    • G06F 2209/5021 Indexing scheme relating to G06F 9/50: Priority
    • G06F 2209/503 Indexing scheme relating to G06F 9/50: Resource availability

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • Debugging And Monitoring (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to artificial intelligence technology and discloses a model training task scheduling execution method, which comprises the following steps: determining a task to be initiated in the task queue according to the consumed resources and the available resources, and determining a task execution object in the execution object set according to the task to be initiated; training an initial model constructed from the task to be initiated on the task execution object, saving the model parameters of the initial model after each round of training and marking the corresponding saving time; if the training process of the initial model is interrupted, removing the task to be initiated from the task queue and updating it according to the saved model parameters and saving times; and adding the updated task to be initiated back into the task queue according to its priority. The invention also relates to blockchain technology: the model parameters may be stored in blockchain nodes. The invention further provides a model training task scheduling execution device, an electronic device and a storage medium. The invention can improve the efficiency of task scheduling execution.

Description

Model training task scheduling execution method and device, electronic equipment and storage medium
Technical Field
The invention relates to artificial intelligence technology, and in particular to a model training task scheduling execution method and device, an electronic device and a storage medium.
Background
With the development of artificial intelligence, the manual supervision that the training of artificial intelligence models once required has given way to training platforms that automatically schedule and execute multiple model training tasks, making model training intelligent.
However, when existing model training tasks are scheduled and executed, the training process must be continuously checked to see whether training has completed or been interrupted; if training is interrupted, other tasks cannot be scheduled, and the interrupted task must be trained again from the beginning, so the efficiency of task scheduling and execution is low.
Disclosure of Invention
The invention provides a method and a device for scheduling and executing a model training task, electronic equipment and a computer readable storage medium, and mainly aims to improve the efficiency of scheduling and executing the task.
In order to achieve the above object, the present invention provides a method for scheduling and executing a model training task, comprising:
acquiring a task queue and a consumption resource corresponding to each task in the task queue;
judging whether the number of tasks in the task queue is zero or not;
when the number of the tasks in the task queue is zero, stopping task scheduling;
when the number of the tasks in the task queue is not zero, acquiring available resources corresponding to each execution object in a preset execution object set in real time;
confirming a task to be initiated in the task queue according to the consumed resource and the available resource, wherein the task to be initiated comprises: model architecture information, initial model parameters, and priorities;
carrying out model construction according to the model architecture information and the initial model parameters to obtain the initial model;
performing multiple rounds of iterative training on the initial model by using the execution objects in the execution object set, storing the model parameters of the initial model after each round of training, and marking corresponding storage time;
detecting whether the training process of the initial model is interrupted in real time until the training of the initial model is finished;
when it is detected that the training process of the initial model is interrupted, removing the task to be initiated from the task queue, and updating the task to be initiated according to the saved model parameters and saving times;
and adding the updated task to be initiated into the task queue according to the priority of the task to be initiated, and returning to the step of judging whether the number of the tasks in the task queue is zero or not.
Optionally, the determining, according to the consumed resource and the available resource, a task to be launched in the task queue includes:
calculating the difference between the consumed resource and each available resource to obtain the corresponding resource differences;
summarizing all the resource difference values corresponding to each consumed resource to obtain a corresponding resource difference value set;
selecting, from all the resource difference sets, the sets that contain a resource difference not greater than zero to obtain the target resource difference sets;
determining the task to which the consumption resource corresponding to the target resource difference value set belongs as a task to be selected;
and selecting the task to be selected that ranks highest in the task queue to obtain the task to be initiated.
Optionally, the performing model construction according to the model architecture information and the initial model parameters to obtain the initial model includes:
obtaining a model frame and model structures from the model architecture information;
putting all the model structures into the model frame, and connecting according to a preset connecting sequence to obtain the combined model;
and setting the initial model parameters as the model parameters of the combined model to obtain the initial model.
Optionally, the performing multiple rounds of iterative training on the initial model by using the execution objects in the execution object set, saving the model parameters of the initial model after each round of training, and marking the corresponding saving time includes:
selecting the resource difference set corresponding to the task to be initiated from all the resource difference sets to obtain a resource difference set to be selected;
selecting the execution objects to which the available resources corresponding to the resource differences not greater than zero in the resource difference set to be selected belong, to obtain a target execution object set;
and selecting an object in the target execution object set according to the available resources to obtain the task execution object.
And performing multiple rounds of iterative training on the initial model by using the task execution object, storing the model parameters of the initial model after each round of training, and marking corresponding storage time.
Optionally, the selecting, by using the available resources, an object in the target execution object set to obtain the task execution object includes:
obtaining an object state of each execution object in the target execution object set, wherein the object state includes: an execution state and an idle state;
selecting the execution objects whose object state is the idle state in the target execution object set to obtain a first execution object set;
judging whether the first execution object set is an empty set;
when the first execution object set is an empty set, selecting the execution objects whose object state is the execution state in the target execution object set to obtain a second execution object set;
selecting the execution object with the smallest available resource in the second execution object set to obtain the task execution object;
and when the first execution object set is not an empty set, selecting the execution object with the smallest available resource in the first execution object set to obtain the task execution object.
Optionally, the performing, by using the task execution object, multiple rounds of iterative training on the initial model, saving the model parameters of the initial model after each round of training, and marking corresponding saving time includes:
acquiring a training sample data set and a data label of each training sample data in the training sample data set;
selecting training sample data in the training sample data set and inputting the training sample data into the initial model to obtain a predicted label value;
confirming the real tag value according to the data tag;
calculating a tag loss value by using a preset loss function according to the predicted tag value and the real tag value;
when the label loss value is smaller than or equal to a preset loss threshold value, finishing training of the initial model to obtain a trained initial model;
and when the label loss value is greater than a preset loss threshold value, adjusting the model parameters of the initial model, saving the adjusted model parameters of the initial model, marking the corresponding saving time, and returning to the step of selecting the training sample data in the training sample data set and inputting the training sample data into the initial model.
Optionally, the updating the task to be initiated according to the saved model parameters and the saved time includes:
extracting, from all the saved model parameters, the model parameters with the latest saving time to obtain the target model parameters;
and replacing the initial model parameters in the task to be initiated with the target model parameters to obtain the updated task to be initiated.
In order to solve the above problem, the present invention further provides a model training task scheduling executing apparatus, including:
the task confirmation module is used for acquiring a task queue and consumption resources corresponding to each task in the task queue; judging whether the number of tasks in the task queue is zero or not; when the number of the tasks in the task queue is zero, stopping task scheduling; when the number of the tasks in the task queue is not zero, acquiring available resources corresponding to each execution object in a preset execution object set in real time; confirming a task to be initiated in the task queue according to the consumed resource and the available resource, wherein the task to be initiated comprises: model architecture information, initial model parameters, and priorities;
the model construction training module is used for constructing a model according to the model architecture information and the initial model parameters to obtain the initial model; performing multiple rounds of iterative training on the initial model by using the execution objects in the execution object set, storing the model parameters of the initial model after each round of training, and marking corresponding storage time;
the task scheduling execution module is used for detecting in real time whether the training process of the initial model is interrupted until the training of the initial model is completed; when it is detected that the training process of the initial model is interrupted, removing the task to be initiated from the task queue, and updating the task to be initiated according to the saved model parameters and saving times; and adding the updated task to be initiated into the task queue according to the priority of the task to be initiated, and returning to the step of judging whether the number of tasks in the task queue is zero.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one computer program; and
a processor that executes the computer program stored in the memory to implement the model training task scheduling execution method described above.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, in which at least one computer program is stored, and the at least one computer program is executed by a processor in an electronic device to implement the model training task scheduling execution method described above.
When it is detected that the training process of the initial model is interrupted, the task to be initiated is removed from the task queue and updated according to the saved model parameters and saving times; the updated task is then added back into the task queue according to its priority, and the process returns to the step of judging whether the number of tasks in the task queue is zero. An interrupted task is thus rescheduled directly and updated at the same time, so that the updated task can continue training from the point of interruption instead of being executed again from the beginning, which makes task scheduling execution more efficient.
Drawings
Fig. 1 is a schematic flowchart of a method for scheduling and executing a model training task according to an embodiment of the present invention;
FIG. 2 is a block diagram of a model training task scheduling executing apparatus according to an embodiment of the present invention;
fig. 3 is a schematic internal structural diagram of an electronic device implementing a model training task scheduling execution method according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides a model training task scheduling execution method. The execution subject of the method includes, but is not limited to, at least one of the electronic devices, such as a server or a terminal, that can be configured to execute the method provided by the embodiments of the present application. In other words, the method may be executed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server may be an independent server, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
Referring to fig. 1, which is a schematic flow diagram of a method for scheduling and executing a model training task according to an embodiment of the present invention, in an embodiment of the present invention, the method for scheduling and executing a model training task includes:
s1, acquiring a task queue and a consumption resource corresponding to each task in the task queue;
in the embodiment of the invention, the task queue is a queue containing different model training tasks.
Further, in the embodiment of the present invention, the consumed resource is the computing resource required to execute the corresponding task; for example, if executing task A requires 2 kh/s of computing resources, the consumed resource of task A is 2 kh/s.
S2, judging whether the number of the tasks in the task queue is zero or not;
In order to determine whether task scheduling is still required, it is first necessary to judge whether the number of tasks in the task queue is zero.
S3, stopping task scheduling when the number of the tasks in the task queue is zero;
in detail, in the embodiment of the present invention, when the number of tasks in the task queue is zero, it indicates that all tasks have been executed, and task scheduling is stopped.
S4, when the number of tasks in the task queue is not zero, acquiring available resources corresponding to each execution object in a preset execution object set in real time;
in the embodiment of the invention, when the number of the tasks in the task queue is not zero, the available resources corresponding to each execution object in the preset execution object set need to be acquired, so that the tasks needing to be scheduled and executed are determined according to the available resources.
In detail, in the embodiment of the present invention, the execution object is a GPU server or a CPU server, and the available resources are available computing resources of the execution object.
In the embodiment of the present invention, each execution object may execute multiple tasks, as long as the available resources of the execution object can meet the task requirements; for example, if the available resource of the execution object is 2 kh/s and the consumed resource corresponding to task A is 1 kh/s, the execution object can run task A.
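This feasibility check reduces to a simple comparison. The sketch below is illustrative only; the function name and the kh/s unit framing are assumptions, not part of the patent:

```python
def can_host(available_khs, consumed_khs):
    """An execution object can run a task when its available computing
    resource covers the task's consumed resource (both in kh/s)."""
    return available_khs >= consumed_khs
```

With the values from the example above, `can_host(2.0, 1.0)` holds, so the object can take on task A in addition to whatever it is already executing.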
S5, confirming the task to be initiated in the task queue according to the consumed resource and the available resource, wherein the task to be initiated comprises: model architecture information, initial model parameters, and priorities;
in detail, the model architecture information in the embodiment of the present invention is architecture information that the task to be initiated needs to be trained. The initial model parameters are preset model initial parameters, and the priorities are priorities corresponding to the tasks.
Optionally, in the embodiment of the present invention, the priority may be represented by any integer within a preset range, where a smaller value indicates a higher priority; for example, the preset range may be [0, 255].
In detail, the determining the task to be launched in the task queue according to the consumed resource and the available resource in the embodiment of the present invention includes:
Step 1: calculating the difference between the consumed resource and each available resource to obtain the corresponding resource differences;
Specifically, in the embodiment of the present invention, the difference between the consumed resource and each available resource is calculated to obtain the corresponding resource difference; for example, if the consumed resource is 4 kh/s and the available resource is 3 kh/s, the resource difference is 4 kh/s - 3 kh/s = 1 kh/s.
Step 2: summarizing all the resource difference values corresponding to each consumed resource to obtain a corresponding resource difference value set;
Step 3: selecting, from all the resource difference sets, the sets that contain a resource difference not greater than zero to obtain the target resource difference sets;
in detail, in the embodiment of the present invention, if the resource difference is greater than zero, it indicates that the corresponding available resource cannot meet the training resource consumption of the task, and therefore, a resource difference set having a value not greater than zero in all the resource difference sets needs to be selected to obtain a target resource difference set.
Step 4: determining the task to which the consumed resource corresponding to the target resource difference set belongs as a task to be selected;
Step 5: selecting the task to be selected that ranks highest in the task queue to obtain the task to be initiated.
For example: the task queue is [task A, task C, task B, task D], where task C and task B are tasks to be selected; task C is ranked second in the task queue and task B third, so task C, the higher-ranked candidate, is the task to be initiated.
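The five steps above can be sketched in Python as follows. This is a hedged illustration under assumed data structures (a task-name queue, a per-task consumed-resource map, and a list of per-object available resources); none of these names come from the patent:

```python
def select_task_to_initiate(task_queue, consumed, available_resources):
    """Steps 1-5: for each task, compute the differences
    (consumed - available) against every execution object's available
    resource; a task is a candidate when at least one difference is not
    greater than zero; the candidate that ranks earliest in the task
    queue is the task to be initiated."""
    candidates = []
    for task in task_queue:
        diffs = [consumed[task] - a for a in available_resources]  # steps 1-2
        if any(d <= 0 for d in diffs):                             # steps 3-4
            candidates.append(task)
    return candidates[0] if candidates else None                   # step 5
```

Here a task whose consumption exceeds every object's available resource is skipped, and among the feasible tasks the earliest-queued one is initiated.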
S6, carrying out model construction according to the model architecture information and the initial model parameters to obtain the initial model;
in the embodiment of the present invention, the initial model is an artificial intelligence model, and optionally, the initial model is a convolutional neural network model.
In detail, in the embodiment of the present invention, the model frame and the model structures are obtained from the model architecture information; all the model structures are placed into the model frame and connected according to a preset connection sequence to obtain a combined model, and the initial model parameters are set as the model parameters of the combined model to obtain the initial model.
S7, performing multiple rounds of iterative training on the initial model by using the execution objects in the execution object set, saving the model parameters of the initial model after each round of training, and marking the corresponding saving time;
In the embodiment of the present invention, the determining, according to the task to be initiated, the task execution objects in all the execution object sets includes:
selecting a resource difference set corresponding to the task to be initiated from all the resource difference sets to obtain a resource difference set to be selected;
and selecting the execution objects to which the available resources corresponding to the resource differences not greater than zero in the resource difference set to be selected belong, to obtain a target execution object set.
Further, the embodiment of the present invention selects an object in the target execution object set by using the available resource to obtain the task execution object; and performing multiple rounds of iterative training on the initial model by using the task execution object, storing the model parameters of the initial model after each round of training, and marking corresponding storage time.
In detail, in the embodiment of the present invention, selecting an object in the target execution object set by using the available resource to obtain the task execution object includes:
obtaining an object state of each execution object in the target execution object set, wherein the object state includes: an execution state and an idle state;
selecting the execution objects whose object state is the idle state in the target execution object set to obtain a first execution object set;
judging whether the first execution object set is an empty set; if the first execution object set is an empty set, selecting the execution objects whose object state is the execution state in the target execution object set to obtain a second execution object set;
and selecting the execution object with the smallest available resource in the second execution object set to obtain the task execution object.
If the first execution object set is not an empty set, selecting the execution object with the smallest available resource in the first execution object set to obtain the task execution object.
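One possible reading of this selection, assuming idle objects are preferred over executing ones and that "smallest available resource" implements a best-fit policy, is the following sketch (the dictionary keys and state names are illustrative assumptions):

```python
def choose_execution_object(objects):
    """objects: dicts with 'name', 'state' ('idle' or 'executing') and
    'available' (kh/s), pre-filtered so each can host the task.
    Prefer idle objects; fall back to executing ones; within the chosen
    group pick the object with the smallest available resource (best fit)."""
    idle = [o for o in objects if o["state"] == "idle"]
    pool = idle if idle else objects
    return min(pool, key=lambda o: o["available"])
```

Best fit keeps the objects with the largest spare capacity free for larger tasks that may arrive later.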
Optionally, in the embodiment of the present invention, performing multiple rounds of iterative training on the initial model by using the task execution object, storing the model parameters of the initial model after each round of training, and marking corresponding storage time includes:
acquiring a training sample data set and a data label of each training sample data in the training sample data set;
in the embodiment of the present invention, the data tag is a tag for marking training sample data, and includes: category, coordinates, etc.
Optionally, in the embodiment of the present invention, the task execution object is used to obtain a training sample data set and a data tag of each training sample data in the training sample data set, and load training resources to perform multiple rounds of training on the initial model, and store the model parameters of the initial model after each training and mark corresponding storage time until the initial model is trained, and output the trained initial model.
Selecting training sample data in the training sample data set and inputting the training sample data into the initial model to obtain a predicted label value;
confirming the real tag value according to the data tag;
calculating a tag loss value by using a preset loss function according to the predicted tag value and the real tag value;
when the label loss value is smaller than or equal to a preset loss threshold value, finishing training of the initial model to obtain a trained initial model;
and when the tag loss value is greater than the preset loss threshold, adjusting the model parameters of the initial model, saving the adjusted model parameters and marking the corresponding saving time, and returning to the step of selecting training sample data from the training sample data set and inputting it into the initial model.
When the training of the initial model is completed, the trained model is output, the training resources loaded by the task execution object are released, the task to be initiated is removed from the task queue, and the process returns to step S2.
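The loop just described (compute the tag loss, stop once it falls to the threshold, otherwise adjust the parameters and checkpoint them with a saving time) might be sketched as below. The loss and update functions are placeholders standing in for the patent's unspecified loss function and parameter adjustment:

```python
import time

def train_with_checkpoints(model_params, batches, loss_fn, step_fn,
                           loss_threshold, checkpoints):
    """After each parameter adjustment the parameters are appended to
    `checkpoints` together with a save timestamp, so an interrupted task
    can later resume from the latest checkpoint instead of restarting."""
    for batch in batches:
        loss = loss_fn(model_params, batch)
        if loss <= loss_threshold:
            return model_params            # training finished
        model_params = step_fn(model_params, batch)
        checkpoints.append((time.time(), model_params))
    return model_params
```

A toy run with a scalar "parameter" that is pulled halfway toward a target each round shows the checkpoint list growing by one entry per completed round.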
In another embodiment of the present invention, the model parameters may be stored in the blockchain nodes, and the data access efficiency is improved by using the characteristic of high throughput of the blockchain nodes.
S8, detecting whether the training process of the initial model is interrupted in real time until the training of the initial model is finished;
In detail, in the embodiment of the present invention, in order to detect an interruption of the training process of the initial model in time, it is necessary to monitor in real time whether the training process is interrupted until the training of the initial model is completed.
S9, when it is detected that the training process of the initial model is interrupted, removing the task to be initiated from the task queue, and updating the task to be initiated according to the saved model parameters and saving times;
In the embodiment of the invention, so that an interrupted training process neither loses the progress already made nor blocks the other tasks in the task queue from being executed, the model parameters with the latest saving time among all the saved model parameters are extracted to obtain the target model parameters; the initial model parameters in the task to be initiated are then replaced with the target model parameters to obtain the updated task to be initiated.
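Updating the interrupted task then amounts to picking the checkpoint with the latest saving time. A minimal sketch, with assumed task and checkpoint structures:

```python
def update_task(task, checkpoints):
    """checkpoints: list of (save_time, model_params) pairs recorded
    during training. Replace the task's initial model parameters with
    the most recently saved ones; priority and architecture info are
    carried over unchanged."""
    _, latest_params = max(checkpoints, key=lambda c: c[0])
    updated = dict(task)
    updated["initial_params"] = latest_params
    return updated
```

The original task object is left untouched, so the update can be retried if re-enqueueing fails.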
Optionally, in the embodiment of the present invention, whether the training process of the initial model is interrupted is determined by monitoring whether the training process of the initial model has stopped.
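One simple way to realize such monitoring is a heartbeat check: the training process periodically records a timestamp, and the scheduler treats training as interrupted when no heartbeat has arrived within a timeout. The heartbeat mechanism and parameter names below are illustrative assumptions, not prescribed by the patent:

```python
import time

def is_training_interrupted(last_heartbeat, timeout_seconds, now=None):
    """Return True when the training process has gone silent for longer than
    `timeout_seconds`, i.e. it is considered stopped/interrupted."""
    now = time.time() if now is None else now
    return now - last_heartbeat > timeout_seconds
```

The scheduler would call this check in a polling loop until the training of the initial model is completed.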
And S10, adding the updated task to be initiated into the task queue according to the priority of the task to be initiated, and returning to S2.
In the embodiment of the present invention, the priority of the updated task to be initiated is the same as that of the original task to be initiated. Therefore, the updated task to be initiated is added to the task queue according to that priority, and the process returns to the step of judging whether the number of tasks in the task queue is zero, that is, to step S2.
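Re-adding the updated task according to its unchanged priority can be sketched as an ordered insertion; the list-of-dicts queue shape and the "priority" key are assumptions for illustration:

```python
def enqueue_by_priority(task_queue, task):
    """Insert `task` so the queue stays sorted by descending priority; among
    equal priorities, tasks already in the queue keep their earlier position."""
    position = len(task_queue)
    for i, queued in enumerate(task_queue):
        if queued["priority"] < task["priority"]:
            position = i
            break
    task_queue.insert(position, task)
    return task_queue

queue = [{"name": "a", "priority": 3}, {"name": "b", "priority": 1}]
enqueue_by_priority(queue, {"name": "c", "priority": 2})
```

Inserting behind equal-priority tasks keeps the re-queued task from starving tasks that were queued earlier at the same priority.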
S11, when no interruption of the training process of the initial model is detected, outputting the trained initial model after the training is completed, removing the task to be initiated from the task queue, and returning to S2;
in detail, when the initial model training is completed, the trained model is output, meanwhile, the training resources loaded by the task execution object are released, and the task to be initiated is removed from the task queue and returns to S2.
FIG. 2 is a functional block diagram of the model training task scheduling executing apparatus according to the present invention.
The model training task scheduling execution apparatus 100 according to the present invention may be installed in an electronic device. According to the implemented functions, the model training task scheduling execution apparatus may include a task confirmation module 101, a model construction training module 102, and a task scheduling execution module 103. A module, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of the electronic device to perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the task confirmation module 101 is configured to obtain a task queue and a consumed resource corresponding to each task in the task queue; judging whether the number of tasks in the task queue is zero or not; when the number of the tasks in the task queue is zero, stopping task scheduling; when the number of the tasks in the task queue is not zero, acquiring available resources corresponding to each execution object in a preset execution object set in real time; confirming a task to be initiated in the task queue according to the consumed resource and the available resource, wherein the task to be initiated comprises: model architecture information, initial model parameters, and priorities;
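The task confirmation logic above (compare each task's consumed resource with the available resources of the execution objects and take the highest-ranked task that fits) can be sketched as follows; the plain-number resource model and the function name are simplifying assumptions:

```python
def confirm_task_to_initiate(task_queue, consumed_resources, available_resources):
    """Walk the queue in priority order and return the first task whose
    consumed resource minus at least one execution object's available
    resource is not greater than zero, i.e. some object can host it."""
    for task in task_queue:
        differences = [consumed_resources[task] - a for a in available_resources]
        if any(d <= 0 for d in differences):
            return task
    return None  # no queued task currently fits any execution object

picked = confirm_task_to_initiate(["t1", "t2"], {"t1": 16, "t2": 4}, [8, 6])
```

Here a large task ("t1") that exceeds every execution object's available resource is skipped in favor of the next queued task that fits.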
the model construction training module 102 is configured to perform model construction according to the model architecture information and the initial model parameters to obtain the initial model; performing multiple rounds of iterative training on the initial model by using the execution objects in the execution object set, storing the model parameters of the initial model after each round of training, and marking corresponding storage time;
the task scheduling execution module 103 is configured to detect in real time whether the training process of the initial model is interrupted until the training of the initial model is completed; when the training process of the initial model is detected to be interrupted, remove the task to be initiated from the task queue, and update the task to be initiated according to the saved model parameters and the saving time; and add the updated task to be initiated into the task queue according to the priority of the task to be initiated, and return to the step of judging whether the number of the tasks in the task queue is zero.
Fig. 3 is a schematic structural diagram of an electronic device implementing the model training task scheduling execution method according to the present invention.
The electronic device may include a processor 10, a memory 11, a communication bus 12, and a communication interface 13, and may further include a computer program, such as a model training task scheduling execution program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used to store not only application software installed in the electronic device and various types of data, such as codes of a model training task scheduling execution program, but also temporarily store data that has been output or will be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device by running or executing programs or modules (e.g., model training task scheduling execution programs, etc.) stored in the memory 11 and calling data stored in the memory 11.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, and so on. The communication bus 12 is arranged to enable connection communication between the memory 11 and the at least one processor 10 and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
Fig. 3 shows only an electronic device with certain components, and it will be understood by those skilled in the art that the structure shown in Fig. 3 does not constitute a limitation of the electronic device, which may include fewer or more components than those shown, some components may be combined, or the components may be arranged differently.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power source may also include any component of one or more dc or ac power sources, recharging devices, power failure classification circuits, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Optionally, the communication interface 13 may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which is generally used to establish a communication connection between the electronic device and other electronic devices.
Optionally, the communication interface 13 may further include a user interface, which may be a display, an input unit (such as a keyboard), and optionally a standard wired interface or a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The model training task scheduling execution program stored in the memory 11 of the electronic device is a combination of a plurality of computer programs, and when running in the processor 10, can realize:
acquiring a task queue and a consumption resource corresponding to each task in the task queue;
judging whether the number of tasks in the task queue is zero or not;
when the number of the tasks in the task queue is zero, stopping task scheduling;
when the number of the tasks in the task queue is not zero, acquiring available resources corresponding to each execution object in a preset execution object set in real time;
confirming a task to be initiated in the task queue according to the consumed resource and the available resource, wherein the task to be initiated comprises: model architecture information, initial model parameters, and priorities;
carrying out model construction according to the model architecture information and the initial model parameters to obtain the initial model;
performing multiple rounds of iterative training on the initial model by using the execution objects in the execution object set, storing the model parameters of the initial model after each round of training, and marking corresponding storage time;
detecting whether the training process of the initial model is interrupted in real time until the training of the initial model is finished;
when the training process of the initial model is detected to be interrupted, removing the task to be initiated from the task queue, and updating the task to be initiated according to the saved model parameters and the saving time;
and adding the updated task to be initiated into the task queue according to the priority of the task to be initiated, and returning to the step of judging whether the number of the tasks in the task queue is zero or not.
Specifically, the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the computer program, which is not described herein again.
Further, the electronic device integrated module/unit, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. The computer readable medium may be non-volatile or volatile. The computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
Embodiments of the present invention may also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor of an electronic device, the computer program may implement:
acquiring a task queue and a consumption resource corresponding to each task in the task queue;
judging whether the number of tasks in the task queue is zero or not;
when the number of the tasks in the task queue is zero, stopping task scheduling;
when the number of the tasks in the task queue is not zero, acquiring available resources corresponding to each execution object in a preset execution object set in real time;
confirming a task to be initiated in the task queue according to the consumed resource and the available resource, wherein the task to be initiated comprises: model architecture information, initial model parameters, and priorities;
carrying out model construction according to the model architecture information and the initial model parameters to obtain the initial model;
performing multiple rounds of iterative training on the initial model by using the execution objects in the execution object set, storing the model parameters of the initial model after each round of training, and marking corresponding storage time;
detecting whether the training process of the initial model is interrupted in real time until the training of the initial model is finished;
when the training process of the initial model is detected to be interrupted, removing the task to be initiated from the task queue, and updating the task to be initiated according to the saved model parameters and the saving time;
and adding the updated task to be initiated into the task queue according to the priority of the task to be initiated, and returning to the step of judging whether the number of the tasks in the task queue is zero or not.
Further, the computer usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
The embodiments of the present application can acquire and process related data based on artificial intelligence technology. Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A model training task scheduling execution method is characterized by comprising the following steps:
acquiring a task queue and a consumption resource corresponding to each task in the task queue;
judging whether the number of tasks in the task queue is zero or not;
when the number of the tasks in the task queue is zero, stopping task scheduling;
when the number of the tasks in the task queue is not zero, acquiring available resources corresponding to each execution object in a preset execution object set in real time;
confirming a task to be initiated in the task queue according to the consumed resource and the available resource, wherein the task to be initiated comprises: model architecture information, initial model parameters, and priorities;
carrying out model construction according to the model architecture information and the initial model parameters to obtain the initial model;
performing multiple rounds of iterative training on the initial model by using the execution objects in the execution object set, storing the model parameters of the initial model after each round of training, and marking corresponding storage time;
detecting whether the training process of the initial model is interrupted in real time until the training of the initial model is finished;
when the training process of the initial model is detected to be interrupted, removing the task to be initiated from the task queue, and updating the task to be initiated according to the saved model parameters and the saving time;
and adding the updated task to be initiated into the task queue according to the priority of the task to be initiated, and returning to the step of judging whether the number of the tasks in the task queue is zero or not.
2. The method of claim 1, wherein the confirming a task to be initiated in the task queue according to the consumed resource and the available resource comprises:
calculating the difference between each consumed resource and each available resource, respectively, to obtain corresponding resource difference values;
summarizing all the resource difference values corresponding to each consumed resource to obtain a corresponding resource difference value set;
selecting a resource difference set with resource differences not greater than zero in all the resource difference sets to obtain a target resource difference set;
determining the task to which the consumption resource corresponding to the target resource difference value set belongs as a task to be selected;
and selecting the task to be selected with the top rank in the task queue to obtain the task to be initiated.
3. The method as claimed in claim 1, wherein the performing model construction according to the model architecture information and the initial model parameters to obtain the initial model comprises:
obtaining a model frame and model structures in the model architecture information;
putting all the model structures into the model frame, and connecting according to a preset connecting sequence to obtain the combined model;
and setting the initial model parameters as the model parameters of the combined model to obtain the initial model.
4. The method of claim 2, wherein the performing multiple rounds of iterative training on the initial model by using the execution objects in the execution object set, saving model parameters of the initial model after each round of training, and marking corresponding saving time comprises:
selecting a resource difference set corresponding to the task to be initiated from all the resource difference sets to obtain a resource difference set to be selected;
selecting an execution object to which the available resource belongs corresponding to the resource difference value which is not greater than zero in the target resource difference value set to obtain a target execution object set;
and selecting the objects in the target execution object set by using the available resources to obtain the task execution object.
And performing multiple rounds of iterative training on the initial model by using the task execution object, storing the model parameters of the initial model after each round of training, and marking corresponding storage time.
5. The method as claimed in claim 4, wherein the selecting the object in the target execution object set by using the available resources to obtain the task execution object comprises:
obtaining an object state of each execution object in the target execution object set, wherein the object state includes: an execution state and an idle state;
selecting the execution objects whose object state is the idle state in the target execution object set to obtain a first execution object set;
judging whether the first execution object set is an empty set or not;
when the first execution object set is an empty set, selecting an execution object of which the object state is an execution state in the target execution object set to obtain a second execution object set;
selecting the execution object with the minimum available resource in the second execution object set to obtain a task execution object;
and when the first execution object set is not an empty set, selecting the execution object with the minimum available resource in the first execution object set to obtain a task execution object.
6. The method as claimed in claim 4, wherein the performing multiple rounds of iterative training on the initial model by using the task execution object, saving the model parameters of the initial model after each round of training, and marking the corresponding saving time comprises:
acquiring a training sample data set and a data label of each training sample data in the training sample data set;
selecting training sample data in the training sample data set and inputting the training sample data into the initial model to obtain a predicted label value;
confirming a real value of the tag according to the data tag;
calculating by using a preset loss function according to the predicted tag value and the actual tag value to obtain a tag loss value;
when the label loss value is smaller than or equal to a preset loss threshold value, finishing training of the initial model to obtain a trained initial model;
and when the label loss value is greater than a preset loss threshold value, adjusting the model parameters of the initial model, saving the adjusted model parameters of the initial model, marking the corresponding saving time, and returning to the step of selecting the training sample data in the training sample data set and inputting the training sample data into the initial model.
7. The method for scheduling and executing model training tasks according to any one of claims 1 to 6, wherein the updating the to-be-initiated task according to the saved model parameters and the saving time comprises:
extracting the model parameter with the latest saving time from all the saved model parameters to obtain a target model parameter;
and replacing the initial model parameters in the task to be initiated with the target model parameters to obtain the updated task to be initiated.
8. A model training task scheduling execution apparatus, comprising:
the task confirmation module is used for acquiring a task queue and consumption resources corresponding to each task in the task queue; judging whether the number of tasks in the task queue is zero or not; when the number of the tasks in the task queue is zero, stopping task scheduling; when the number of the tasks in the task queue is not zero, acquiring available resources corresponding to each execution object in a preset execution object set in real time; confirming a task to be initiated in the task queue according to the consumed resource and the available resource, wherein the task to be initiated comprises: model architecture information, initial model parameters, and priorities;
the model construction training module is used for constructing a model according to the model architecture information and the initial model parameters to obtain the initial model; performing multiple rounds of iterative training on the initial model by using the execution objects in the execution object set, storing the model parameters of the initial model after each round of training, and marking corresponding storage time;
the task scheduling execution module is used for detecting in real time whether the training process of the initial model is interrupted until the training of the initial model is completed; when the training process of the initial model is detected to be interrupted, removing the task to be initiated from the task queue, and updating the task to be initiated according to the saved model parameters and the saving time; and adding the updated task to be initiated into the task queue according to the priority of the task to be initiated, and returning to the step of judging whether the number of the tasks in the task queue is zero.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor;
wherein the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the model training task schedule execution method of any one of claims 1 to 7.
10. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements the model training task schedule execution method according to any one of claims 1 to 7.
CN202111181731.7A 2021-10-11 2021-10-11 Model training task scheduling execution method and device, electronic equipment and storage medium Active CN113918296B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111181731.7A CN113918296B (en) 2021-10-11 2021-10-11 Model training task scheduling execution method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111181731.7A CN113918296B (en) 2021-10-11 2021-10-11 Model training task scheduling execution method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113918296A true CN113918296A (en) 2022-01-11
CN113918296B CN113918296B (en) 2024-09-13

Family

ID=79238995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111181731.7A Active CN113918296B (en) 2021-10-11 2021-10-11 Model training task scheduling execution method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113918296B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114661450A (en) * 2022-05-26 2022-06-24 南京云信达科技有限公司 Backup system task scheduling method and system based on time series learning and prediction

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110879750A (en) * 2017-10-13 2020-03-13 华为技术有限公司 Resource management method and terminal equipment
CN112181645A (en) * 2020-09-21 2021-01-05 中国建设银行股份有限公司 Resource scheduling method, device, equipment and storage medium
CN112685153A (en) * 2020-12-25 2021-04-20 广州奇盾信息技术有限公司 Micro-service scheduling method and device and electronic equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110879750A (en) * 2017-10-13 2020-03-13 华为技术有限公司 Resource management method and terminal equipment
CN112181645A (en) * 2020-09-21 2021-01-05 中国建设银行股份有限公司 Resource scheduling method, device, equipment and storage medium
CN112685153A (en) * 2020-12-25 2021-04-20 广州奇盾信息技术有限公司 Micro-service scheduling method and device and electronic equipment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114661450A (en) * 2022-05-26 2022-06-24 南京云信达科技有限公司 Backup system task scheduling method and system based on time series learning and prediction

Also Published As

Publication number Publication date
CN113918296B (en) 2024-09-13

Similar Documents

Publication Publication Date Title
CN111984426B (en) Task scheduling method and device, electronic equipment and storage medium
CN111694844B (en) Enterprise operation data analysis method and device based on configuration algorithm and electronic equipment
CN114881616A (en) Business process execution method and device, electronic equipment and storage medium
CN117193975A (en) Task scheduling method, device, equipment and storage medium
CN113890712A (en) Data transmission method and device, electronic equipment and readable storage medium
CN114491047A (en) Multi-label text classification method and device, electronic equipment and storage medium
CN112256783A (en) Data export method and device, electronic equipment and storage medium
CN114844844A (en) Delay message processing method, device, equipment and storage medium
CN115129753A (en) Data blood relationship analysis method and device, electronic equipment and storage medium
CN114880368A (en) Data query method and device, electronic equipment and readable storage medium
CN113240351A (en) Business data consistency checking method and device, electronic equipment and medium
CN113918296B (en) Model training task scheduling execution method and device, electronic equipment and storage medium
CN114817408B (en) Scheduling resource identification method and device, electronic equipment and storage medium
CN115373826B (en) Task scheduling method and device based on cloud computing
CN115827179B (en) Calculation power scheduling method, device and equipment of physical machine equipment and storage medium
CN114625512A (en) Task scheduling method and device, electronic equipment and storage medium
CN114942855A (en) Interface calling method and device, electronic equipment and storage medium
CN115033605A (en) Data query method and device, electronic equipment and storage medium
CN113918305A (en) Node scheduling method and device, electronic equipment and readable storage medium
CN114547011A (en) Data extraction method and device, electronic equipment and storage medium
CN114510400A (en) Task execution method and device, electronic equipment and storage medium
CN114185622A (en) Page loading method, device, equipment and storage medium
CN114185588A (en) Incremental package generation method, device, equipment and storage medium
CN112579046A (en) User story analysis method and device, electronic equipment and storage medium
CN111552631A (en) System testing method, device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant