CN111831452A - Task execution method and device, storage medium and electronic device - Google Patents

Task execution method and device, storage medium and electronic device Download PDF

Info

Publication number
CN111831452A
CN111831452A (application CN202010713701.5A)
Authority
CN
China
Prior art keywords
hardware
target
hardware unit
determining
hardware units
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010713701.5A
Other languages
Chinese (zh)
Inventor
孙昉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010713701.5A priority Critical patent/CN111831452A/en
Publication of CN111831452A publication Critical patent/CN111831452A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/502Proximity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/508Monitor

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

Embodiments of the invention provide a task execution method and device, a storage medium, and an electronic device. The method includes: determining, from at least two artificial intelligence (AI) hardware units, a target AI hardware unit for executing a target task, based on performance data of a target model when the target model runs on the at least two AI hardware units and on the total loads of the at least two AI hardware units; and executing the target task with the target AI hardware unit. The invention solves the problem of unreasonable allocation and scheduling of AI computing resources in the related art, achieving reasonable allocation and scheduling of AI computing resources.

Description

Task execution method and device, storage medium and electronic device
Technical Field
Embodiments of the invention relate to the field of communication, and in particular to a task execution method and device, a storage medium, and an electronic device.
Background
An edge computing device contains multiple AI computing units. Different models can be assigned to a designated computing unit according to model complexity: models of high complexity are executed on high-performance hardware and models of low complexity on weaker hardware, so that the different models are automatically and flexibly distributed across the computing units.
In the related art, the input to the scheduling policy is a static model parameter rather than a real performance parameter measured in the actual operating environment.
As a result, the related art allocates and schedules AI computing resources unreasonably.
No effective solution to this problem has yet been proposed.
Disclosure of Invention
Embodiments of the invention provide a task execution method and device, a storage medium, and an electronic device, to at least solve the problem of unreasonable allocation and scheduling of AI computing resources in the related art.
According to an embodiment of the present invention, a task execution method is provided, including: determining, from at least two artificial intelligence (AI) hardware units, a target AI hardware unit for executing a target task, based on performance data of a target model when the target model runs on the at least two AI hardware units and on the total loads of the at least two AI hardware units; and executing the target task with the target AI hardware unit.
According to another embodiment of the present invention, a task execution apparatus is provided, including: a first determination module, configured to determine, from at least two AI hardware units, a target AI hardware unit for executing a target task, based on performance data of a target model when the target model runs on the at least two AI hardware units and on the total loads of the at least two AI hardware units; and an execution module, configured to execute the target task with the target AI hardware unit.
According to a further embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, the target AI hardware unit for executing the target task is determined from the plurality of AI hardware units according to the performance data of the target model when running on those units and their total loads, and the target task is executed with the target AI hardware unit, so that AI computing resources are allocated and scheduled reasonably.
Drawings
Fig. 1 is a block diagram of the hardware configuration of a mobile terminal for a task execution method according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a method of task execution according to an embodiment of the invention;
FIG. 3 is a flowchart of the model initialization phase of a task execution method according to a specific embodiment of the present invention;
FIG. 4 is a flowchart of the model actual-run phase of a task execution method according to a specific embodiment of the present invention;
FIG. 5 is a block diagram of a task performing device according to an embodiment of the present invention;
fig. 6 is a block diagram of a task execution device according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in this application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Taking execution on a mobile terminal as an example, fig. 1 is a block diagram of the hardware structure of a mobile terminal for a task execution method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as an MCU microprocessor or an FPGA programmable logic device) and a memory 104 for storing data, and may further include a transmission device 106 for communication functions and an input/output device 108. Those skilled in the art will understand that the structure shown in fig. 1 is only illustrative and does not limit the structure of the mobile terminal. For example, the mobile terminal may include more or fewer components than shown in fig. 1, or have a different configuration.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as computer programs corresponding to the task execution method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In this embodiment, a task execution method is provided, and fig. 2 is a flowchart of a task execution method according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, determining a target AI hardware unit for executing a target task from at least two AI hardware units based on performance data of a target model when the target model runs on the at least two artificial intelligence AI hardware units and total loads of the at least two AI hardware units;
step S204, executing the target task by using the target AI hardware unit.
In the above embodiment, the target model may be an algorithm model or the like, and the performance data may include the number of cycles actually executed by the AI hardware unit, the time consumed, and so on. To ensure that the policy input is correct and robust, the test may be repeated multiple times and the results averaged.
Optionally, the executing entity of the above steps may be a processor or another device with similar processing capability, or a machine integrating at least a data processing device, which may include a terminal such as a computer or a mobile phone, but is not limited thereto.
According to the invention, the target AI hardware unit for executing the target task is determined from the plurality of AI hardware units according to the performance data of the target model when running on those units and their total loads, and the target task is executed with the target AI hardware unit, so that scheduling reflects measured performance rather than static parameters.
In one exemplary embodiment, before determining a target AI hardware unit for performing a target task from the at least two AI hardware units based on performance data of a target model when running on the at least two artificial intelligence AI hardware units and the total loads of the at least two AI hardware units, the method further comprises: sequentially running the target model on the at least two AI hardware units; and determining performance data of the target model on each of the at least two AI hardware units based on the running results. In this embodiment, assuming there are three AI hardware acceleration units A, B, and C, the target model is run on each of A, B, and C in turn, and performance data of the target model on each unit is determined.
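The initialization-phase profiling described above can be sketched as follows. Here `model.run_on(unit)` is a hypothetical call standing in for whatever inference API the device actually exposes; the patent only requires that real execution time (or cycle count) is captured and averaged over several runs.

```python
import time
import statistics

def profile_model(model, hardware_units, repeats=5):
    """Measure a model's cost on each AI hardware unit (initialization phase).

    `model.run_on(unit)` is a hypothetical API, not one named by the patent.
    """
    perf = {}
    for unit in hardware_units:
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            model.run_on(unit)  # actually execute the model on this unit
            samples.append(time.perf_counter() - start)
        # average over repeats for robustness, as the text suggests
        perf[unit.name] = statistics.mean(samples)
    return perf
```

The resulting per-unit averages are saved and later fed to the scheduling policy as its input.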
In one exemplary embodiment, determining a target AI hardware unit for performing a target task from among at least two AI hardware units based on performance data of a target model when running on the at least two AI hardware units and the total loads of the at least two AI hardware units comprises: determining a load sum for each of the at least two AI hardware units by adding the performance data of the target model when running on a first AI hardware unit to the total load of the first AI hardware unit, where the first AI hardware unit is any one of the at least two AI hardware units; and determining the AI hardware unit with the smallest load sum among the at least two AI hardware units as the target AI hardware unit. In this embodiment, the model performance data is added to the total load of each corresponding hardware unit, the hardware unit with the smallest sum is selected, and the model inference task is placed in its task list to await execution. Because all AI computing units are allocated and scheduled according to a load-balancing policy, the utilization of each computing unit is maximized, models are executed as quickly as possible, and the waiting time before a model executes is shortened.
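A minimal sketch of the load-sum selection just described, assuming the per-unit costs are already available as plain numbers (cycles or seconds); the dict layout is an assumption for illustration:

```python
def pick_target_unit(model_perf, unit_loads):
    """Add the model's measured cost on each unit (model_perf) to that
    unit's current total load (unit_loads) and pick the smallest sum.
    Both dicts map unit name -> cost in the same measure.
    """
    return min(unit_loads, key=lambda u: model_perf[u] + unit_loads[u])
```

For example, with `model_perf = {"A": 10, "B": 4, "C": 7}` and `unit_loads = {"A": 2, "B": 9, "C": 3}`, unit "C" is chosen with a load sum of 10.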
In one exemplary embodiment, determining the total load of the first AI hardware unit includes: determining a pre-configured priority of the target model; and determining the total load of the model tasks pending on the first AI hardware unit at the priority of the target model as the total load of the first AI hardware unit. In this embodiment, the priority may be configured and issued during the model initialization phase. According to the model priority, the total load (for example, total time consumed or total number of cycles) of the model tasks pending at that priority on each of the different hardware units is counted and taken as the total load of that hardware unit. The priority of a model task may be configured manually. With the model priority policy, algorithms with important real-time requirements can be given a high execution priority and algorithms with low real-time requirements a low execution priority, according to the algorithm service.
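The priority-specific total load can be sketched as below; the `(unit, priority, cost)` tuple layout of the pending-task list is an assumption for illustration, not something the patent prescribes:

```python
def total_load_at_priority(pending_tasks, unit, priority):
    """Per-unit total load as defined above: the summed cost (time or
    cycles) of that unit's pending model tasks at the given priority.
    `pending_tasks` is an assumed list of (unit, priority, cost) tuples.
    """
    return sum(cost for u, p, cost in pending_tasks
               if u == unit and p == priority)
```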
In an exemplary embodiment, the performance data includes a cycle count and/or a time consumption, and the total load likewise includes a cycle count and/or a time consumption. In this embodiment, the performance data of a hardware unit may be the cycle count and/or the time consumed when running the target model, and the total load may be the sum of the cycle counts and/or the times consumed by the multiple target models running on that hardware unit.
How to perform the tasks is described below with reference to specific embodiments:
Fig. 3 is a flowchart of the model initialization phase of a task execution method according to an embodiment of the present invention. As shown in fig. 3, the flow includes:
step S302, in the algorithm model initialization phase, all the different AI computing unit models exist are loaded.
Step S304: actually run the model, obtain its performance data on the different AI computing units, and save this data as the input to the scheduling policy for subsequent actual runs of the algorithm model. When obtaining the model performance data, the number of cycles actually executed by the hardware or the time consumed is preferably captured, ensuring that the policy input is correct; in addition, for robustness, the test may be repeated multiple times and the results averaged.
Step S306: save the corresponding hardware data and the priority; the specific storage location is not limited as long as the data can be retrieved. The priority is user-defined and refers to the priority of the task.
Step S308: determine whether all hardware units have been traversed; if yes, execute step S310, otherwise execute step S302.
Step S310: end the initialization process.
Fig. 4 is a flowchart of the model actual-run phase of a task execution method according to an embodiment of the present invention. As shown in fig. 4, the flow includes:
step S402, obtaining an algorithm model to be executed.
Step S404, the priority of the algorithm model to be executed is obtained. Wherein the priority is configured to be issued during the model initialization phase.
Step S406: according to the model priority, count the total load (total time consumed or total number of cycles) of the model tasks pending at that priority on each of the different hardware units.
Step S408: add the model performance data to the total load of each corresponding hardware unit and select the hardware unit with the smallest resulting sum.
Step S410: place the model inference task into that unit's task list to await execution.
Step S412: at the same time, update the hardware unit's recorded load with the added load data.
Step S414: wait for execution to complete and obtain the result.
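Steps S406 through S412 can be combined into one dispatch sketch, under assumed data shapes: `perf_table[model][unit]` holds the profiled cost of a model on each unit, and `task_lists[unit]` is the list of `(priority, cost)` entries already queued on that unit. Both shapes are illustrative assumptions.

```python
def dispatch(model_name, priority, perf_table, task_lists):
    """Sketch of the run-phase flow (steps S406-S412)."""
    # S406: per-unit total load of pending tasks at this priority
    loads = {unit: sum(c for p, c in queued if p == priority)
             for unit, queued in task_lists.items()}
    # S408: add the model's own profiled cost and take the minimum
    target = min(loads, key=lambda u: perf_table[model_name][u] + loads[u])
    # S410/S412: queue the task and update the unit's load book-keeping
    task_lists[target].append((priority, perf_table[model_name][target]))
    return target
```

Appending the chosen task back into `task_lists` realizes step S412: the next dispatch call sees the updated load.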
In this embodiment, the computing unit on which a model executes is assigned according to the model's performance data in actual operation: statistics gathered from the model's actual runtime performance serve as the input basis of the policy, so scheduling is based on real conditions rather than being preset from static data, achieving reasonable allocation and scheduling of AI computing resources. All AI computing resources are allocated and scheduled according to a load-balancing policy, which maximizes the utilization of each computing unit, lets models execute as quickly as possible, and shortens the waiting time before a model executes. Model execution also has a priority policy to meet the needs of different algorithm services: with the model priority policy, algorithms with important real-time requirements are executed with high priority and algorithms with low real-time requirements with low priority, according to the algorithm service.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a task execution device is further provided, and the task execution device is used for implementing the above embodiments and preferred embodiments, and the description of the task execution device is omitted for brevity. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 5 is a block diagram showing a first structure of a task execution device according to an embodiment of the present invention, as shown in fig. 5, the device includes:
a first determining module 52, configured to determine a target AI hardware unit for performing a target task from the at least two AI hardware units based on performance data of a target model when running on the at least two artificial intelligence AI hardware units and a total load of the at least two AI hardware units;
an execution module 54, configured to execute the target task using the target AI hardware unit.
Fig. 6 is a block diagram of a second structure of a task execution device according to an embodiment of the present invention, and as shown in fig. 6, the device includes, in addition to all modules shown in fig. 5:
an operation module 62, configured to sequentially operate the target model on at least two AI hardware units before determining a target AI hardware unit for performing a target task from the at least two AI hardware units based on performance data of the target model when the target model is operated on the at least two artificial intelligence AI hardware units and a total load of the at least two AI hardware units;
a second determining module 64, configured to determine performance data of the target model when running on the at least two AI hardware units, respectively, based on the running result.
In an exemplary embodiment, the first determining module 52 includes: a first determining unit, configured to determine a sum of loads corresponding to the AI hardware units included in the at least two AI hardware units based on: determining performance data of the target model when running on a first AI hardware unit and a total load of the first AI hardware unit, and summing the performance data of the target model when running on the first AI hardware unit and the total load of the first AI hardware unit to obtain a load sum of the first AI hardware unit, wherein the first AI hardware unit is any one of the at least two AI hardware units; a second determining unit, configured to determine, as the target AI hardware unit, an AI hardware unit with a smallest load sum among the at least two AI hardware units.
In one exemplary embodiment, the first determination unit may determine the total load of the first AI hardware unit by: determining a pre-configured priority of the target model; and determining the total load of the model task to be executed of the first AI hardware unit corresponding to the priority of the target model as the total load of the first AI hardware unit.
In an exemplary embodiment, the performance data includes a cycle count and/or a time consumption, and the total load likewise includes a cycle count and/or a time consumption.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the various modules or steps of the invention described above may be implemented using a general purpose computing device, they may be centralized on a single computing device or distributed across a network of computing devices, and they may be implemented using program code executable by the computing devices, such that they may be stored in a memory device and executed by the computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into various integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method of task execution, comprising:
determining, from at least two artificial intelligence (AI) hardware units, a target AI hardware unit for executing a target task, based on performance data of a target model when the target model runs on the at least two AI hardware units and on total loads of the at least two AI hardware units;
executing the target task with the target AI hardware unit.
2. The method of claim 1, wherein before determining a target AI hardware unit for performing a target task from the at least two AI hardware units based on performance data of a target model when running on the at least two artificial intelligence AI hardware units and a total load of the at least two AI hardware units, the method further comprises:
sequentially running the target model on the at least two AI hardware units;
determining performance data of the target model respectively running on the at least two AI hardware units based on the running results.
3. The method of claim 1, wherein determining a target AI hardware unit from the at least two AI hardware units for performing a target task based on performance data of a target model when running on the at least two artificial intelligence AI hardware units and a total load of the at least two AI hardware units comprises:
determining a load sum corresponding to each AI hardware unit included in the at least two AI hardware units based on: determining performance data of the target model when running on a first AI hardware unit and a total load of the first AI hardware unit, and summing the performance data of the target model when running on the first AI hardware unit and the total load of the first AI hardware unit to obtain a load sum of the first AI hardware unit, wherein the first AI hardware unit is any one of the at least two AI hardware units;
determining the AI hardware unit with the smallest load sum among the at least two AI hardware units as the target AI hardware unit.
4. The method of claim 3, wherein determining the total load of the first AI hardware unit comprises:
determining a pre-configured priority of the target model;
and determining the total load of the model task to be executed of the first AI hardware unit corresponding to the priority of the target model as the total load of the first AI hardware unit.
5. The method according to any one of claims 1 to 4,
the performance data comprises a cycle count and/or a time consumption;
the total load comprises the cycle count and/or the time consumption.
6. A task execution apparatus, comprising:
a first determination module, configured to determine, from at least two AI hardware units, a target AI hardware unit for executing a target task, based on performance data of a target model when the target model runs on the at least two AI hardware units and on the total loads of the at least two AI hardware units;
and the execution module is used for executing the target task by utilizing the target AI hardware unit.
7. The apparatus of claim 6, further comprising:
an operation module, configured to sequentially run the target model on the at least two AI hardware units before the target AI hardware unit for executing the target task is determined from the at least two AI hardware units based on performance data of the target model when running on the at least two artificial intelligence AI hardware units and the total loads of the at least two AI hardware units;
a second determining module, configured to determine, based on the operation result, performance data of the target model when the target model is respectively executed on the at least two AI hardware units.
8. The apparatus of claim 6, wherein the first determining module comprises:
a first determining unit, configured to determine a sum of loads corresponding to the AI hardware units included in the at least two AI hardware units based on: determining performance data of the target model when running on a first AI hardware unit and a total load of the first AI hardware unit, and summing the performance data of the target model when running on the first AI hardware unit and the total load of the first AI hardware unit to obtain a load sum of the first AI hardware unit, wherein the first AI hardware unit is any one of the at least two AI hardware units;
a second determining unit, configured to determine, as the target AI hardware unit, an AI hardware unit with a smallest load sum among the at least two AI hardware units.
9. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 5 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 5.
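The selection rule in claims 6 and 8 can be sketched in code: for each AI hardware unit, add the target model's measured runtime on that unit (the performance data) to the unit's current total load, and dispatch the task to the unit with the smallest sum. The class and field names below are illustrative assumptions, not identifiers from the patent; the figures used in the example are likewise hypothetical.

```python
from dataclasses import dataclass


@dataclass
class AIHardwareUnit:
    """One candidate AI hardware unit (e.g. an NPU core)."""
    name: str
    model_runtime_ms: float  # performance data: target model's measured run time on this unit
    total_load_ms: float     # the unit's current total load


def select_target_unit(units):
    """Return the unit whose load sum (runtime + current load) is smallest,
    as described in claim 8."""
    return min(units, key=lambda u: u.model_runtime_ms + u.total_load_ms)


units = [
    AIHardwareUnit("npu0", model_runtime_ms=12.0, total_load_ms=80.0),  # load sum 92.0
    AIHardwareUnit("npu1", model_runtime_ms=15.0, total_load_ms=40.0),  # load sum 55.0
]
target = select_target_unit(units)  # npu1 wins: 55.0 < 92.0
```

Note that the heuristic favors a unit that runs the model slightly slower but is currently idle over a faster unit that is heavily loaded, which matches the intent of balancing across heterogeneous AI hardware.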
CN202010713701.5A 2020-07-22 2020-07-22 Task execution method and device, storage medium and electronic device Pending CN111831452A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010713701.5A CN111831452A (en) 2020-07-22 2020-07-22 Task execution method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010713701.5A CN111831452A (en) 2020-07-22 2020-07-22 Task execution method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN111831452A true CN111831452A (en) 2020-10-27

Family

ID=72926431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010713701.5A Pending CN111831452A (en) 2020-07-22 2020-07-22 Task execution method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN111831452A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112394882A (en) * 2020-11-17 2021-02-23 浙江大华技术股份有限公司 Data storage method and device, storage medium and electronic device
CN112394882B (en) * 2020-11-17 2022-04-19 浙江大华技术股份有限公司 Data storage method and device, storage medium and electronic device
CN113961333A (en) * 2021-12-22 2022-01-21 北京燧原智能科技有限公司 Method and device for generating and executing circular task, AI chip and storage medium
CN113961333B (en) * 2021-12-22 2022-03-11 北京燧原智能科技有限公司 Method and device for generating and executing circular task, AI chip and storage medium

Similar Documents

Publication Publication Date Title
CN110308980A (en) Batch processing method, device, equipment and the storage medium of data
CN110430068B (en) Characteristic engineering arrangement method and device
CN111176840B (en) Distribution optimization method and device for distributed tasks, storage medium and electronic device
CN111831452A (en) Task execution method and device, storage medium and electronic device
CN113918314A (en) Task processing method, edge computing device, computer device, and medium
CN109766172A (en) A kind of asynchronous task scheduling method and device
CN111124640A (en) Task allocation method and system, storage medium and electronic device
CN113849302A (en) Task execution method and device, storage medium and electronic device
CN111338787B (en) Data processing method and device, storage medium and electronic device
CN110569129A (en) Resource allocation method and device, storage medium and electronic device
CN114629960B (en) Resource scheduling method, device, system, equipment, medium and program product
CN112379906A (en) Service updating method, device, storage medium and electronic device
CN112388625B (en) Task issuing method and device and task executing method and device
CN115712572A (en) Task testing method and device, storage medium and electronic device
CN111143033A (en) Operation execution method and device based on scalable operating system
CN113110982B (en) Data access layer verification method and device, storage medium and electronic device
CN109857533A (en) A kind of timed task dispatching method, device and intelligent terminal
CN109670932A (en) Credit data calculate method, apparatus, system and computer storage medium
CN113301087B (en) Resource scheduling method, device, computing equipment and medium
CN114490083A (en) CPU resource binding method and device, storage medium and electronic device
CN110580172B (en) Configuration rule verification method and device, storage medium and electronic device
CN110188490B (en) Method and device for improving data simulation efficiency, storage medium and electronic device
CN113504981A (en) Task scheduling method and device, storage medium and electronic equipment
CN110018906A (en) Dispatching method, server and scheduling system
CN109151007B (en) Data processing method, core server and transmission server for application scheduling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination