CN116820714A - Scheduling method, device, equipment and storage medium of computing equipment - Google Patents

Scheduling method, device, equipment and storage medium of computing equipment

Info

Publication number
CN116820714A
CN116820714A (application number CN202310711741.XA)
Authority
CN
China
Prior art keywords
computing
task
information
recognition model
calculation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310711741.XA
Other languages
Chinese (zh)
Inventor
刘韡
沙云飞
侯志华
秦涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunxian Technology Jiaxing Co ltd
Original Assignee
Yunxian Technology Jiaxing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunxian Technology Jiaxing Co ltd filed Critical Yunxian Technology Jiaxing Co ltd
Priority to CN202310711741.XA priority Critical patent/CN116820714A/en
Publication of CN116820714A publication Critical patent/CN116820714A/en
Pending legal-status Critical Current

Abstract

The application relates to a scheduling method, apparatus, device and storage medium for computing equipment. The main technical scheme comprises the following steps: receiving calculation request information of a calculation task sent by a user side, wherein the calculation request information comprises a project name and a task data set of the calculation task; determining matching information of the calculation task based on a pre-trained recognition model set according to the project name and the task data set; and scheduling the computing power equipment for executing the calculation task according to the matching information and preset label information.

Description

Scheduling method, device, equipment and storage medium of computing equipment
Technical Field
The present application relates to the field of information technologies, and in particular, to a method, an apparatus, a device, and a storage medium for scheduling a computing device.
Background
At present, the scheduling modes of computing equipment mainly include point-to-point scheduling, distributed computing, cloud computing and the like. In the point-to-point scheduling mode, the item to be executed is deployed on a server of the target computing power equipment for calculation, and data transmission and return of calculation results are performed through a network connection; this mode is suitable for smaller-scale calculation tasks, such as single-machine programs or parallel calculation tasks. Distributed computing typically uses cluster or grid computing techniques to distribute computing tasks to multiple computing nodes, and a central manager coordinates the computing tasks of the individual nodes. Although distributed computing can complete large-scale computing tasks, it does not fully consider the suitability between the item to be executed and the computing power equipment, making it difficult to schedule the computing power equipment reasonably and reducing the utilization rate of computing resources.
Disclosure of Invention
Based on the above, the application provides a scheduling method, apparatus, device and storage medium for computing equipment, so as to schedule computing power equipment reasonably and realize efficient utilization of computing resources.
In a first aspect, a method for scheduling a computing device is provided, the method comprising:
receiving calculation request information of a calculation task sent by a user side, wherein the calculation request information comprises a project name of the calculation task and a task data set;
determining matching information of a computing task based on a pre-trained recognition model set according to the project name and the task data set;
and according to the matching information and the preset label information, dispatching the computing power equipment for executing the computing task.
According to one implementation manner in the embodiment of the application, the matching information comprises resource type information and resource demand information of the computing task; scheduling the computing power equipment for executing the computing task according to the matching information and the preset label information comprises:
determining at least one computing node corresponding to the resource type information according to the preset label information, wherein the computing node comprises at least one computing device;
and scheduling the computing power equipment corresponding to the resource demand information in the at least one computing power node.
According to one implementation manner in the embodiment of the application, the resource requirement information comprises equipment priority and equipment quantity; scheduling the computing power equipment corresponding to the resource demand information in the at least one computing power node, including:
determining at least one preliminary computing device from the at least one computing node according to the device priority;
and determining the computing power equipment corresponding to the resource demand information from the at least one preliminary computing device according to the equipment number.
According to one implementation of an embodiment of the present application, the set of recognition models includes a plurality of different recognition models; determining matching information for a computing task based on a pre-trained set of recognition models from a project name and a task data set, comprising:
determining a target recognition model corresponding to the calculation task from the recognition model set according to the project name and the task data set;
and inputting the project name and the task data set into a target recognition model to obtain matching information of the computing task.
According to one implementation manner in the embodiment of the present application, the method further includes:
training a target recognition model according to the project name, the task data set and the information of computing power equipment for executing the computing task to obtain a current recognition model;
and updating the current recognition model to the recognition model set to obtain the latest recognition model set.
According to one implementation manner of the embodiment of the present application, receiving calculation request information of a calculation task input by a user includes:
and receiving calculation request information of a calculation task sent by a user terminal based on the block chain.
According to one implementation manner in the embodiment of the present application, the method further includes:
and sending execution result information of the computing power equipment of the computing task to the user side based on the block chain, so that the user pays the fee corresponding to the computing task based on the signed intelligent contract according to the execution result information.
In a second aspect, there is provided a scheduling apparatus for a computing device, the apparatus comprising:
the receiving module is used for receiving calculation request information of a calculation task sent by a user side, wherein the calculation request information comprises a project name of the calculation task and a task data set;
the determining module is used for determining matching information of the computing task based on a pre-trained recognition model set according to the project name and the task data set;
and the scheduling module is used for scheduling the power computing equipment for executing the computing task according to the matching information and the preset label information.
In a third aspect, there is provided a computer device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores computer instructions executable by the at least one processor to enable the at least one processor to perform the method referred to in the first aspect above.
In a fourth aspect, there is provided a computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method referred to in the first aspect above.
According to the technical content provided by the embodiment of the application, calculation request information of a calculation task sent by a user side is received, where the calculation request information comprises a project name and a task data set of the calculation task; matching information of the calculation task is determined based on a pre-trained recognition model set according to the project name and the task data set; and the computing power equipment for executing the calculation task is scheduled according to the matching information and preset label information. In this way, appropriate computing power equipment can be matched according to the resource requirement of the calculation task, thereby improving the utilization rate of calculation resources.
Drawings
FIG. 1 is an application environment diagram of a method of scheduling computing devices in one embodiment;
FIG. 2 is a flow diagram of a method of scheduling computing devices in one embodiment;
FIG. 3 is a block diagram of a scheduler of a computing device in one embodiment;
fig. 4 is a schematic structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
For ease of understanding, a system to which the present application is applicable will first be described. The scheduling method of computing equipment provided by the application can be applied to the system architecture shown in FIG. 1, in which the user terminal 110 communicates with the server 120 via a network, and the server 120 communicates with a plurality of computing devices 130 via a network. The server 120 receives calculation request information of a calculation task sent by the user terminal 110, where the calculation request information includes a project name and a task data set of the calculation task, determines matching information of the calculation task based on a pre-trained recognition model set according to the project name and the task data set, and schedules a computing device 130 for executing the calculation task according to the matching information and preset tag information. The user terminal 110 may be, but is not limited to, various personal computers, notebook computers, and tablet computers, and the server 120 and the computing device 130 may each be implemented by a separate server or a server cluster formed by a plurality of servers.
The user registers a blockchain account with the user terminal 110 for managing information of the computing device 130. The user needs to authorize the blockchain account to access the computing device 130 and provide the computing device 130 with corresponding access rights. The user signs a smart contract that specifies the manner of use, the cost, and other information about using the computing device 130.
The user authorizes the computing device 130 to access the computing network via the user terminal 110 to provide the computing network with corresponding access rights, and at this time, each computing device 130 generates a unique identifier. The user terminal 110 reads and collects information of each computing device 130, and attaches corresponding labels to the computing devices 130 according to the information and performance characteristics of the computing devices 130, wherein each computing device 130 is attached with at least one label. The tag may include the following:
device model tag: the equipment is classified according to information such as equipment manufacturers, product lines, models and the like, so that different types of equipment can be conveniently and rapidly distinguished.
Operating system tag: Classification is based on operating system type and version information, such as Windows, Linux, macOS, etc.
GPU model label: Classifying according to information such as GPU model, brand, core frequency and video memory size; this label focuses on distinguishing the computing power of the device.
Computing power type label: Classifying according to the performance characteristics of the computing power equipment, such as half precision, single precision, double precision and the like, helping a user to better select suitable computing power equipment.
Storage capacity label: The classification is made according to the storage capacity of the device, for example 128GB, 256GB, 512GB, etc.
Energy consumption label: Classifying according to the power consumption of the computing device, such as 50W, 100W, 200W and the like, helping users optimize energy utilization efficiency.
First, the computing device 130 is attached with an operating system label. Because different operating systems have different characteristics and application scenarios, it is important in practical use to select an appropriate operating system. For example, in the field of deep learning, most deep learning frameworks are developed based on the Linux system, which has better stability and compatibility in deep learning training; in fields such as graphics rendering and game development, the Windows operating system is more commonly adopted.
The computing device 130 is then attached with more refined labels, such as a device model label, a GPU model label, a computing power type label, a storage capacity label, and an energy consumption label. When screening the computing power equipment to be scheduled, multiple factors are considered, and the most suitable device is selected according to actual requirements and usage scenarios. For example, the GPU model label and the computing power type label are combined to screen out computing power equipment biased toward half-precision, single-precision or mixed-precision computing power.
Finally, information of the computing devices to which the refined labels are attached is stored in a database in the server 120, so that the computing power nodes required by a user request can be conveniently queried and selected. One computing device may be regarded as one computing node, or a plurality of computing devices may be regarded as one computing node. For example, a user may have multiple devices that all belong to the same node and share the same account and access rights, in which case the devices may be considered one node and managed as a whole.
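For illustration only, the tagged device records and the grouping of devices into computing power nodes described above might be represented as in the following Python sketch; the class names, field names and tag values are assumptions introduced here and are not part of the described scheme.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ComputeDevice:
    """A computing power device with its unique identity identifier and attached labels."""
    device_id: str                                  # unique identifier generated when the device joins
    tags: Dict[str, str] = field(default_factory=dict)

@dataclass
class ComputeNode:
    """One or more devices sharing the same account and access rights form one node."""
    node_id: str
    devices: List[ComputeDevice] = field(default_factory=list)

# Example: one device carrying the label types listed above (values are illustrative).
gpu_server = ComputeDevice(
    device_id="dev-001",
    tags={
        "device_model": "vendor-x/line-a/model-1",
        "operating_system": "Linux",
        "gpu_model": "gpu-y-24gb",
        "compute_type": "half_precision",
        "storage_capacity": "512GB",
        "energy_consumption": "200W",
    },
)
node = ComputeNode(node_id="node-01", devices=[gpu_server])
```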
A decentralized application is established in the server 120 via blockchain technology and smart contract technology, and the allocation and supervision of the computing devices 130 are managed by the decentralized application. The user sends a request for a calculation task to the server 120 through the user terminal 110. After the decentralized application receives the request, it analyzes the request, matches suitable computing power equipment, and sends a scheduling request to a computing power node; the computing power node performs the corresponding calculation task according to the user's request and returns the result to the user; the user pays the corresponding fee, and the fee record is stored on the blockchain, ensuring the security and credibility of the transaction.
Fig. 2 is a flowchart of a method for scheduling computing devices according to an embodiment of the present application, where the method may be performed by the server 120 in the system shown in fig. 1. As shown in fig. 2, the method may include the steps of:
s210, receiving calculation request information of a calculation task sent by a user side.
The computing task is a task for which a user requests matching computing power, for example, an image classification task, an image recognition task, and the like. The calculation request information is the request information sent for a calculation task, and may include the project name of the calculation task, the request time, the requesting user, the task data set, and the like. The project name is the name of the calculation task, and the task data set is the data set of the calculation task. For example, for an image classification task, the task may be provided with images of many different sizes and types; the project name of the image classification task is then image classification, and the task data set is the image set.
The decentralized application in the server 120 receives the calculation request information of the calculation task sent by the user terminal through the blockchain, so that data privacy and security can be better ensured.
S220, according to the project names and the task data sets, determining matching information of the computing tasks based on the pre-trained recognition model sets.
The matching information includes resource type information and resource requirement information of the computing task, wherein the resource type information may include a running environment type and a floating point number type, and the resource requirement information includes a device priority and a device number.
The running environment types may include a Windows system type, a Linux system type, a macOS system type and the like, and the floating point number types may include a half-precision type, a single-precision type, a double-precision type, a mixed-precision type and the like. In addition to the floating point number type, an inference data type may also be adopted, where the inference data type includes an 8-bit inference type, a 16-bit inference type and a 16-bit training type; for example, the inference data type is divided according to the performance data of the graphics card in 8-bit inference, 16-bit inference and 16-bit training.
The device priority is the priority of the devices required by the computing task, and the number of the devices is the number of the devices required by the computing task.
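As a hedged illustration, the matching information described above (resource type information plus resource demand information) could be held in a structure like the following; the class and field names are assumptions introduced for clarity.

```python
from dataclasses import dataclass

@dataclass
class ResourceType:
    runtime_env: str     # e.g. "Linux", "Windows", "macOS"
    precision: str       # e.g. "half", "single", "double", "mixed"

@dataclass
class ResourceDemand:
    device_priority: int # priority of the devices required by the computing task
    device_count: int    # number of devices required by the computing task

@dataclass
class MatchingInfo:
    resource_type: ResourceType
    resource_demand: ResourceDemand

# Illustrative matching information for an image classification task.
matching = MatchingInfo(
    resource_type=ResourceType(runtime_env="Linux", precision="half"),
    resource_demand=ResourceDemand(device_priority=1, device_count=2),
)
```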
The set of recognition models includes a plurality of different recognition models, and different recognition models correspond to different computing tasks. Each time a computing task is executed, the recognition model of that computing task is trained again to obtain the latest recognition model set, which can provide more accurate matching information.
The training process of the recognition model set is as follows: sample data is obtained, where the sample data comprises the project names and task data sets of various computing tasks, together with the resource type information and resource demand information corresponding to those project names and task data sets; the project names and task data sets are taken as the input of an initial neural network, the corresponding resource type information and resource demand information are taken as the output of the initial neural network, and the initial neural network is trained. The initial neural network may be a convolutional neural network, a recurrent neural network, or any other neural network, and may be selected according to the data type, which is not limited herein.
The project name and the task data set are input into the recognition model set, and the recognition model set outputs the matching information of the computing task.
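The training and inference flow above can be sketched as follows. This is a simplified illustration: the project name and task data set are reduced to a small numeric feature vector, the resource type and demand are collapsed into a single class label, and a generic scikit-learn classifier stands in for the recognition model; none of these simplifications are prescribed by the description.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy sample data: each row is a crude feature vector derived from a project name and
# its task data set (data-set size, average image size, task-type code).
X = np.array([
    [10_000, 224, 0],    # image classification, medium data set
    [500_000, 640, 1],   # image recognition, large data set
    [2_000, 128, 0],     # small image classification
])
# Each label stands in for one (resource type, resource demand) combination, e.g.
# 0 = "Linux + half precision, 1 device", 1 = "Linux + mixed precision, 4 devices".
y = np.array([0, 1, 0])

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)                                  # training on the sample data

# Inference: map a new task's features to its matching-information class.
print(model.predict([[8_000, 256, 0]]))
```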
S230, according to the matching information and the preset label information, the computing equipment for executing the computing task is scheduled.
The preset tag information is information of the computing devices to which the tags are attached, which is stored in the database of the server 120, and includes a unique identity identifier of each computing device and all tags marked by the unique identity identifier.
Based on the preset label information, the computing power equipment conforming to the matching information is searched for and scheduled to execute the computing task. The computing power equipment is scheduled by its unique identity identifier, and the scheduled computing power equipment may be one device or a plurality of devices.
When the task amount is detected to be small enough to run on a single device, or when the user explicitly submits such a requirement, a single device is selected for scheduling and the optimal computing power device is chosen. This scheduling mode is simple and feasible, and reduces communication complexity.
When a task requires a large amount of resources or needs to run on multiple devices simultaneously, multiple computing power devices with higher priorities can be selected for parallel scheduling. This scheduling mode can improve calculation speed and efficiency.
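A minimal sketch of the single-device versus multi-device dispatch decision described above is given below; the assumption that candidates are already sorted by priority and the helper name are illustrative only.

```python
from typing import Dict, List

def select_devices(candidates: List[Dict], device_count: int) -> List[Dict]:
    """Pick devices for a task: one optimal device for a small task, or several
    high-priority devices in parallel for a large task.

    `candidates` is assumed to be already sorted by priority (best first)."""
    if device_count <= 1:
        # Small task or explicit single-device requirement: schedule the best device only.
        return candidates[:1]
    # Large task: schedule the top-priority devices for parallel execution.
    return candidates[:device_count]

# Usage with illustrative records:
ranked = [{"device_id": "dev-003"}, {"device_id": "dev-001"}, {"device_id": "dev-007"}]
print(select_devices(ranked, device_count=2))    # -> the two highest-priority devices
```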
It can be seen that, in the embodiment of the present application, calculation request information of a calculation task sent by a user side is received, where the calculation request information includes a project name and a task data set of the calculation task; the matching information of the calculation task is determined based on a pre-trained recognition model set according to the project name and the task data set; and the computing power equipment for executing the calculation task is scheduled according to the matching information and preset label information. In this way, appropriate computing power equipment can be matched according to the resource requirements of the calculation task, improving the utilization rate of calculation resources.
As an implementation manner, scheduling the computing power equipment for executing the computing task according to the matching information and the preset tag information includes:
determining at least one computing node corresponding to the resource type information according to the preset label information, wherein the computing node comprises at least one computing device;
and scheduling the computing power equipment corresponding to the resource demand information in the at least one computing power node.
First, at least one computing node with the same label as the running environment type of the computing task is determined according to preset label information, and then at least one computing node with the same label as the floating point number type of the computing task is screened from the computing nodes. Next, the computing power equipment corresponding to the resource demand information in the computing power nodes is scheduled, which specifically comprises:
determining at least one preliminary computing device from the at least one computing node according to the device priority;
and determining the computing power equipment corresponding to the resource demand information from the at least one preliminary computing device according to the equipment number.
The equipment priority is determined according to the requirements of the computing task by considering the distance, running duration, energy consumption and the like of the computing power equipment, and a preset number of expected computing power devices is determined based on the equipment priority according to actual requirements, wherein the preset number is larger than the equipment number. Computing power equipment matching the expected computing power devices is then selected from the at least one computing node as preliminary computing devices; the number of preliminary computing devices may be equal to or smaller than the preset number, but is larger than the equipment number. In this way, computing power equipment conforming to the computing task is preliminarily screened out through the equipment priority, and further screening is performed in combination with the actual conditions of the computing power equipment.
The currently available resource information of the preliminary computing devices is obtained, where the currently available resource information includes device type, quantity, configuration information, usage state and the like. The computing task is matched against the currently available resource information, the preliminary computing devices are reordered according to the degree of matching between the computing task and the currently available resource information to obtain a new priority, and the computing power equipment corresponding to the resource demand information is determined from the at least one preliminary computing device based on the new priority and the equipment number.
Theoretical screening is completed through the equipment priority, and the theoretically screened computing power equipment is then screened again in combination with its actual usage conditions, so that the selected computing power equipment can execute the computing task more accurately and reasonably, waste of computing resources is avoided, and the utilization rate of computing resources is improved.
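Assuming tags, priority and availability are plain dictionary fields, the multi-stage screening described above (tag filtering by resource type, priority-based pre-selection of preliminary devices, then re-ranking by currently available resources) could be sketched as follows; the scoring rule and field names are illustrative, not prescribed by the description.

```python
from typing import Dict, List

def filter_by_tags(devices: List[Dict], runtime_env: str, precision: str) -> List[Dict]:
    """Stage 1: keep devices whose labels match the task's resource type information."""
    return [
        d for d in devices
        if d["tags"].get("operating_system") == runtime_env
        and d["tags"].get("compute_type") == precision
    ]

def preselect_by_priority(devices: List[Dict], preset_count: int) -> List[Dict]:
    """Stage 2: take a preset number of preliminary devices by priority (lower is better)."""
    return sorted(devices, key=lambda d: d["priority"])[:preset_count]

def rerank_by_availability(devices: List[Dict], device_count: int) -> List[Dict]:
    """Stage 3: re-rank preliminary devices by how well their currently available
    resources match the task, then keep only the required number of devices."""
    def match_score(d: Dict) -> float:
        idle = 1.0 if d["available"]["state"] == "idle" else 0.0
        return idle + d["available"]["free_mem_gb"] / 100.0   # illustrative score
    return sorted(devices, key=match_score, reverse=True)[:device_count]

# Usage with illustrative device records:
pool = [
    {"device_id": "dev-1", "priority": 2,
     "tags": {"operating_system": "Linux", "compute_type": "half_precision"},
     "available": {"state": "idle", "free_mem_gb": 24}},
    {"device_id": "dev-2", "priority": 1,
     "tags": {"operating_system": "Linux", "compute_type": "half_precision"},
     "available": {"state": "busy", "free_mem_gb": 8}},
    {"device_id": "dev-3", "priority": 3,
     "tags": {"operating_system": "Windows", "compute_type": "single_precision"},
     "available": {"state": "idle", "free_mem_gb": 16}},
]
candidates = filter_by_tags(pool, runtime_env="Linux", precision="half_precision")
preliminary = preselect_by_priority(candidates, preset_count=2)
scheduled = rerank_by_availability(preliminary, device_count=1)
print([d["device_id"] for d in scheduled])
```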
As one implementation, determining matching information for a computing task based on a pre-trained set of recognition models from a project name and a task data set, includes:
determining a target recognition model corresponding to the calculation task from the recognition model set according to the project name and the task data set;
and inputting the project name and the task data set into a target recognition model to obtain matching information of the computing task.
The project name and the task data set are input into the recognition model set, the recognition model whose input data matches the project name and the task data set is determined as the target recognition model, the project name and the task data set are input into the target recognition model, and the target recognition model outputs the matching information of the computing task.
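The routing of a request to its target recognition model might be sketched as below; the registry keyed by a task-type string derived from the project name, and the stub models returning fixed matching information, are assumptions made for illustration.

```python
from typing import Callable, Dict, List

# Assumed registry: each type of computing task has its own recognition model,
# keyed here by a task-type string derived from the project name.
MODEL_SET: Dict[str, Callable[[str, List[str]], dict]] = {
    "image_classification": lambda name, data: {"runtime_env": "Linux", "precision": "half",
                                                "device_priority": 1, "device_count": 1},
    "image_recognition":    lambda name, data: {"runtime_env": "Linux", "precision": "mixed",
                                                "device_priority": 1, "device_count": 4},
}

def match_task(project_name: str, task_dataset: List[str]) -> dict:
    """Determine the target recognition model for the request and run it."""
    task_type = project_name.strip().lower().replace(" ", "_")   # crude routing rule
    target_model = MODEL_SET.get(task_type)
    if target_model is None:
        raise KeyError(f"no recognition model registered for task type {task_type!r}")
    return target_model(project_name, task_dataset)

print(match_task("image classification", task_dataset=["img_001.png", "img_002.png"]))
```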
As one implementation, the method further includes:
training a target recognition model according to the project name, the task data set and the information of computing power equipment for executing the computing task to obtain a current recognition model;
and updating the current recognition model to the recognition model set to obtain the latest recognition model set.
The information of the computing power equipment for executing the computing task includes the running environment type, floating point number type, latest equipment priority and equipment number of the computing power equipment. The project name and the task data set are used as the input of the target recognition model, the information of the computing power equipment for executing the computing task is used as the output of the target recognition model, and model training is carried out to obtain the current recognition model. The recognition model set is then updated to obtain the latest recognition model set, which better reflects the real working state of the computing power equipment.
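A possible sketch of this retraining-and-update step, again using a scikit-learn classifier as a stand-in for the target recognition model, is shown below; the incremental `partial_fit` update and the label encoding of the executed device information are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def update_model_set(model_set: dict, task_type: str,
                     features: np.ndarray, executed_label: np.ndarray) -> dict:
    """Retrain the target recognition model with the information of the computing power
    devices that actually executed the task, then write it back into the model set."""
    target_model = model_set[task_type]
    target_model.partial_fit(features, executed_label)   # incremental update
    model_set[task_type] = target_model                  # the set now holds the current model
    return model_set

# Usage sketch: label 1 stands for the resource combination that was actually used.
model_set = {"image_classification": MLPClassifier(hidden_layer_sizes=(16,), random_state=0)}
model_set["image_classification"].partial_fit(
    np.array([[10_000, 224, 0]]), np.array([0]), classes=[0, 1])   # initial fit
model_set = update_model_set(model_set, "image_classification",
                             np.array([[12_000, 256, 0]]), np.array([1]))
```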
As one implementation, the method further includes:
and sending execution result information of the computing power equipment of the computing task to the user side based on the block chain, so that the user pays the fee corresponding to the computing task based on the signed intelligent contract according to the execution result information.
The decentralized application assigns the computing task to the computing power equipment to be scheduled and initiates the corresponding computing task and data communication processes. Meanwhile, the decentralized application can monitor the execution of the computing task and dynamically adjust resource allocation and load balancing when needed, so as to optimize system performance and throughput.
Each computing power device updates its status and log records after completing its assigned computing task, so that the decentralized application can learn about the usage and availability of the computing power equipment in a timely manner. Meanwhile, the decentralized application records the execution result information of the computing task to facilitate subsequent auditing and analysis. The execution result information includes the task completion status, task output result, task cost, task start time, task end time and the like.
The decentralized application sends the execution result information to the user terminal based on the blockchain, and the user pays the fee corresponding to the calculation task based on the signed smart contract according to the execution result information; the fee record is stored on the blockchain, ensuring the security and credibility of the transaction.
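As a purely illustrative sketch (no real blockchain client or smart contract framework is used), the settlement flow described above might be modeled as follows; the record fields, the in-memory ledger standing in for the blockchain, and the per-hour fee rule are assumptions.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class ExecutionResult:
    task_id: str
    completed: bool
    output_uri: str
    cost: float          # task cost accrued on the computing power devices
    start_time: float
    end_time: float

LEDGER = []              # stands in for execution/fee records stored on the blockchain

def settle(result: ExecutionResult, price_per_hour: float) -> dict:
    """Send the execution result to the user side and record the corresponding fee."""
    hours = (result.end_time - result.start_time) / 3600.0
    fee = round(hours * price_per_hour, 2)               # illustrative smart-contract fee rule
    record = {"result": asdict(result), "fee": fee, "settled_at": time.time()}
    LEDGER.append(record)                                # persisted for auditing and analysis
    return record

res = ExecutionResult("task-42", True, "s3://bucket/output", 0.0, 0.0, 7200.0)
print(json.dumps(settle(res, price_per_hour=1.5), indent=2))
```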
It should be understood that, although the steps in the flowchart of FIG. 2 are shown in the sequence indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated in the present application, there is no strict limitation on the order of execution, and the steps may be executed in other orders. Moreover, at least some of the steps in FIG. 2 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily performed at the same time, but may be performed at different times, and their order of execution is not necessarily sequential, but may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
Fig. 3 is a schematic structural diagram of a scheduling apparatus for computing power equipment according to an embodiment of the present application, where the apparatus may be disposed in a server 120 in the system shown in fig. 1, so as to execute a method flow shown in fig. 2. As shown in fig. 3, the apparatus may include: the receiving module 310, the determining module 320, and the scheduling module 330 may further include: training module and sending module. The main functions of each component module are as follows:
a receiving module 310, configured to receive calculation request information of a calculation task sent by a user side, where the calculation request information includes a project name of the calculation task and a task data set;
a determining module 320, configured to determine matching information of the computing task based on the pre-trained recognition model set according to the project name and the task data set;
the scheduling module 330 is configured to schedule the computing device that performs the computing task according to the matching information and the preset tag information.
As one implementation, the matching information includes resource type information and resource requirement information of the computing task; the scheduling module 330 is specifically configured to determine, according to preset tag information, at least one computing node corresponding to the resource type information, where the computing node includes at least one computing device;
and scheduling the computing power equipment corresponding to the resource demand information in the at least one computing power node.
As one implementation, the resource requirement information includes device priority and number of devices; the scheduling module 330 is specifically configured to:
determining at least one preliminary computing device from the at least one computing node according to the device priority;
and determining the computing power equipment corresponding to the resource demand information from the at least one preliminary computing device according to the equipment number.
As one implementation, the set of recognition models includes a plurality of different recognition models; a determining module 320, configured to determine, from the recognition model set, a target recognition model corresponding to the computing task according to the project name and the task data set;
and inputting the project name and the task data set into a target recognition model to obtain matching information of the computing task.
As an implementation manner, the device further comprises a training module for training the target recognition model according to the project name, the task data set and the information of the computing power equipment for executing the computing task, to obtain a current recognition model;
and updating the current recognition model to the recognition model set to obtain the latest recognition model set.
As an implementation manner, the receiving module 310 is specifically configured to receive, based on the blockchain, calculation request information of a calculation task sent by the user side.
As an implementation manner, the device further comprises a sending module, which is used for sending the execution result information of the computing power equipment of the computing task to the user side based on the blockchain, so that the user pays the fee corresponding to the computing task based on the signed intelligent contract according to the execution result information.
The same and similar parts of the above embodiments are all referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments in part.
It should be noted that the embodiments of the present application may involve the use of user data. In practical applications, user-specific personal data may be used in the schemes described herein within the scope permitted by applicable laws and regulations, provided that the applicable legal requirements of the relevant country are met (for example, the user explicitly agrees, the user is explicitly notified, the user explicitly authorizes the use, etc.).
According to an embodiment of the present application, the present application also provides a computer device, a computer-readable storage medium.
As shown in fig. 4, is a block diagram of a computer device according to an embodiment of the present application. Computer equipment is intended to represent various forms of digital computers or mobile devices. Wherein the digital computer may comprise a desktop computer, a portable computer, a workstation, a personal digital assistant, a server, a mainframe computer, and other suitable computers. The mobile device may include a tablet, a smart phone, a wearable device, etc.
As shown in fig. 4, the apparatus 400 includes a computing unit 401, a ROM402, a RAM403, a bus 404, and an input/output (I/O) interface 405, and the computing unit 401, the ROM402, and the RAM403 are connected to each other by the bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
The computing unit 401 may perform various processes in the method embodiments of the present application according to computer instructions stored in a Read Only Memory (ROM) 402 or computer instructions loaded from a storage unit 408 into a Random Access Memory (RAM) 403. The computing unit 401 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. The computing unit 401 may include, but is not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), as well as any suitable processor, controller, microcontroller, etc. In some embodiments, the methods provided by embodiments of the present application may be implemented as a computer software program tangibly embodied on a computer-readable storage medium, such as the storage unit 408.
RAM 403 may also store various programs and data required for the operation of the device 400. Part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409.
The input unit 406, the output unit 407, the storage unit 408, and the communication unit 409 in the device 400 may be connected to the I/O interface 405. The input unit 406 may be, for example, a keyboard, a mouse, a touch screen, a microphone, etc.; the output unit 407 may be, for example, a display, a speaker, an indicator light, etc. The device 400 can exchange information, data, etc. with other devices through the communication unit 409.
It should be noted that the device may also include other components necessary to achieve proper operation. It is also possible to include only the components necessary to implement the inventive arrangements, and not necessarily all the components shown in the drawings.
Various implementations of the systems and techniques described here can be implemented in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
Computer instructions for implementing the methods of the present application may be written in any combination of one or more programming languages. These computer instructions may be provided to the computing unit 401 such that the computer instructions, when executed by the computing unit 401, such as a processor, cause the steps involved in the method embodiments of the present application to be performed.
The computer readable storage medium provided by the present application may be a tangible medium that may contain, or store, computer instructions for performing the steps involved in the method embodiments of the present application. The computer readable storage medium may include, but is not limited to, storage media in the form of electronic, magnetic, optical, electromagnetic, and the like.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.

Claims (10)

1. A method of scheduling a computing device, the method comprising:
receiving calculation request information of a calculation task sent by a user side, wherein the calculation request information comprises a project name of the calculation task and a task data set;
determining matching information of the computing task based on a pre-trained recognition model set according to the project name and the task data set;
and dispatching the computing equipment for executing the computing task according to the matching information and the preset label information.
2. The method of claim 1, wherein the matching information includes resource type information and resource requirement information of the computing task; the scheduling the computing power equipment for executing the computing task according to the matching information and the preset label information comprises:
determining at least one computing node corresponding to the resource type information according to preset label information, wherein the computing node comprises at least one computing device;
and dispatching the computing equipment corresponding to the resource demand information in the at least one computing node.
3. The method of claim 2, wherein the resource requirement information includes device priority and number of devices; the scheduling the computing power equipment corresponding to the resource demand information in the at least one computing power node comprises the following steps:
determining at least one preliminary computing device from the at least one computing node according to the device priority;
and determining the computing power equipment corresponding to the resource demand information from the at least one preliminary computing device according to the equipment number.
4. The method of claim 1, wherein the set of recognition models includes a plurality of different recognition models; the determining matching information of the computing task based on a pre-trained recognition model set according to the project name and the task data set comprises:
determining a target recognition model corresponding to the computing task from the recognition model set according to the project name and the task data set;
and inputting the project name and the task data set into the target recognition model to obtain the matching information of the computing task.
5. The method according to claim 4, wherein the method further comprises:
training a target recognition model according to the project name, the task data set and the information of the computing power equipment for executing the computing task to obtain a current recognition model;
and updating the current recognition model to the recognition model set to obtain the latest recognition model set.
6. The method of claim 1, wherein receiving calculation request information of a calculation task input by a user comprises:
and receiving calculation request information of a calculation task sent by a user terminal based on the block chain.
7. The method according to claim 1, wherein the method further comprises:
and sending the execution result information of the computing power equipment of the computing task to a user side based on the block chain, so that the user pays the fee corresponding to the computing task based on the signed intelligent contract according to the execution result information.
8. A scheduling apparatus for a computing device, the apparatus comprising:
the receiving module is used for receiving calculation request information of a calculation task sent by a user side, wherein the calculation request information comprises a project name of the calculation task and a task data set;
the determining module is used for determining matching information of the computing task based on a pre-trained recognition model set according to the project name and the task data set;
and the scheduling module is used for scheduling the computing power equipment for executing the computing task according to the matching information and the preset label information.
9. A computer device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores computer instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any of claims 1 to 7.
CN202310711741.XA 2023-06-15 2023-06-15 Scheduling method, device, equipment and storage medium of computing equipment Pending CN116820714A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310711741.XA CN116820714A (en) 2023-06-15 2023-06-15 Scheduling method, device, equipment and storage medium of computing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310711741.XA CN116820714A (en) 2023-06-15 2023-06-15 Scheduling method, device, equipment and storage medium of computing equipment

Publications (1)

Publication Number Publication Date
CN116820714A true CN116820714A (en) 2023-09-29

Family

ID=88115901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310711741.XA Pending CN116820714A (en) 2023-06-15 2023-06-15 Scheduling method, device, equipment and storage medium of computing equipment

Country Status (1)

Country Link
CN (1) CN116820714A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117472550A (en) * 2023-12-27 2024-01-30 环球数科集团有限公司 Computing power sharing system based on AIGC
CN117472550B (en) * 2023-12-27 2024-03-01 环球数科集团有限公司 Computing power sharing system based on AIGC

Similar Documents

Publication Publication Date Title
US11762697B2 (en) Method and apparatus for scheduling resource for deep learning framework
WO2022088659A1 (en) Resource scheduling method and apparatus, electronic device, storage medium, and program product
CN111324786B (en) Method and device for processing consultation problem information
CN111949795A (en) Work order automatic classification method and device
CN114881616A (en) Business process execution method and device, electronic equipment and storage medium
CN116820714A (en) Scheduling method, device, equipment and storage medium of computing equipment
CN114491047A (en) Multi-label text classification method and device, electronic equipment and storage medium
CN115794341A (en) Task scheduling method, device, equipment and storage medium based on artificial intelligence
CN112631751A (en) Task scheduling method and device, computer equipment and storage medium
CN115292046A (en) Calculation force distribution method and device, storage medium and electronic equipment
CN115129753A (en) Data blood relationship analysis method and device, electronic equipment and storage medium
CN114880368A (en) Data query method and device, electronic equipment and readable storage medium
CN114564294A (en) Intelligent service arranging method and device, computer equipment and storage medium
CN113868528A (en) Information recommendation method and device, electronic equipment and readable storage medium
CN112541640A (en) Resource authority management method and device, electronic equipment and computer storage medium
CN112182111A (en) Block chain based distributed system layered processing method and electronic equipment
CN111988429A (en) Algorithm scheduling method and system
CN115373826B (en) Task scheduling method and device based on cloud computing
CN110866605A (en) Data model training method and device, electronic equipment and readable medium
CN114896164A (en) Interface optimization method and device, electronic equipment and storage medium
US20220269531A1 (en) Optimization of Workload Scheduling in a Distributed Shared Resource Environment
CN114625512A (en) Task scheduling method and device, electronic equipment and storage medium
CN113918296A (en) Model training task scheduling execution method and device, electronic equipment and storage medium
CN113822215A (en) Equipment operation guide file generation method and device, electronic equipment and storage medium
WO2023207630A1 (en) Task solving method and apparatus therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination