CN111651276A - Scheduling method and device and electronic equipment

Info

Publication number: CN111651276A
Authority: CN (China)
Prior art keywords: computing resources, computing, intelligent analysis, computing resource, scheduling
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Application number: CN202010501711.2A
Other languages: Chinese (zh)
Inventors: 李祥平, 范炳辉, 雷凯
Current Assignee: Hangzhou Hikvision System Technology Co Ltd (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: Hangzhou Hikvision System Technology Co Ltd
Priority date: 2020-06-04 (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2020-06-04
Publication date: 2020-09-11
Application filed by Hangzhou Hikvision System Technology Co Ltd

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources

Abstract

The embodiments of the invention provide a scheduling method, a scheduling apparatus and an electronic device. The method comprises the following steps: dividing computing resources of different architectures into a plurality of computing resource groups according to the intelligent analysis tasks they support; when an intelligent analysis task is received, scheduling computing resources in a computing resource group that supports the task; and, for the scheduled computing resources, configuring an algorithm suited to the architecture of each resource so as to carry out the task. By regrouping computing resources of different architectures according to the intelligent analysis tasks they support, the resources can be scheduled uniformly, and by scheduling algorithms uniformly, an algorithm can be adapted to the computing resources of each architecture. Computing resources of different architectures can therefore carry out the same intelligent analysis task at the same time; that is, the computing power of the different computing resources is fully integrated, and intelligent analysis tasks with a larger computation load can be handled.

Description

Scheduling method and device and electronic equipment
Technical Field
The invention relates to the technical field of cloud computing, in particular to a scheduling method and device and electronic equipment.
Background
For several reasons, such as hardware architecture upgrades or changes of hardware procurement channel, the computing resources used in different periods may have different architectures. In the related art, computing resources of different architectures each independently run the algorithms adapted to them.
However, because the quantity of computing resources of each architecture is limited, the computing power of the computing resources of any single architecture is limited, and it is difficult to carry out intelligent analysis tasks with a large computation load. How to fully integrate computing resources of different architectures has therefore become an urgent technical problem to be solved.
Disclosure of Invention
The embodiments of the present invention aim to provide a scheduling method, a scheduling apparatus and an electronic device, so as to integrate the computing power of computing resources of different architectures and carry out intelligent analysis tasks with a large computation load.
The specific technical solutions are as follows:
In a first aspect of the embodiments of the present invention, a scheduling method is provided, which is applied to a scheduling system, where the scheduling system includes a plurality of computing resources with different architectures, and the method includes:
according to the supported intelligent analysis tasks, dividing the computing resources with different architectures into a plurality of computing resource groups, wherein the intelligent analysis tasks supported by the computing resources in each computing resource group are the same;
when an intelligent analysis task is received, scheduling computing resources in a computing resource group supporting the intelligent analysis task;
and configuring an algorithm suitable for the architecture of the computing resource aiming at the scheduled computing resource so as to realize the intelligent analysis task.
In a possible embodiment, the dividing of the plurality of computing resources with different architectures into a plurality of computing resource groups according to the supported intelligent analysis tasks includes:
dividing, according to the supported intelligent analysis tasks, the computing resources supporting a single intelligent analysis task among the plurality of computing resources with different architectures into a plurality of computing resource groups;
the method further comprises the following steps:
when an intelligent analysis task is received, computing resources which do not belong to the computing resource group are scheduled to realize the intelligent analysis task.
In a possible embodiment, after the dividing of the plurality of computing resources with different architectures into a plurality of computing resource groups according to the supported intelligent analysis tasks, the method further comprises:
configuring a group scheduler for each group of computing resources;
the scheduling of computing resources in a set of computing resources that support the intelligent analytics tasks includes:
and sending a scheduling instruction to a group scheduler of the computing resource group supporting the intelligent analysis task so as to control the group scheduler to schedule the computing resources in the computing resource group.
In one possible embodiment, the configuring of the group scheduler comprises:
selecting one computing resource in the computing resource group and configuring it as the group scheduler; or,
configuring a virtual group scheduler for the computing resource group.
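Purely as an illustration of the two ways of configuring a group scheduler described above, the following minimal Python sketch may help; the class name, the group layout and the dispatch method are assumptions, not the patent's own implementation.

```python
# Hypothetical sketch: a group scheduler is either hosted on one member of the
# computing resource group (the "selected computing resource" case) or is a
# standalone virtual scheduler; in both cases it receives a scheduling
# instruction and picks resources inside its own group.
class GroupScheduler:
    def __init__(self, group, host_resource=None):
        self.group = group                  # the computing resource group it manages
        self.host_resource = host_resource  # None -> virtual group scheduler

    def handle_scheduling_instruction(self, task, count):
        scheduled = self.group["idle"][:count]
        self.group["idle"] = self.group["idle"][count:]
        print(f"group '{self.group['name']}' scheduled {len(scheduled)} resources for {task}")
        return scheduled

face_group = {"name": "face-analysis-group-1", "idle": ["res-1", "res-2", "res-3"]}

# Option 1: one computing resource of the group is configured as the group scheduler.
member_scheduler = GroupScheduler(face_group, host_resource="res-1")
# Option 2: a virtual group scheduler is configured for the group.
virtual_scheduler = GroupScheduler(face_group)

member_scheduler.handle_scheduling_instruction("face analysis", 2)
```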
In a possible embodiment, the dividing of the plurality of computing resources with different architectures into a plurality of computing resource groups according to the supported intelligent analysis tasks includes:
for each architecture, dividing the computing resources of that architecture into a plurality of computing resource groups according to the supported intelligent analysis tasks.
In a second aspect of the embodiments of the present invention, there is provided a scheduling apparatus, applied to a scheduling system, where the scheduling system includes a plurality of computing resources with different architectures, the apparatus including:
the resource management module is used for dividing the computing resources with different architectures into a plurality of computing resource groups according to the supported intelligent analysis tasks, wherein the intelligent analysis tasks supported by the computing resources in each computing resource group are the same;
the resource scheduling module is used for scheduling computing resources in a computing resource group supporting the intelligent analysis task when the intelligent analysis task is received;
and the algorithm scheduling module is used for configuring an algorithm suitable for the architecture of the computing resource aiming at the scheduled computing resource so as to realize the intelligent analysis task.
In a possible embodiment, the resource management module is specifically configured to divide, according to the supported intelligent analysis tasks, the plurality of computing resources with different architectures into a plurality of computing resource groups, where the intelligent analysis tasks supported by the computing resources in each computing resource group are the same; to schedule, when an intelligent analysis task is received, computing resources in a computing resource group supporting the intelligent analysis task; and to configure, for the scheduled computing resources, an algorithm suited to the architecture of the computing resources, so as to implement the intelligent analysis task.
The resource scheduling module is specifically configured to schedule, when an intelligent analysis task is received, computing resources that do not belong to any computing resource group, so as to implement the intelligent analysis task.
In a possible embodiment, the resource management module is further configured to configure a group scheduler for each group of computing resources;
the resource scheduling module is specifically configured to send a scheduling instruction to a group scheduler of a computing resource group supporting the intelligent analysis task, so as to control the group scheduler to schedule computing resources in the computing resource group to which the group scheduler belongs.
In a possible embodiment, the resource management module is specifically configured to select one computing resource from the computing resource group and configure it as the group scheduler; or,
to configure a virtual group scheduler for the computing resource group.
In a possible embodiment, the resource management module is specifically configured to, for each architecture, divide the computing resources of the architecture into a plurality of computing resource groups according to the supported intelligent analysis task.
In a third aspect of embodiments of the present invention, there is provided an electronic device, including:
a memory for storing a computer program;
a processor configured to perform the method steps of any implementation of the first aspect when executing the program stored in the memory.
In a fourth aspect of the embodiments of the present invention, a computer-readable storage medium is provided, in which a computer program is stored, and the computer program, when executed by a processor, carries out the method steps of any implementation of the first aspect.
According to the scheduling method, the scheduling apparatus and the electronic device provided by the embodiments of the present invention, computing resources of different architectures are regrouped according to the intelligent analysis tasks they support, so that the resources can be scheduled uniformly; and through unified scheduling of algorithms, an algorithm can be adapted to the computing resources of each architecture. Computing resources of different architectures can therefore carry out the same intelligent analysis task at the same time; that is, the computing power of different computing resources is fully integrated, and intelligent analysis tasks with a larger computation load can be handled. Of course, not all of the advantages described above need to be achieved at the same time by any one product or method embodying the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a scheduling method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a scheduling system according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of resource scheduling in a scheduling system according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a scheduling apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to more clearly describe the scheduling method provided by the embodiment of the present invention, a possible application scenario of the scheduling method provided by the embodiment of the present invention will be described below. It can be understood that the application scenario is only one possible application scenario of the scheduling method provided in the embodiment of the present invention, and the scheduling method provided in the embodiment of the present invention may also be applied to other possible application scenarios, which is not limited in this embodiment of the present invention.
Assume that the computing resources used when a cloud platform service provider builds a cloud platform are servers configured with a model-1 chip produced by vendor A (hereinafter referred to as server 1). After the cloud platform is built, vendor A releases a server configured with a new model-2 chip (hereinafter referred to as server 2). Because the architectures of server 1 and server 2 are different, the computing power of server 1 and server 2 cannot be integrated in the related art; when the cloud platform service provider needs to expand the performance of the cloud platform, it can only retire server 1, purchase server 2, and rebuild a cloud platform with stronger computing power based on the purchased server 2.
Alternatively, after the cloud platform is built, the cloud platform service provider may stop purchasing servers from vendor A and instead purchase, from vendor B, servers configured with a model-3 chip (hereinafter referred to as server 3). Because the architectures of server 1 and server 3 are different, the computing power of server 1 and server 3 cannot be integrated in the related art; when the cloud platform service provider needs to expand the performance of the cloud platform, it can only retire server 1, purchase server 3, and rebuild a cloud platform with stronger computing power based on the purchased server 3.
In both scenarios, the server 1 purchased when the cloud platform was built can no longer be used, which wastes computing resources and raises the cost of expanding the performance of the cloud platform.
Based on this, an embodiment of the present invention provides a scheduling method applied to a scheduling system. Referring to fig. 1, the method includes:
s101, dividing the computing resources with different architectures into a plurality of computing resource groups according to the supported intelligent analysis task.
S102, when the intelligent analysis task is received, the computing resources in the computing resource group supporting the intelligent analysis task are scheduled.
S103, aiming at the scheduled computing resources, an algorithm suitable for the architecture of the computing resources is configured to realize the intelligent analysis task.
By adopting this embodiment, computing resources of different architectures are regrouped according to the intelligent analysis tasks they support, so that the resources can be scheduled uniformly, and through unified scheduling of algorithms an algorithm can be adapted to the computing resources of each architecture. Computing resources of different architectures can therefore carry out the same intelligent analysis task at the same time; that is, the computing power of different computing resources is fully integrated, and intelligent analysis tasks with a larger computation load can be handled.
Taking the foregoing application scenario as an example, this embodiment can fully integrate the computing power of server 1 with that of server 2 or server 3. When the cloud platform service provider expands the performance of the cloud platform, the previously purchased server 1 can still be fully utilized, which effectively reduces the cost of the expansion and avoids wasting computing resources.
In S101, the computing resources in each computing resource group support the same intelligent analysis tasks; two computing resources supporting the same intelligent analysis tasks means that the types of intelligent analysis tasks the two computing resources can carry out are the same. The types of intelligent analysis tasks can be classified differently according to the application scenario. Illustratively, in one possible application scenario, intelligent analysis tasks may be divided into: face analysis tasks, vehicle analysis tasks, video structuring tasks, behavior analysis tasks and model comparison tasks. In other possible application scenarios, intelligent analysis tasks may be divided into other categories, which is not limited in this embodiment.
The intelligent analysis tasks supported by the computing resources in different computing resource groups can be the same or different. For example, assume that there are 10 servers 1, 10 servers 2, and 10 servers 3, where 5 servers 1 support face analysis tasks, 5 servers 1 support vehicle analysis tasks, 5 servers 2 support face analysis tasks, 5 servers 2 support video structuring tasks, 5 servers 3 support face analysis tasks, and 5 servers 3 support behavior analysis tasks.
The 5 servers 1 supporting the face analysis task may be divided into computing resource group 1, the 5 servers 1 supporting the vehicle analysis task into computing resource group 2, the 5 servers 2 supporting the face analysis task into computing resource group 3, the 5 servers 2 supporting the video structuring task into computing resource group 4, the 5 servers 3 supporting the face analysis task into computing resource group 5, and the 5 servers 3 supporting the behavior analysis task into computing resource group 6. Under this division, the computing resources in computing resource group 1 and computing resource group 3 support the same intelligent analysis task, while the computing resources in computing resource group 2 and computing resource group 3 support different intelligent analysis tasks.
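As a minimal sketch of the grouping in S101, using the example above, the division can be pictured as follows; the dictionary layout and function name are illustrative assumptions, not taken from the patent.

```python
from collections import defaultdict

# Each resource is described by its architecture ("server1"/"server2"/"server3")
# and the intelligent analysis tasks it supports. Resources supporting a single
# task are grouped per (architecture, task); multi-task resources stay ungrouped.
resources = (
    [{"arch": "server1", "tasks": ["face"]} for _ in range(5)]
    + [{"arch": "server1", "tasks": ["vehicle"]} for _ in range(5)]
    + [{"arch": "server2", "tasks": ["face"]} for _ in range(5)]
    + [{"arch": "server2", "tasks": ["video_structuring"]} for _ in range(5)]
    + [{"arch": "server3", "tasks": ["face"]} for _ in range(5)]
    + [{"arch": "server3", "tasks": ["behavior"]} for _ in range(5)]
)

def divide_into_groups(resources):
    groups = defaultdict(list)   # (architecture, task) -> resources, i.e. groups 1-6 above
    ungrouped = []               # resources supporting several tasks (discussed further below)
    for res in resources:
        if len(res["tasks"]) == 1:
            groups[(res["arch"], res["tasks"][0])].append(res)
        else:
            ungrouped.append(res)
    return groups, ungrouped

groups, ungrouped = divide_into_groups(resources)
print({key: len(members) for key, members in groups.items()})   # six groups of five servers
```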
In one possible embodiment, to facilitate management of the computing resources, the computing resources of each architecture may be divided into a plurality of computing resource groups according to the supported intelligent analysis tasks. With this embodiment, the computing resources in each computing resource group have the same architecture, which facilitates management.
In S102, when an intelligent analysis task is received, a corresponding amount of computing resources may be scheduled according to the amount of computation required to complete the task. For example, assuming that the intelligent analysis task is to perform face analysis on 1000 channels of video data and that 15 servers 1 are required to complete it, 15 servers 1 may be scheduled from a computing resource group supporting the face analysis task.
In S103, algorithms applicable to computing resources of different architectures may be stored in an algorithm scheduling center in advance. For example, a version-A face analysis algorithm, a version-B face analysis algorithm and a version-C face analysis algorithm may be stored in the algorithm scheduling center in advance, where the version-A algorithm is applicable to server 1, the version-B algorithm is applicable to server 2 and the version-C algorithm is applicable to server 3. During algorithm scheduling, assuming that 3 servers 1, 2 servers 2 and 4 servers 3 are scheduled in total, the version-A face analysis algorithm may be configured for the scheduled servers 1, the version-B algorithm for the scheduled servers 2 and the version-C algorithm for the scheduled servers 3, so that the scheduled servers 1, 2 and 3 can jointly carry out the intelligent analysis task according to their configured algorithms.
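A short, hypothetical sketch of S103 under this example follows; the catalog contents and the configure_algorithm() placeholder are assumptions, since the patent does not specify the algorithm scheduling center at this level of detail.

```python
# Assumed mapping from (task, architecture) to the stored algorithm version.
ALGORITHM_CATALOG = {
    ("face", "server1"): "face_analysis_version_A",
    ("face", "server2"): "face_analysis_version_B",
    ("face", "server3"): "face_analysis_version_C",
}

def configure_algorithm(resource, algorithm):
    # Placeholder for pushing the algorithm package to the scheduled resource.
    print(f"loading {algorithm} on {resource['arch']} resource {resource['id']}")

def configure_scheduled_resources(task, scheduled_resources):
    # For each scheduled resource, pick the algorithm version suited to its architecture.
    for resource in scheduled_resources:
        configure_algorithm(resource, ALGORITHM_CATALOG[(task, resource["arch"])])

# 3 servers 1, 2 servers 2 and 4 servers 3 are scheduled in total, as in the text.
scheduled = (
    [{"id": f"s1-{i}", "arch": "server1"} for i in range(3)]
    + [{"id": f"s2-{i}", "arch": "server2"} for i in range(2)]
    + [{"id": f"s3-{i}", "arch": "server3"} for i in range(4)]
)
configure_scheduled_resources("face", scheduled)
```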
In some application scenarios, there may be computing resources that support multiple different intelligent analysis tasks. In one possible embodiment, these computing resources may be left ungrouped (or divided into one and the same computing resource group). In this embodiment, when an intelligent analysis task is received, the computing resources that do not belong to any computing resource group may also be scheduled to carry out the task. It can be understood that, because these computing resources support multiple different intelligent analysis tasks, they can be scheduled for different intelligent analysis tasks; leaving them ungrouped therefore allows more flexible scheduling.
The implementation of a computing resource group may differ according to the application scenario. For example, multiple computing resources may be aggregated into one computing resource cluster, thereby dividing the multiple computing resources into the same computing resource group.
The way in which computing resources of different architectures are aggregated into one computing resource cluster may also differ. The cluster construction process is described below taking, as examples, servers configured with a general-purpose GPU and servers configured with an embedded GPU.
For servers configured with a general-purpose GPU, each server is configured with a management IP when it leaves the factory. One server may be selected from the multiple servers, randomly or according to a preset rule, and its management IP is taken as the cluster management IP; the other servers among the multiple servers are then added under the cluster management IP, forming a cluster consisting of the multiple servers.
For servers configured with an embedded GPU, each server contains 1 or 2 analysis boards, and the embedded GPU chips are distributed on the analysis boards. When the servers leave the factory, each analysis board has a management IP. One analysis board in one of the servers may be selected, and its management IP is taken as the cluster management IP; the other analysis boards of the multiple servers are then added under the cluster management IP, forming a cluster consisting of the multiple servers.
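The cluster construction described in the two paragraphs above can be sketched as follows. This is an assumption-laden illustration: join_cluster() stands in for whatever join mechanism the servers or analysis boards actually expose, and the IP addresses are placeholders.

```python
import random

def join_cluster(cluster_ip, member_ip):
    # Placeholder for the actual join operation exposed by the server/analysis board.
    print(f"adding {member_ip} to the cluster managed at {cluster_ip}")

def build_cluster(management_ips, pick_rule=None):
    """management_ips: management IPs of the servers (general-purpose GPU case)
    or of the analysis boards (embedded GPU case)."""
    # Select one unit randomly or by a preset rule; its management IP becomes
    # the cluster management IP.
    leader = pick_rule(management_ips) if pick_rule else random.choice(management_ips)
    members = [ip for ip in management_ips if ip != leader]
    for ip in members:
        join_cluster(cluster_ip=leader, member_ip=ip)
    return {"cluster_management_ip": leader, "members": members}

# Usage: a cluster built from four embedded analysis boards (illustrative IPs).
cluster = build_cluster(["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"])
```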
The scheduling method provided by this embodiment of the present invention is described in detail below with reference to a specific scheduling system. The scheduling system mentioned in this example is only one possible scheduling system to which the scheduling method provided by this embodiment of the present invention can be applied; in other possible embodiments, the structure of the scheduling system may differ from this example, which is not limited by this embodiment of the present invention.
The scheduling system can include the following 3 servers with different architectures:
a server equipped with an embedded TX1 chip produced by NVIDIA (hereinafter referred to as an embedded TX1 server), a server equipped with an embedded ARM chip (hereinafter referred to as an embedded ARM server), and a server equipped with an NVIDIA P4/P40/T4/V100 chip (hereinafter referred to as a general-purpose GPU server).
According to the supported intelligent analysis task, the following computing resource groups can be divided:
the system comprises an embedded TX1 server supporting a face analysis task, an immigration ARM server supporting the face analysis task, an embedded TX1 server supporting vehicle analysis, an embedded ARM server supporting vehicle analysis, an embedded TX1 server supporting a video structuring task, an embedded ARM server supporting a video structuring task, a TX1 server supporting a behavior analysis task, an embedded ARM server supporting a behavior analysis task and a general GPU server supporting a model comparison task.
The embedded TX1 server, the embedded ARM server, and the general GPU server that support the full analysis task are not grouped.
Since the grouped servers support only one intelligent analysis task, for convenience of description, these services are hereinafter referred to as single intelligent mode servers, in this example, the single intelligent mode servers include: the system comprises an embedded TX1 server supporting a face analysis task, an immigration ARM server supporting the face analysis task, an embedded TX1 server supporting vehicle analysis, an embedded ARM server supporting vehicle analysis, an embedded TX1 server supporting a video structuring task, an embedded ARM server supporting a video structuring task, a TX1 server supporting a behavior analysis task, an embedded ARM server supporting a behavior analysis task and a general GPU server supporting a model comparison task.
Ungrouped servers support the full analysis task, so for ease of description these servers are hereinafter referred to as multiple intelligent model servers, in this example, multiple intelligent mode servers including: an embedded TX1 server supporting full analysis tasks, an embedded ARM server and a general GPU server.
The architecture of the scheduling system can be as shown in fig. 2, and includes a task manager 210, an algorithm management center 220, a central scheduler 230, a face analysis cluster 241, a vehicle analysis cluster 242, a video structuring cluster 243, a behavior analysis cluster 244, a model comparison cluster 245, a TX1 analysis unit 251, an ARM analysis unit 252, a container cloud 253 and a general-purpose GPU server/device 254.
For the construction of each cluster, reference may be made to the foregoing description of cluster construction, and details are not repeated here. The TX1 analysis unit is the aforementioned full-analysis embedded TX1 server, the ARM analysis unit is the full-analysis embedded ARM server, and the general-purpose GPU server/device is the full-analysis general-purpose server.
The container cloud cluster uses GPU bare metal: a container management and scheduling component is deployed on the bare-metal servers; when a container is created, the relevant algorithm image is selected; during start-up, the GPU of the general-purpose server is mounted into the relevant container; and the generated container provides high-performance computing power for the algorithm service running inside it. The central scheduler uniformly schedules the TX1 analysis units, the ARM analysis units, the general-purpose GPU servers/devices and the container cloud.
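Purely as an illustration of the central scheduler's unified scheduling role described above, the pool names and the schedule() interface below are assumed, not taken from the patent.

```python
# Hypothetical sketch: the central scheduler keeps per-type pools of the
# centrally scheduled resources (TX1 analysis units, ARM analysis units,
# general-purpose GPU servers/devices, container cloud) and hands out the
# requested number of resources of a given type.
class CentralScheduler:
    def __init__(self):
        self.pools = {"tx1_analysis_unit": [], "arm_analysis_unit": [],
                      "general_gpu_server": [], "container_cloud": []}

    def register(self, pool_name, resource):
        self.pools[pool_name].append(resource)

    def schedule(self, pool_name, count):
        pool = self.pools[pool_name]
        if len(pool) < count:
            raise RuntimeError(f"not enough resources in pool {pool_name}")
        scheduled, self.pools[pool_name] = pool[:count], pool[count:]
        return scheduled

central = CentralScheduler()
for i in range(4):
    central.register("tx1_analysis_unit", f"tx1-unit-{i}")
print(central.schedule("tx1_analysis_unit", 2))   # -> ['tx1-unit-0', 'tx1-unit-1']
```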
The working principle of the scheduling system may be as shown in fig. 3, where fig. 3 is a schematic diagram of a scheduling flow of the scheduling system provided by the embodiment of the present invention. The scheduling flow may include:
Step 1, the non-centrally scheduled resources are connected to the computing resource management component.
The non-centrally scheduled resources include the face analysis cluster, the vehicle analysis cluster, the video structuring cluster and the behavior analysis cluster.
Step 2, acquiring a GPU computing resource list.
The GPU computing resource list may include the resource scheduling type, the detailed computing resource type and the quantity of each type of computing resource.
Step 3, acquiring an algorithm list.
The amount of computing resources required to run one channel of each algorithm can be obtained at the same time; for example, the number of embedded ARM chips required to run 1 channel of algorithm A can be obtained.
Step 4, receiving an intelligent analysis task.
For example, a face analysis task for 1000 channels of video data is received.
Step 5, selecting an algorithm according to the intelligent analysis task.
The algorithm type and the algorithm version may be selected according to the intelligent analysis task; for example, the version-A face analysis algorithm may be selected for a face analysis task.
Step 6, selecting the type of computing resource according to the type of the algorithm.
For example, assuming that the selected algorithm is a face analysis algorithm, the computing resources supporting the face analysis algorithm include the face analysis cluster and the centrally scheduled full-analysis computing resources.
Step 7, calculating the quantity of resources according to the task volume of the intelligent analysis task.
For example, if the intelligent analysis task is a face analysis task for 1000 channels of video data, it can be determined how many embedded TX1 servers would be needed if the task were completed by embedded TX1 servers, how many embedded ARM servers would be needed if it were completed by embedded ARM servers, and how many general-purpose servers would be needed if it were completed by general-purpose servers.
Step 8, applying for computing resources.
Step 9, scheduling the computing resources according to the application.
The non-centrally scheduled resource management component can schedule single intelligent mode server resources; the centrally scheduled resource management component can schedule both single intelligent mode and multiple intelligent mode server resources, that is, all server resources.
Step 10, completing the scheduling of the computing resources.
The computing resource management component may return the application result and the list of computing resources to the management control center.
Step 11, the management control center issues an instruction to the algorithm management center, and the algorithm and its operating environment are loaded onto the scheduled computing resources.
Step 12, the management control center issues an instruction to the task management center, the intelligent analysis task is issued to the algorithm service, and the intelligent analysis task is executed.
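Steps 4 to 12 can be tied together in the following compact sketch. Everything in it (the per-channel capacities, the algorithm versions and the function names) is an assumption used only to illustrate the flow, not the patent's implementation; the real system would go through the computing resource management component, the algorithm management center and the task management center rather than these placeholders.

```python
import math

# Assumed algorithm list (step 3): algorithm version per architecture and how
# many video channels one server of that architecture can analyze.
ALGORITHMS = {
    "face_analysis": {
        "embedded_tx1": {"version": "A", "channels_per_server": 8},
        "embedded_arm": {"version": "B", "channels_per_server": 4},
        "general_gpu": {"version": "C", "channels_per_server": 64},
    },
}

def handle_task(task_type, channels, available):
    """available: architecture -> number of idle servers supporting task_type."""
    plan, remaining = {}, channels
    for arch, info in ALGORITHMS[task_type].items():                 # steps 5-6
        if remaining <= 0 or available.get(arch, 0) == 0:
            continue
        needed = math.ceil(remaining / info["channels_per_server"])  # step 7
        used = min(needed, available[arch])                          # step 8 (apply)
        plan[arch] = {"servers": used, "algorithm_version": info["version"]}
        remaining -= used * info["channels_per_server"]
    if remaining > 0:                                                # not enough resources
        raise RuntimeError("insufficient computing resources for the task")
    return plan   # steps 9-12: schedule the resources, load the algorithms, run the task

# Step 4: a face analysis task for 1000 channels of video data is received.
print(handle_task("face_analysis", 1000, {"embedded_tx1": 10, "general_gpu": 15}))
```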
Referring to fig. 4, fig. 4 is a schematic structural diagram of a scheduling apparatus according to an embodiment of the present invention, where the scheduling apparatus may include:
the resource management module 401 is configured to divide the multiple computing resources with different architectures into multiple computing resource groups according to the supported intelligent analysis tasks, where the intelligent analysis tasks supported by the computing resources included in each computing resource group are the same;
a resource scheduling module 402, configured to schedule, when an intelligent analysis task is received, a computing resource in a computing resource group supporting the intelligent analysis task;
an algorithm scheduling module 403, configured to configure, for the scheduled computing resource, an algorithm applicable to the architecture of the computing resource, so as to implement the intelligent analysis task.
In a possible embodiment, the resource management module 401 is specifically configured to divide, according to the supported intelligent analysis tasks, the plurality of computing resources with different architectures into a plurality of computing resource groups, where the intelligent analysis tasks supported by the computing resources in each computing resource group are the same; to schedule, when an intelligent analysis task is received, computing resources in a computing resource group supporting the intelligent analysis task; and to configure, for the scheduled computing resources, an algorithm suited to the architecture of the computing resources, so as to implement the intelligent analysis task.
The resource scheduling module 402 is specifically configured to schedule, when an intelligent analysis task is received, computing resources that do not belong to any computing resource group, so as to implement the intelligent analysis task.
In a possible embodiment, the resource management module 401 is further configured to configure a group scheduler for each computing resource group;
the resource scheduling module 402 is specifically configured to send a scheduling instruction to a group scheduler of a computing resource group supporting the intelligent analysis task, so as to control the group scheduler to schedule computing resources in the computing resource group to which the group scheduler belongs.
In a possible embodiment, the resource management module 401 is specifically configured to select one computing resource from the computing resource group and configure it as the group scheduler; or,
to configure a virtual group scheduler for the computing resource group.
In a possible embodiment, the resource management module 401 is specifically configured to, for each architecture, divide the computing resources of the architecture into a plurality of computing resource groups according to the supported intelligent analysis task.
An embodiment of the present invention further provides an electronic device, as shown in fig. 5, including:
a memory 501 for storing a computer program;
the processor 502 is configured to implement the following steps when executing the program stored in the memory 501:
according to the supported intelligent analysis tasks, dividing the computing resources with different architectures into a plurality of computing resource groups, wherein the intelligent analysis tasks supported by the computing resources in each computing resource group are the same;
when an intelligent analysis task is received, scheduling computing resources in a computing resource group supporting the intelligent analysis task;
and configuring an algorithm suitable for the architecture of the computing resource aiming at the scheduled computing resource so as to realize the intelligent analysis task.
In a possible embodiment, the dividing of the plurality of computing resources with different architectures into a plurality of computing resource groups according to the supported intelligent analysis tasks includes:
dividing, according to the supported intelligent analysis tasks, the computing resources supporting a single intelligent analysis task among the plurality of computing resources with different architectures into a plurality of computing resource groups;
the method further comprises the following steps:
when an intelligent analysis task is received, computing resources which do not belong to the computing resource group are scheduled to realize the intelligent analysis task.
In a possible embodiment, after the dividing of the plurality of computing resources with different architectures into a plurality of computing resource groups according to the supported intelligent analysis tasks, the method further comprises:
configuring a group scheduler for each group of computing resources;
the scheduling of computing resources in a set of computing resources that support the intelligent analytics tasks includes:
and sending a scheduling instruction to a group scheduler of the computing resource group supporting the intelligent analysis task so as to control the group scheduler to schedule the computing resources in the computing resource group.
In one possible embodiment, the configuring of the group scheduler comprises:
selecting one computing resource in the computing resource group and configuring it as the group scheduler; or,
configuring a virtual group scheduler for the computing resource group.
In a possible embodiment, the dividing of the plurality of computing resources with different architectures into a plurality of computing resource groups according to the supported intelligent analysis tasks includes:
for each architecture, dividing the computing resources of that architecture into a plurality of computing resource groups according to the supported intelligent analysis tasks.
The memory mentioned in the above electronic device may include a random access memory (RAM) or a non-volatile memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, which has instructions stored therein, and when the instructions are executed on a computer, the instructions cause the computer to execute any one of the scheduling methods in the foregoing embodiments.
In yet another embodiment, a computer program product containing instructions is provided, which when run on a computer, causes the computer to perform any of the scheduling methods of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the electronic device, the computer-readable storage medium, and the computer program product, since they are substantially similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (12)

1. A scheduling method is applied to a scheduling system, wherein the scheduling system comprises a plurality of computing resources with different architectures, and the method comprises:
according to the supported intelligent analysis tasks, dividing the computing resources with different architectures into a plurality of computing resource groups, wherein the intelligent analysis tasks supported by the computing resources in each computing resource group are the same;
when an intelligent analysis task is received, scheduling computing resources in a computing resource group supporting the intelligent analysis task;
and configuring an algorithm suitable for the architecture of the computing resource aiming at the scheduled computing resource so as to realize the intelligent analysis task.
2. The method according to claim 1, wherein the dividing of the plurality of computing resources with different architectures into a plurality of computing resource groups according to the supported intelligent analysis tasks comprises:
dividing, according to the supported intelligent analysis tasks, the computing resources supporting a single intelligent analysis task among the plurality of computing resources with different architectures into a plurality of computing resource groups;
the method further comprises the following steps:
when an intelligent analysis task is received, computing resources which do not belong to the computing resource group are scheduled to realize the intelligent analysis task.
3. The method according to claim 1, wherein after the dividing of the plurality of computing resources with different architectures into a plurality of computing resource groups according to the supported intelligent analysis tasks, the method further comprises:
configuring a group scheduler for each group of computing resources;
the scheduling of computing resources in a set of computing resources that support the intelligent analytics tasks includes:
and sending a scheduling instruction to a group scheduler of the computing resource group supporting the intelligent analysis task so as to control the group scheduler to schedule the computing resources in the computing resource group.
4. The method according to claim 3, wherein the configuring of the group scheduler comprises:
selecting one computing resource in the computing resource group and configuring it as the group scheduler; or,
configuring a virtual group scheduler for the computing resource group.
5. The method according to claim 1, wherein the dividing of the plurality of computing resources with different architectures into a plurality of computing resource groups according to the supported intelligent analysis tasks comprises:
for each architecture, the computing resources of the architecture are divided into a plurality of computing resource groups according to the supported intelligent analysis tasks.
6. A scheduling apparatus, applied to a scheduling system including a plurality of computing resources with different architectures, the apparatus comprising:
the resource management module is used for dividing the computing resources with different architectures into a plurality of computing resource groups according to the supported intelligent analysis tasks, wherein the intelligent analysis tasks supported by the computing resources in each computing resource group are the same;
the resource scheduling module is used for scheduling computing resources in a computing resource group supporting the intelligent analysis task when the intelligent analysis task is received;
and the algorithm scheduling module is used for configuring an algorithm suitable for the architecture of the computing resource aiming at the scheduled computing resource so as to realize the intelligent analysis task.
7. The apparatus according to claim 6, wherein the resource management module is specifically configured to divide, according to the supported intelligent analysis tasks, the plurality of computing resources with different architectures into a plurality of computing resource groups, wherein the intelligent analysis tasks supported by the computing resources in each computing resource group are the same; to schedule, when an intelligent analysis task is received, computing resources in a computing resource group supporting the intelligent analysis task; and to configure, for the scheduled computing resources, an algorithm suited to the architecture of the computing resources, so as to implement the intelligent analysis task;
the resource scheduling module is specifically configured to schedule, when an intelligent analysis task is received, computing resources that do not belong to any computing resource group, so as to implement the intelligent analysis task.
8. The apparatus of claim 6, wherein the resource management module is further configured to configure a group scheduler for each set of computing resources;
the resource scheduling module is specifically configured to send a scheduling instruction to a group scheduler of a computing resource group supporting the intelligent analysis task, so as to control the group scheduler to schedule computing resources in the computing resource group to which the group scheduler belongs.
9. The apparatus according to claim 8, wherein the resource management module is specifically configured to select one computing resource from the computing resource group and configure it as the group scheduler; or,
to configure a virtual group scheduler for the computing resource group.
10. The apparatus according to claim 6, wherein the resource management module is specifically configured to, for each architecture, divide the computing resources of the architecture into a plurality of computing resource groups according to the supported intelligent analysis task.
11. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1 to 5 when executing a program stored in the memory.
12. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of the claims 1-5.
CN202010501711.2A 2020-06-04 2020-06-04 Scheduling method and device and electronic equipment Pending CN111651276A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010501711.2A CN111651276A (en) 2020-06-04 2020-06-04 Scheduling method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN111651276A true CN111651276A (en) 2020-09-11

Family

ID=72347159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010501711.2A Pending CN111651276A (en) 2020-06-04 2020-06-04 Scheduling method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111651276A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106302628A (en) * 2015-12-29 2017-01-04 北京典赞科技有限公司 ARM architecture network cluster calculates the unified management dispatching method of resource
CN107436806A (en) * 2016-05-27 2017-12-05 苏宁云商集团股份有限公司 A kind of resource regulating method and system
CN109564528A (en) * 2017-07-06 2019-04-02 华为技术有限公司 The system and method for computational resource allocation in distributed computing
CN109669768A (en) * 2018-12-11 2019-04-23 北京工业大学 A kind of resource allocation and method for scheduling task towards side cloud combination framework
CN109739640A (en) * 2018-12-13 2019-05-10 北京计算机技术及应用研究所 A kind of container resource management system based on Shen prestige framework
CN110389820A (en) * 2019-06-28 2019-10-29 浙江大学 A kind of private clound method for scheduling task carrying out resources based on v-TGRU model
CN111126895A (en) * 2019-11-18 2020-05-08 青岛海信网络科技股份有限公司 Management warehouse and scheduling method for scheduling intelligent analysis algorithm in complex scene


Similar Documents

Publication Publication Date Title
US10452438B2 (en) Parameter selection for optimization of task execution based on execution history for prior tasks
Téllez et al. A tabu search method for load balancing in fog computing
US8104038B1 (en) Matching descriptions of resources with workload requirements
US11171845B2 (en) QoS-optimized selection of a cloud microservices provider
US9876703B1 (en) Computing resource testing
US9535754B1 (en) Dynamic provisioning of computing resources
WO2010028868A1 (en) Method and system for sharing performance data between different information technology product/solution deployments
CN111176818B (en) Distributed prediction method, device, system, electronic equipment and storage medium
CN110233802B (en) Method for constructing block chain structure with one main chain and multiple side chains
EP3602289A1 (en) Virtualised network function deployment
CN112463375A (en) Data processing method and device
CN111352711A (en) Multi-computing engine scheduling method, device, equipment and storage medium
US20200310828A1 (en) Method, function manager and arrangement for handling function calls
CN114490062A (en) Local disk scheduling method and device, electronic equipment and storage medium
Bellavista et al. GAMESH: a grid architecture for scalable monitoring and enhanced dependable job scheduling
CN1783121B (en) Method and system for executing design automation
US11750451B2 (en) Batch manager for complex workflows
CN111651276A (en) Scheduling method and device and electronic equipment
CN110659125A (en) Analysis task execution method, device and system and electronic equipment
US20230115217A1 (en) Optimizing a just-in-time compilation process in a container orchestration system
CN110968420A (en) Scheduling method and device for multi-crawler platform, storage medium and processor
CN114090201A (en) Resource scheduling method, device, equipment and storage medium
CN113301087A (en) Resource scheduling method, device, computing equipment and medium
CN111078263A (en) Hot deployment method, system, server and storage medium based on Drools rule engine
CN113434283B (en) Service scheduling method and device, server and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination