CN117130787A - Resource scheduling method and device, electronic equipment and storage medium - Google Patents

Resource scheduling method and device, electronic equipment and storage medium

Info

Publication number
CN117130787A
CN117130787A (application CN202311119390.XA)
Authority
CN
China
Prior art keywords
task
hardware
software
target
cpu
Prior art date
Legal status
Pending
Application number
CN202311119390.XA
Other languages
Chinese (zh)
Inventor
滕飞 (Teng Fei)
Current Assignee
Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Original Assignee
Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Priority date
Filing date
Publication date
Application filed by Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Priority to CN202311119390.XA
Publication of CN117130787A
Status: Pending

Classifications

    • G06F9/5027: Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F3/0604: Interfaces specially adapted for storage systems: improving or facilitating administration, e.g. storage management
    • G06F3/0659: Interfaces specially adapted for storage systems: command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F3/0679: Interfaces specially adapted for storage systems: non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G06F9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/5016: Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being the memory
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

Embodiments of the present application relate to a resource scheduling method and apparatus, an electronic device, and a storage medium. The method includes the following steps: selecting any one CPU in a target system having a plurality of CPUs as a foreground CPU, with the other CPUs as background CPUs; acquiring a target task to be processed; parsing the target task into software and hardware tasks to obtain the software task and hardware task to be processed; and scheduling software and hardware resources for the software task and the hardware task through a unified software interface and/or hardware interface so as to process the target task. In this way, unified scheduling and management of the different hardware resources inside an SSD are supported when the FTL software has multiple CPU cores, and an FTL developer can invoke different types of internal hardware resources through a unified interface.

Description

Resource scheduling method and device, electronic equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of software and hardware resource scheduling, in particular to a resource scheduling method, a resource scheduling device, electronic equipment and a storage medium.
Background
The flash translation layer (FTL) is the core software control layer of a solid state drive (Solid State Drive, SSD); its algorithms and implementation directly determine how good an SSD is in terms of reliability, performance, endurance, and the like. Its core function is to translate host logical addresses into flash physical addresses and to control the execution of, and feedback for, host commands including read, write, trim, format, power-up and power-down, etc. Because the Nand storage medium has a high-dimensional structure unlike the linear address space defined by the front-end command standards, and does not support overwriting in place, erasure must be performed on whole blocks at a time, which requires the FTL to tightly manage the relevant workflows. In most existing SSD products, the FTL software is encapsulated in the firmware so that it can schedule different types of Nand in a targeted manner and cooperate with the cache hardware and other hardware algorithm modules. Current mainstream products mostly adopt the following basic operating mode: multiple threads run asynchronously, and the software and hardware modules communicate with one another through message queues; each functional module has an independent state machine; and the modules can execute in parallel in a pipeline manner.
However, the FTL software described above must be designed around the specific resources available in a given SSD product. For example, different numbers of internal CPU cores lead software designers to adopt entirely different usage policies for each CPU core; different internal cache sizes likewise lead to entirely different cache usage policies; and because the hardware invocation interfaces are inconsistent, calls to different hardware are scattered throughout the software code, each made in its own specific way. As a result, when the FTL has multiple CPU cores, invoking the different hardware resources inside the SSD is cumbersome, time-consuming, and inefficient.
Disclosure of Invention
In view of the foregoing, in order to solve the foregoing technical problems or some of the technical problems, embodiments of the present application provide a resource scheduling method, apparatus, electronic device, and storage medium.
In a first aspect, an embodiment of the present application provides a resource scheduling method, including:
selecting any CPU from a target system with a plurality of CPUs as a foreground CPU and other CPUs as background CPUs;
acquiring a target task to be processed;
analyzing the software and hardware tasks of the target task to obtain a software task and a hardware task to be processed;
and scheduling software and hardware resources of the software task and the hardware task through a unified software interface and/or hardware interface so as to process the target task.
In one possible embodiment, the method further comprises:
and distributing the software task to at least one background CPU through a unified software interface, and distributing the hardware task to a hardware processing module through a unified hardware interface so that the hardware processing module performs hardware resource scheduling on the target task.
In one possible embodiment, the method further comprises
Setting corresponding software memory requirements and hardware memory requirements for the software task and the hardware task respectively;
selecting a target background CPU and a target solid state disk which meet preset conditions from other CPUs based on the memory requirement;
and distributing the software task to the target background CPU through a unified software interface, and distributing the hardware task to a hardware processing module positioned in the target solid state disk through a unified hardware interface.
In one possible embodiment, the method further comprises:
and allocating a first target memory area for the target background CPU based on the software memory requirement, and allocating a second target memory area for the target solid state disk based on the hardware memory requirement.
In one possible embodiment, the method further comprises:
receiving task receiving results returned by the target background CPU and the hardware processing module;
if the resources of the target background CPU or the hardware processing module are insufficient, the task receiving result is failure;
and running the software task of which the target background CPU fails to receive in the foreground CPU, and caching the hardware task of which the hardware processing module fails to receive until the hardware processing module has sufficient resources, and continuing to process the hardware task.
In one possible embodiment, the method further comprises:
if the resources of the target background CPU are sufficient, the task receiving result is successful;
receiving a task receiving result of the target background CPU on the software task;
if the task receiving result is successful, receiving a task processing result returned by the target background CPU;
and if the task receiving result is failure, running the software task with the failure task receiving result in a foreground CPU.
In one possible embodiment, the method further comprises:
judging the running state of each background CPU based on a first first-in first-out queue, maintained by each background CPU, of tasks waiting to run, and a plurality of second first-in first-out queues of completed tasks waiting for the foreground CPU to query their running results, wherein the running state refers to the number of software tasks waiting to be executed in the task queue of each background CPU;
judging whether the hardware processing module has an idle channel or not based on a preset data structure.
In a second aspect, an embodiment of the present application provides a resource scheduling apparatus, including:
the setting module is used for selecting any CPU from a target system with a plurality of CPUs as a foreground CPU and other CPUs as background CPUs;
the acquisition module is used for acquiring a target task to be processed;
the analysis module is used for analyzing the software and hardware tasks of the target task to obtain a software task and a hardware task to be processed;
and the scheduling module is used for scheduling the software and hardware resources of the software task and the hardware task through a unified software interface and/or hardware interface so as to process the target task.
In a third aspect, an embodiment of the present application provides a server, including: a processor and a memory, wherein the processor is configured to execute a resource scheduling program stored in the memory so as to implement the resource scheduling method described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the resource scheduling method described in the first aspect.
According to the resource scheduling scheme provided by the embodiments of the present application, any one CPU in a target system having a plurality of CPUs is selected as a foreground CPU, with the other CPUs as background CPUs; a target task to be processed is acquired; the target task is parsed into software and hardware tasks to obtain the software task and hardware task to be processed; and software and hardware resources are scheduled for the software task and the hardware task through a unified software interface and/or hardware interface so as to process the target task. Compared with the prior art, in which invoking the different hardware resources inside the SSD is cumbersome, time-consuming, and inefficient when the FTL software has multiple CPU cores, this scheme supports unified scheduling and management of the different hardware resources inside the SSD when the FTL software has multiple CPU cores, and an FTL developer can invoke different types of internal hardware resources through a unified interface.
Drawings
Fig. 1 is a schematic diagram of a system architecture of resource scheduling according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a resource scheduling method according to an embodiment of the present application;
fig. 3 is a schematic flow chart of S24 according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a resource scheduling device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
For the purpose of facilitating an understanding of the embodiments of the present application, reference will now be made to the following description of specific embodiments, taken in conjunction with the accompanying drawings, which are not intended to limit the embodiments of the application.
Fig. 1 is a schematic diagram of the system architecture for resource scheduling provided by an embodiment of the present application. As shown in Fig. 1, the resource scheduling system of the present application adopts a master-slave operating mode. The master-slave mode is a parallel software operating mode in which the operating state is divided into a foreground (master) and a background (slave): the foreground is responsible for allocating computing tasks and storage resources, and the background is responsible for actually executing the computing tasks.
Specifically, the master resides permanently on one particular CPU core, and the other CPU cores act as slaves; when the master enters an idle state, it can switch to acting as a slave and execute tasks. The entire workflow is maintained by the master, which is responsible for defining slave tasks, hardware operations, and host commands. Only the master has the right to schedule slaves to execute tasks, to initiate hardware operations, and to fetch host commands. All memory management is performed by the master, which hands specific memory regions over to a slave or to a specific hardware processing module for task execution. Dependencies between tasks are maintained by the developer according to memory access requirements, and no thread locks are used. The system bounds the memory requirements of the various task types; after the master receives a task processing command, it first decomposes the task to fit these memory bounds, and dynamically allocating memory for tasks is not permitted, so as to avoid memory errors.
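For illustration only, the master/slave mode just described might look roughly like the following sketch in C, the usual implementation language of SSD firmware. All identifiers here (ftl_task_t, master_dispatch, and so on) are assumptions introduced for this sketch, not symbols from the application.

```c
#include <stdio.h>
#include <stddef.h>

typedef enum { TASK_SOFTWARE, TASK_HARDWARE } task_kind_t;

typedef struct ftl_task {
    task_kind_t kind;
    size_t      mem_need;    /* fixed, bounded memory requirement */
    void       *mem_region;  /* handed over by the master; tasks never malloc */
    void      (*run)(struct ftl_task *self);
} ftl_task_t;

/* Master side: attach a pre-allocated memory region and start the task.
 * Only the master may schedule slaves or initiate hardware operations. */
static void master_dispatch(ftl_task_t *t, unsigned char *pool, size_t pool_len)
{
    if (t->mem_need > pool_len) {
        /* over budget: the master must decompose the task, never malloc */
        fprintf(stderr, "task exceeds memory budget, must be split\n");
        return;
    }
    t->mem_region = pool;    /* hand over a region owned by the master */
    t->run(t);               /* a slave or hardware module executes it */
}

static void demo_run(ftl_task_t *t)
{
    printf("running %s task in a %zu-byte region\n",
           t->kind == TASK_SOFTWARE ? "software" : "hardware", t->mem_need);
}

int main(void)
{
    static unsigned char pool[4096];   /* memory owned by the master */
    ftl_task_t t = { TASK_SOFTWARE, 1024, NULL, demo_run };
    master_dispatch(&t, pool, sizeof pool);
    return 0;
}
```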
Fig. 2 is a flow chart of a resource scheduling method according to an embodiment of the present application, as shown in fig. 2, where the method specifically includes:
s21, selecting any CPU as a foreground CPU in a target system with a plurality of CPUs, and other CPUs as background CPUs.
The embodiments of the present application are applied to a general parallel computing platform to support unified scheduling and management of different hardware resources inside an SSD when the FTL software has multiple CPU cores. Based on this platform, an FTL developer can invoke different types of internal hardware resources through a unified interface. A master-slave working mode is adopted, which is a parallel software working mode in which the working state is divided into a foreground (master) and a background (slave): the foreground is responsible for distributing computing tasks and storage resources, and the background is responsible for actually executing the computing tasks.
First, in a target system having a plurality of CPUs, one CPU may be arbitrarily set as a foreground CPU (master), and the other CPUs as background CPUs (slave).
S22, acquiring a target task to be processed.
The foreground CPU (master) receives a task processing command, where the task processing command carries a target task to be processed, and typically, the target task includes both a software task and a hardware task. Wherein software tasks need to be processed on a background CPU (slave), and hardware tasks need to be processed on a specific type of hardware processing module.
The types of hardware processing module may include: a data transfer module from Nand to FTL memory; a data transfer module from FTL memory to Nand; a Nand data erase module; a DMA copy module from Host memory to FTL memory; a DMA copy module from FTL memory to Host memory; a direct data transfer module from Nand to Host memory; and a module that performs whole-block data exclusive-OR (XOR) operations.
Here, Nand refers to the actual physical storage medium in the solid state disk; the Host memory is memory controlled primarily by the operating system of the host connected to the solid state disk; and the FTL memory is memory managed by the software control layer inside the solid state disk.
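Purely as an illustration, these seven module types map naturally onto a C enumeration; the enumerator names below are assumptions made for this sketch, not identifiers from the application.

```c
/* One enumerator per hardware processing module type listed above. */
typedef enum hw_module_type {
    HW_NAND_TO_FTL_XFER,    /* Nand -> FTL memory data transfer    */
    HW_FTL_TO_NAND_XFER,    /* FTL memory -> Nand data transfer    */
    HW_NAND_ERASE,          /* Nand data erase                     */
    HW_HOST_TO_FTL_DMA,     /* Host memory -> FTL memory DMA copy  */
    HW_FTL_TO_HOST_DMA,     /* FTL memory -> Host memory DMA copy  */
    HW_NAND_TO_HOST_XFER,   /* Nand -> Host memory direct transfer */
    HW_BLOCK_XOR,           /* whole-block data XOR operation      */
    HW_MODULE_TYPE_COUNT
} hw_module_type_t;
```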
S23, analyzing the software and hardware tasks of the target task to obtain the software task and the hardware task to be processed.
In general, the target tasks include both software tasks and hardware tasks, and software and hardware task analysis is performed on the received target tasks to obtain software tasks and hardware tasks to be processed, where the number of software tasks and hardware tasks is one or more.
S24, carrying out software and hardware resource scheduling on the software task and the hardware task through a unified software interface and/or hardware interface so as to process the target task.
In the embodiments of the present application, as developers continually submit new tasks to the platform, the working states of each background CPU (slave) and of the hardware processing modules inside the solid state disk are updated. Each background CPU (slave) maintains a queue of software tasks waiting to be executed and records the length of that queue. Each hardware processing module manages the idle state of its task execution channels with a specific data structure.
Further, software tasks are distributed through the unified software interface to the background CPU (slave) with the fewest queued software tasks, or to one forcibly designated by the developer, and hardware tasks are distributed through the unified hardware interface to a hardware processing module inside the solid state disk. Whether the hardware processing module has an idle channel is determined from the preset data structure; if an idle channel exists, the hardware task is distributed to that channel so that the solid state disk processes the target task.
Here, each background CPU maintains one first-in first-out (FIFO) queue of tasks waiting to run, plus several task FIFO queues, determined by the type of software task, of completed tasks waiting for the foreground CPU to query their running results. Each type of hardware processing module can execute several hardware tasks in parallel, with each executable hardware task corresponding to one channel; each type of hardware processing module maintains a data structure that indicates whether each of its channels is idle and supports finding the next idle channel in constant time.
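A bitmap is one data structure that satisfies the constant-time requirement just stated. The following sketch is an assumption for illustration (the application does not specify the structure, and the 64-channel limit and names are invented here); it finds the next idle channel with a single count-trailing-zeros operation.

```c
#include <stdint.h>

typedef struct hw_module {
    uint64_t idle_mask;   /* bit i set => channel i is idle (up to 64 channels) */
} hw_module_t;

/* Claim an idle channel in constant time, or return -1 if none is free. */
static int hw_claim_idle_channel(hw_module_t *m)
{
    if (m->idle_mask == 0)
        return -1;                           /* no idle channel available */
    int ch = __builtin_ctzll(m->idle_mask);  /* index of lowest set bit   */
    m->idle_mask &= ~(1ULL << ch);           /* mark the channel busy     */
    return ch;
}

/* Called when the hardware task on channel ch completes. */
static void hw_release_channel(hw_module_t *m, int ch)
{
    m->idle_mask |= 1ULL << ch;
}
```

With this representation, both checking for an idle channel and claiming one reduce to a couple of bit operations on most CPUs.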
According to the resource scheduling method provided by the embodiments of the present application, any one CPU in a target system having a plurality of CPUs is selected as a foreground CPU, with the other CPUs as background CPUs; a target task to be processed is acquired; the target task is parsed into software and hardware tasks to obtain the software task and hardware task to be processed; and software and hardware resources are scheduled for the software task and the hardware task through a unified software interface and/or hardware interface so as to process the target task. Compared with the prior art, in which invoking the different hardware resources inside the SSD is cumbersome, time-consuming, and inefficient when the FTL software has multiple CPU cores, the method supports unified scheduling and management of the different hardware resources inside the SSD when the FTL software has multiple CPU cores, and an FTL developer can invoke different types of internal hardware resources through a unified interface.
Fig. 3 is a schematic flow chart of S24 provided in the embodiment of the present application, as shown in fig. 3, specifically including:
s31, respectively setting corresponding software memory requirements and hardware memory requirements for the software task and the hardware task.
In the embodiments of the present application, after receiving a task processing command, the foreground CPU (master) first decomposes the target task to be processed to obtain the software task and hardware task to be processed. The system bounds the memory requirements of the various task types and does not permit memory to be allocated dynamically for tasks, so as to avoid memory errors; the corresponding software memory requirement and hardware memory requirement can therefore be set for the software task and the hardware task respectively, according to the actual processing conditions and resource availability.
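To make the decomposition step concrete, the following minimal sketch shows how the master could split a target task so that every sub-task fits a fixed memory bound; TASK_MEM_BUDGET and the helper name are assumed values invented here for illustration.

```c
#include <stddef.h>

#define TASK_MEM_BUDGET 4096u   /* assumed fixed per-task memory limit */

/* Number of sub-tasks needed so that each one stays within the budget. */
static size_t subtask_count(size_t total_bytes)
{
    return (total_bytes + TASK_MEM_BUDGET - 1) / TASK_MEM_BUDGET;
}
```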
S32, selecting a target background CPU and a target solid state disk which meet preset conditions from other CPUs based on the memory requirement.
In the embodiments of the present application, as developers continually submit new tasks to the platform, the working states of each background CPU (slave) and of the hardware processing modules inside the solid state disk are updated. Each background CPU (slave) maintains a queue of software tasks waiting to be executed and records the length of that queue. Each hardware processing module manages the idle state of its task execution channels with a specific data structure.
The background CPU (slave) with the fewest queued tasks and an idle channel of the appropriate hardware processing module are then selected to process the target task.
S33, a first target memory area is allocated for the target background CPU based on the software memory requirement, and a second target memory area is allocated for the target solid state disk based on the hardware memory requirement.
Before the background CPU (slave) and the hardware processing module are assigned to process the target task, a first target memory area is allocated to the target background CPU (slave) based on the software memory requirement, and a second target memory area is allocated to the target solid state disk based on the hardware memory requirement.
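One simple way to realize this hand-over (assumed here purely for illustration, since the application does not prescribe an allocator) is a bump allocator over a pool the master owns, from which the first and second target memory areas are carved.

```c
#include <stddef.h>

typedef struct mem_pool {
    unsigned char *base;   /* start of the master-owned memory pool */
    size_t         used;
    size_t         cap;
} mem_pool_t;

/* Grant `need` bytes to a slave or hardware module; returns NULL on
 * exhaustion, matching the "insufficient resources" failure path below. */
static void *pool_grant(mem_pool_t *p, size_t need)
{
    if (p->cap - p->used < need)
        return NULL;
    void *region = p->base + p->used;
    p->used += need;
    return region;
}
```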
S34, distributing the software task to the target background CPU through a unified software interface, and distributing the hardware task to a hardware processing module positioned in the target solid state disk through a unified hardware interface.
Further, the software task is distributed through the unified software interface to at least one background CPU (slave) in an idle state, and the hardware task is distributed through the unified hardware interface to a hardware processing module inside the solid state disk. Whether the hardware processing module has an idle channel is determined from the preset data structure; if it does, the hardware task is distributed to the idle channel so that the solid state disk processes the target task.
Here, each background CPU maintains one first-in first-out (FIFO) queue of tasks waiting to run, plus several task FIFO queues, determined by the type of software task, of completed tasks waiting for the foreground CPU to query their running results. Each type of hardware processing module can execute several hardware tasks in parallel, with each executable hardware task corresponding to one channel; each type of hardware processing module maintains a data structure that indicates whether each of its channels is idle and supports finding the next idle channel in constant time.
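As a sketch of the unified software interface (the names, the slave count, and the load metric are assumptions for illustration), dispatch to the least-loaded slave reduces to a scan over per-slave queue lengths; the hardware side can reuse a channel-claiming helper like the one sketched earlier.

```c
#define NUM_SLAVES 3   /* assumed number of background CPUs */

typedef struct slave {
    int queued;        /* software tasks waiting in this slave's FIFO */
} slave_t;

/* Unified software interface: pick the slave with the shortest queue
 * (a developer-forced assignment would bypass this scan). */
static int sw_dispatch(slave_t slaves[NUM_SLAVES])
{
    int best = 0;
    for (int i = 1; i < NUM_SLAVES; i++)
        if (slaves[i].queued < slaves[best].queued)
            best = i;
    slaves[best].queued++;   /* the task joins the chosen slave's queue */
    return best;
}
```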
After the software task is sent to the target background CPU (slave) and the hardware task is sent to the hardware processing module, the task receiving results returned by the target background CPU (slave) and the target solid state disk are received. If the resources of the target background CPU or the target solid state disk are insufficient, the task receiving result is failure; the software task that the target background CPU failed to receive is run on the foreground CPU (master), and the hardware task that the target solid state disk failed to receive is cached until the target solid state disk has sufficient resources, at which point processing of the hardware task continues.
Optionally, if the resources of the target background CPU (slave) are sufficient, the task receiving result is success. The running state of the software task on the target background CPU (slave) is then received; if the running state is success, the task processing completion result returned by the target background CPU (slave) is received.
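The failure path described above can be summarized in a short sketch; the stubbed predicates and helper names below are assumptions invented for illustration only.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct task { const char *name; } task_t;

/* Stubs standing in for the real acceptance checks. */
static bool slave_accepts(const task_t *t) { (void)t; return false; }
static bool hw_accepts(const task_t *t)    { (void)t; return false; }

static void run_on_master(const task_t *t)
{
    printf("master runs rejected software task %s itself\n", t->name);
}

static void cache_for_retry(const task_t *t)
{
    printf("hardware task %s cached until the module frees resources\n", t->name);
}

/* Handle the task receiving results returned by the slave and the module. */
static void handle_receive_results(const task_t *sw, const task_t *hw)
{
    if (!slave_accepts(sw))
        run_on_master(sw);      /* failure: fall back to the foreground CPU */
    if (!hw_accepts(hw))
        cache_for_retry(hw);    /* failure: park until resources suffice    */
}
```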
According to the resource scheduling method provided by the embodiments of the present application, any one CPU in a target system having a plurality of CPUs is selected as a foreground CPU, with the other CPUs as background CPUs; a target task to be processed is acquired; the target task is parsed into software and hardware tasks to obtain the software task and hardware task to be processed; and software and hardware resources are scheduled for the software task and the hardware task through a unified software interface and/or hardware interface so as to process the target task. In this way, unified scheduling and management of the different hardware resources inside the SSD are supported when the FTL software has multiple CPU cores, and an FTL developer can invoke different types of internal hardware resources through a unified interface.
Fig. 4 is a schematic structural diagram of a resource scheduling device according to an embodiment of the present application, which specifically includes:
a setting module 401, configured to select any CPU as a foreground CPU and other CPUs as background CPUs in a target system having multiple CPUs;
an obtaining module 402, configured to obtain a target task to be processed;
the parsing module 403 is configured to parse the software and hardware tasks of the target task to obtain a software task and a hardware task to be processed;
and the scheduling module 404 is configured to schedule software and hardware resources for the software task and the hardware task through a unified software interface and/or hardware interface so as to process the target task.
In a possible implementation manner, the setting module 401 is further configured to judge the running state of each background CPU based on a first first-in first-out queue, maintained by each background CPU, of tasks waiting to run, and a plurality of second first-in first-out queues of completed tasks waiting for the foreground CPU to query their running results, where the running state refers to the number of software tasks waiting to be executed in the task queue of each background CPU; and to judge, based on a preset data structure, whether the hardware processing module has an idle channel.
In a possible implementation manner, the scheduling module 404 is further configured to distribute the software task to at least one background CPU through a unified software interface, and distribute the hardware task to a hardware processing module through a unified hardware interface, so that the hardware processing module performs hardware resource scheduling on the target task.
In a possible implementation manner, the scheduling module 404 is further configured to set corresponding software memory requirements and hardware memory requirements for the software task and the hardware task, respectively; selecting a target background CPU and a target solid state disk which meet preset conditions from other CPUs based on the memory requirement; and distributing the software task to the target background CPU through a unified software interface, and distributing the hardware task to a hardware processing module positioned in the target solid state disk through a unified hardware interface.
In a possible implementation manner, the scheduling module 404 is further configured to allocate a first target memory area for the target background CPU based on the software memory requirement, and allocate a second target memory area for the target solid state hard disk based on the hardware memory requirement.
In a possible implementation manner, the scheduling module 404 is further configured to receive a task receiving result returned by the target background CPU and the hardware processing module; if the resources of the target background CPU or the hardware processing module are insufficient, the task receiving result is failure; and running the software task of which the target background CPU fails to receive in the foreground CPU, and caching the hardware task of which the hardware processing module fails to receive until the hardware processing module has sufficient resources, and continuing to process the hardware task.
In a possible implementation manner, the scheduling module 404 is further configured to, if the resources of the target background CPU are sufficient, determine that the task reception result is successful; receiving a task receiving result of the target background CPU on the software task; if the task receiving result is successful, receiving a task processing result returned by the target background CPU; and if the task receiving result is failure, running the software task with the failure task receiving result in a foreground CPU.
The resource scheduling device provided in this embodiment may be a resource scheduling device as shown in fig. 4, and may perform all steps of the resource scheduling method as shown in fig. 2-3, so as to achieve the technical effects of the resource scheduling method as shown in fig. 2-3, and the detailed description will be omitted herein for brevity.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and an electronic device 500 shown in fig. 5 includes: at least one processor 501, memory 502, at least one network interface 504, and other user interfaces 503. The various components in the electronic device 500 are coupled together by a bus system 505. It is understood that bus system 505 is used to enable connected communications between these components. The bus system 505 includes a power bus, a control bus, and a status signal bus in addition to a data bus. But for clarity of illustration the various buses are labeled as bus system 505 in fig. 5.
The user interface 503 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, a trackball, a touch pad, or a touch screen, etc.).
It will be appreciated that the memory 502 in embodiments of the application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 502 described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some implementations, the memory 502 stores the following elements, executable units or data structures, or a subset thereof, or an extended set thereof: an operating system 5021 and application programs 5022.
The operating system 5021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application 5022 includes various application programs such as a Media Player (Media Player), a Browser (Browser), and the like for realizing various application services. A program for implementing the method according to the embodiment of the present application may be included in the application 5022.
In the embodiment of the present application, the processor 501 is configured to execute the method steps provided by the method embodiments by calling a program or an instruction stored in the memory 502, specifically, a program or an instruction stored in the application 5022, for example, including:
selecting any CPU from a target system with a plurality of CPUs as a foreground CPU and other CPUs as background CPUs; acquiring a target task to be processed; analyzing the software and hardware tasks of the target task to obtain a software task and a hardware task to be processed; and scheduling software and hardware resources of the software task and the hardware task through a unified software interface and/or hardware interface so as to process the target task.
In one possible implementation, the software task is distributed to at least one background CPU through a unified software interface, and the hardware task is distributed to a hardware processing module through a unified hardware interface, so that the hardware processing module performs hardware resource scheduling on the target task.
In one possible implementation manner, corresponding software memory requirements and hardware memory requirements are set for the software task and the hardware task respectively; selecting a target background CPU and a target solid state disk which meet preset conditions from other CPUs based on the memory requirement; and distributing the software task to the target background CPU through a unified software interface, and distributing the hardware task to a hardware processing module positioned in the target solid state disk through a unified hardware interface.
In one possible implementation, a first target memory area is allocated for the target background CPU based on the software memory requirement, and a second target memory area is allocated for the target solid state disk based on the hardware memory requirement.
In one possible implementation manner, receiving task receiving results returned by the target background CPU and the hardware processing module; if the resources of the target background CPU or the hardware processing module are insufficient, the task receiving result is failure; and running the software task of which the target background CPU fails to receive in the foreground CPU, and caching the hardware task of which the hardware processing module fails to receive until the hardware processing module has sufficient resources, and continuing to process the hardware task.
In one possible implementation manner, if the resources of the target background CPU are sufficient, the task receiving result is successful; receiving a task receiving result of the target background CPU on the software task; if the task receiving result is successful, receiving a task processing result returned by the target background CPU; and if the task receiving result is failure, running the software task with the failure task receiving result in a foreground CPU.
In one possible implementation manner, the running state of each background CPU is judged based on a first first-in first-out queue, maintained by each background CPU, of tasks waiting to run, and a plurality of second first-in first-out queues of completed tasks waiting for the foreground CPU to query their running results, wherein the running state refers to the number of software tasks waiting to be executed in the task queue of each background CPU;
judging whether the hardware processing module has an idle channel or not based on a preset data structure.
The method disclosed in the above embodiments of the present application may be applied to the processor 501 or implemented by the processor 501. The processor 501 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuitry in hardware or by software instructions in the processor 501. The processor 501 may be a general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly as execution by a hardware decoding processor, or as execution by a combination of hardware and software units in a decoding processor. The software units may be located in a random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in the memory 502, and the processor 501 reads the information in the memory 502 and, in combination with its hardware, performs the steps of the method described above.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Digital Signal Processing Devices (DSPD), Programmable Logic Devices (PLD), Field-Programmable Gate Arrays (FPGA), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by means of units that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
The electronic device provided in this embodiment may be an electronic device as shown in fig. 5, and may perform all steps of the resource scheduling method shown in fig. 2-3, so as to achieve the technical effects of the resource scheduling method shown in fig. 2-3, and the detailed description will be omitted herein for brevity.
The embodiment of the application also provides a storage medium (computer readable storage medium). The storage medium here stores one or more programs. Wherein the storage medium may comprise volatile memory, such as random access memory; the memory may also include non-volatile memory, such as read-only memory, flash memory, hard disk, or solid state disk; the memory may also comprise a combination of the above types of memories.
When one or more programs in the storage medium are executable by one or more processors, the above-described resource scheduling method executed on the electronic device side is implemented.
The processor is configured to execute a resource scheduling program stored in the memory, so as to implement the following steps of a resource scheduling method executed on the electronic device side:
selecting any CPU from a target system with a plurality of CPUs as a foreground CPU and other CPUs as background CPUs; acquiring a target task to be processed; analyzing the software and hardware tasks of the target task to obtain a software task and a hardware task to be processed; and scheduling software and hardware resources of the software task and the hardware task through a unified software interface and/or hardware interface so as to process the target task.
In one possible implementation, the software task is distributed to at least one background CPU through a unified software interface, and the hardware task is distributed to a hardware processing module through a unified hardware interface, so that the hardware processing module performs hardware resource scheduling on the target task.
In one possible implementation manner, corresponding software memory requirements and hardware memory requirements are set for the software task and the hardware task respectively; selecting a target background CPU and a target solid state disk which meet preset conditions from other CPUs based on the memory requirement; and distributing the software task to the target background CPU through a unified software interface, and distributing the hardware task to a hardware processing module positioned in the target solid state disk through a unified hardware interface.
In one possible implementation, a first target memory area is allocated for the target background CPU based on the software memory requirement, and a second target memory area is allocated for the target solid state disk based on the hardware memory requirement.
In one possible implementation manner, receiving task receiving results returned by the target background CPU and the hardware processing module; if the resources of the target background CPU or the hardware processing module are insufficient, the task receiving result is failure; and running the software task of which the target background CPU fails to receive in the foreground CPU, and caching the hardware task of which the hardware processing module fails to receive until the hardware processing module has sufficient resources, and continuing to process the hardware task.
In one possible implementation manner, if the resources of the target background CPU are sufficient, the task receiving result is successful; receiving a task receiving result of the target background CPU on the software task; if the task receiving result is successful, receiving a task processing result returned by the target background CPU; and if the task receiving result is failure, running the software task with the failure task receiving result in a foreground CPU.
In one possible implementation manner, the running state of each background CPU is judged based on a first first-in first-out queue, maintained by each background CPU, of tasks waiting to run, and a plurality of second first-in first-out queues of completed tasks waiting for the foreground CPU to query their running results, wherein the running state refers to the number of software tasks waiting to be executed in the task queue of each background CPU;
judging whether the hardware processing module has an idle channel or not based on a preset data structure.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of function in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing is merely a description of specific embodiments of the present application and is not intended to limit the scope of protection of the application; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the application are intended to be included within its scope of protection.

Claims (10)

1. A method for scheduling resources, comprising:
selecting any CPU from a target system with a plurality of CPUs as a foreground CPU and other CPUs as background CPUs;
acquiring a target task to be processed;
analyzing the software and hardware tasks of the target task to obtain a software task and a hardware task to be processed;
and scheduling software and hardware resources of the software task and the hardware task through a unified software interface and/or hardware interface so as to process the target task.
2. The method according to claim 1, wherein said scheduling software and hardware resources for the software and hardware tasks to process the target task through a unified software interface and/or hardware interface comprises:
and distributing the software task to at least one background CPU through a unified software interface, and distributing the hardware task to a hardware processing module through a unified hardware interface so that the hardware processing module performs hardware resource scheduling on the target task.
3. The method of claim 2, wherein the distributing the software tasks to at least one background CPU through a unified software interface and distributing the hardware tasks to a hardware processing module through a unified hardware interface to cause the hardware processing module to schedule hardware resources for the target tasks comprises:
setting corresponding software memory requirements and hardware memory requirements for the software task and the hardware task respectively;
selecting at least one target background CPU and one target solid state disk which meet preset conditions from other CPUs based on the memory requirement;
and distributing the software task to the target background CPU through a unified software interface, and distributing the hardware task to a hardware processing module positioned in the target solid state disk through a unified hardware interface.
4. A method according to claim 3, characterized in that the method further comprises:
and allocating a first target memory area for the target background CPU based on the software memory requirement, and allocating a second target memory area for the target solid state disk based on the hardware memory requirement.
5. The method of claim 3, wherein the distributing the software tasks to the target background CPU through a unified software interface and the hardware tasks to a hardware processing module located inside the target solid state disk through a unified hardware interface comprises:
receiving task receiving results returned by the target background CPU and the hardware processing module;
if the resources of the target background CPU or the hardware processing module are insufficient, the task receiving result is failure;
and running the software task of which the target background CPU fails to receive in the foreground CPU, and caching the hardware task of which the hardware processing module fails to receive until the hardware processing module has sufficient resources, and continuing to process the hardware task.
6. The method of claim 5, wherein the method further comprises:
if the resources of the target background CPU are sufficient, the task receiving result is successful;
receiving a task receiving result of the target background CPU on the software task;
if the task receiving result is successful, receiving a task processing result returned by the target background CPU;
and if the task receiving result is failure, running the software task with the failure task receiving result in a foreground CPU.
7. The method according to claim 1, wherein the method further comprises:
judging the running state of each background CPU based on a first first-in first-out queue, maintained by each background CPU, of tasks waiting to run, and a plurality of second first-in first-out queues of completed tasks waiting for the foreground CPU to query their running results, wherein the running state refers to the number of software tasks waiting to be executed in the task queue of each background CPU;
judging whether the hardware processing module has an idle channel or not based on a preset data structure.
8. A resource scheduling apparatus, comprising:
the setting module is used for selecting any CPU from a target system with a plurality of CPUs as a foreground CPU and other CPUs as background CPUs;
the acquisition module is used for acquiring a target task to be processed;
the analysis module is used for analyzing the software and hardware tasks of the target task to obtain a software task and a hardware task to be processed;
and the scheduling module is used for scheduling the software and hardware resources of the software task and the hardware task through a unified software interface and/or hardware interface so as to process the target task.
9. An electronic device, comprising: a processor and a memory, the processor being configured to execute a resource scheduler stored in the memory to implement the resource scheduling method of any one of claims 1 to 7.
10. A storage medium storing one or more programs executable by one or more processors to implement the resource scheduling method of any one of claims 1-7.
CN202311119390.XA 2023-08-31 2023-08-31 Resource scheduling method and device, electronic equipment and storage medium Pending CN117130787A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311119390.XA CN117130787A (en) 2023-08-31 2023-08-31 Resource scheduling method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117130787A 2023-11-28

Family

ID=88856186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311119390.XA Pending CN117130787A (en) 2023-08-31 2023-08-31 Resource scheduling method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117130787A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination