CN115686782A - Resource scheduling method and device based on solid state disk, electronic equipment and storage medium

Publication number: CN115686782A
Authority: CN (China)
Prior art keywords: data, cache, management, data cache, scheduling
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number: CN202211276782.2A
Other languages: Chinese (zh)
Inventors: 李敬超, 钟戟, 赵宝林, 王鑫, 邓京涛
Assignee (current and original): Suzhou Inspur Intelligent Technology Co Ltd
Application filed by Suzhou Inspur Intelligent Technology Co Ltd
Priority: CN202211276782.2A
Publication: CN115686782A

Abstract

The invention provides a resource scheduling method based on a solid state disk, applied to a write processing device. The method comprises: receiving a plurality of to-be-executed first data cache blocks sent by a plurality of data management devices; executing write actions according to the plurality of first data cache blocks and generating a plurality of executed second data cache blocks; and, after all the write actions are executed, accessing the first cache management linked lists and the cache regions corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices. By determining the number of data cache blocks in each first cache management linked list and whether a second data cache block is already stored in the corresponding cache region, the method distributes an approximately equal number of second data cache blocks to each data management device in the solid state disk in a round-robin scheduling manner for command and data processing during reading and writing, thereby achieving balanced resource scheduling of the solid state disk.

Description

Resource scheduling method and device based on solid state disk, electronic equipment and storage medium
Technical Field
The present invention relates to the field of resource scheduling, and in particular, to a resource scheduling method and apparatus based on a solid state disk, an electronic device, and a storage medium.
Background
The SSD master control chip is a CPU with an SMP architecture: it consists of multiple cores of identical performance running an embedded operating system, with each module deployed as a task on its own core. To meet performance requirements, the DM module responsible for read and write tasks is deployed on multiple cores, and the same number of context resources is allocated in each core for command and data processing during reading and writing. For a write operation, the DM module receives user data issued by the Host and stores it into a CCB; after certain processing, the CCB is transferred to the WM, which performs the actual write action; after the write completes, the WM releases the CCB back to the DM, and the cycle repeats. When the CCBs inside a DM are all consumed, that DM suspends receiving because Host data can no longer be stored, and it can continue operating only after the WM releases CCBs back to it. In scenarios of large-data-block writes or mixed read/write operations, because the WM processes all CCBs transferred by one DM completely before processing the CCBs of the next DM, the CCBs of the other DMs are used up while they wait, and new write commands cannot be executed.
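The exhaustion scenario described above can be sketched as a toy simulation (the DM count, CCB count per DM, and all names are illustrative assumptions, not values from the patent):

```python
# Toy model of the background problem: the WM returns one DM's CCBs in full
# before touching the next DM, so the other DMs stay starved of CCBs.
from collections import deque

NUM_DMS, CCBS_PER_DM = 4, 8  # assumed sizes for illustration

class DM:
    def __init__(self, dm_id):
        self.dm_id = dm_id
        self.free_ccbs = CCBS_PER_DM   # per-core context resources
        self.sent = deque()            # CCBs handed to the WM

    def host_write(self):
        """Store host data in a CCB and pass it to the WM; None if exhausted."""
        if self.free_ccbs == 0:
            return None                # must suspend: host data has nowhere to go
        self.free_ccbs -= 1
        self.sent.append(("ccb", self.dm_id))
        return self.sent[-1]

dms = [DM(i) for i in range(NUM_DMS)]
for dm in dms:                         # every DM fills all of its CCBs
    for _ in range(CCBS_PER_DM):
        dm.host_write()

# "Owner" release: the WM finishes and returns DM0's CCBs first, in full,
# while DM1..DM3 remain exhausted and cannot accept new write commands.
while dms[0].sent:
    dms[0].sent.popleft()
    dms[0].free_ccbs += 1

starved = [dm.dm_id for dm in dms if dm.free_ccbs == 0]
print(starved)  # DM1..DM3 are still starved
```

This is the imbalance the round-robin release described later is designed to remove.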
Disclosure of Invention
In view of the foregoing, it is desirable to provide a resource scheduling method and apparatus based on a solid state disk, an electronic device, and a storage medium, which are capable of balancing resources of the solid state disk.
In a first aspect, a resource scheduling method based on a solid state disk is provided, and is applied to a write processing apparatus, where the method includes:
receiving a plurality of to-be-executed first data cache blocks sent by a plurality of data management devices;
executing a write action according to the plurality of first data cache blocks and generating a plurality of executed second data cache blocks;
and when all the write actions are executed, accessing the first cache management linked lists and the cache regions corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices.
In one embodiment, after all the write actions are executed, the accessing the first cache management linked lists and the cache regions corresponding to the plurality of data management apparatuses and scheduling the plurality of second data cache blocks to the plurality of data management apparatuses includes:
determining a target data management apparatus from the plurality of data management apparatuses according to the management sequence number;
determining whether the number of data cache blocks stored in a first target cache management linked list corresponding to the target data management device is smaller than a storage threshold;
if not, accessing the remaining data management devices in the plurality of data management devices according to the management sequence numbers;
if so, determining whether a target cache region corresponding to the target data management device stores the second data cache block or not, and scheduling the second data cache block to the target data management device.
In one embodiment, the determining whether the target cache area corresponding to the target data management apparatus stores the second data cache block and scheduling the second data cache block to the target data management apparatus includes:
determining whether the target cache region stores the second data cache block;
if so, stopping scheduling and waiting until the second data cache block is no longer stored in the target cache region;
and if not, taking out a second data cache block from a second cache management linked list of the write processing device and scheduling the second data cache block to the target cache region.
In one embodiment, said fetching a second data cache block from a second cache management linked list of said write processing apparatus and scheduling said second data cache block to said target cache region comprises:
accessing the other data management devices according to the management sequence number;
accessing the other cache management linked lists and the other cache regions of the other data management devices and scheduling the second data cache block to the other data management devices;
and when no second data cache block exists in the second cache management linked list, stopping accessing the plurality of data management devices.
In one embodiment, a resource scheduling method based on a solid state disk is further provided, and is applied to a data management device, where the method includes:
receiving user data and storing the user data into a first data cache block in a first cache management linked list;
sending the first data cache block to a write processing device;
and receiving a second data cache block scheduled by the write processing device and scheduling the second data cache block according to a cache region set by a user.
In one embodiment, the receiving the second data cache block scheduled by the write processing apparatus and scheduling the second data cache block according to a cache region set by a user includes:
determining whether the cache region stores the second data cache block;
if not, receiving a second data cache block sent by the write processing device;
and if so, determining whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold and scheduling the second data cache block accordingly.
In one embodiment, the determining whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold and scheduling the second data cache block includes:
determining whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold;
if so, stopping scheduling the second data cache block and waiting until the number of data cache blocks is no longer smaller than the storage threshold;
and if not, taking out a second data cache block from the cache region and scheduling it to the first cache management linked list.
In another aspect, a resource scheduling apparatus based on a solid state disk is provided, which is applied to a write processing apparatus, and includes:
a receiving module, configured to receive a plurality of to-be-executed first data cache blocks sent by a plurality of data management devices;
an execution module, configured to execute write actions according to the plurality of first data cache blocks and generate a plurality of executed second data cache blocks;
and a first scheduling module, configured to access, after all the write actions are executed, the first cache management linked lists and the cache regions corresponding to the plurality of data management devices and schedule the plurality of second data cache blocks to the plurality of data management devices.
In one embodiment, after all the write actions are executed, the accessing, by the first scheduling module, the first cache management linked lists and the cache regions corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices includes:
determining a target data management apparatus from the plurality of data management apparatuses according to the management sequence number;
determining whether the number of data cache blocks stored in a first target cache management linked list corresponding to the target data management device is smaller than a storage threshold;
if not, accessing the remaining data management devices in the plurality of data management devices according to the management sequence numbers;
if so, determining whether a target cache region corresponding to the target data management device stores the second data cache block or not, and scheduling the second data cache block to the target data management device.
In one embodiment, the determining, by the first scheduling module, whether the target cache region corresponding to the target data management device stores the second data cache block and scheduling the second data cache block to the target data management device includes:
determining whether the target cache region stores the second data cache block;
if so, stopping scheduling and waiting until the second data cache block is no longer stored in the target cache region;
and if not, taking out a second data cache block from a second cache management linked list of the write processing device and scheduling the second data cache block to the target cache region.
In one embodiment, the taking out, by the first scheduling module, a second data cache block from a second cache management linked list of the write processing apparatus and scheduling the second data cache block to the target cache region includes:
accessing the remaining data management devices according to the management sequence numbers;
accessing the other cache management linked lists and the other cache regions of the other data management devices and scheduling the second data cache block to the other data management devices;
and when no second data cache block exists in the second cache management linked list, stopping accessing the plurality of data management devices.
In one embodiment, a resource scheduling apparatus based on a solid state disk is further provided, which is applied to a data management apparatus, and the apparatus includes:
the storage module is used for receiving user data and storing the user data into a first data cache block in a first cache management linked list;
a sending module, configured to send the first data cache block to a write processing apparatus;
and the second scheduling module is used for receiving the second data cache block scheduled by the write processing device and scheduling the second data cache block according to a cache region set by a user.
In one embodiment, the receiving, by the second scheduling module, the second data cache block scheduled by the write processing apparatus and scheduling the second data cache block according to a cache region set by a user includes:
determining whether the cache region stores the second data cache block;
if not, receiving a second data cache block sent by the write processing device;
and if so, determining whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold and scheduling the second data cache block accordingly.
In one embodiment, the determining, by the second scheduling module, whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold and scheduling the second data cache block includes:
determining whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold;
if so, stopping scheduling the second data cache block and waiting until the number of data cache blocks is no longer smaller than the storage threshold;
and if not, taking out a second data cache block from the cache region and scheduling it to the first cache management linked list.
In another aspect, an electronic device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor executes the computer program to implement the following steps:
receiving a plurality of to-be-executed first data cache blocks sent by a plurality of data management devices;
executing a write action according to the plurality of first data cache blocks and generating a plurality of executed second data cache blocks;
and when all the write actions are executed, accessing the first cache management linked lists and the cache regions corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices.
In one embodiment, the processor, when executing the computer program, performs the steps of:
when all the write actions are executed, the accessing the first cache management linked lists and the cache regions corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices includes:
determining a target data management apparatus from the plurality of data management apparatuses according to the management sequence number;
determining whether the number of data cache blocks stored in a first target cache management linked list corresponding to the target data management device is smaller than a storage threshold;
if not, accessing the remaining data management devices in the plurality of data management devices according to the management sequence numbers;
if so, determining whether a target cache region corresponding to the target data management device stores the second data cache block or not, and scheduling the second data cache block to the target data management device.
In one embodiment, the processor, when executing the computer program, performs the steps of:
the determining whether the target cache area corresponding to the target data management device stores the second data cache block and scheduling the second data cache block to the target data management device includes:
determining whether the target cache region stores the second data cache block;
if so, stopping scheduling and waiting until the second data cache block is no longer stored in the target cache region;
and if not, taking out a second data cache block from a second cache management linked list of the write processing device and scheduling the second data cache block to the target cache region.
In one embodiment, the processor, when executing the computer program, performs the steps of:
the step of taking out a second data cache block from a second cache management linked list of the write processing device and scheduling the second data cache block to the target cache region includes:
accessing the remaining data management devices according to the management sequence numbers;
accessing the other cache management linked lists and the other cache regions of the other data management devices and scheduling the second data cache block to the other data management devices;
and when no second data cache block exists in the second cache management linked list, stopping accessing the plurality of data management devices.
In one embodiment, the processor, when executing the computer program, performs the steps of:
receiving user data and storing the user data into a first data cache block in a first cache management linked list;
sending the first data cache block to a write processing device;
and receiving a second data cache block scheduled by the write processing device and scheduling the second data cache block according to a cache region set by a user.
In one embodiment, the processor, when executing the computer program, performs the steps of:
the receiving the second data cache block scheduled by the write processing device and scheduling the second data cache block according to a cache region set by a user includes:
determining whether the cache region stores the second data cache block;
if not, receiving a second data cache block sent by the write processing device;
and if so, determining whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold and scheduling the second data cache block accordingly.
In one embodiment, the processor, when executing the computer program, performs the steps of:
the determining whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold and scheduling the second data cache block comprises:
determining whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold;
if so, stopping scheduling the second data cache block and waiting until the number of data cache blocks is no longer smaller than the storage threshold;
and if not, taking out a second data cache block from the cache region and scheduling it to the first cache management linked list.
In yet another aspect, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of:
receiving a plurality of to-be-executed first data cache blocks sent by a plurality of data management devices;
executing a write action according to the plurality of first data cache blocks and generating a plurality of executed second data cache blocks;
and when all the write actions are executed, accessing the first cache management linked lists and the cache regions corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices.
In one embodiment, the computer program when executed by a processor implements the steps of:
when all the write actions are executed, the accessing the first cache management linked lists and the cache regions corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices includes:
determining a target data management apparatus from the plurality of data management apparatuses according to the management sequence number;
determining whether the number of data cache blocks stored in a first target cache management linked list corresponding to the target data management device is smaller than a storage threshold;
if not, accessing the remaining data management devices in the plurality of data management devices according to the management sequence numbers;
if so, determining whether a target cache region corresponding to the target data management device stores the second data cache block or not, and scheduling the second data cache block to the target data management device.
In one embodiment, the computer program when executed by a processor implements the steps of:
the determining whether the target cache area corresponding to the target data management device stores the second data cache block and scheduling the second data cache block to the target data management device includes:
determining whether the target cache region stores the second data cache block;
if so, stopping scheduling and waiting until the second data cache block is no longer stored in the target cache region;
and if not, taking out a second data cache block from a second cache management linked list of the write processing device and scheduling the second data cache block to the target cache region.
In one embodiment, the computer program when executed by a processor performs the steps of:
the step of taking out a second data cache block from a second cache management linked list of the write processing device and scheduling the second data cache block to the target cache region includes:
accessing the remaining data management devices according to the management sequence numbers;
accessing the other cache management linked lists and the other cache regions of the other data management devices and scheduling the second data cache block to the other data management devices;
and when no second data cache block exists in the second cache management linked list, stopping accessing the plurality of data management devices.
In one embodiment, the computer program when executed by a processor implements the steps of:
receiving user data and storing the user data into a first data cache block in a first cache management linked list;
sending the first data cache block to a write processing device;
and receiving a second data cache block scheduled by the write processing device and scheduling the second data cache block according to a cache region set by a user.
In one embodiment, the computer program when executed by a processor implements the steps of:
the receiving the second data cache block scheduled by the write processing device and scheduling the second data cache block according to a cache region set by a user includes:
determining whether the cache region stores the second data cache block;
if not, receiving a second data cache block sent by the write processing device;
and if so, determining whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold and scheduling the second data cache block accordingly.
In one embodiment, the computer program when executed by a processor implements the steps of:
the determining whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold and scheduling the second data cache block comprises:
determining whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold;
if so, stopping scheduling the second data cache block and waiting until the number of data cache blocks is no longer smaller than the storage threshold;
and if not, taking out a second data cache block from the cache region and scheduling it to the first cache management linked list.
By receiving a plurality of to-be-executed first data cache blocks sent by a plurality of data management devices, executing write actions according to the plurality of first data cache blocks and generating a plurality of executed second data cache blocks, and, after all the write actions are executed, accessing the first cache management linked lists and the cache regions corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices, the scheme determines the number of data cache blocks in each first cache management linked list and whether a second data cache block is already stored in the corresponding cache region, and distributes an approximately equal total number of second data cache blocks to each data management device in the solid state disk in a round-robin scheduling manner for command and data processing during reading and writing, thereby achieving balanced resource scheduling of the solid state disk.
Drawings
FIG. 1 is a schematic flowchart of a resource scheduling method based on a solid state disk;
FIG. 2 is a schematic diagram illustrating steps of a resource scheduling method based on a solid state disk;
FIG. 3 is a schematic diagram of a resource scheduling apparatus based on a solid state disk;
FIG. 4 is an internal structure diagram of a computer device in an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present application and not to limit it.
The method provided by the present application can be applied to the flow of the solid-state-disk-based resource scheduling method shown in fig. 1. It determines whether to schedule a second data cache block to a data management device by determining whether the number of data cache blocks stored in the first cache management linked list is smaller than a storage threshold and whether a second data cache block is already stored in the cache region, and implements balanced resource scheduling of the solid state disk in a round-robin scheduling manner. In an SSD (Solid State Disk), there is only one WM (Write Management module, i.e., the write processing device), while there may be a plurality of DMs (Data Management modules, i.e., the data management devices) and a plurality of CCBs (Cache Control Blocks, i.e., data cache blocks); each DM and the WM has its own corresponding cache management linked list.
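The relationships above can be written down as minimal data structures (a sketch only; the one-slot cache region and all field names are assumptions made for illustration):

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class DataManager:                 # DM: data management device
    dm_id: int
    linked_list: deque = field(default_factory=deque)   # first cache management linked list
    cache_region: object = None                         # temporarily holds a WM-released CCB

@dataclass
class WriteManager:                # WM: the single write processing device
    linked_list: deque = field(default_factory=deque)   # second cache management linked list
    dms: list = field(default_factory=list)             # DMs visited in sequence-number order

wm = WriteManager(dms=[DataManager(i) for i in range(8)])
print(len(wm.dms))  # 8
```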
In an embodiment, as shown in fig. 2, the present invention provides a resource scheduling method based on a solid state disk, which is applied to a write processing apparatus, and the method includes:
S201, receiving a plurality of to-be-executed first data cache blocks sent by a plurality of data management devices;
S202, executing write actions according to the plurality of first data cache blocks and generating a plurality of executed second data cache blocks;
S203, after all the write actions are executed, accessing the first cache management linked lists and the cache regions corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices.
Specifically, the scheme may use 8 DMs and 1 WM. The 8 DMs send their to-be-executed CCBs to the WM through respective message channels, and the WM receives the to-be-executed CCBs from the corresponding channels in DM0-to-DM7 order and performs write processing. After all processing is completed, the executed CCBs are released to the DMs in round-robin fashion; the former "owner" mode, in which a CCB was always returned to the DM it came from, is no longer used. When the number of CCBs inside a DM reaches the storage threshold, no more CCBs are released to that DM, and release continues with the next DM.
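When there are more executed CCBs than DMs, a round-robin release yields per-DM counts that differ by at most one, which is the "approximately equal" distribution the scheme aims for. A minimal sketch (the CCB count is an assumed illustrative value):

```python
# Hypothetical sizes: 8 DMs, 20 executed CCBs waiting in the WM.
NUM_DMS = 8
executed_ccbs = list(range(20))

received = [0] * NUM_DMS
for i, _ccb in enumerate(executed_ccbs):
    received[i % NUM_DMS] += 1     # release to DM0..DM7 in turn, then wrap around

print(received)  # [3, 3, 3, 3, 2, 2, 2, 2]
```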
In one embodiment, after all the write actions are executed, the accessing the first cache management link table and the cache area corresponding to the plurality of data management apparatuses and scheduling the plurality of second data cache blocks to the plurality of data management apparatuses includes:
determining a target data management apparatus from the plurality of data management apparatuses according to the management sequence number;
determining whether all data cache blocks stored in a first target cache management linked list corresponding to the target data management device are smaller than a storage threshold value;
if not, accessing the rest data management devices in the plurality of data management devices according to the management serial numbers;
if so, determining whether a target cache region corresponding to the target data management device stores the second data cache block or not, and scheduling the second data cache block to the target data management device.
Specifically, the WM determines DM0 as the target data management device from the 8 DMs according to the management sequence number; that is, it first queries the cache management linked list in DM0 and determines whether the number of CCBs stored there is smaller than the storage threshold. If so, DM0 can continue to receive executed CCBs (second data cache blocks) sent by the WM, and the cache region in DM0 is then queried. If the number is greater than or equal to the storage threshold, the cache management linked list of DM0 cannot receive CCBs scheduled by the WM. The storage threshold is set by the user and is generally lower than the maximum number of CCBs that DM0 can store.
In one embodiment, the determining whether the target cache area corresponding to the target data management apparatus stores the second data cache block and scheduling the second data cache block to the target data management apparatus includes:
determining whether the target cache region stores the second data cache block;
if so, stopping scheduling and waiting for the second data cache block not stored in the target cache region;
and if not, taking out a second data cache block from a second cache management linked list of the write processing device and scheduling the second data cache block to the target cache region.
Specifically, as described above, when an executed CCB is already stored in the cache region of DM0, scheduling is stopped; the WM waits for DM0 to move the CCB in the cache region into its own cache management linked list, and once that is done, the WM resumes scheduling a CCB to the DM0 cache region.
In one embodiment, said taking out a second data cache block from a second cache management linked list of said write processing apparatus and scheduling said second data cache block to said target cache region comprises:
accessing the remaining data management devices according to the management sequence numbers;
accessing the other cache management linked lists and the other cache regions of the other data management devices and scheduling the second data cache block to the other data management devices;
and when the second cache block does not exist in the second cache management linked list, stopping accessing the plurality of data management devices.
Specifically, as described above, after a CBB has been scheduled to DM0, the cache management linked lists and cache areas of DM1 through DM7 are accessed sequentially, with the same operation as above: when the number of CBBs stored in the cache management linked list of DM1 is smaller than the storage threshold, a CBB is scheduled to the cache area of DM1; when the number of CBBs stored in the cache management linked list of DM2 is not smaller than the storage threshold, DM2 is skipped and DM3 is accessed next. This continues until no executed CBB remains to be scheduled in the cache management linked list of the WM.
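The round-robin pass described above can be sketched as a small loop; the dictionary fields, threshold value, and termination logic are illustrative assumptions, not the patent's firmware interfaces:

```python
from collections import deque

def wm_round_robin(wm_chain, dms):
    """Round-robin dispatch of executed CBBs from the WM's second cache
    management linked list to the DMs' cache areas (a sketch; the real
    firmware interfaces are not given in the patent)."""
    i = 0
    stalled = 0
    while wm_chain and stalled < len(dms):
        dm = dms[i % len(dms)]
        i += 1
        if len(dm["chain"]) < dm["threshold"] and dm["cache_area"] is None:
            dm["cache_area"] = wm_chain.popleft()  # schedule one CBB, then move to the next DM
            stalled = 0
        else:
            stalled += 1  # skip this DM: list at threshold or cache area occupied
    # loop ends when the WM chain is empty or every DM is temporarily blocked

dms = [{"chain": [], "cache_area": None, "threshold": 2} for _ in range(8)]
wm_chain = deque(f"cbb{i}" for i in range(10))
wm_round_robin(wm_chain, dms)
print(len(wm_chain))  # 2: one CBB parked per DM, two left waiting in the WM chain
```

Because each pass hands at most one CBB to each DM before moving on, the DMs end up with approximately equal numbers of CBBs, which is the balance property the patent claims.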
In one embodiment, a resource scheduling method based on a solid state disk is further provided, and is applied to a data management apparatus, and the method includes:
s204, receiving user data and storing the user data into a first data cache block in a first cache management linked list;
s205, sending the first data cache block to a writing processing device;
s206, receiving the second data cache block scheduled by the writing processing device and scheduling the second data cache block according to a cache region set by a user.
Specifically, before the balanced resource scheduling is performed, a cache area is set for each DM to temporarily store the CBBs released by the WM. During a write, a DM receives multiple sets of user data delivered by the Host, stores the user data into "blank" CCBs in its own cache management linked list, and, after some processing, sends the CCBs to be executed to the WM for the actual write. After the write action is completed, the executed CBBs scheduled by the WM are received back. Note that each DM may send one or more CCBs to the WM, and may likewise receive one or more CBBs scheduled by the WM.
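The DM-side write path (Host data into a blank CCB, then hand-off to the WM) can be sketched as follows; the class, its capacity limit, and the state field are illustrative assumptions:

```python
from collections import deque

class DM:
    """Sketch of the DM-side write path: Host data -> blank CCB -> send to WM."""
    def __init__(self, capacity=8):
        self.chain = deque()   # first cache management linked list
        self.capacity = capacity

    def receive_user_data(self, user_data):
        # Store incoming Host data into a "blank" CCB appended to the linked list.
        if len(self.chain) >= self.capacity:
            raise BufferError("no blank CCB available")
        self.chain.append({"data": user_data, "state": "to_execute"})

    def send_to_wm(self, wm_queue):
        # Hand every to-be-executed CCB to the WM for the actual write.
        while self.chain:
            wm_queue.append(self.chain.popleft())

dm, wm_queue = DM(), deque()
dm.receive_user_data(b"host-block-0")
dm.send_to_wm(wm_queue)
print(len(wm_queue))  # 1: the CCB now waits at the WM for the write action
```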
In one embodiment, the receiving the second data cache block scheduled by the write processing apparatus and scheduling the second data cache block according to a buffer set by a user includes:
determining whether the cache region stores the second data cache block;
if not, receiving a second data cache block sent by the write processing device;
and if so, determining whether all data cache blocks in the first cache management linked list are smaller than the storage threshold value and scheduling the second data cache block.
Specifically, taking DM0 as an example, DM0 first determines whether its own cache area stores an executed CBB. If so, the WM already scheduled a CBB to DM0 in a previous round that has not yet been moved out (possible reasons include the cache management linked list being full or a fault having occurred), so DM0 refuses to receive another CBB scheduled by the WM and instead determines whether the number of data cache blocks in its cache management linked list is smaller than the storage threshold. If the cache area does not store a CBB, DM0 receives the CBB scheduled by the WM into its own cache area.
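The branch logic above condenses into a small decision function; the return labels and parameter names are illustrative assumptions, not terms from the patent:

```python
def dm_can_receive(cache_area_occupied: bool, chain_len: int, threshold: int) -> str:
    """Decision a DM makes when the WM offers an executed CBB (illustrative)."""
    if not cache_area_occupied:
        return "accept"            # cache area empty: take the CBB into the cache area
    if chain_len < threshold:
        return "drain_then_retry"  # move the parked CBB into the linked list first
    return "reject"                # list at threshold and cache area occupied: refuse

print(dm_can_receive(False, 0, 8))  # accept
print(dm_can_receive(True, 3, 8))   # drain_then_retry
print(dm_can_receive(True, 8, 8))   # reject
```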
In one embodiment, the determining whether all data cache blocks in the first cache management linked list are less than the storage threshold and scheduling the second data cache block includes:
determining whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold;
if not, stopping scheduling the second data cache block and waiting until the number of data cache blocks falls below the storage threshold;
and if so, taking out the second data cache block from the cache region and dispatching it to the first cache management linked list.
Specifically, as described above, when it is determined that the number of data cache blocks in the cache management linked list is smaller than the storage threshold, the CBB stored in the cache region is scheduled into the DM's own cache management linked list, the process data and the like in the cache region are emptied, and the DM then waits to receive the next CBB scheduled by the WM. When it is determined that the number of data cache blocks in the cache management linked list is not smaller than the storage threshold, the DM stops dispatching the CBB stored in the cache region into its own cache management linked list and waits to receive the user data issued by the Host. After the CBBs to be executed are sent to the WM and the number of CBBs in the cache management linked list falls below the storage threshold, the DM resumes scheduling the executed CBB held in the cache region.
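The drain step described above can be sketched as follows; the dictionary fields and return convention are illustrative assumptions, not the patent's interfaces:

```python
from collections import deque

def drain_cache_area(dm):
    """Move the executed CBB parked in the DM's cache area into its cache
    management linked list once the list is below the storage threshold
    (a sketch with assumed field names)."""
    if dm["cache_area"] is None:
        return False                      # nothing parked: wait for the next WM schedule
    if len(dm["chain"]) >= dm["threshold"]:
        return False                      # list full: keep the CBB parked, handle Host data first
    dm["chain"].append(dm["cache_area"])  # schedule the CBB into the linked list
    dm["cache_area"] = None               # empty the cache area for the next round
    return True

dm = {"chain": deque(["a"]), "cache_area": "cbb9", "threshold": 2}
print(drain_cache_area(dm))  # True: room below threshold, CBB moved
print(dm["cache_area"])      # None: cache area ready for the next WM-scheduled CBB
```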
The scheme of this application has following beneficial effect:
1) A cache area is set for each DM, and approximately equal numbers of CCBs are distributed to the DMs in the solid state disk in a round-robin scheduling manner for command and data processing during reading and writing, thereby achieving balanced resource scheduling of the solid state disk;
2) The problem of substandard write test performance for large data blocks on the client side is solved, meeting customer requirements and creating benefits for the company.
It should be understood that, although the steps in the flowchart of fig. 2 are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited to the order shown, and the steps may be performed in other orders. Moreover, at least a portion of the steps in fig. 2 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turns or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 3, there is provided a resource scheduling apparatus based on a solid state disk, applied to a write processing apparatus, the apparatus including:
a receiving module 301, configured to receive a plurality of first data cache blocks to be executed and sent by a plurality of data management apparatuses;
an executing module 302, configured to execute a write action according to the plurality of first data cache blocks and generate a plurality of executed second data cache blocks;
a first scheduling module 303, configured to, after all the write actions are executed, access the first cache management link table and the cache area corresponding to the multiple data management apparatuses and schedule the multiple second data cache blocks to the multiple data management apparatuses.
In one embodiment, when all the write actions are executed, the accessing, by the first scheduling module, the first cache management link table and the cache area corresponding to the plurality of data management apparatuses and scheduling the plurality of second data cache blocks to the plurality of data management apparatuses includes:
determining a target data management apparatus from the plurality of data management apparatuses according to the management sequence number;
determining whether all data cache blocks stored in a first target cache management linked list corresponding to the target data management device are smaller than a storage threshold value;
if not, accessing the rest data management devices in the plurality of data management devices according to the management serial numbers;
if so, determining whether a target cache region corresponding to the target data management device stores the second data cache block or not, and scheduling the second data cache block to the target data management device.
In one embodiment, the determining, by the first scheduling module, whether the target cache area corresponding to the target data management apparatus stores the second data cache block and the scheduling of the second data cache block to the target data management apparatus include:
determining whether the target cache region stores the second data cache block;
if so, stopping scheduling and waiting until the target cache region no longer stores the second data cache block;
and if not, taking out a second data cache block from a second cache management linked list of the write processing device and scheduling the second data cache block to the target cache region.
In one embodiment, the fetching, by the first scheduling module, of a second data cache block from the second cache management linked list of the write processing apparatus and the scheduling of the second data cache block to the target cache region include:
accessing the other data management devices according to the management sequence number;
accessing the other cache management linked lists and the other cache regions of the other data management devices and scheduling the second data cache block to the other data management devices;
and when no second data cache block exists in the second cache management linked list, stopping accessing the plurality of data management devices.
In one embodiment, a resource scheduling apparatus based on a solid state disk is further provided, which is applied to a data management apparatus, and the apparatus includes:
a storage module 304, configured to receive user data and store the user data in a first data cache block in a first cache management linked list;
a sending module 305, configured to send the first data cache block to a write processing apparatus;
a second scheduling module 306, configured to receive the second data cache block scheduled by the write processing apparatus and schedule the second data cache block according to a cache region set by a user.
In one embodiment, the receiving, by the second scheduling module, the second data cache block scheduled by the write processing apparatus and scheduling the second data cache block according to a buffer set by a user includes:
determining whether the cache region stores the second data cache block;
if not, receiving a second data cache block sent by the write processing device;
and if so, determining whether all data cache blocks in the first cache management linked list are smaller than the storage threshold value and scheduling the second data cache block.
In one embodiment, the determining, by the second scheduling module, whether all data cache blocks in the first cache management linked list are smaller than the storage threshold and the scheduling of the second data cache block include:
determining whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold;
if not, stopping scheduling the second data cache block and waiting until the number of data cache blocks falls below the storage threshold;
and if so, taking out the second data cache block from the cache region and dispatching it to the first cache management linked list.
For specific limitations of the solid state disk-based resource scheduling apparatus, reference may be made to the above limitations of the solid state disk-based resource scheduling method, which are not repeated here. All or part of the modules in the solid state disk-based resource scheduling apparatus can be implemented by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor in the computer device, or be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements the solid state disk-based resource scheduling method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 4 is merely a block diagram of part of the structure related to the disclosed solution and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, an electronic device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
receiving a plurality of first data cache blocks to be executed and sent by a plurality of data management devices;
executing a write action according to the plurality of first data cache blocks and generating a plurality of executed second data cache blocks;
and when all the write actions are executed, accessing the first cache management chain table and the cache region corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices.
In one embodiment, the processor, when executing the computer program, performs the steps of:
when all the write actions are executed, the accessing a first cache management link table and a cache area corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices includes:
determining a target data management apparatus from the plurality of data management apparatuses according to the management sequence number;
determining whether all data cache blocks stored in a first target cache management linked list corresponding to the target data management device are smaller than a storage threshold value;
if not, accessing the rest data management devices in the plurality of data management devices according to the management serial numbers;
if so, determining whether a target cache region corresponding to the target data management device stores the second data cache block or not, and scheduling the second data cache block to the target data management device.
In one embodiment, the processor, when executing the computer program, performs the steps of:
the determining whether the target cache area corresponding to the target data management device stores the second data cache block and scheduling the second data cache block to the target data management device includes:
determining whether the target cache region stores the second data cache block;
if so, stopping scheduling and waiting until the target cache region no longer stores the second data cache block;
and if not, taking out a second data cache block from a second cache management linked list of the write processing device and scheduling the second data cache block to the target cache region.
In one embodiment, the processor, when executing the computer program, performs the steps of:
the step of taking out a second data cache block from a second cache management linked list of the write processing device and scheduling the second data cache block to the target cache region includes:
accessing the rest data management devices according to the management serial numbers;
accessing the other cache management linked lists and the other cache regions of the other data management devices and scheduling the second data cache block to the other data management devices;
and when no second data cache block exists in the second cache management linked list, stopping accessing the plurality of data management devices.
In one embodiment, the processor, when executing the computer program, performs the steps of:
receiving user data and storing the user data into a first data cache block in a first cache management linked list;
sending the first data cache block to a writing processing device;
and receiving a second data cache block scheduled by the write processing device and scheduling the second data cache block according to a cache region set by a user.
In one embodiment, the processor, when executing the computer program, performs the steps of:
the receiving the second data cache block scheduled by the write processing device and scheduling the second data cache block according to a cache region set by a user includes:
determining whether the cache region stores the second data cache block;
if not, receiving a second data cache block sent by the write processing device;
and if so, determining whether all data cache blocks in the first cache management linked list are smaller than the storage threshold value and scheduling the second data cache block.
In one embodiment, the processor, when executing the computer program, performs the steps of:
the determining whether all data cache blocks in the first cache management linked list are less than the storage threshold and scheduling the second data cache block comprises:
determining whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold;
if not, stopping scheduling the second data cache block and waiting until the number of data cache blocks falls below the storage threshold;
and if so, taking out the second data cache block from the cache region and dispatching it to the first cache management linked list.
In one embodiment, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of:
receiving a plurality of first data cache blocks to be executed and sent by a plurality of data management devices;
executing a write action according to the plurality of first data cache blocks and generating a plurality of executed second data cache blocks;
and when all the write actions are executed, accessing the first cache management chain table and the cache region corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices.
In one embodiment, the computer program when executed by a processor implements the steps of:
when all the write actions are executed, the accessing a first cache management link table and a cache area corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices includes:
determining a target data management apparatus from the plurality of data management apparatuses according to the management sequence number;
determining whether all data cache blocks stored in a first target cache management linked list corresponding to the target data management device are smaller than a storage threshold value;
if not, accessing the rest data management devices in the plurality of data management devices according to the management serial numbers;
if so, determining whether a target cache region corresponding to the target data management device stores the second data cache block or not, and scheduling the second data cache block to the target data management device.
In one embodiment, the computer program when executed by a processor performs the steps of:
the determining whether the target cache area corresponding to the target data management device stores the second data cache block and scheduling the second data cache block to the target data management device includes:
determining whether the target cache region stores the second data cache block;
if so, stopping scheduling and waiting until the target cache region no longer stores the second data cache block;
and if not, taking out a second data cache block from a second cache management linked list of the write processing device and scheduling the second data cache block to the target cache region.
In one embodiment, the computer program when executed by a processor performs the steps of:
the step of taking out a second data cache block from a second cache management linked list of the write processing device and scheduling the second data cache block to the target cache region includes:
accessing the rest data management devices according to the management serial numbers;
accessing the other cache management linked lists and the other cache regions of the other data management devices and scheduling the second data cache block to the other data management devices;
and when no second data cache block exists in the second cache management linked list, stopping accessing the plurality of data management devices.
In one embodiment, the computer program when executed by a processor implements the steps of:
receiving user data and storing the user data into a first data cache block in a first cache management linked list;
sending the first data cache block to a writing processing device;
and receiving a second data cache block scheduled by the write processing device and scheduling the second data cache block according to a cache region set by a user.
In one embodiment, the computer program when executed by a processor implements the steps of:
the receiving the second data cache block scheduled by the write processing device and scheduling the second data cache block according to a cache region set by a user includes:
determining whether the cache region stores the second data cache block;
if not, receiving a second data cache block sent by the write processing device;
and if so, determining whether all data cache blocks in the first cache management linked list are smaller than the storage threshold value and scheduling the second data cache block.
In one embodiment, the computer program when executed by a processor implements the steps of:
the determining whether all data cache blocks in the first cache management linked list are smaller than the storage threshold and scheduling the second data cache block comprises:
determining whether the number of data cache blocks in the first cache management linked list is smaller than the storage threshold;
if not, stopping scheduling the second data cache block and waiting until the number of data cache blocks falls below the storage threshold;
and if so, taking out the second data cache block from the cache region and dispatching it to the first cache management linked list.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above-described embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention patent. It should be noted that, for a person of ordinary skill in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A resource scheduling method based on a solid state disk is applied to a write processing device, and comprises the following steps:
receiving a plurality of first data cache blocks to be executed and sent by a plurality of data management devices;
executing a write action according to the plurality of first data cache blocks and generating a plurality of executed second data cache blocks;
and when all the write actions are executed, accessing the first cache management chain table and the cache region corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices.
2. The method of claim 1, wherein when all of the write actions are performed, the accessing a first cache management link table and a cache corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices comprises:
determining a target data management apparatus from the plurality of data management apparatuses according to the management sequence number;
determining whether all data cache blocks stored in a first target cache management linked list corresponding to the target data management device are smaller than a storage threshold value;
if not, accessing the rest data management devices in the plurality of data management devices according to the management serial numbers;
if so, determining whether a target cache region corresponding to the target data management device stores the second data cache block or not, and scheduling the second data cache block to the target data management device.
3. The method of claim 2, wherein the determining whether the target cache corresponding to the target data management device stores the second data cache block and scheduling the second data cache block to the target data management device comprises:
determining whether the target cache region stores the second data cache block;
if so, stopping scheduling and waiting until the target cache region no longer stores the second data cache block;
if not, taking out a second data cache block from a second cache management linked list of the write processing device and scheduling the second data cache block to the target cache region.
4. The method of claim 3, wherein the retrieving a second data cache block from a second cache management linked list of the write processing device and scheduling the second data cache block to the target cache comprises:
accessing the rest data management devices according to the management serial numbers;
accessing the other cache management linked lists and the other cache regions of the other data management devices and scheduling the second data cache block to the other data management devices;
and when no second data cache block exists in the second cache management linked list, stopping accessing the plurality of data management devices.
5. A resource scheduling method based on a solid state disk is applied to a data management device, and comprises the following steps:
receiving user data and storing the user data into a first data cache block in a first cache management linked list;
sending the first data cache block to a writing processing device;
and receiving a second data cache block scheduled by the write processing device and scheduling the second data cache block according to a cache region set by a user.
6. The method of claim 5, wherein the receiving the second data cache block scheduled by the write processing apparatus and scheduling the second data cache block according to a buffer set by a user comprises:
determining whether the cache region stores the second data cache block;
if not, receiving a second data cache block sent by the write processing device;
and if so, determining whether all data cache blocks in the first cache management linked list are smaller than the storage threshold value and scheduling the second data cache block.
7. The method of claim 6, wherein determining whether all data cache blocks in the first cache management linked list are less than the storage threshold and scheduling the second data cache block comprises:
determining whether all data cache blocks in the first cache management linked list are smaller than the storage threshold;
if not, stopping scheduling the second data cache block and waiting until all data cache blocks in the first cache management linked list are smaller than the storage threshold;
and if so, taking out a second data cache block from the cache region and dispatching the second data cache block to the first cache management linked list.
8. A resource scheduling device based on a solid state disk is applied to a write processing device, and comprises:
the device comprises a receiving module, a processing module and a processing module, wherein the receiving module is used for receiving a plurality of first data cache blocks to be executed and sent by a plurality of data management devices;
the execution module is used for executing write actions according to the first data cache blocks and generating a plurality of executed second data cache blocks;
and the first scheduling module is used for accessing the first cache management link tables and the cache areas corresponding to the plurality of data management devices and scheduling the plurality of second data cache blocks to the plurality of data management devices after all the write actions are executed.
9. An electronic device, comprising:
one or more processors; and memory associated with the one or more processors for storing program instructions which, when read and executed by the one or more processors, perform the method of any of claims 1-7.
10. A computer storage medium, characterized in that a computer program is stored thereon, wherein the program, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202211276782.2A 2022-10-18 2022-10-18 Resource scheduling method and device based on solid state disk, electronic equipment and storage medium Pending CN115686782A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211276782.2A CN115686782A (en) 2022-10-18 2022-10-18 Resource scheduling method and device based on solid state disk, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115686782A true CN115686782A (en) 2023-02-03

Family

ID=85066926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211276782.2A Pending CN115686782A (en) 2022-10-18 2022-10-18 Resource scheduling method and device based on solid state disk, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115686782A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116755635A (en) * 2023-08-15 2023-09-15 苏州浪潮智能科技有限公司 Hard disk controller cache system, method, hard disk device and electronic device
CN116755635B (en) * 2023-08-15 2023-11-03 苏州浪潮智能科技有限公司 Hard disk controller cache system, method, hard disk device and electronic device

Similar Documents

Publication Publication Date Title
US20060206894A1 (en) Method of scheduling jobs using database management system for real-time processing
CN111104208B (en) Process scheduling management method, device, computer equipment and storage medium
CN111338779B (en) Resource allocation method, device, computer equipment and storage medium
CN112286671B (en) Containerization batch processing job scheduling method and device and computer equipment
CN110532205B (en) Data transmission method, data transmission device, computer equipment and computer readable storage medium
CN115658277B (en) Task scheduling method and device, electronic equipment and storage medium
CN110515710A (en) Asynchronous task scheduling method, apparatus, computer equipment and storage medium
CN115686782A (en) Resource scheduling method and device based on solid state disk, electronic equipment and storage medium
US20120102012A1 (en) Cross-region access method for embedded file system
CN115098426A (en) PCIE (peripheral component interface express) equipment management method, interface management module, PCIE system, equipment and medium
CN115048216A (en) Resource management scheduling method, device and equipment for artificial intelligence cluster
CN116414581A (en) Multithreading time synchronization event scheduling system based on thread pool and Avl tree
CN112114958A (en) Resource isolation method, distributed platform, computer device, and storage medium
CN116010093A (en) Data processing method, apparatus, computer device and readable storage medium
US20140149691A1 (en) Data processing system and data processing method
CN114721814A (en) Task allocation method and device based on shared stack and computer equipment
CN113238842A (en) Task execution method and device and storage medium
CN110673931A (en) Distributed calculation method for document synthesis, document synthesis system and control device thereof
US11972110B2 (en) Storage device and storage system
US20230236730A1 (en) Storage device and storage system
CN114185687B (en) Heap memory management method and device for shared memory type coprocessor
CN113282382B (en) Task processing method, device, computer equipment and storage medium
CN110765721B (en) SOC chip acceleration verification method and device, computer equipment and storage medium
CN110362510B (en) Dynamic stack allocation method, device, computer equipment and storage medium
CN113641466A (en) Memory allocation method and system for XEN platform and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination