WO2023284173A1 - Task allocation method and system for solid-state drive, electronic device, and storage medium - Google Patents

Task allocation method and system for solid-state drive, electronic device, and storage medium Download PDF

Info

Publication number
WO2023284173A1
WO2023284173A1 · PCT/CN2021/127520 · CN2021127520W
Authority
WO
WIPO (PCT)
Prior art keywords
data management
management module
master data
scenario
module
Prior art date
Application number
PCT/CN2021/127520
Other languages
English (en)
French (fr)
Inventor
Zhang Qiankun (张乾坤)
Original Assignee
Suzhou Inspur Intelligent Technology Co., Ltd. (苏州浪潮智能科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co., Ltd. (苏州浪潮智能科技有限公司)
Priority to US18/263,142 (US12019889B2)
Publication of WO2023284173A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/064Management of blocks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0238Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0652Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659Command handling arrangements, e.g. command buffers, queues, command scheduling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • G06F3/0679Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5066Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/501Performance criteria
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5011Pool
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5022Workload threshold
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/506Constraint
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72Details relating to flash memory management
    • G06F2212/7205Cleaning, compaction, garbage collection, erase control

Definitions

  • the present application relates to the technical field of data processing, and in particular to a task allocation method, system, electronic device and storage medium of a solid-state hard disk.
  • A solid-state drive is usually divided into different modules according to function, such as the data management module DM (Data Manager) responsible for data management, the write management module WM (Write Manager) responsible for NAND write operations, the reclaim block management module RBM (Reclaim Block Manager) responsible for garbage collection, and the journal management module JM (Journal Manager) responsible for snapshot preservation.
  • the quality of service (QoS) of the system is mainly affected by the delay of read and write operations, while the bandwidth BW (Band Width) of the system is mainly affected by the concurrency of tasks and is not sensitive to delay.
  • NVMe (Non-Volatile Memory Express) is a register-level interface and command set for storage attached over PCI Express.
  • Read and write commands are processed in the data management module. To achieve good quality of service, the data management module must respond to the host SQ (submission queue) in a timely manner and process its internal context promptly.
  • Because CPU hardware is limited, the number of CPU cores assigned to the data management module is limited. When high bandwidth is required, the concurrency within the data management module is bounded by the processing power of those CPUs, and concurrent reads and writes are affected.
  • the purpose of this application is to provide a task allocation method and system, an electronic device, and a storage medium for a solid-state drive, which can set the task allocation mode according to the business scenario and achieve high-quality-of-service or high-bandwidth data processing under the condition of limited hardware resources.
  • the present application provides a task allocation method for a solid-state drive, where the solid-state drive includes a data management module for performing data management operations, a write management module for performing data write operations, and a reclaim block management module for reclaiming garbage data; the task allocation method of the solid-state drive comprises:
  • when the business scenario is a high-bandwidth scenario, control the master data management modules and the slave data management modules to work in their corresponding CPU cores, and assign tasks to all master data management modules and all slave data management modules; wherein, in the high-bandwidth scenario, the reclaim block management modules of the solid-state drive corresponding to all CPU cores are in a working state;
  • when the business scenario is a high-quality-of-service scenario, control the master data management modules to work in their corresponding CPU cores, use the master data management modules to set the working state of the slave data management modules to an idle state, and assign tasks only to the master data management modules; wherein, in the high-quality-of-service scenario, the write management module, reclaim block management module, and journal management module of the solid-state drive work only in the CPU cores corresponding to the slave data management modules.
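A minimal sketch of the two allocation branches described above (the module names and the group representation are illustrative, not from the patent):

```python
from enum import Enum

class Scenario(Enum):
    HIGH_BANDWIDTH = 1
    HIGH_QOS = 2

def allocate_tasks(groups, scenario):
    """Decide which data management modules (DMs) receive tasks.

    `groups` is a list of dicts like {"master": "dm0", "slave": "dm1"}
    (hypothetical names). Returns the modules that get tasks; in the
    high-QoS branch each master marks its slave idle.
    """
    workers = []
    for g in groups:
        workers.append(g["master"])      # masters always work in their core
        if scenario is Scenario.HIGH_BANDWIDTH:
            workers.append(g["slave"])   # slaves also work and receive tasks
        else:
            g["slave_state"] = "idle"    # master sets its slave to idle
    return workers
```

In the high-bandwidth branch every module is a worker; in the high-QoS branch only masters are, so the WM/RBM/JM modules on the slave cores never compete with a master for CPU time.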
  • the data management module of the solid-state hard disk is divided into multiple data management groups, including:
  • determine the business scenario according to the working status of all the master data management modules including:
  • setting the working state of the master data management module according to its load type over a continuous period of time includes:
  • when the number of read and write request allocations in a time slice is greater than a preset value, the load type of the master data management module is set to a high load type;
  • when the number of read and write request allocations is less than or equal to the preset value, the load type of the master data management module is set to a low load type;
  • when the load types in all time slices of the continuous period are high load types, the working state of the master data management module is set to the high-load working state;
  • when the load types in all time slices of the continuous period are low load types, the working state of the master data management module is set to the low-load working state.
  • determine the business scenario according to the working status of all the master data management modules including:
  • when the working states of all the master data management modules are the high-load working state, the business scenario is the high-bandwidth scenario;
  • when the working states of all the master data management modules are the low-load working state, the business scenario is the high-quality-of-service scenario.
  • The first master data management module that writes its working state into the query information is set as the starting-point module, and the starting-point module is controlled to send a business-scenario switch message to the write management module, reclaim block management module, and journal management module of the solid-state drive.
  • the present application also provides a task allocation system for a solid-state hard disk
  • the solid-state drive includes a data management module for performing data management operations, a write management module for performing data write operations, and a reclaim block management module for reclaiming garbage data
  • the task distribution system of the solid-state hard disk includes:
  • a grouping module configured to divide the data management module into a plurality of data management groups; wherein, each of the data management groups includes a master data management module and a slave data management module;
  • a scenario determination module configured to determine a business scenario according to the working status of all the master data management modules
  • a first task allocation module, configured to, when the business scenario is a high-bandwidth scenario, control the master data management modules and the slave data management modules to work in their corresponding CPU cores, and assign tasks to all master data management modules and all slave data management modules; wherein, in the high-bandwidth scenario, the reclaim block management modules of the solid-state drive corresponding to all CPU cores are in a working state;
  • a second task allocation module, configured to, when the business scenario is a high-quality-of-service scenario, control the master data management modules to work in their corresponding CPU cores, use the master data management modules to set the working state of the slave data management modules to an idle state, and assign tasks only to the master data management modules; wherein, in the high-quality-of-service scenario, the write management module, reclaim block management module, and journal management module of the solid-state drive work only in the CPU cores corresponding to the slave data management modules.
  • the present application also provides a storage medium, on which a computer program is stored, and when the computer program is executed, the steps performed by the task allocation method of the above-mentioned solid state disk are realized.
  • the present application also provides an electronic device, including a memory and a processor, the memory stores a computer program, and when the processor invokes the computer program in the memory, the steps performed by the task allocation method of the solid state disk described above are implemented.
  • the present application provides a task allocation method for a solid-state drive, comprising: dividing the data management module of the solid-state drive into a plurality of data management groups, wherein each data management group includes a master data management module and a slave data management module; determining the business scenario according to the working states of all the master data management modules; when the business scenario is a high-bandwidth scenario, controlling the master data management modules and the slave data management modules to work in their corresponding CPU cores and assigning tasks to all master and slave data management modules, wherein in the high-bandwidth scenario the reclaim block management modules corresponding to all CPU cores are in a working state; and when the business scenario is a high-quality-of-service scenario, controlling the master data management modules to work in their corresponding CPU cores, using the master data management modules to set the working state of the slave data management modules to an idle state, and assigning tasks only to the master data management modules, wherein in the high-quality-of-service scenario the write management module, reclaim block management module, and journal management module work only in the CPU cores corresponding to the slave data management modules.
  • the data management module of the solid-state drive is divided into multiple data management groups, and the current business scenario is determined according to the working states of the master data management modules in all data management groups. If the business scenario is a high-bandwidth scenario, data throughput should be increased: the application controls both the master and the slave data management modules to work in their corresponding CPU cores and assigns tasks to all of them, so all data management modules jointly process data management tasks, which improves data throughput.
  • If the business scenario is a high-quality-of-service scenario, task processing delay should be reduced: the application controls the master data management modules to work in their corresponding CPU cores and uses them to set the working state of the slave data management modules to an idle state, so tasks are assigned only to the master data management modules. The write management module, reclaim block management module, and journal management module then do not share a CPU core with any master data management module, which improves the timeliness of the master data management modules' task processing and reduces waiting delay.
  • the application can thus set the task allocation mode according to the business scenario and realize high-quality-of-service or high-bandwidth data processing under the condition of limited hardware resources.
  • the present application also provides a solid-state hard disk task distribution system, an electronic device and a storage medium, which have the above-mentioned beneficial effects, and will not be repeated here.
  • FIG. 1 is a flow chart of a task allocation method for a solid state disk provided in an embodiment of the present application
  • FIG. 2 is a flow chart of a method for determining a current business scenario provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of a query information delivery method provided by the embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a solid state disk task allocation system provided by an embodiment of the present application.
  • FIG. 1 is a flow chart of a task allocation method for a solid state disk provided in an embodiment of the present application.
  • this embodiment can be applied to electronic devices such as computers and servers; the data management module of the solid-state drive can be divided into multiple data management groups according to preset rules, and each data management group can include a master data management module and a slave data management module.
  • the solid state disk may include multiple data management modules, multiple write management modules, multiple reclaimed block management modules, and multiple log management modules, and each write management module corresponds to two data management modules.
  • the data management module (DM, Data Manager) is responsible for processing the read and write messages in the NVMe (NVM Express, the non-volatile memory host controller interface specification) submission queue, in order to complete host data reception for write commands and the processing of read commands.
  • the write management module (WM, Write Manager), also known as the NAND write management module, is responsible for managing NAND blocks, organizing data stripes, maintaining the LBA-PBA (logical block address to physical block address) mapping, and performing wear leveling.
  • the reclaim block management module (RBM, reclaim block manager), also known as the NAND block reclaim module, is responsible for block garbage collection and moves valid data in a block to a new block.
  • the journal management module (JM, Journal Manager), also known as the journal module, is responsible for saving the metadata of the solid-state disk during runtime, so that the stored content can be used for power-on recovery.
  • the correspondence between the data management modules and the write management modules in the solid-state drive can be determined, and the data management modules corresponding to the same write management module can be placed in the same data management group; each data management group includes a master data management module and a slave data management module.
  • the proportion of master-slave allocation in the data management group can be determined according to the ratio of the number of data management modules that are always in working state and the number of data management modules that may enter the working state.
  • a data management group can include one master data management module and at least one slave data management module.
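One way to form such groups, assuming each DM knows which WM it corresponds to (all names are illustrative, not from the patent):

```python
def build_groups(dm_to_wm):
    """Group DMs that share a write management module (WM); the
    lowest-numbered DM in each group becomes the master and the
    rest become slaves. `dm_to_wm` maps a DM name to its WM name.
    """
    by_wm = {}
    for dm in sorted(dm_to_wm):
        by_wm.setdefault(dm_to_wm[dm], []).append(dm)
    return [{"master": dms[0], "slaves": dms[1:]}
            for _, dms in sorted(by_wm.items())]
```

With each WM serving two DMs, as in the embodiment above, every group comes out as one master plus one slave.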
  • S102 Determine the business scenario according to the working status of all the master data management modules
  • the master data management module in the data management group can set its own load type and working status according to the pressure allocated by read and write requests, and the master data management module can also set the load type and working status of the slave data management module according to the current business scenario.
  • the slave data management module does not have the authority to modify its own load type and working status.
  • the business scenarios mentioned in this embodiment include high-bandwidth scenarios and high-quality-of-service scenarios.
  • High-bandwidth scenarios refer to business scenarios with high throughput requirements
  • high-quality-of-service scenarios refer to business scenarios with low latency requirements.
  • when the business pressure of the master data management module is less than or equal to a critical value, the working state of the master data management module is the low-load working state; when the business pressure of the master data management module is greater than the critical value, the working state of the master data management module is the high-load working state.
  • the above business pressure can be determined according to the amount of resources allocated for read and write requests per second, and the critical value can be set according to the application scenario, for example, it can be 16M/s.
  • when the working states of all master data management modules are the low-load working state, the business scenario is determined to be the high-quality-of-service scenario; when the working states of all master data management modules are the high-load working state, the business scenario is determined to be the high-bandwidth scenario.
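The threshold rule and the unanimity rule above can be sketched as follows (the byte threshold mirrors the 16M/s example; everything else is illustrative):

```python
CRITICAL = 16 * 1024 * 1024  # critical value; the text's example is 16M/s

def working_state(pressure):
    """pressure: resources allocated for read/write requests per second."""
    return "high_load" if pressure > CRITICAL else "low_load"

def business_scenario(master_pressures):
    """All masters high-load -> high bandwidth; all low-load -> high QoS."""
    states = {working_state(p) for p in master_pressures}
    if states == {"high_load"}:
        return "high_bandwidth"
    if states == {"low_load"}:
        return "high_quality_of_service"
    return None  # mixed working states: no unanimous scenario yet
```

The `None` branch reflects that the scenario is only declared when all masters agree; what happens during a mixed window is handled by the query-information mechanism described later.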
  • this step is based on the premise that the current business scenario is a high-bandwidth scenario, in which as many data management modules as possible are required to process tasks.
  • Each CPU core corresponds to a data management module, and the data management module works in the corresponding CPU core.
  • the master data management modules and the slave data management modules are all controlled to work in their corresponding CPU cores, and tasks are assigned to all the master data management modules and all the slave data management modules. With all data management modules in the working state, the throughput of data processing can be improved under the condition of limited hardware resources.
  • the recovery block management modules of the solid-state hard drives corresponding to all CPU cores are in working state, and the writing management module and the log management module of the solid-state hard drives only work in the CPU cores corresponding to the slave data management modules.
  • S104 When the business scenario is a high-quality-of-service scenario, control the master data management module to work in the corresponding CPU core, and use the master data management module to set the working state of the slave data management module to idle status, and assign tasks to all said master data management modules;
  • this step is based on the fact that the current business scenario is a high-quality-of-service scenario. At this time, it is necessary to reduce the delay of waiting for task processing. Therefore, in this step, the master data management module is controlled to work in the corresponding CPU core.
  • the master data management module sets the working state of the slave data management modules to an idle state, and assigns tasks to all the master data management modules.
  • the master data management module can be in the working state, and the slave data management module can be in the non-working state.
  • the writing management module, recovery block management module, and log management module of the solid state disk only work in the corresponding CPU core of the slave data management module.
  • the master data management module can be separated from the CPU cores where the other modules work, to ensure that it responds to the commands in the host submission queue in a timely manner and processes its internal context promptly.
  • Take the eight CPU cores CPU0 to CPU7 as an example.
  • the master data management modules in CPU0, CPU2, CPU4, and CPU6 are all in the working state, and the slave data management modules in CPU1, CPU3, CPU5, and CPU7 are all in the working state.
  • the write management modules in CPU1, CPU3, CPU5, and CPU7 are all in the working state
  • the log management modules in CPU1, CPU3, CPU5, and CPU7 are in the working state
  • the reclaim block management modules in CPU1, CPU3, CPU5, and CPU7 are in the working state.
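One possible reading of the CPU0 to CPU7 example, assuming masters sit on the even cores and the WM, JM, and RBM share the odd (slave) cores as listed above:

```python
def core_layout(num_cores=8):
    """Sketch of the CPU0..CPU7 example: even cores host master DMs,
    odd cores host slave DMs together with WM, JM, and RBM."""
    layout = {}
    for core in range(num_cores):
        if core % 2 == 0:
            layout[core] = ["DM(master)"]
        else:
            layout[core] = ["DM(slave)", "WM", "JM", "RBM"]
    return layout
```

Under this layout a master DM never shares its core with WM, JM, or RBM, which is what lets the high-QoS branch keep master latency low once the slaves are idled.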
  • the data management module of the solid-state drive is divided into multiple data management groups, and the current business scenario is determined according to the working states of the master data management modules in all data management groups. If the business scenario is a high-bandwidth scenario, data throughput should be increased: this embodiment controls both the master and the slave data management modules to work in their corresponding CPU cores and assigns tasks to all of them. If the business scenario is a high-quality-of-service scenario, task processing delay should be reduced: this embodiment controls the master data management modules to work in their corresponding CPU cores and sets the working state of the slave data management modules to an idle state, so tasks are assigned only to the master data management modules.
  • In the high-bandwidth scenario, all data management modules jointly process data management tasks, which improves data throughput. In the high-quality-of-service scenario, the write management module, reclaim block management module, and journal management module do not work in the same CPU core as any master data management module, which improves the timeliness of the master data management modules' task processing and reduces waiting delay. Therefore, this embodiment can set the task allocation mode according to the business scenario and realize high-quality-of-service or high-bandwidth data processing under the condition of limited hardware resources.
  • Fig. 2 is a flowchart of a method for determining the current business scenario provided by the embodiment of the present application.
  • This embodiment is a further elaboration of S102 in the embodiment corresponding to Fig. 1, and it can be combined with that embodiment to obtain a further implementation; this embodiment may include the following steps:
  • S201 Set the working state of the master data management module according to the load type of the master data management module in a continuous period of time;
  • S202 Determine whether the working states of all the master data management modules are the same; if the working states of all the master data management modules are the same, proceed to S203; if the working states of all the master data management modules are not the same, Then enter S204;
  • S203 Determine the business scenario according to the working status of all the master data management modules
  • the working state is set according to the load type of the master data management module in a continuous period of time.
  • the number of read and write request allocations received by the master data management module can be counted per time slice (i.e., a preset period). When the number of read and write request allocations is greater than a preset value, the load type of the master data management module in the current time slice is set to a high load type; when the number is less than or equal to the preset value, the load type in the current time slice is set to a low load type.
  • If the load types of the master data management module in all time slices of the continuous period are high load types, its working state is set to the high-load working state; if the load types in all time slices of the continuous period are low load types, its working state is set to the low-load working state.
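The per-slice counting and state-setting rule above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the threshold `T`, window length `W`, and function names are assumptions.

```python
# Hypothetical values: the patent leaves the preset value and window length open.
T = 100   # preset value: read/write request allocations per time slice
W = 5     # number of consecutive time slices considered

def load_type(allocations_in_slice):
    """Classify one time slice as 'high' or 'low' load against the preset value."""
    return "high" if allocations_in_slice > T else "low"

def working_state(recent_allocations):
    """Set the working state only when W consecutive slices agree on the load type."""
    types = [load_type(n) for n in recent_allocations[-W:]]
    if all(t == "high" for t in types):
        return "high_load_working"
    if all(t == "low" for t in types):
        return "low_load_working"
    return None  # mixed window: keep the previous working state

print(working_state([150, 200, 180, 170, 160]))  # high_load_working
print(working_state([10, 20, 150, 30, 40]))      # None (mixed window)
```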
  • The above determines the working state of a single master data management module. After the working states of all master data management modules are obtained, if they are all high-load working states, the business scenario is determined to be the high-bandwidth scenario; if they are all low-load working states, the business scenario is determined to be the high-quality-of-service scenario.
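The scenario decision above — switch only when every master agrees, otherwise leave the scenario unchanged (step S204) — can be sketched as follows. The state and scenario names are illustrative, not from the patent.

```python
# A hypothetical sketch of the scenario decision: the business scenario changes
# only when every master data management module reports the same working state.
def determine_scenario(master_states, current_scenario):
    if all(s == "high_load_working" for s in master_states):
        return "high_bandwidth"   # all masters busy: favor throughput
    if all(s == "low_load_working" for s in master_states):
        return "high_qos"         # all masters lightly loaded: favor latency
    return current_scenario       # mixed states: scenario unchanged (S204)

print(determine_scenario(["high_load_working"] * 4, "high_qos"))  # high_bandwidth
```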
  • Further, this embodiment may judge whether the working states of all the master data management modules are the same in the following way: select, from all the master data management modules, a master data management module that meets a preset condition, the preset condition being that its working state has changed and its load type differs from that of the slave data management module in the same data management group; write the working state of the qualifying master data management module into query information, and forward the query information to the next master data management module in a preset order, so that all the master data management modules write their working states into the query information; and judge, according to the query information, whether the working states of all the master data management modules are the same.
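The forwarding scheme above can be sketched as a trip around a fixed ring of masters, each appending its own state to the query message. The message format, master names, and state values are illustrative assumptions.

```python
# Assumed ring order of the master DMs (matching the embodiment's DM0/2/4/6).
masters = ["DM0", "DM2", "DM4", "DM6"]
states  = {"DM0": "low", "DM2": "low", "DM4": "low", "DM6": "low"}

def ring_query(start):
    """Start a query at one master; each master in turn appends its state."""
    msg = {"start_node": start, "states": {}}
    idx = masters.index(start)
    for i in range(len(masters)):            # one full trip around the ring
        dm = masters[(idx + i) % len(masters)]
        msg["states"][dm] = states[dm]       # each DM writes its working state
    all_same = len(set(msg["states"].values())) == 1
    return msg, all_same

msg, same = ring_query("DM4")
print(same)  # True: all four masters report the same working state
```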
  • The aforementioned preset order may be the ascending order of the module numbers.
  • Further, the first master data management module that writes its working state into the query information may be set as the starting-point module, and the starting-point module is controlled to send a business scenario switch message to the write management module, reclaim block management module, and journal management module of the solid-state drive. This ensures that only one master data management module ever issues the switch control message, keeping the system's business state consistent.
  • In the high-bandwidth scenario, DM and RBM work on CPU cores 0 to 7, while WM and JM work on CPU cores 1, 3, 5, and 7. In the high-quality-of-service scenario, DM works on CPU cores 0, 2, 4, and 6, while WM, RBM, and JM work on CPU cores 1, 3, 5, and 7. In the task allocation tables, R means the task runs on the core, NA means not applicable, S means slave, M means master, and IDLE means the task on that core does not work.
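The two per-core layouts can be written out as a lookup table. The core numbers come from the embodiment's Tables 1 and 2; the dictionary form itself is only an illustrative sketch.

```python
# Per-scenario module-to-core layouts for an 8-core controller,
# transcribed from the embodiment's task allocation tables.
LAYOUT = {
    "high_bandwidth": {
        "DM":  [0, 1, 2, 3, 4, 5, 6, 7],
        "RBM": [0, 1, 2, 3, 4, 5, 6, 7],
        "WM":  [1, 3, 5, 7],
        "JM":  [1, 3, 5, 7],
    },
    "high_qos": {
        "DM":  [0, 2, 4, 6],
        "RBM": [1, 3, 5, 7],
        "WM":  [1, 3, 5, 7],
        "JM":  [1, 3, 5, 7],
    },
}

# In the high-QoS layout no other module shares a core with a working DM:
assert not set(LAYOUT["high_qos"]["DM"]) & set(LAYOUT["high_qos"]["RBM"])
print(sorted(LAYOUT["high_qos"]["DM"]))  # [0, 2, 4, 6]
```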
  • The high-load scenario and the high-quality-of-service scenario need to be monitored in the DM, and when a scenario change is detected the task layout must be switched promptly. Task layout switching is sensed and initiated by the DM, so the DM is initialized as follows: 1) the DMs are divided into 4 groups of 2 DMs each according to the correspondence between DMs and WMs; 2) the DMs on CPU cores 0, 2, 4, and 6 are called master data management modules DM_SM (Sibling Master), and the remaining DMs are called slave data management modules DM_SS (Sibling Slave); 3) the initial working state of all DMs is high load.
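This initialization can be sketched as below, assuming 8 DMs pinned one per CPU core 0 to 7. Pairing each even-core master with the adjacent odd-core slave is an illustrative assumption consistent with the table layout, not stated in the text.

```python
# Build the 4 DM groups: master (DM_SM) on the even core, slave (DM_SS)
# assumed on the adjacent odd core; all DMs start in the high-load state.
groups = []
for core in (0, 2, 4, 6):
    groups.append({
        "DM_SM": core,                      # master on the even core
        "DM_SS": core + 1,                  # slave on the adjacent odd core (assumed pairing)
        "state": "DM_HIGH_LOADING_WORKING"  # initial working state is high load
    })

print(len(groups))  # 4
```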
  • DM_SM denotes a master data management module; DM_SS denotes a slave data management module.
  • The load-type change of an individual DM is controlled as follows:
  • Individual load-type changes are made only by the DM_SM; the DM_SS passively accepts commands from the DM_SM.
  • Each DM saves its own current load type (DM_LOW_LOADING, DM_HIGH_LOADING) and working state (DM_LOW_LOADING_IDLE, DM_LOW_LOADING_WORKING, DM_HIGH_LOADING_WORKING), and the DM_SM is responsible for managing the group's load type (DM_LOW_LOADING, DM_HIGH_LOADING).
  • A DM_SS passively receives control commands that set its load type and working state, and never makes decisions on its own.
  • In this embodiment, the load type of a DM_SS may be set to the load type of its group.
  • DM_LOW_LOADING_IDLE means not working at low load
  • DM_LOW_LOADING_WORKING means working at low load
  • DM_HIGH_LOADING_WORKING means working at high load
  • DM_LOW_LOADING means low load
  • DM_HIGH_LOADING means high load.
  • The number of df allocations (resources allocated to read and write requests) is counted per time slice; the time slice is currently set to 10 ms. If the number allocated within 10 ms is lower than the threshold T_low, the DM marks its own current load type as DM_LOW_LOADING; if it is higher than T_high, it marks its own current load type as DM_HIGH_LOADING.
  • If the current working state of DM_SM is DM_LOW_LOADING_WORKING and the load type in T_filter consecutive time slices is DM_HIGH_LOADING, DM_SM may change its working state to DM_HIGH_LOADING_WORKING. If the current working state of DM_SM is DM_HIGH_LOADING_WORKING and the load type in T_filter consecutive time slices is DM_LOW_LOADING, DM_SM may change its working state to DM_LOW_LOADING_WORKING.
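The T_filter hysteresis can be sketched as below. `T_FILTER = 3` is an illustrative value; the text does not fix a number.

```python
T_FILTER = 3  # assumed filter length (consecutive time slices)

def next_state(state, recent_types):
    """Switch DM_SM's working state only after T_FILTER consecutive
    time slices report the opposite load type."""
    window = recent_types[-T_FILTER:]
    if state == "DM_LOW_LOADING_WORKING" and window == ["DM_HIGH_LOADING"] * T_FILTER:
        return "DM_HIGH_LOADING_WORKING"
    if state == "DM_HIGH_LOADING_WORKING" and window == ["DM_LOW_LOADING"] * T_FILTER:
        return "DM_LOW_LOADING_WORKING"
    return state  # not enough consecutive evidence: keep the current state
```

This kind of filtering avoids flapping between layouts when the load briefly crosses a threshold.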
  • A DM_SS receives the load-type switch sent by its DM_SM and switches its working state according to that load type; the working state of a DM_SS can only switch between DM_LOW_LOADING_IDLE and DM_HIGH_LOADING_WORKING.
  • DM_LOW_LOADING_IDLE means that it is not working when the load is low.
  • the overall load type change control method is as follows:
  • As shown in Fig. 3, the check of the DM load status is completed by communication between DM_SMs, and the messages are passed in a ring structure. When all DM_SMs are in DM_HIGH_LOADING_WORKING or all are in DM_LOW_LOADING_WORKING, WM, JM, and RBM are notified.
  • the query status of DM includes DM_LOADING_QUERY_IDLE (the current query information has not been sent), DM_LOADING_QUERY_WAIT_BACK (the current query information has been sent but not taken back) and DM_LOADING_QUERY_TAKE_BACK status (the query information has been taken back).
  • DM0, DM2, DM4 and DM6 all represent DM_SM.
  • the transmission direction of query information is DM0, DM2, DM4, DM6.
  • When the working state of a DM_SM changes and is inconsistent with the load type of its current group, it sends a load query message, and the load query status switches to DM_LOADING_QUERY_WAIT_BACK.
  • The initiating DM_SM (DM_SM initiate) sends the load query information to the next DM_SM (DM_SM next) and records the start node in the message; after the message is sent, the query status of its group is set to DM_LOADING_QUERY_WAIT_BACK.
  • DM_SM next adds its own working status and query status to the message and forwards it to the next DM_SM.
  • The start node refers to the first DM_SM that sends the load query information. When a DM_SM receives query information whose start node is itself, it sets the query status to DM_LOADING_QUERY_TAKE_BACK and, based on the states of the 4 DM_SMs, judges whether all DM groups are in the same working state. If some group's working state is inconsistent with its own, no action is taken; if some group's query status is DM_LOADING_QUERY_IDLE, no action is taken; if some group's query status is DM_LOADING_QUERY_WAIT_BACK and its own node id is smaller than the node id of that DM_SM, no action is taken. In all other cases, once the state switch of all DM_SMs is complete, RBM, JM, and WM are notified of the task load change. This method guarantees that after all DM states change, only one DM issues the load-switch control message, ensuring the consistency of the system state.
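The take-back step can be sketched as below. The message field names and state values are illustrative assumptions, not from the patent.

```python
# When the query message returns to its start node, that DM_SM marks it
# taken back and checks whether every DM group reports the same working state.
def on_query_return(self_id, msg):
    if msg["start_node"] != self_id:
        return None                               # not ours: would forward instead
    msg["query_status"] = "DM_LOADING_QUERY_TAKE_BACK"
    return len(set(msg["states"].values())) == 1  # True if all groups agree

msg = {"start_node": "DM0", "query_status": "DM_LOADING_QUERY_WAIT_BACK",
       "states": {"DM0": "high", "DM2": "high", "DM4": "high", "DM6": "high"}}
print(on_query_return("DM0", msg))  # True
```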
  • FIG. 4 is a schematic structural diagram of a solid-state hard disk task allocation system provided by an embodiment of the present application.
  • The solid-state drive includes a data management module for performing data management operations, a write management module for performing data write operations, a reclaim block management module for reclaiming garbage data, and a journal management module for saving snapshots. The system may include:
  • the grouping module 401 is used to divide the data management module of the solid state disk into a plurality of data management groups; wherein, each of the data management groups includes a master data management module and a slave data management module;
  • a scenario determination module 402 configured to determine a business scenario according to the working status of all the master data management modules
  • the first task allocation module 403, used to, when the business scenario is a high-bandwidth scenario, control both the master data management module and the slave data management module to work in their corresponding CPU cores, and assign tasks to all the master data management modules and all the slave data management modules; wherein, in the high-bandwidth scenario, the reclaim block management modules of the solid-state drive corresponding to all CPU cores are in the working state;
  • the second task allocation module 404, used to, when the business scenario is a high-quality-of-service scenario, control the master data management module to work in its corresponding CPU core, use the master data management module to set the working state of the slave data management module to the idle state, and assign tasks to all the master data management modules; wherein, in the high-quality-of-service scenario, the write management module, reclaim block management module, and journal management module of the solid-state drive work only in the CPU cores corresponding to the slave data management modules.
  • This embodiment divides the data management modules of the solid-state drive into a plurality of data management groups and determines the current business scenario according to the working states of the master data management modules in all data management groups. If the business scenario is a high-bandwidth scenario, data throughput should be increased; in this case this embodiment controls both the master data management modules and the slave data management modules to work in their corresponding CPU cores, so that tasks can be assigned to all master data management modules and all slave data management modules. If the business scenario is a high-quality-of-service scenario, task processing latency should be reduced.
  • This embodiment controls the master data management modules to work in their corresponding CPU cores and uses the master data management modules to set the working states of the slave data management modules to the idle state, so that tasks are assigned only to the master data management modules. It can be seen that, in the high-bandwidth scenario, this embodiment controls all data management modules to process data management tasks together, which improves data throughput; in the high-quality-of-service scenario, this embodiment controls the master data management modules to work in their corresponding CPU cores, and the write management module, reclaim block management module, and journal management module of the solid-state drive do not work in the same CPU cores as the master data management modules, which improves the timeliness with which the master data management modules process tasks and reduces waiting latency. Therefore, this embodiment can set the task allocation mode according to the business scenario and achieve high-quality-of-service or high-bandwidth data processing under limited hardware resources.
  • the grouping module 401 is used to determine the corresponding relationship between the data management module and the write management module in the solid state disk, and set the data management modules corresponding to the same write management module as the same data management group .
  • the scene determination module 402 includes:
  • a working state determining unit configured to set the working state of the master data management module according to the load type of the master data management module in a continuous period of time
  • a state judging unit configured to judge whether the working states of all the master data management modules are the same
  • a scenario analysis unit configured to determine a business scenario according to the working states of all the master data management modules if the working states of all the master data management modules are the same.
  • The working state determination unit is used to count, per time slice, the number of read and write request allocations received by the master data management module; to set the load type of the master data management module to the high load type when the number of read and write request allocations is greater than a preset value; to set the load type of the master data management module to the low load type when the number is less than or equal to the preset value; to set the working state of the master data management module to the high-load working state if its load types over a continuous period are all high load types; and to set the working state of the master data management module to the low-load working state if its load types over a continuous period are all low load types.
  • The scenario analysis unit is configured to determine that the business scenario is the high-bandwidth scenario if the working states of all the master data management modules are high-load working states, and to determine that the business scenario is the high-quality-of-service scenario if the working states of all the master data management modules are low-load working states.
  • The state judging unit is used to select, from all the master data management modules, a master data management module that meets a preset condition, wherein the preset condition is that the working state has changed and the load type differs from that of the slave data management module of the same data management group; to write the working state of the qualifying master data management module into query information and forward the query information to the next master data management module in a preset order, so that all the master data management modules write their working states into the query information; and to judge, according to the query information, whether the working states of all the master data management modules are the same.
  • Further, the system also includes a message sending module, used to, after the business scenario is determined according to the working states of all the master data management modules, set the first master data management module that writes its working state into the query information as the starting-point module, and control the starting-point module to send a business scenario switch message to the write management module, reclaim block management module, and journal management module of the solid-state drive.
  • the present application also provides a storage medium on which a computer program is stored. When the computer program is executed, the steps provided in the above-mentioned embodiments can be realized.
  • The storage medium may include various media capable of storing program code, such as a USB flash drive, removable hard disk, read-only memory (ROM), random access memory (RAM), magnetic disk, or optical disc.
  • the present application also provides an electronic device, which may include a memory and a processor, where a computer program is stored in the memory, and when the processor invokes the computer program in the memory, the steps provided in the above embodiments can be implemented.
  • the electronic device may also include various network interfaces, power supplies and other components.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Disclosed are a task allocation method and system for a solid-state drive, an electronic device, and a storage medium. The task allocation method includes: dividing the data management modules into a plurality of data management groups; determining a business scenario according to the working states of all master data management modules; when the business scenario is a high-bandwidth scenario, controlling both the master data management modules and the slave data management modules to work in their corresponding CPU cores, and assigning tasks to all master data management modules and all slave data management modules; and when the business scenario is a high-quality-of-service scenario, controlling the master data management modules to work in their corresponding CPU cores and assigning tasks to all master data management modules. The present application can set the task allocation mode according to the business scenario, achieving high-quality-of-service or high-bandwidth data processing under limited hardware resources. The disclosed task allocation system for a solid-state drive, electronic device, and storage medium have the same beneficial effects.

Description

Task allocation method and system for solid-state drive, electronic device, and storage medium
This application claims priority to Chinese patent application No. 202110791541.0, filed with the China National Intellectual Property Administration on July 13, 2021 and entitled "Task allocation method and system for solid-state drive, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of data processing, and in particular to a task allocation method and system for a solid-state drive, an electronic device, and a storage medium.
Background
A solid-state drive (SSD) is usually divided internally into different modules according to function, such as a data manager (DM) responsible for data management, a write manager (WM) responsible for NAND write operations, a reclaim block manager (RBM) responsible for garbage collection, and a journal manager (JM) responsible for snapshot saving. The quality of service (QoS) of the system is mainly affected by the latency of read and write operations, while the bandwidth (BW) of the system is mainly affected by the degree of task concurrency and is insensitive to latency.
NVMe (Non-Volatile Memory Express, a register interface and command set for PCI Express attached storage) commands are usually processed to completion in the data management module. To achieve good quality of service, the data management module must respond promptly to the commands in the host SQ (host submission queue) and process its internal contexts in a timely manner. However, when CPU hardware is limited, the number of CPU cores allocated to data management modules is limited; when high bandwidth is required, the concurrency inside the data management module is constrained by the CPU's processing capacity, and concurrent reads are affected.
Therefore, how to set the task allocation mode according to the business scenario and achieve high-quality-of-service or high-bandwidth data processing under limited hardware resources is a technical problem that those skilled in the art currently need to solve.
Summary
The purpose of the present application is to provide a task allocation method and system for a solid-state drive, an electronic device, and a storage medium, which can set the task allocation mode according to the business scenario and achieve high-quality-of-service or high-bandwidth data processing under limited hardware resources.
To solve the above technical problem, the present application provides a task allocation method for a solid-state drive, where the solid-state drive includes a data management module for performing data management operations, a write management module for performing data write operations, a reclaim block management module for reclaiming garbage data, and a journal management module for saving snapshots, the task allocation method including:
dividing the data management modules into a plurality of data management groups, where each data management group includes a master data management module and a slave data management module;
determining a business scenario according to the working states of all the master data management modules;
when the business scenario is a high-bandwidth scenario, controlling both the master data management module and the slave data management module to work in their corresponding CPU cores, and assigning tasks to all the master data management modules and all the slave data management modules, where in the high-bandwidth scenario the reclaim block management modules of the solid-state drive corresponding to all CPU cores are in the working state;
when the business scenario is a high-quality-of-service scenario, controlling the master data management module to work in its corresponding CPU core, using the master data management module to set the working state of the slave data management module to the idle state, and assigning tasks to all the master data management modules, where in the high-quality-of-service scenario the write management module, reclaim block management module, and journal management module of the solid-state drive work only in the CPU cores corresponding to the slave data management modules.
Optionally, dividing the data management modules of the solid-state drive into a plurality of data management groups includes:
determining the correspondence between the data management modules and the write management modules in the solid-state drive, and setting the data management modules corresponding to the same write management module as the same data management group.
Optionally, determining a business scenario according to the working states of all the master data management modules includes:
setting the working state of the master data management module according to its load types over a continuous period of time;
judging whether the working states of all the master data management modules are the same;
if the working states of all the master data management modules are the same, determining the business scenario according to the working states of all the master data management modules.
Optionally, setting the working state of the master data management module according to its load types over a continuous period of time includes:
counting, per time slice, the number of read and write request allocations received by the master data management module;
setting the load type of the master data management module to the high load type when the number of read and write request allocations is greater than a preset value;
setting the load type of the master data management module to the low load type when the number of read and write request allocations is less than or equal to the preset value;
setting the working state of the master data management module to the high-load working state if its load types over the continuous period of time are all high load types;
setting the working state of the master data management module to the low-load working state if its load types over the continuous period of time are all low load types.
Optionally, determining a business scenario according to the working states of all the master data management modules includes:
determining that the business scenario is the high-bandwidth scenario if the working states of all the master data management modules are high-load working states;
determining that the business scenario is the high-quality-of-service scenario if the working states of all the master data management modules are low-load working states.
Optionally, judging whether the working states of all the master data management modules are the same includes:
selecting, from all the master data management modules, a master data management module that meets a preset condition, where the preset condition is that the working state has changed and the load type differs from that of the slave data management module of the same data management group;
writing the working state of the master data management module that meets the preset condition into query information, and forwarding the query information to the next master data management module in a preset order, so that all the master data management modules write their working states into the query information;
judging, according to the query information, whether the working states of all the master data management modules are the same.
Optionally, after determining the business scenario according to the working states of all the master data management modules, the method further includes:
setting the first master data management module that writes its working state into the query information as the starting-point module, and controlling the starting-point module to send a business scenario switch message to the write management module, reclaim block management module, and journal management module of the solid-state drive.
The present application also provides a task allocation system for a solid-state drive, where the solid-state drive includes a data management module for performing data management operations, a write management module for performing data write operations, a reclaim block management module for reclaiming garbage data, and a journal management module for saving snapshots, the task allocation system including:
a grouping module, configured to divide the data management modules into a plurality of data management groups, where each data management group includes a master data management module and a slave data management module;
a scenario determination module, configured to determine a business scenario according to the working states of all the master data management modules;
a first task allocation module, configured to, when the business scenario is a high-bandwidth scenario, control both the master data management module and the slave data management module to work in their corresponding CPU cores, and assign tasks to all the master data management modules and all the slave data management modules, where in the high-bandwidth scenario the reclaim block management modules of the solid-state drive corresponding to all CPU cores are in the working state;
a second task allocation module, configured to, when the business scenario is a high-quality-of-service scenario, control the master data management module to work in its corresponding CPU core, use the master data management module to set the working state of the slave data management module to the idle state, and assign tasks to all the master data management modules, where in the high-quality-of-service scenario the write management module, reclaim block management module, and journal management module of the solid-state drive work only in the CPU cores corresponding to the slave data management modules.
The present application also provides a storage medium on which a computer program is stored; when the computer program is executed, the steps performed by the above task allocation method for a solid-state drive are implemented.
The present application also provides an electronic device, including a memory and a processor; a computer program is stored in the memory, and when the processor invokes the computer program in the memory, the steps performed by the above task allocation method for a solid-state drive are implemented.
The present application provides a task allocation method for a solid-state drive, including: dividing the data management modules of the solid-state drive into a plurality of data management groups, where each data management group includes a master data management module and a slave data management module; determining a business scenario according to the working states of all the master data management modules; when the business scenario is a high-bandwidth scenario, controlling both the master data management module and the slave data management module to work in their corresponding CPU cores, and assigning tasks to all the master data management modules and all the slave data management modules, where in the high-bandwidth scenario the reclaim block management modules of the solid-state drive corresponding to all CPU cores are in the working state; and when the business scenario is a high-quality-of-service scenario, controlling the master data management module to work in its corresponding CPU core, using the master data management module to set the working state of the slave data management module to the idle state, and assigning tasks to all the master data management modules, where in the high-quality-of-service scenario the write management module, reclaim block management module, and journal management module of the solid-state drive work only in the CPU cores corresponding to the slave data management modules.
The present application divides the data management modules of the solid-state drive into a plurality of data management groups and determines the current business scenario according to the working states of the master data management modules in all data management groups. If the business scenario is a high-bandwidth scenario, data throughput should be increased; in this case the present application controls both the master data management modules and the slave data management modules to work in their corresponding CPU cores, so that tasks can be assigned to all master data management modules and all slave data management modules. If the business scenario is a high-quality-of-service scenario, task processing latency should be reduced; the present application controls the master data management modules to work in their corresponding CPU cores and uses them to set the working states of the slave data management modules to the idle state, so that tasks are assigned only to the master data management modules. It can be seen that, in the high-bandwidth scenario, the present application controls all data management modules to process data management tasks together, improving data throughput; in the high-quality-of-service scenario, the write management module, reclaim block management module, and journal management module of the solid-state drive do not work in the same CPU cores as the master data management modules, which improves the timeliness with which the master data management modules process tasks and reduces waiting latency. Therefore, the present application can set the task allocation mode according to the business scenario and achieve high-quality-of-service or high-bandwidth data processing under limited hardware resources. The present application also provides a task allocation system for a solid-state drive, an electronic device, and a storage medium, which have the above beneficial effects and are not repeated here.
Brief Description of the Drawings
To more clearly illustrate the embodiments of the present application, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of a task allocation method for a solid-state drive provided by an embodiment of the present application;
Fig. 2 is a flowchart of a method for determining the current business scenario provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of a query information transmission mode provided by an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a task allocation system for a solid-state drive provided by an embodiment of the present application.
Detailed Description
To make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
Referring to Fig. 1, Fig. 1 is a flowchart of a task allocation method for a solid-state drive provided by an embodiment of the present application.
The specific steps may include:
S101: Divide the data management modules into a plurality of data management groups.
This embodiment may be applied to electronic devices such as computers and servers. The data management modules of the solid-state drive may be divided into a plurality of data management groups according to a preset rule, and each data management group may include a master data management module and a slave data management module.
Specifically, the solid-state drive may include a plurality of data management modules, a plurality of write management modules, a plurality of reclaim block management modules, and a plurality of journal management modules, with each write management module corresponding to two data management modules. The data management module (DM, Data Manager) is responsible for processing the read and write messages in the NVMe (NVM Express, non-volatile memory host controller interface specification) submission queue, so as to complete the host data reception of write commands and the processing of the read command flow. The write management module (WM, Write Manager), also called the NAND (computer flash memory device) write management module, is responsible for managing NAND blocks, maintaining and organizing data stripes, maintaining the LBA-PBA (logical block address to physical block address) mapping, and wear leveling. The reclaim block management module (RBM, Reclaim Block Manager), also called the NAND block reclaim module, is responsible for garbage collection of blocks, moving the valid data in one block to a new block. The journal management module (JM, Journal Manager), also called the journal module, is responsible for the runtime saving of the solid-state drive's metadata, so that the saved content can be used for power-on recovery.
As a feasible implementation, this embodiment may determine the correspondence between the data management modules and the write management modules in the solid-state drive, and set the data management modules corresponding to the same write management module as the same data management group, i.e., a data management group includes one master data management module and one slave data management module. The ratio of masters to slaves in a data management group may be determined by the ratio of the number of data management modules that are always in the working state to the number of data management modules that may enter the working state; in other grouping schemes, a data management group may include one master data management module and at least one slave data management module.
S102: Determine a business scenario according to the working states of all the master data management modules.
The master data management module in a data management group may set its own load type and working state according to the pressure of read and write request allocation; the master data management module may also set the load type and working state of the slave data management module according to the current business scenario, and the slave data management module has no authority to modify its own load type or working state.
The business scenarios mentioned in this embodiment include the high-bandwidth scenario and the high-quality-of-service scenario; the high-bandwidth scenario refers to a business scenario with high-throughput requirements, and the high-quality-of-service scenario refers to a business scenario with low-latency requirements.
Specifically, when the business pressure of a master data management module is less than or equal to a critical value, its working state is the low-load working state; when the business pressure is greater than the critical value, its working state is the high-load working state. The business pressure may be determined by the amount of resources allocated to read and write requests per second, and the critical value may be set according to the application scenario, for example 16M/s. When the working states of all the master data management modules are low-load working states, the business scenario is determined to be the high-quality-of-service scenario; when the working states of all the master data management modules are high-load working states, the business scenario is determined to be the high-bandwidth scenario.
S103: When the business scenario is a high-bandwidth scenario, control both the master data management module and the slave data management module to work in their corresponding CPU cores, and assign tasks to all the master data management modules and all the slave data management modules.
This step is based on the current business scenario being a high-bandwidth scenario, in which as many data management modules as possible are needed to process tasks. Each CPU core corresponds to one data management module, and the data management module works in its corresponding CPU core. In this step, both the master data management module and the slave data management module are controlled to work in their corresponding CPU cores, and tasks are assigned to all the master data management modules and all the slave data management modules; at this time all the data management modules are in the working state, which can improve data processing throughput under limited hardware resources. In the high-bandwidth scenario, the reclaim block management modules of the solid-state drive corresponding to all CPU cores are in the working state, while the write management module and journal management module of the solid-state drive work only in the CPU cores corresponding to the slave data management modules.
Taking eight CPU cores CPU0 to CPU7 as an example, in the high-bandwidth scenario, the data management modules in all cores CPU0 to CPU7 are in the working state, the reclaim block management modules in all cores CPU0 to CPU7 are in the working state, the write management modules in CPU1, CPU3, CPU5, and CPU7 are in the working state, and the journal management modules in CPU1, CPU3, CPU5, and CPU7 are in the working state.
S104: When the business scenario is a high-quality-of-service scenario, control the master data management module to work in its corresponding CPU core, use the master data management module to set the working state of the slave data management module to the idle state, and assign tasks to all the master data management modules.
This step is based on the current business scenario being a high-quality-of-service scenario, in which the latency of waiting for task processing needs to be reduced. Therefore, in this step the master data management module is controlled to work in its corresponding CPU core, the master data management module is used to set the working state of the slave data management module to the idle state, and tasks are assigned to all the master data management modules. In this way the master data management modules are in the working state and the slave data management modules are not. Further, in the high-quality-of-service scenario, the write management module, reclaim block management module, and journal management module of the solid-state drive work only in the CPU cores corresponding to the slave data management modules. In this way the master data management modules are separated from the CPU cores where the other modules work, ensuring that the master data management modules respond promptly to the commands in the host submission queue and that their internal contexts are processed in a timely manner.
Taking eight CPU cores CPU0 to CPU7 as an example, in the high-quality-of-service scenario, the data management modules in CPU0, CPU2, CPU4, and CPU6 are in the working state, the data management modules in CPU1, CPU3, CPU5, and CPU7 are not working, and the write management modules, journal management modules, and reclaim block management modules in CPU1, CPU3, CPU5, and CPU7 are in the working state.
This embodiment divides the data management modules of the solid-state drive into a plurality of data management groups and determines the current business scenario according to the working states of the master data management modules in all data management groups. If the business scenario is a high-bandwidth scenario, data throughput should be increased; in this case this embodiment controls both the master data management modules and the slave data management modules to work in their corresponding CPU cores, so that tasks can be assigned to all master data management modules and all slave data management modules. If the business scenario is a high-quality-of-service scenario, task processing latency should be reduced; this embodiment controls the master data management modules to work in their corresponding CPU cores and uses them to set the working states of the slave data management modules to the idle state, so that tasks are assigned only to the master data management modules. It can be seen that, in the high-bandwidth scenario, this embodiment controls all data management modules to process data management tasks together, improving data throughput; in the high-quality-of-service scenario, the write management module, reclaim block management module, and journal management module of the solid-state drive do not work in the same CPU cores as the master data management modules, which improves the timeliness with which the master data management modules process tasks and reduces waiting latency. Therefore, this embodiment can set the task allocation mode according to the business scenario and achieve high-quality-of-service or high-bandwidth data processing under limited hardware resources.
Referring to Fig. 2, Fig. 2 is a flowchart of a method for determining the current business scenario provided by an embodiment of the present application. This embodiment further details S102 of the embodiment corresponding to Fig. 1 and can be combined with that embodiment to obtain a further implementation. This embodiment may include the following steps:
S201: Set the working state of the master data management module according to its load types over a continuous period of time;
S202: Judge whether the working states of all the master data management modules are the same; if they are all the same, proceed to S203; otherwise, proceed to S204;
S203: Determine the business scenario according to the working states of all the master data management modules;
S204: Determine that the business scenario has not changed.
The above embodiment sets the working state according to the load types of the master data management module over a continuous period. Specifically, the number of read and write request allocations received by the master data management module may be counted per time slice (i.e., a preset period); when the number of read and write request allocations is greater than a preset value, the load type of the master data management module in the current time slice is set to the high load type; when the number is less than or equal to the preset value, the load type in the current time slice is set to the low load type. If the load types of the master data management module in all time slices of the continuous period are high load types, its working state is set to the high-load working state; if they are all low load types, its working state is set to the low-load working state. In this way the working state of a single master data management module can be determined. After the working states of all master data management modules are obtained, if they are all high-load working states, the business scenario is determined to be the high-bandwidth scenario; if they are all low-load working states, the business scenario is determined to be the high-quality-of-service scenario.
Further, this embodiment may judge whether the working states of all the master data management modules are the same in the following way: select, from all the master data management modules, a master data management module that meets a preset condition, the preset condition being that its working state has changed and its load type differs from that of the slave data management module in the same data management group; write the working state of the qualifying master data management module into query information, and forward the query information to the next master data management module in a preset order, so that all the master data management modules write their working states into the query information; and judge, according to the query information, whether the working states of all the master data management modules are the same. The preset order may be the ascending order of the module numbers.
Further, after the business scenario is determined according to the working states of all the master data management modules, the first master data management module that writes its working state into the query information may be set as the starting-point module, and the starting-point module is controlled to send a business scenario switch message to the write management module, reclaim block management module, and journal management module of the solid-state drive. This ensures that only one master data management module ever issues the switch control message, keeping the system's business state consistent.
The flow described in the above embodiments is illustrated below with an embodiment in a practical application.
The scheme for the task layout of the SSD's internal main control is as follows:
Referring to Table 1, in the high-quality-of-service scenario, the number of working DMs changes from 8 to 4, and the GC (Garbage Collection, the garbage collection behavior inside the SSD) tasks on the CPU cores where these 4 DMs reside are migrated to other cores. At this time, DM works on cores CPU0, CPU2, CPU4, and CPU6, while WM, RBM, and JM work on cores CPU1, CPU3, CPU5, and CPU7.
Table 1 Task allocation in the high-quality-of-service scenario
  CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7
DM R IDLE R IDLE R IDLE R IDLE
WM NA R NA R NA R NA R
RBM IDLE R IDLE R IDLE R IDLE R
JM NA S NA S NA M NA S
Referring to Table 2, in the high-bandwidth scenario, the number of DMs and RBMs changes from 4 to 8, distributed as in the following table:
Table 2 Task allocation in the high-bandwidth scenario
  CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7
DM R R R R R R R R
WM NA R NA R NA R NA R
RBM R R R R R R R R
JM NA S NA S NA M NA S
In the high-bandwidth scenario, DM and RBM work on CPU cores 0 to 7, and WM and JM work on CPU cores 1, 3, 5, and 7. In the high-quality-of-service scenario, DM works on CPU cores 0, 2, 4, and 6, and WM, RBM, and JM work on CPU cores 1, 3, 5, and 7. In the tables above, R means the task runs, NA means not applicable, S means slave, M means master, and IDLE means the task on that core does not work.
The high-load scenario and the high-quality-of-service scenario need to be monitored in the DM, and when a scenario change is detected the task layout must be switched promptly. Task layout switching is sensed and initiated by the DM, so the DM is initialized as follows: 1) the DMs are divided into 4 groups of 2 DMs each according to the correspondence between DMs and WMs; 2) the DMs on CPU 0, 2, 4, and 6 are called master data management modules DM_SM (Sibling Master), and the remaining DMs are called slave data management modules DM_SS (Sibling Slave); 3) the initial working state of all DMs is high load.
The load-type change of an individual DM is controlled as follows:
Individual load-type changes are made only by the DM_SM; the DM_SS passively accepts commands from the DM_SM. Each DM saves its own current load type (DM_LOW_LOADING, DM_HIGH_LOADING) and working state (DM_LOW_LOADING_IDLE, DM_LOW_LOADING_WORKING, DM_HIGH_LOADING_WORKING), and the DM_SM is responsible for managing the group's load type (DM_LOW_LOADING, DM_HIGH_LOADING). A DM_SS passively receives control commands that set its load type and working state, and never makes decisions on its own. In this embodiment, the load type of a DM_SS may be set to the group's load type. DM_LOW_LOADING_IDLE means idle under low load, DM_LOW_LOADING_WORKING means working under low load, and DM_HIGH_LOADING_WORKING means working under high load; DM_LOW_LOADING means low load and DM_HIGH_LOADING means high load.
The number of df allocations (resources allocated to read and write requests) is counted per time slice; the time slice is currently set to 10 ms. If the number allocated within 10 ms is lower than the threshold T_low, the DM marks its own current load type as DM_LOW_LOADING; if it is higher than T_high, it marks its own current load type as DM_HIGH_LOADING.
If the current working state of DM_SM is DM_LOW_LOADING_WORKING and the load type in T_filter consecutive time slices is DM_HIGH_LOADING, DM_SM may change its working state to DM_HIGH_LOADING_WORKING. If the current working state of DM_SM is DM_HIGH_LOADING_WORKING and the load type in T_filter consecutive time slices is DM_LOW_LOADING, DM_SM may change its working state to DM_LOW_LOADING_WORKING. A DM_SS receives the load-type switch sent by its DM_SM and switches its working state according to that load type; the working state of a DM_SS can only switch between DM_LOW_LOADING_IDLE and DM_HIGH_LOADING_WORKING. DM_LOW_LOADING_IDLE means not working when the load is low.
The overall load-type change is controlled as follows:
As shown in Fig. 3, the check of the DM load status is completed by communication between DM_SMs, and the messages are passed in a ring structure; when all DM_SMs are in DM_HIGH_LOADING_WORKING or all are in DM_LOW_LOADING_WORKING, WM, JM, and RBM are notified. The query statuses of a DM include DM_LOADING_QUERY_IDLE (the current query information has not been sent), DM_LOADING_QUERY_WAIT_BACK (the current query information has been sent but not yet taken back), and DM_LOADING_QUERY_TAKE_BACK (the query information has been taken back). In Fig. 3, DM0, DM2, DM4, and DM6 all represent DM_SMs, and the query information is passed in the order DM0, DM2, DM4, DM6.
When the working state of a DM_SM changes and is inconsistent with the load type of its current group, it sends a load query message, and the load query status switches to DM_LOADING_QUERY_WAIT_BACK. The initiating DM_SM (DM_SM initiate) sends the load query information to the next DM_SM (DM_SM next) and records the start node in the message; after the message is sent, the query status of its group is set to DM_LOADING_QUERY_WAIT_BACK. DM_SM next adds its own working state and query status to the message and forwards it to the next DM_SM. When a DM_SM receives query information whose start node is itself, it sets the query status to DM_LOADING_QUERY_TAKE_BACK and, based on the states of the 4 DM_SMs, judges whether all DM groups are in the same working state. The start node refers to the first DM_SM that sends the load query information.
If some group's working state is inconsistent with its own group's working state, no action is taken; if some group's query status is DM_LOADING_QUERY_IDLE, no action is taken; if some group's query status is DM_LOADING_QUERY_WAIT_BACK and its own node id is smaller than the node id of the DM_SM whose query status in the message is DM_LOADING_QUERY_WAIT_BACK, no action is taken. In all other cases, once the state switch of all DM_SMs is complete, RBM, JM, and WM are notified of the task load change. This method guarantees that after all DM states change, only one DM issues the load-switch control message, ensuring the consistency of the system state. In this way, high quality of service or high throughput can be achieved under limited hardware resources.
Referring to Fig. 4, Fig. 4 is a schematic structural diagram of a task allocation system for a solid-state drive provided by an embodiment of the present application. The solid-state drive includes a data management module for performing data management operations, a write management module for performing data write operations, a reclaim block management module for reclaiming garbage data, and a journal management module for saving snapshots. The system may include:
a grouping module 401, configured to divide the data management modules of the solid-state drive into a plurality of data management groups, where each data management group includes a master data management module and a slave data management module;
a scenario determination module 402, configured to determine a business scenario according to the working states of all the master data management modules;
a first task allocation module 403, configured to, when the business scenario is a high-bandwidth scenario, control both the master data management module and the slave data management module to work in their corresponding CPU cores, and assign tasks to all the master data management modules and all the slave data management modules, where in the high-bandwidth scenario the reclaim block management modules of the solid-state drive corresponding to all CPU cores are in the working state;
a second task allocation module 404, configured to, when the business scenario is a high-quality-of-service scenario, control the master data management module to work in its corresponding CPU core, use the master data management module to set the working state of the slave data management module to the idle state, and assign tasks to all the master data management modules, where in the high-quality-of-service scenario the write management module, reclaim block management module, and journal management module of the solid-state drive work only in the CPU cores corresponding to the slave data management modules.
This embodiment divides the data management modules of the solid-state drive into a plurality of data management groups and determines the current business scenario according to the working states of the master data management modules in all data management groups. If the business scenario is a high-bandwidth scenario, data throughput should be increased; in this case this embodiment controls both the master data management modules and the slave data management modules to work in their corresponding CPU cores, so that tasks can be assigned to all master data management modules and all slave data management modules. If the business scenario is a high-quality-of-service scenario, task processing latency should be reduced; this embodiment controls the master data management modules to work in their corresponding CPU cores and uses them to set the working states of the slave data management modules to the idle state, so that tasks are assigned only to the master data management modules. It can be seen that, in the high-bandwidth scenario, this embodiment controls all data management modules to process data management tasks together, improving data throughput; in the high-quality-of-service scenario, the write management module, reclaim block management module, and journal management module of the solid-state drive do not work in the same CPU cores as the master data management modules, which improves the timeliness with which the master data management modules process tasks and reduces waiting latency. Therefore, this embodiment can set the task allocation mode according to the business scenario and achieve high-quality-of-service or high-bandwidth data processing under limited hardware resources.
Further, the grouping module 401 is configured to determine the correspondence between the data management modules and the write management modules in the solid-state drive, and set the data management modules corresponding to the same write management module as the same data management group.
Further, the scenario determination module 402 includes:
a working state determination unit, configured to set the working state of the master data management module according to its load types over a continuous period of time;
a state judging unit, configured to judge whether the working states of all the master data management modules are the same;
a scenario analysis unit, configured to determine the business scenario according to the working states of all the master data management modules if the working states of all the master data management modules are the same.
Further, the working state determination unit is configured to count, per time slice, the number of read and write request allocations received by the master data management module; to set the load type of the master data management module to the high load type when the number of read and write request allocations is greater than a preset value; to set the load type to the low load type when the number is less than or equal to the preset value; to set the working state of the master data management module to the high-load working state if its load types over a continuous period are all high load types; and to set the working state to the low-load working state if its load types over a continuous period are all low load types.
Further, the scenario analysis unit is configured to determine that the business scenario is the high-bandwidth scenario if the working states of all the master data management modules are high-load working states, and to determine that the business scenario is the high-quality-of-service scenario if the working states of all the master data management modules are low-load working states.
Further, the state judging unit is configured to select, from all the master data management modules, a master data management module that meets a preset condition, where the preset condition is that the working state has changed and the load type differs from that of the slave data management module of the same data management group; to write the working state of the qualifying master data management module into query information and forward the query information to the next master data management module in a preset order, so that all the master data management modules write their working states into the query information; and to judge, according to the query information, whether the working states of all the master data management modules are the same.
Further, the system also includes:
a message sending module, configured to, after the business scenario is determined according to the working states of all the master data management modules, set the first master data management module that writes its working state into the query information as the starting-point module, and control the starting-point module to send a business scenario switch message to the write management module, reclaim block management module, and journal management module of the solid-state drive.
Since the embodiments of the system part correspond to the embodiments of the method part, for the embodiments of the system part please refer to the description of the embodiments of the method part, which is not repeated here.
The present application also provides a storage medium on which a computer program is stored; when the computer program is executed, the steps provided in the above embodiments can be implemented. The storage medium may include various media capable of storing program code, such as a USB flash drive, removable hard disk, read-only memory (ROM), random access memory (RAM), magnetic disk, or optical disc.
The present application also provides an electronic device, which may include a memory and a processor; a computer program is stored in the memory, and when the processor invokes the computer program in the memory, the steps provided in the above embodiments can be implemented. Of course, the electronic device may also include various network interfaces, a power supply, and other components.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts between the embodiments, reference may be made to each other. Since the system disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively simple, and for relevant details reference may be made to the description of the method part. It should be pointed out that those of ordinary skill in the art can make several improvements and modifications to the present application without departing from its principles, and these improvements and modifications also fall within the scope of protection of the claims of the present application.
It should also be noted that, in this specification, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the statement "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.

Claims (10)

  1. A task allocation method for a solid-state drive, wherein the solid-state drive comprises a data management module for performing data management operations, a write management module for performing data write operations, a reclaim block management module for reclaiming garbage data, and a journal management module for saving snapshots, the task allocation method comprising:
    dividing the data management modules into a plurality of data management groups, wherein each of the data management groups comprises a master data management module and a slave data management module;
    determining a business scenario according to working states of all the master data management modules;
    when the business scenario is a high-bandwidth scenario, controlling both the master data management module and the slave data management module to work in corresponding CPU cores, and assigning tasks to all the master data management modules and all the slave data management modules, wherein in the high-bandwidth scenario the reclaim block management modules corresponding to all CPU cores are in a working state;
    when the business scenario is a high-quality-of-service scenario, controlling the master data management module to work in the corresponding CPU core, using the master data management module to set the working state of the slave data management module to an idle state, and assigning tasks to all the master data management modules, wherein in the high-quality-of-service scenario the write management module, the reclaim block management module, and the journal management module all work only in the CPU cores corresponding to the slave data management modules.
  2. The task allocation method according to claim 1, wherein dividing the data management modules into a plurality of data management groups comprises:
    determining a correspondence between the data management modules and the write management modules in the solid-state drive, and setting the data management modules corresponding to the same write management module as the same data management group.
  3. The task allocation method according to claim 1, wherein determining a business scenario according to the working states of all the master data management modules comprises:
    setting the working state of the master data management module according to load types of the master data management module over a continuous period of time;
    judging whether the working states of all the master data management modules are the same;
    if the working states of all the master data management modules are the same, determining the business scenario according to the working states of all the master data management modules.
  4. The task allocation method according to claim 3, wherein setting the working state of the master data management module according to its load types over a continuous period of time comprises:
    counting, per time slice, a number of read and write request allocations received by the master data management module;
    setting the load type of the master data management module to a high load type when the number of read and write request allocations is greater than a preset value;
    setting the load type of the master data management module to a low load type when the number of read and write request allocations is less than or equal to the preset value;
    setting the working state of the master data management module to a high-load working state if the load types of the master data management module over the continuous period of time are all high load types;
    setting the working state of the master data management module to a low-load working state if the load types of the master data management module over the continuous period of time are all low load types.
  5. The task allocation method according to claim 4, wherein determining a business scenario according to the working states of all the master data management modules comprises:
    determining that the business scenario is the high-bandwidth scenario if the working states of all the master data management modules are high-load working states;
    determining that the business scenario is the high-quality-of-service scenario if the working states of all the master data management modules are low-load working states.
  6. The task allocation method according to claim 3, wherein judging whether the working states of all the master data management modules are the same comprises:
    selecting, from all the master data management modules, a master data management module that meets a preset condition, wherein the preset condition is that the working state has changed and the load type differs from the load type of the slave data management module of the same data management group;
    writing the working state of the master data management module that meets the preset condition into query information, and forwarding the query information to a next master data management module in a preset order, so that all the master data management modules write their working states into the query information;
    judging, according to the query information, whether the working states of all the master data management modules are the same.
  7. The task allocation method according to claim 6, further comprising, after determining the business scenario according to the working states of all the master data management modules:
    setting the first master data management module that writes its working state into the query information as a starting-point module, and controlling the starting-point module to send a business scenario switch message to the write management module, the reclaim block management module, and the journal management module.
  8. A task allocation system for a solid-state drive, wherein the solid-state drive comprises a data management module for performing data management operations, a write management module for performing data write operations, a reclaim block management module for reclaiming garbage data, and a journal management module for saving snapshots, the task allocation system comprising:
    a grouping module, configured to divide the data management modules into a plurality of data management groups, wherein each of the data management groups comprises a master data management module and a slave data management module;
    a scenario determination module, configured to determine a business scenario according to working states of all the master data management modules;
    a first task allocation module, configured to, when the business scenario is a high-bandwidth scenario, control both the master data management module and the slave data management module to work in corresponding CPU cores, and assign tasks to all the master data management modules and all the slave data management modules, wherein in the high-bandwidth scenario the reclaim block management modules corresponding to all CPU cores are in a working state;
    a second task allocation module, configured to, when the business scenario is a high-quality-of-service scenario, control the master data management module to work in the corresponding CPU core, use the master data management module to set the working state of the slave data management module to an idle state, and assign tasks to all the master data management modules, wherein in the high-quality-of-service scenario the write management module, the reclaim block management module, and the journal management module all work only in the CPU cores corresponding to the slave data management modules.
  9. An electronic device, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor, when invoking the computer program in the memory, implements the steps of the task allocation method for a solid-state drive according to any one of claims 1 to 7.
  10. A storage medium, wherein computer-executable instructions are stored in the storage medium, and the computer-executable instructions, when loaded and executed by a processor, implement the steps of the task allocation method for a solid-state drive according to any one of claims 1 to 7.
PCT/CN2021/127520 2021-07-13 2021-10-29 固态硬盘的任务分配方法、系统、电子设备及存储介质 WO2023284173A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/263,142 US12019889B2 (en) 2021-07-13 2021-10-29 Task allocation method and system for solid state drive, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110791541.0 2021-07-13
CN202110791541.0A CN113254222B (zh) 2021-07-13 2021-07-13 固态硬盘的任务分配方法、系统、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023284173A1 true WO2023284173A1 (zh) 2023-01-19

Family

ID=77191165

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/127520 WO2023284173A1 (zh) 2021-07-13 2021-10-29 固态硬盘的任务分配方法、系统、电子设备及存储介质

Country Status (3)

Country Link
US (1) US12019889B2 (zh)
CN (1) CN113254222B (zh)
WO (1) WO2023284173A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113254222B (zh) * 2021-07-13 2021-09-17 苏州浪潮智能科技有限公司 Task allocation method and system for solid state drive, electronic device, and storage medium
CN114138178B (zh) * 2021-10-15 2023-06-09 苏州浪潮智能科技有限公司 IO processing method and system
CN118642860A (zh) * 2024-08-15 2024-09-13 杭州嗨豹云计算科技有限公司 Multifunctional server based on task-adaptive matching and application method thereof

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103019958A (zh) * 2012-10-31 2013-04-03 香港应用科技研究院有限公司 Method for managing data in solid-state memory using data attributes
US9158713B1 (en) * 2010-04-07 2015-10-13 Applied Micro Circuits Corporation Packet processing with dynamic load balancing
CN110287152A (zh) * 2019-06-27 2019-09-27 深圳市腾讯计算机系统有限公司 Data management method and related apparatus
US20190310892A1 (en) * 2018-04-04 2019-10-10 Micron Technology, Inc. Determination of Workload Distribution across Processors in a Memory System
CN111984184A (zh) * 2019-05-23 2020-11-24 浙江宇视科技有限公司 Data management method and apparatus for solid state drive, storage medium, and electronic device
CN113254222A (zh) * 2021-07-13 2021-08-13 苏州浪潮智能科技有限公司 Task allocation method and system for solid state drive, electronic device, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106201356B (zh) * 2016-07-14 2019-07-19 北京理工大学 Dynamic data scheduling method based on link available bandwidth state
US10732895B2 (en) * 2017-03-22 2020-08-04 Burlywood, Inc. Drive-level internal quality of service
CN109614038A (zh) * 2018-11-23 2019-04-12 北京信息科技大学 Multi-speed disk scheduling method with diversified QoS constraints
US10866834B2 (en) * 2019-03-29 2020-12-15 Intel Corporation Apparatus, method, and system for ensuring quality of service for multi-threading processor cores


Also Published As

Publication number Publication date
US12019889B2 (en) 2024-06-25
CN113254222B (zh) 2021-09-17
CN113254222A (zh) 2021-08-13
US20240036755A1 (en) 2024-02-01

Similar Documents

Publication Publication Date Title
WO2023284173A1 (zh) Task allocation method and system for solid state drive, electronic device, and storage medium
JP5510556B2 (ja) Method and system for managing storage space of virtual machines and physical hosts
TWI684098B (zh) Memory system and control method for controlling non-volatile memory
JP7467593B2 (ja) Resource allocation method, storage device, and storage system
US12038879B2 (en) Read and write access to data replicas stored in multiple data centers
CN115129621B (zh) Memory management method, device, medium, and memory management module
EP3865992A2 (en) Distributed block storage system, method, apparatus and medium
US20220253252A1 (en) Data processing method and apparatus
CN111857992A (zh) Thread resource allocation method and apparatus in a Radosgw module
US20240241826A1 (en) Computing node cluster, data aggregation method, and related device
US20220269427A1 (en) Method for managing namespaces in a storage device and storage device employing the same
US20070174836A1 (en) System for controlling computer and method therefor
JPH11143779A (ja) Paging processing system for a virtual storage device
WO2024027140A1 (zh) Data processing method, apparatus, device, system, and readable storage medium
WO2022262345A1 (zh) Data management method, storage space management method, and apparatus
CN115543222A (zh) Storage optimization method, system, device, and readable storage medium
CN115168012A (zh) Method for determining the number of concurrent threads in a thread pool, and related products
CN114528123A (zh) Data access method, apparatus, device, and computer-readable storage medium
CN115794368A (zh) Service system, memory management method, and apparatus
CN117389485B (zh) Storage performance optimization method and apparatus, storage system, electronic device, and medium
WO2024066483A1 (zh) Hard disk management method, hard disk control method, and related devices
KR102565873B1 (ko) Method for allocating storage devices connected via the memory bus in a NUMA system
WO2024088150A1 (zh) Data storage method, apparatus, device, medium, and product based on open-channel solid state drive
US20240311291A1 (en) Memory system and method of controlling the memory system
CN117539609A (zh) IO flow control method, system, and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21949946

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18263142

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21949946

Country of ref document: EP

Kind code of ref document: A1