CN112416814A - Management method for garbage collection in solid state disk, storage medium and electronic device - Google Patents

Management method for garbage collection in solid state disk, storage medium and electronic device Download PDF

Info

Publication number
CN112416814A
Authority
CN
China
Prior art keywords
network
solid state
pcurr
neural network
execute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011336254.2A
Other languages
Chinese (zh)
Inventor
李伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Datang Storage Technology Co ltd
Original Assignee
Hefei Datang Storage Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Datang Storage Technology Co ltd filed Critical Hefei Datang Storage Technology Co ltd
Priority to CN202011336254.2A priority Critical patent/CN112416814A/en
Publication of CN112416814A publication Critical patent/CN112416814A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0253Garbage collection, i.e. reclamation of unreferenced memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/29Graphical models, e.g. Bayesian networks
    • G06F18/295Markov models or related models, e.g. semi-Markov models; Markov random fields; Networks embedding Markov models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Testing And Monitoring For Control Systems (AREA)

Abstract

The embodiment of the application discloses a management method for garbage collection in a solid state disk, a storage medium and an electronic device. The method comprises the following steps: determining the configuration parameters required for judging whether to execute a garbage collection (GC) operation; taking the configuration parameters as input variables and the judgment result of whether to execute the GC operation as the output variable, and establishing a neural network model using pre-acquired historical data; taking the values of the configuration parameters in the current state as input information, and calculating the output information corresponding to the input information using the neural network model to obtain the judgment result of whether to execute the GC operation in the current state; and sending the output information to the solid state disk.

Description

Management method for garbage collection in solid state disk, storage medium and electronic device
Technical Field
The embodiment of the application relates to the field of information processing, and in particular, to a management method, a storage medium, and an electronic device for garbage collection in a solid state disk.
Background
Solid State Drives (SSDs), as a storage medium for mass data, are gradually replacing Hard Disk Drives (HDDs) as the mainstream storage device. An SSD is a hard disk built from an array of solid-state electronic memory chips. Functionally, the SSD includes a main control unit and a storage unit.
Garbage Collection (GC) is an important part of SSD firmware design and an important factor affecting host performance; its purpose is to release the flash memory space occupied by invalid data inside the SSD. The GC strategies in the related art include the following two:
(1) no GC operation is performed while the main control unit is idle, and GC operations are performed only when the main control unit is not idle;
(2) GC operations are performed in advance while the main control unit is idle.
In practical applications, performing GC operations in any of the above manners affects the operational performance of the SSD.
Disclosure of Invention
In order to solve any one of the above technical problems, an embodiment of the present application provides a management method for garbage collection in a solid state disk, a storage medium, and an electronic apparatus.
In order to achieve the purpose of the embodiment of the present application, an embodiment of the present application provides a management method for garbage collection in a solid state disk, including:
determining configuration parameters required for judging whether to perform a GC operation;
taking the configuration parameters as input variables, taking the judgment result of whether to execute GC operation as output variables, and establishing a neural network model by using pre-acquired historical data;
taking the value of the configuration parameter in the current state as input information, and calculating output information corresponding to the input information by using the neural network model to obtain a judgment result of whether to execute GC operation in the current state;
and sending the output information to the solid state disk.
A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method as described above when executed.
An electronic device comprising a memory having a computer program stored therein and a processor arranged to execute the computer program to perform the method as described above.
One of the above technical solutions has the following advantages or beneficial effects:
the method determines the configuration parameters required for judging whether to execute the GC operation; takes the configuration parameters as input variables and the judgment result of whether to execute the GC operation as the output variable, and establishes a neural network model using pre-acquired historical data; takes the values of the configuration parameters in the current state as input information and calculates the corresponding output information with the neural network model to obtain the judgment result of whether to execute the GC operation in the current state; and sends the output information to the solid state disk. Judging whether to execute the GC operation based on the neural network model allows output information matching the current state to be obtained more accurately, yields a more accurate calculation result, and improves the judgment accuracy.
Additional features and advantages of the embodiments of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the application. The objectives and other advantages of the embodiments of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the embodiments of the present application and are incorporated in and constitute a part of this specification; they illustrate embodiments of the present application and, together with the description, serve to explain them without constituting a limitation of the embodiments of the present application.
Fig. 1 is a flowchart of a management method for garbage collection in a solid state disk according to an embodiment of the present application;
fig. 2 is a flowchart of a management method for solid state disk garbage collection based on reinforcement learning according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more apparent, the embodiments of the present application will be described in detail below with reference to the accompanying drawings. It should be noted that, in the embodiments of the present application, features in the embodiments and the examples may be arbitrarily combined with each other without conflict.
In the process of implementing the present application, the inventor conducted a technical analysis of the related art and found that it has at least the following problems:
if the first strategy is adopted and the garbage collection operation is executed only while the user is operating, the performance of the SSD is noticeably reduced;
if the second strategy is adopted and GC operations are performed in advance while the main control unit is idle, GC operations will be performed frequently, which may result in excessive Write Amplification (WA).
Based on the above analysis, the embodiments of the present application provide the following solutions, including:
fig. 1 is a flowchart of a management method for garbage collection in a solid state disk according to an embodiment of the present application. As shown in fig. 1, the method includes:
step 101, determining configuration parameters required for judging whether to execute garbage collection GC operation;
in one exemplary embodiment, the configuration parameters include:
1. task information of the GC operation, including the amount of data to be written and the write speed, etc.;
2. task information of the host side, including the amount of data the host needs to write and its write speed;
3. free address information available for writing.
This information constitutes the configuration parameters used for judging the GC operation; using these parameters, both the processing requirements of the host and the write operations of the GC can be taken into account, which improves the accuracy of judging whether to execute the GC operation.
Step 102, taking the configuration parameters as input variables, taking the judgment result of whether to execute GC operation as output variables, and establishing a neural network model by using pre-acquired historical data;
compared with the prior art that the GC operation is directly executed according to the preset strategy, the method has the advantages that whether the GC operation is executed or not is judged by establishing the neural network model through the artificial intelligence technology, and the judgment accuracy can be improved.
103, taking the value of the configuration parameter in the current state as input information, and calculating output information corresponding to the input information by using the neural network model to obtain a judgment result of whether to execute GC operation in the current state;
by collecting the values of the configuration parameters in the current state and obtaining the output information by using the neural network model, the output information according with the current state can be more accurately obtained, and a more accurate calculation result is obtained.
Step 104, sending the output information to a solid state disk;
and controlling whether the solid state disk executes the GC operation or not by sending the output information to the solid state disk.
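As an illustration only, a minimal Python sketch of steps 103 and 104 on the management side is given below; the ssd and model objects and their method names are assumptions of this sketch and are not defined in this application.

def decide_and_dispatch(ssd, model):
    """Sketch of steps 103-104: read the current configuration-parameter values,
    run them through the trained model, and send the 0/1 decision to the drive.
    `ssd` and `model` are hypothetical interface objects."""
    p_curr = ssd.sample_parameters()   # values of the configuration parameters right now
    do_gc = model.predict(p_curr)      # 1 = execute GC, 0 = do not execute GC
    ssd.apply_gc_decision(do_gc)       # the output information sent to the solid state disk
    return do_gc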
The method provided by the embodiment of the application determines the configuration parameters required for judging whether to execute the garbage collection GC operation; takes the configuration parameters as input variables and the judgment result of whether to execute the GC operation as the output variable, and establishes a neural network model using pre-acquired historical data; takes the values of the configuration parameters in the current state as input information and calculates the corresponding output information with the neural network model to obtain the judgment result of whether to execute the GC operation in the current state; and sends the output information to the solid state disk. Judging whether to execute the GC operation based on the neural network model allows output information matching the current state to be obtained more accurately, yields a more accurate calculation result, and improves the judgment accuracy.
The method provided by the embodiments of the present application is explained as follows:
in one exemplary embodiment, the configuration parameters include at least one of:
the GC operation comprises a total amount a of data to be written, a total amount b of available free blocks, a total amount c of data contained in a host end instruction queue, a writing rate d of host end valid data and an average writing rate e of GC operation.
The idle block needed to be used by the appointed queue is the same as the idle block needed to be used by GC operation, the execution time determined by GC operation can be determined while data writing in of the host computer is guaranteed through calculation of the parameters, whether the main control end is idle or not does not need to be considered, and the defects of an original GC strategy can be balanced by using the scheme.
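Purely for illustration, the five parameters could be gathered into the model's input vector as in the following Python sketch; the class and field names are assumptions of this sketch and are not part of the application.

from dataclasses import dataclass

@dataclass
class GcState:
    """One sample of the five configuration parameters at a given moment.
    The field names are illustrative; the application only labels them a..e."""
    a: float  # total amount of data the GC operation still has to write
    b: float  # total amount of available free blocks
    c: float  # total amount of data queued in the host-side command queue
    d: float  # write rate of host-side valid data
    e: float  # average write rate of GC operations

    def as_vector(self):
        """Flatten into the input vector fed to the neural network model."""
        return [self.a, self.b, self.c, self.d, self.e]

# Example: a state snapshot collected from the drive at the current moment.
state = GcState(a=512.0, b=96.0, c=128.0, d=220.0, e=180.0)
print(state.as_vector())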
In one exemplary embodiment, the neural network model is a Markov decision process, wherein the computational expression of the Markov decision process is as follows:
Pnext=T(Pcurr,Acurr);
Pnext is the state variable at the next moment, Pcurr is the current state variable, and Acurr is the selected action variable; Pcurr and Acurr are subsets of the state variable set P and the action variable set A, respectively;
a subset of the state variable set P is the values of the configuration parameters at a given moment in the historical data; a subset of the action variable set A is either performing the GC operation or not performing the GC operation.
The Markov decision process can be used to balance the drawbacks of the two classes of GC strategies in the related art; machine learning is introduced and used to determine the timing for executing GC operations.
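A minimal sketch of how the transfer function Pnext = T(Pcurr, Acurr) could be realized in practice is given below, assuming that the next state is observed from the drive rather than computed analytically; the ssd interface and its method names are hypothetical, matching the earlier sketch.

from typing import Tuple

# Actions in the action set A: 1 = perform GC, 0 = do not perform GC.
DO_GC, SKIP_GC = 1, 0

def observe_transition(ssd, p_curr: Tuple[float, ...], a_curr: int) -> Tuple[float, ...]:
    """A stand-in for Pnext = T(Pcurr, Acurr): the chosen action is applied to
    the drive and the configuration parameters (a..e) are sampled again at the
    next tick, giving the observed next state."""
    ssd.apply_gc_decision(a_curr)      # hypothetical firmware hook
    p_next = ssd.sample_parameters()   # re-read a, b, c, d, e
    return p_next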
In an exemplary embodiment, the policy goal in the Markov decision process is that the write amplification WA tends to be smaller and the write performance of the SSD tends to be better.
Setting the policy goal to a smaller WA effectively reduces the probability of the excessive WA caused by the GC strategies in the related art.
In an exemplary embodiment, the Markov decision process is further configured with a reward function after meeting the policy goal, wherein the GC operation is performed when the reward value of the reward function is a temporary reward.
Under the constraint that the GC optimization policy goal is met, the reward-and-penalty mechanism for whether to execute the GC operation at the current moment is as follows: if R = R0, GC is not performed; if R = Rt, GC is performed, where Rt is a temporary reward.
Based on the reward mechanism, the Markov decision process is further optimized, and the accuracy of judgment is improved.
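As one possible reading of this reward mechanism, the sketch below assumes a reward that is a weighted difference of the changes in WA and in write performance; the linear form and the weights are assumptions, since the application only states that the reward is positively correlated with the policy goal.

def gc_reward(wa_before: float, wa_after: float,
              perf_before: float, perf_after: float,
              w_wa: float = 1.0, w_perf: float = 1.0) -> float:
    """Hypothetical reward positively correlated with the policy goal:
    write amplification (WA) getting smaller and write performance getting
    better. The weights and the linear form are assumptions of this sketch."""
    return w_wa * (wa_before - wa_after) + w_perf * (perf_after - perf_before)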
In an exemplary embodiment, after the output information is sent to the solid state disk, the method further includes:
collecting values of configuration parameters after the solid state disk executes operation according to the output information to obtain a collection result;
and optimizing the neural network model by using the acquisition result.
After the solid state disk operates according to the output information, the actual values of the configuration parameters in the solid state disk can be collected and compared with the predicted values of the configuration parameters for the next moment obtained from the Markov decision process, so that the Markov decision process is optimized, adapts to the specific characteristics of the solid state disk, and the judgment accuracy is improved.
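A sketch of how such collected results could be stored for later optimization is given below, assuming an experience-pool (replay-buffer) structure holding Markov chains of the form <Pcurr, Acurr, Rt, Pnext>; the capacity and sampling strategy are illustrative choices, not from the application.

from collections import deque
import random

class ExperiencePool:
    """Minimal experience pool holding Markov chains <Pcurr, Acurr, Rt, Pnext>."""
    def __init__(self, capacity: int = 10000):
        self.buffer = deque(maxlen=capacity)

    def record(self, p_curr, a_curr, reward, p_next):
        self.buffer.append((p_curr, a_curr, reward, p_next))

    def sample(self, batch_size: int):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))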
In an exemplary embodiment, the optimizing the neural network model using the acquisition result includes:
constructing a prediction Q network and a target Q network using a deep Q network, wherein the prediction Q network is used to execute the Markov decision process to obtain the Q value of each action in the current state, and the target Q network is used to execute a Markov chain containing the Markov decision process and the reward function so as to update the prediction Q network;
optimizing the prediction Q network using the Markov chain.
In one exemplary embodiment, the prediction Q network is optimized using the Markov chain with the following computational expression:
Qp(Pcurr, A, L) = Qp(Pcurr, A, L) + α(R + γ·maxQt(Pcurr, A, Lt) - Qp(Pcurr, A, L))²;
where Qp is the prediction Q network, L is the prediction Q network parameter, γ is the reward discount value, α is the neural network learning rate, Qt is the target Q network, and Lt is the target Q network parameter.
By adopting the reinforcement learning method, a Markov chain containing the Markov decision process and the reward function is constructed during use of the SSD to update the prediction Q network, so that dynamic optimization of the GC optimization strategy can be achieved.
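The expression above can be read as one gradient step on the squared temporal-difference error. The sketch below illustrates that reading with a small linear stand-in for the prediction and target Q networks; evaluating the target network at the next state Pnext, the concrete learning rate and discount value, and the linear form of the networks are all assumptions of this sketch rather than details given in the application.

import numpy as np

N_FEATURES, N_ACTIONS = 5, 2   # state = (a, b, c, d, e); actions = {no GC, GC}
ALPHA, GAMMA = 0.01, 0.9       # learning rate and reward discount (illustrative values)

# Linear stand-ins for the prediction Q network (parameters L) and target Q network (Lt).
L = np.zeros((N_ACTIONS, N_FEATURES))
Lt = L.copy()

def q_values(weights, state):
    """Qp(Pcurr, A, .): one Q value per action for the given state."""
    return weights @ np.asarray(state, dtype=float)

def update_prediction_network(p_curr, a_curr, reward, p_next):
    """One gradient step that shrinks the squared TD error
    (R + GAMMA * max Qt(Pnext, A, Lt) - Qp(Pcurr, Acurr, L))^2,
    which is how the update expression is read in this sketch."""
    p_curr = np.asarray(p_curr, dtype=float)
    target = reward + GAMMA * np.max(q_values(Lt, p_next))
    td_error = target - q_values(L, p_curr)[a_curr]
    L[a_curr] += ALPHA * td_error * p_curr   # gradient of the squared error w.r.t. L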
The method provided by the embodiments of the present application is explained as follows:
fig. 2 is a flowchart of a management method for solid state disk garbage collection based on reinforcement learning according to an embodiment of the present application. As shown in fig. 2, the method includes:
step 201, determining a state input variable and an action output variable of a dynamically selected GC opportunity optimization strategy;
in one exemplary embodiment, the state input variables include, at the current moment: the total amount a of data to be written by the GC, the total amount b of available free blocks, the total amount c of data contained in the host-side command queue, the write rate d of host-side valid data, and the average GC write rate e; the action output variables include: whether to perform a GC operation.
Step 202, determining an optimization strategy Markov decision process according to input and output variables;
in one exemplary embodiment, the optimized GC strategy Markov decision process represents the next moment state in the form of a transfer function of the current state and the selected action, the transfer function being of the form:
Pnext = T(Pcurr, Acurr);
where Pnext is the state variable at the next moment, Pcurr is the current state variable, and Acurr is the selected action variable; Pcurr and Acurr are subsets of the state variable set P and the action variable set A, respectively, and the sampling frequency is consistent with the frequency at which the firmware calculates performance.
Step 203, establishing a reinforcement learning optimization strategy reward function according to the objective of the optimization strategy;
in an exemplary embodiment, the GC optimization strategy reward function is positively correlated with the policy goal, namely: WA tends to be smaller and write performance tends to be better.
Under the constraint that the GC optimization policy goal is met, the reward-and-penalty mechanism for whether to execute the GC operation at the current moment is as follows:
if R = R0, GC is not performed; if R = Rt, GC is performed, where Rt is a temporary reward.
Step 204, solving a deep reinforcement learning optimization strategy according to the Markov decision process and the reward function;
in an exemplary embodiment, a Markov chain is first calculated through the established Markov decision process and reward function and saved into an experience pool; the prediction Q network in the deep reinforcement learning GC optimization strategy is then updated from the data in the experience pool, where the Markov chain is of the form <Pcurr, Acurr, Rt, Pnext>;
the reinforcement learning method comprises two similar neural networks, namely a prediction Q network and a target Q network, wherein the prediction Q network is used for calculating Q values of actions in the current state, and the target Q network is used for updating the prediction Q network. The dynamic variable Acurr is selected through a greedy algorithm:
Acurr = argmax Qp(Pcurr, A, L)  // random number > 1 - ζ
Wherein: qp is a predicted Q network, L is a predicted Q network parameter, and zeta is a greedy algorithm parameter. And storing the Markov chain into an experience pool, and then updating a prediction Q network in the reinforcement learning GC optimization strategy by data in the experience pool, wherein the Q network is used for calculating a Q value under an action variable set A under an action state Purr, and the output of the network is Qp (Purr, A, L).
Step 205, storing the predicted Q network to the SSD master controller, and dynamically selecting a GC opportunity according to the optimization strategy in the using process of the SSD master controller;
in the use process of the SSD, the main control controller selects a proper GC opportunity according to the predicted Q network.
And step 206, dynamically updating the prediction Q network in the use process of the SSD.
In an exemplary embodiment, the relevant collected parameters are stored in the experience pool, the prediction Q network is updated periodically, and the updated prediction Q network is placed in the SSD main controller, achieving dynamic optimization of the GC optimization strategy.
The SSD parameters collected at a given moment include a, b, c, d and e. The prediction Q network is updated by means of the target Q network:
Qp(Pcurr, A, L) = Qp(Pcurr, A, L) + α(R + γ·maxQt(Pcurr, A, Lt) - Qp(Pcurr, A, L))²;
where γ is the reward discount value, α is the neural network learning rate, Qt is the target Q network, and Lt is the target Q network parameter.
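The periodic update described above might be orchestrated as in the following sketch, where batches are drawn from the experience pool, the prediction Q network is updated step by step, and the target Q network and the copy deployed to the main controller are refreshed at a fixed interval; every callable, interval, and batch size in this sketch is an assumption rather than a detail given in the application.

def periodic_optimisation(pool, update_step, sync_target, deploy,
                          batch_size: int = 32, sync_every: int = 100, rounds: int = 1000):
    """Hypothetical outer loop tying the pieces together."""
    for step in range(1, rounds + 1):
        for (p_curr, a_curr, reward, p_next) in pool.sample(batch_size):
            update_step(p_curr, a_curr, reward, p_next)   # e.g. the TD update sketched earlier
        if step % sync_every == 0:
            sync_target()   # copy the prediction-network parameters L into Lt
            deploy()        # place the updated prediction Q network in the main controller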
The method provided by the embodiment of the application adopts deep reinforcement learning and updates the prediction Q network according to the Markov chains constructed during use of the SSD, which solves the dynamic-optimization problem of the GC optimization strategy and gives the scheme strong adaptive capability. Because a reinforcement learning method is adopted, the solving and optimizing steps of the algorithm are uniform and the range of applicable scenarios is wide, so the scheme is highly general. Using the prediction Q network of deep reinforcement learning realizes automatic training and learning of the solid state disk GC opportunity, avoiding the drawback of the traditional GC strategies in which WA becomes too large or the main control performance is greatly affected.
An embodiment of the present application provides a storage medium, in which a computer program is stored, wherein the computer program is configured to perform the method described in any one of the above when the computer program runs.
An embodiment of the application provides an electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the method described in any one of the above.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, and the functional modules/units in the systems and devices disclosed above, may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media, as known to those skilled in the art.

Claims (10)

1. A management method for garbage collection in a solid state disk, comprising the following steps:
determining configuration parameters required for judging whether to execute the garbage collection GC operation;
taking the configuration parameters as input variables, taking the judgment result of whether to execute GC operation as output variables, and establishing a neural network model by using pre-acquired historical data;
taking the value of the configuration parameter in the current state as input information, and calculating output information corresponding to the input information by using the neural network model to obtain a judgment result of whether to execute GC operation in the current state;
and sending the output information to the solid state disk.
2. The method of claim 1, wherein the configuration parameter comprises at least one of:
the total amount a of data to be written by the GC operation, the total amount b of available free blocks, the total amount c of data contained in the host-side command queue, the write rate d of host-side valid data, and the average write rate e of GC operations.
3. The method of claim 1 or 2, wherein the neural network model is a Markov decision process, wherein the Markov decision process is represented by the following computational expression:
Pnext=T(Pcurr,Acurr);
Pnext is the state variable at the next moment, Pcurr is the current state variable, and Acurr is the selected action variable; Pcurr and Acurr are subsets of the state variable set P and the action variable set A, respectively;
a subset of the state variable set P is the values of the configuration parameters at a given moment in the historical data; a subset of the action variable set A is either performing the GC operation or not performing the GC operation.
4. A method according to claim 3, characterized in that the policy goals in the Markov decision process are that the write amplification WA tends to be smaller and the write performance tends to be better.
5. The method of claim 4, wherein the Markov decision process is further configured with a reward function after meeting the policy objective, wherein the GC is performed when the reward value of the reward function is a temporary reward.
6. The method of claim 1, wherein after the sending of the output information to the solid state disk, the method further comprises:
collecting values of configuration parameters after the solid state disk executes operation according to the output information to obtain a collection result;
and optimizing the neural network model by using the acquisition result.
7. The method of claim 6, wherein the optimizing the neural network model using the acquisition results comprises:
constructing a prediction Q network and a target Q network using a deep Q network, wherein the prediction Q network is used to execute the Markov decision process to obtain the Q value of each action in the current state, and the target Q network is used to execute a Markov chain containing the Markov decision process and the reward function so as to update the prediction Q network;
optimizing the prediction Q network using the Markov chain.
8. The method of claim 7, wherein the prediction Q network is optimized using the Markov chain with the following computational expression:
Qp(Pcurr, A, L) = Qp(Pcurr, A, L) + α(R + γ·maxQt(Pcurr, A, Lt) - Qp(Pcurr, A, L))²
where Qp is the prediction Q network, L is the prediction Q network parameter, γ is the reward discount value, α is the neural network learning rate, Qt is the target Q network, and Lt is the target Q network parameter.
9. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 8 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 8.
CN202011336254.2A 2020-11-25 2020-11-25 Management method for garbage collection in solid state disk, storage medium and electronic device Pending CN112416814A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011336254.2A CN112416814A (en) 2020-11-25 2020-11-25 Management method for garbage collection in solid state disk, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011336254.2A CN112416814A (en) 2020-11-25 2020-11-25 Management method for garbage collection in solid state disk, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN112416814A true CN112416814A (en) 2021-02-26

Family

ID=74842325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011336254.2A Pending CN112416814A (en) 2020-11-25 2020-11-25 Management method for garbage collection in solid state disk, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN112416814A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113641305A (en) * 2021-07-26 2021-11-12 武汉理工大学 Garbage recycling method and device for solid state disk, electronic equipment and storage medium
CN116700634A (en) * 2023-08-08 2023-09-05 苏州浪潮智能科技有限公司 Garbage recycling method and device for distributed storage system and distributed storage system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281458A (en) * 2008-05-14 2008-10-08 华为技术有限公司 Apparatus, system and for recycling rubbish
CN102981959A (en) * 2011-09-05 2013-03-20 建兴电子科技股份有限公司 Solid-state memory device and control method of rubbish collection action thereof
CN105892941A (en) * 2016-03-30 2016-08-24 联想(北京)有限公司 Waste recovering method and device and electronic equipment
CN111090595A (en) * 2019-11-19 2020-05-01 中国航空工业集团公司西安航空计算技术研究所 NAND FLASH garbage recovery balance optimization method
CN111104343A (en) * 2018-10-25 2020-05-05 三星电子株式会社 Memory device, method of operating the same, and non-volatile memory device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281458A (en) * 2008-05-14 2008-10-08 华为技术有限公司 Apparatus, system and for recycling rubbish
CN102981959A (en) * 2011-09-05 2013-03-20 建兴电子科技股份有限公司 Solid-state memory device and control method of rubbish collection action thereof
CN105892941A (en) * 2016-03-30 2016-08-24 联想(北京)有限公司 Waste recovering method and device and electronic equipment
CN111104343A (en) * 2018-10-25 2020-05-05 三星电子株式会社 Memory device, method of operating the same, and non-volatile memory device
CN111090595A (en) * 2019-11-19 2020-05-01 中国航空工业集团公司西安航空计算技术研究所 NAND FLASH garbage recovery balance optimization method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113641305A (en) * 2021-07-26 2021-11-12 武汉理工大学 Garbage recycling method and device for solid state disk, electronic equipment and storage medium
CN113641305B (en) * 2021-07-26 2024-04-05 武汉理工大学 Garbage collection method and device for solid state disk, electronic equipment and storage medium
CN116700634A (en) * 2023-08-08 2023-09-05 苏州浪潮智能科技有限公司 Garbage recycling method and device for distributed storage system and distributed storage system
CN116700634B (en) * 2023-08-08 2023-11-03 苏州浪潮智能科技有限公司 Garbage recycling method and device for distributed storage system and distributed storage system

Similar Documents

Publication Publication Date Title
CN112416814A (en) Management method for garbage collection in solid state disk, storage medium and electronic device
CN110289994B (en) Cluster capacity adjusting method and device
CN111291894B (en) Resource scheduling method, device, equipment and medium in super-parameter optimization process
US11366714B2 (en) Behavior-driven die management on solid-state drives
CN110851079A (en) Adaptive storage device loss balancing method and system
US20210149805A1 (en) Method and Apparatus for Adjusting Cache Prefetch Policies Based on Predicted Cache Pollution From Dynamically Evolving Workloads
CN104881366B (en) Repair the method and system of homogenizing
CN114895846A (en) Data processing method, device and equipment
CN113850364A (en) Non-transitory computer-readable recording medium, learning method, and information processing apparatus
KR102280298B1 (en) Memory management system and method considering application usage patterns analysis
CN116931838A (en) Solid-state disk cache management method, system, electronic equipment and storage medium
CN117215789A (en) Resource allocation method and device for data processing task and computer equipment
CN116227579A (en) Optimization method for reinforcement learning training based on value of discrete environment
CN115866687A (en) Service cooperative caching method in vehicle-mounted edge computing
CN112433682B (en) Method for acquiring control parameters in solid state disk, storage medium and electronic device
CN112230964A (en) Application program development method, application program running method, device, equipment and medium
CN112446490A (en) Network training data set caching method, device, equipment and storage medium
KR20220049709A (en) System and Method of Adaptive Bach Selection for Accelerating Deep Neural Network Learning based on Data Uncertainty
Wei et al. Reinforcement learning-assisted management for convertible SSDs
CN115934007B (en) Data storage method, system, equipment and storage medium of distributed storage system
CN113535407B (en) Optimization method, system, equipment and storage medium of server
CN114384945B (en) Processor temperature control method and device, storage medium and electronic equipment
CN114356238B (en) Solid state disk data inspection method and device
CN117971438B (en) System power consumption management method and device, electronic equipment and readable storage medium
CN111177022B (en) Feature extraction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 230088 floor 7, block C, building J2, phase II, innovation industrial park, high tech Zone, Hefei, Anhui Province

Applicant after: HEFEI DATANG STORAGE TECHNOLOGY Co.,Ltd.

Address before: 100094 No. 6 Yongjia North Road, Beijing, Haidian District

Applicant before: HEFEI DATANG STORAGE TECHNOLOGY Co.,Ltd.