CN117976018A - Method, device, computer equipment and storage medium for predicting optimal read voltage - Google Patents

Method, device, computer equipment and storage medium for predicting optimal read voltage

Info

Publication number
CN117976018A
CN117976018A (application number CN202410149515.1A)
Authority
CN
China
Prior art keywords
read voltage
RNN model
voltage value
input
data
Prior art date
Legal status
Pending
Application number
CN202410149515.1A
Other languages
Chinese (zh)
Inventor
陈威畅
张睦
罗文全
卢清
兰学瑾
刘欢
Current Assignee
Chengdu Xinyilian Information Technology Co Ltd
Original Assignee
Chengdu Xinyilian Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Xinyilian Information Technology Co Ltd filed Critical Chengdu Xinyilian Information Technology Co Ltd
Priority to CN202410149515.1A
Publication of CN117976018A
Legal status: Pending


Abstract

The invention discloses a method, an apparatus, a computer device, and a storage medium for predicting an optimal read voltage. The method comprises the following steps: training the parameters of an RNN model to obtain a trained RNN model; collecting input data from a solid state disk device and preprocessing the input data; performing time sequence analysis and space sequence analysis on the input data through the trained RNN model to obtain an optimal read voltage value; and applying a voltage to the flash memory cells of the solid state disk device according to the optimal read voltage value, and reading the stored data according to the conduction state of the cells. By using the RNN model to analyze input data from the solid state disk device, the method obtains an optimal read voltage value, applies the corresponding voltage to the flash memory cells, and reads the stored data from their conduction state; this significantly improves the quality of data reading, enhances the adaptability of the solid state disk device, and prolongs the service life of the device.

Description

Method, device, computer equipment and storage medium for predicting optimal read voltage
Technical Field
The present invention relates to the field of data storage technologies, and in particular, to a method, an apparatus, a computer device, and a storage medium for predicting an optimal read voltage.
Background
Currently, the techniques by which solid state disk (SSD) devices handle read operations rely primarily on preset read voltage tables and algorithms to determine an optimal read voltage value. However, this approach has a major drawback: NAND flash memory cells are damaged to some extent each time they are erased and rewritten, and their reliability gradually decreases as the erase count grows, which reduces data accuracy and the performance of the overall device.
Disclosure of Invention
The invention aims to provide a method, an apparatus, a computer device, and a storage medium for predicting an optimal read voltage, so as to solve the problem that the read voltage methods of the prior art may increase read errors and reduce data accuracy.
In a first aspect, an embodiment of the present invention provides a method for predicting an optimal read voltage, including:
initializing parameters of an RNN model, wherein the parameters comprise weights and biases;
collecting sample data from the solid state disk device, and preprocessing the sample data to obtain an input sequence;
inputting the input sequence forward into the RNN model by time steps to obtain a predicted read voltage value, calculating the loss between the predicted read voltage value and a corresponding actual read voltage value with a loss function, and iteratively optimizing the parameters of the RNN model according to the loss to obtain a trained RNN model;
collecting input data from the solid state disk device, and preprocessing the input data;
performing time sequence analysis and space sequence analysis on the input data through the trained RNN model to obtain an optimal read voltage value;
and applying a voltage to the flash memory cells of the solid state disk device according to the optimal read voltage value, and reading the stored data according to the conduction state of the flash memory cells.
In a second aspect, an embodiment of the present invention further provides an apparatus for predicting an optimal read voltage, including:
an initializing unit, configured to initialize parameters of an RNN model, where the parameters include weights and biases;
The acquisition unit is used for acquiring sample data of the solid state disk device and preprocessing the sample data to obtain an input sequence;
The iterative optimization unit is used for inputting the input sequence forward into the RNN model by time steps to obtain a predicted read voltage value, calculating the loss between the predicted read voltage value and a corresponding actual read voltage value with a loss function, and iteratively optimizing the parameters of the RNN model according to the loss to obtain a trained RNN model;
The preprocessing unit is used for collecting input data from the solid state disk device and preprocessing the input data;
the analysis unit is used for carrying out time sequence analysis and space sequence analysis on the input data through a trained RNN model to obtain an optimal read voltage value;
And the read data unit is used for applying a voltage to the flash memory cells of the solid state disk device according to the optimal read voltage value and reading the stored data according to the conduction state of the flash memory cells.
In a third aspect, an embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the method for predicting an optimal read voltage according to the first aspect.
In a fourth aspect, embodiments of the present invention further provide a computer readable storage medium, where the computer readable storage medium stores a computer program, which when executed by a processor, causes the processor to perform the method for predicting an optimal read voltage according to the first aspect.
The embodiments of the invention provide a method, an apparatus, a computer device, and a storage medium for predicting an optimal read voltage. By analyzing input data from the solid state disk device with a trained RNN model to obtain an optimal read voltage value, the embodiments significantly improve the quality of data reading, enhance the adaptability of the solid state disk device, and prolong the service life of the device.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for predicting an optimal read voltage according to an embodiment of the present invention;
FIG. 2 is a schematic sub-flowchart of a method for predicting an optimal read voltage according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a model architecture for training the optimal read voltage by machine learning according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an RNN model training process according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of another sub-flowchart of a method for predicting an optimal read voltage according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a sub-flowchart of a method for predicting an optimal read voltage according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating a GRU algorithm model according to an embodiment of the invention;
FIG. 8 is a schematic structural diagram of a GRU algorithm model according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of an apparatus for predicting an optimal read voltage according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a subunit of an apparatus for predicting an optimal read voltage according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of another subunit of an apparatus for predicting an optimal read voltage according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of another subunit of an apparatus for predicting an optimal read voltage according to an embodiment of the present invention;
fig. 13 is a schematic diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to fig. 1, fig. 1 is a flowchart of a method for predicting an optimal read voltage according to an embodiment of the present invention, and the method includes steps S101 to S106:
S101, initializing parameters of an RNN model, wherein the parameters comprise weights and biases;
In a recurrent neural network (RNN) model, parameter initialization is very important: good initialization speeds up convergence and improves model performance. The parameters of the RNN model in this embodiment include weights and biases. An RNN model typically has three weight matrices to initialize: the weight matrix from the input to the hidden state, the weight matrix from the hidden state of the previous time step to the hidden state of the current time step, and the weight matrix from the hidden state to the output. These weight matrices may use different initialization methods, for example: random initialization, which draws values from a uniform or Gaussian distribution; Xavier initialization, a common method that scales the initial values according to the numbers of input and output nodes of the weight matrix, so that the signal is distributed as uniformly as possible during forward and backward propagation; and He initialization, a method designed for deep neural networks that also scales by the numbers of input and output nodes but is better suited than Xavier initialization to activation functions such as ReLU. In the RNN model, the biases may be initialized to zero or to small random numbers.
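As a concrete illustration, the following minimal sketch (not part of the patent text; the function name init_rnn_params and the parameter layout are assumptions chosen for illustration) shows Xavier initialization of the three weight matrices and zero initialization of the biases:

```python
import numpy as np

def init_rnn_params(input_size, hidden_size, output_size, seed=0):
    """Xavier-initialize the three RNN weight matrices described above;
    a sketch under assumed shapes, not the patent's implementation."""
    rng = np.random.default_rng(seed)

    def xavier(fan_in, fan_out):
        # Xavier: draw uniformly from [-limit, limit] with
        # limit = sqrt(6 / (fan_in + fan_out)), so the signal variance
        # stays roughly constant in the forward and backward passes
        limit = np.sqrt(6.0 / (fan_in + fan_out))
        return rng.uniform(-limit, limit, size=(fan_in, fan_out))

    return {
        "W_xh": xavier(input_size, hidden_size),   # input -> hidden state
        "W_hh": xavier(hidden_size, hidden_size),  # previous hidden -> current hidden
        "W_hy": xavier(hidden_size, output_size),  # hidden state -> output
        "b_h": np.zeros(hidden_size),              # biases: zero initialization
        "b_y": np.zeros(output_size),
    }
```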
S102, collecting sample data from the solid state disk device, and preprocessing the sample data to obtain an input sequence;
In this embodiment, the solid state disk device is connected to a computer system, and an appropriate tool or software is used to read the data on the device; a specific file, a folder, or the whole drive can be selected for collection to obtain the required sample data. After the sample data are acquired, they are preprocessed: the sample data must be converted into a sequential form suitable for input to the RNN model, and if the data are time series, such as sensor data or log data, they are arranged chronologically into a sequence.
Specifically, as shown in fig. 2, the step S102 includes S201 to S203:
S201, sorting the sample data in chronological order to obtain sorted sample data;
S202, preprocessing the sorted sample data, wherein the preprocessing comprises normalization and denoising;
S203, dividing the preprocessed sample data to obtain an input sequence, wherein the input sequence comprises a training set, a test set, and a validation set.
In this embodiment, a model architecture for training the optimal read voltage by machine learning is shown in fig. 3. If the sample data contain timestamp information, the sorting can be based on the timestamps. A timestamp may be the data collection time, the file creation time, or another field indicating temporal order. Sorting by timestamp can be implemented with a sorting algorithm in a programming language or with a database query statement; the sample data are thereby arranged in chronological order, yielding the sorted sample data. Next, the sorted sample data are preprocessed, including normalization and denoising.
The sample data are normalized so that the data fall within a specific range. Common normalization methods are min-max normalization and Z-score normalization. Min-max normalization linearly maps the sample data into the range 0 to 1, while Z-score normalization subtracts the mean and divides by the standard deviation so that the data have zero mean and unit variance.
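In formula form (standard definitions of the two methods, supplied for clarity rather than quoted from the patent text):

```latex
x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \quad \text{(min-max normalization)}
\qquad
z = \frac{x - \mu}{\sigma} \quad \text{(Z-score normalization, with mean } \mu \text{ and standard deviation } \sigma\text{)}
```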
If noise exists in the sample data, an appropriate denoising method can be applied. Common denoising methods include smoothing filtering, outlier removal, and interpolation filling.
Before the input sequence is fed into the RNN model, the preprocessed sample data are divided into the input sequence (training set, test set, and validation set) so that each subset contains a representative share of the sample data. Specifically, the training set, test set, and validation set are divided in the proportion 8:1:1. The training set is used for training and adjusting the generated parameters; the validation set participates in model evaluation after each training iteration and is used for tuning hyperparameters and optimizing the model; the test set is used to evaluate the final performance of the model and does not participate in training. The model with the best evaluation result is finally saved.
Finally, the divided input sequences (training set, test set, and validation set) are stored in a suitable format and fed forward into the RNN model by time steps.
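A compact sketch of this preprocessing pipeline (illustrative only; the function name preprocess and the choice of min-max normalization are assumptions):

```python
import numpy as np

def preprocess(samples, timestamps):
    """Sort sample data chronologically, min-max normalize each feature,
    and split 8:1:1 into training, validation, and test sets."""
    order = np.argsort(timestamps)                      # chronological ordering
    x = np.asarray(samples, dtype=float)[order]
    span = x.max(axis=0) - x.min(axis=0)
    x = (x - x.min(axis=0)) / (span + 1e-12)            # min-max to [0, 1]
    n = len(x)
    train, val, test = np.split(x, [int(0.8 * n), int(0.9 * n)])
    return train, val, test
```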
S103, inputting the input sequence forward into the RNN model by time steps to obtain a predicted read voltage value, calculating the loss between the predicted read voltage value and the corresponding actual read voltage value with a loss function, and iteratively optimizing the parameters of the RNN model according to the loss to obtain a trained RNN model;
This embodiment describes the process of training the RNN model; specifically, fig. 4 shows a flowchart of RNN model training. Sample data collected from the solid state disk device are normalized and denoised, and the resulting input sequence is fed forward into the RNN model to obtain a predicted read voltage value; the optimal read voltage value obtained for the device serves as the label, i.e., the actual read voltage value. To ensure the accuracy of the RNN model when reading data, a loss function is used to calculate the loss between the predicted read voltage value and the corresponding actual read voltage value, and the parameters of the RNN model are iteratively optimized according to the loss, yielding the trained RNN model.
Specifically, the forward propagation of the RNN model can be viewed as computing, for each time step of the input sequence, the current input together with the hidden state passed in from the previous time step, so that earlier information is memorized and can influence the current output. At each time step, the current input and the previous hidden state are linearly weighted and summed, then nonlinearly transformed by an activation function to produce the output of the current time step and a new hidden state. This hidden state in turn serves as input to the next time step.
In the RNN model, the role of the activation function is to nonlinearly transform the input signal so that the network can learn more complex features and patterns. Because the recurrent neural network passes the state of the previous moment along at each time step, the activation function also helps the network capture information from earlier moments and carry it to the current moment, thereby helping the network remember previous states.
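The per-time-step computation just described can be sketched as follows, using the parameter dictionary from the earlier initialization sketch and tanh as one common choice of activation (again an illustration, not the patent's code):

```python
import numpy as np

def rnn_step(x_t, h_prev, p):
    """One forward step of a vanilla RNN cell: linearly combine the
    current input and the previous hidden state, apply the activation,
    and emit the current output and the new hidden state."""
    h_t = np.tanh(x_t @ p["W_xh"] + h_prev @ p["W_hh"] + p["b_h"])
    y_t = h_t @ p["W_hy"] + p["b_y"]   # e.g. the predicted read voltage
    return y_t, h_t
```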
In one embodiment, as shown in fig. 5, the step S103 includes steps S301 to S303:
S301, calculating the gradient of the loss function with respect to the parameters of the RNN model according to a back propagation algorithm;
S302, updating the parameters of the RNN model according to the calculated gradient using a gradient descent algorithm;
S303, reducing the loss between the predicted read voltage value and the corresponding actual read voltage value by iteratively optimizing the parameters of the RNN model, to obtain the trained RNN model.
In this embodiment, the conventional RNN model takes the current input and the hidden state of the previous time step as input at each time step, and then outputs the hidden state of the current time step. However, because the parameters of the RNN model are shared across time steps and the gradient is repeatedly multiplied during back propagation, the gradient may decay or explode.
In long-sequence tasks, gradient decay or gradient explosion makes it difficult for the RNN model to learn dependencies that span large time intervals, degrading model performance. In RNN (recurrent neural network) models, the long-term dependency problem (Long-Term Dependency Problem) means that when an event that affects the current prediction is separated from the current time step by a long interval, the conventional RNN model finds it difficult to capture that dependency.
To address the long-term dependency problem, this embodiment presents an improved RNN model employing a gated recurrent unit (GRU). By introducing a gating mechanism, the model can effectively control the flow of information and thus better capture long-term dependencies. The GRU controls the reading, writing, and forgetting of information through gate units (Gate Units), so that the RNN model can preserve the transfer of important information across long sequences.
By introducing the improved RNN model, the long-term dependency problem can be alleviated to a certain extent, and the performance and generalization ability of the RNN model are improved, making it more effective on long-sequence tasks.
Training an RNN model is often slow, especially when processing long sequences or large data sets; techniques such as batch training, parallel computing, and GPU acceleration can be employed to increase training speed.
Overfitting occurs when the model performs well on the training set but poorly on the test set; regularization techniques (e.g., L1 or L2 regularization), dropout, early-stopping strategies, and data augmentation can be employed to reduce the risk of overfitting.
Hyperparameter selection in the RNN model is critical to its performance; cross-validation, grid search, random search, and similar methods can be used to find the optimal hyperparameter combination and improve model performance.
If the numbers of samples of different categories in the training set are unbalanced, the model may predict the majority categories well but the minority categories poorly; sample resampling, category weight adjustment, or artificial sample generation can be used to address training-set imbalance.
In this embodiment, as shown in fig. 4, the corresponding hyperparameters (i.e., the parameters of machine learning), such as the learning rate, hidden layer size, and number of network layers, are set first. The parameters of the RNN model are initialized, either randomly or from a pre-trained model. The input sequence is then fed into the RNN model one time step at a time; at each time step the RNN model computes the output and hidden state of the current time step from the current input and the previous hidden state, and a predicted read voltage value is obtained from the output at each time step. The difference between the predicted value and the actual read voltage value is calculated with a loss function; common loss functions include mean square error (MSE) and cross-entropy loss. The model parameters are updated with an optimization algorithm (such as stochastic gradient descent) according to the gradient of the loss function, so that the loss gradually decreases, until a preset number of training rounds is reached or the loss converges, finally yielding the trained RNN model. Throughout training, a validation set is typically used to monitor the generalization ability of the RNN model and prevent overfitting; depending on validation performance, the hyperparameters of the RNN model may need to be adjusted. Once training is complete and the model performs well on the validation set, the trained RNN parameters can be saved for later prediction on new data.
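The training round just described, together with the back propagation detailed in the following paragraphs, can be sketched as a minimal single-sequence update with MSE loss on the final prediction (shapes and names follow the earlier sketches and are assumptions, not the patent's code):

```python
import numpy as np

def train_step(seq, v_true, p, lr=1e-3):
    """One gradient-descent update via back propagation through time:
    forward over all time steps, MSE loss on the final predicted read
    voltage, then gradients propagated back through every time step."""
    H = [np.zeros(p["W_hh"].shape[0])]                 # hidden states, H[0] = 0
    for x_t in seq:                                    # forward pass, cached
        H.append(np.tanh(x_t @ p["W_xh"] + H[-1] @ p["W_hh"] + p["b_h"]))
    y = H[-1] @ p["W_hy"] + p["b_y"]                   # predicted read voltage
    loss = float(np.sum((y - v_true) ** 2))            # mean square error

    g = {k: np.zeros_like(w) for k, w in p.items()}    # gradient accumulators
    dy = 2.0 * (y - v_true)
    g["W_hy"] = np.outer(H[-1], dy)
    g["b_y"] = dy
    dh = p["W_hy"] @ dy                                # gradient entering h_T
    for t in range(len(seq) - 1, -1, -1):              # back through time
        da = dh * (1.0 - H[t + 1] ** 2)                # through tanh
        g["W_xh"] += np.outer(seq[t], da)
        g["W_hh"] += np.outer(H[t], da)
        g["b_h"] += da
        dh = p["W_hh"] @ da                            # pass to earlier step
    for k in p:
        p[k] -= lr * g[k]                              # gradient descent update
    return loss
```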
Specifically, the back propagation algorithm calculates the errors and gradients used for model training and parameter optimization. The output error at each time step is calculated by comparing the predicted voltage value with the actual voltage value.
The error signal is passed from the output layer back toward the input layer in order to calculate the gradients of the parameters. Specifically, the error of each time step is propagated back to the previous time step according to the chain rule, until it finally reaches the input layer. From the error signal and the intermediate variables, a gradient is calculated for each parameter; the gradient indicates how fine-tuning that parameter would let the model fit the data better. The value of each parameter is then updated according to its gradient and the learning rate. In this way the model can predict unknown data more accurately while avoiding problems such as overfitting.
In another embodiment, the training set is divided by time steps, the loss function is calculated by forward propagation, and the parameters are then updated by back propagation. At this point, perplexity can be used to evaluate the quality of the RNN model: the smaller the perplexity, the better the model's ability to predict sequences.
For example: if the true labels of a sequence are [1,2,3] and the probabilities the model assigns to those true labels are [0.4,0.5,0.1], then the perplexity of the sequence is

PP = \left( \prod_{i=1}^{N} p(y_i) \right)^{-1/N}

where y_i denotes the true label at the i-th position (1, 2, and 3 in this example), p(y_i) denotes the probability the model predicts for the true label at the i-th position, and N is the sequence length.
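A worked computation with those numbers (illustrative, not from the patent text):

```python
import numpy as np

# probabilities the model assigns to the true labels [1, 2, 3]
p_true = np.array([0.4, 0.5, 0.1])

# perplexity = (prod p(y_i)) ** (-1/N), equivalently exp(-mean(log p))
perplexity = float(np.exp(-np.mean(np.log(p_true))))
print(round(perplexity, 2))  # 3.68 -- lower means better sequence prediction
```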
In an embodiment, the sample data includes a life cycle of the solid state disk device, a location of the memory chip, erasure information, and a row address.
In an embodiment, the sample data includes a read error rate, a unit wear level, a temperature change, and a frequency of use of the solid state disk device.
In one embodiment, as shown in fig. 6, the step S103 further includes steps S401 to S403:
S401, sequentially feeding the input sequence into the gated recurrent unit of the RNN model by time steps;
S402, for each time step of the input sequence, calculating the update gate and the reset gate corresponding to the current node of the gated recurrent unit, and generating the current hidden state by combining the current input and the previous hidden state;
S403, inputting the current hidden state to the next node of the gated recurrent unit, and finally outputting the predicted read voltage value.
In this embodiment, the gated recurrent unit (GRU) of the RNN model is an improved variant of the standard recurrent neural network. To address the vanishing-gradient problem of the standard RNN model, the GRU uses so-called "update gates" and "reset gates". Essentially, these two vectors determine which information should be passed to the output. What distinguishes them is that they can be trained to retain information from long ago, rather than letting it vanish over time, and to discard information irrelevant to the prediction.
FIG. 7 shows a flowchart of the GRU algorithm model. The sample data comprise the life cycle of the solid state disk device, the location of the memory chip, erasure information, and the row address; when predicting the optimal read voltage, the GRU algorithm model is configured to receive and process feedback information from the solid state disk device in real time.
As shown in fig. 8, a schematic structural diagram of the GRU algorithm model, the input X_t of the current time step and the hidden state H_{t-1} of the previous time step are taken as inputs to compute the value of the update gate, which determines whether to update the hidden state of the current time step; the same inputs are used to compute the value of the reset gate, which controls how strongly the previous hidden state influences the current time step. The current hidden state H_t is generated by combining the current input X_t and the previous hidden state H_{t-1}; the GRU model produces the output Y_t of the current hidden node and the hidden state H_t passed to the next node. The current hidden state H_t is input to the next node of the gated recurrent unit, and the predicted read voltage value is finally output.
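The gate computations described above follow the standard GRU equations, sketched below (the weight names in p are illustrative assumptions; the patent does not give code):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, p):
    """One GRU step: the update gate z decides how much of the hidden
    state to refresh, the reset gate r decides how much of the previous
    state feeds the candidate, and the new state blends old and new."""
    z = sigmoid(x_t @ p["W_xz"] + h_prev @ p["W_hz"] + p["b_z"])   # update gate
    r = sigmoid(x_t @ p["W_xr"] + h_prev @ p["W_hr"] + p["b_r"])   # reset gate
    h_cand = np.tanh(x_t @ p["W_xn"] + (r * h_prev) @ p["W_hn"] + p["b_n"])
    h_t = (1.0 - z) * h_prev + z * h_cand    # H_t, passed to the next node
    return h_t
```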
S104, collecting input data from the solid state disk device, and preprocessing the input data;
S105, performing time sequence analysis and space sequence analysis on the input data through the trained RNN model to obtain an optimal read voltage value;
and S106, applying a voltage to the flash memory cells of the solid state disk device according to the optimal read voltage value, and reading the stored data according to the conduction state of the flash memory cells.
In this embodiment, an appropriate voltage value must be applied during the read process in order to read out the correct stored data. Since the characteristics of the flash memory cells (NAND) vary with process parameters, usage time, and other factors, the optimal read voltage value must be determined for the specific conditions before reading. After the optimal read voltage value is determined, it is applied as an input to the corresponding flash memory cell (NAND); when the voltage is applied to the flash memory cell, if the cell is in a conductive state, the data stored at that location can be read.
In another embodiment, the method for predicting the optimal read voltage further includes steps S501 to S503:
S501, inputting the input sequence corresponding to the sample data into a pre-constructed first convolution network in the convolution layer for convolution, to obtain a first convolution result;
S502, normalizing each value included in the first convolution result to obtain a first normalization result;
S503, activating the first normalization result with a first activation function to obtain a first output matrix.
In this embodiment, to improve the expressive power, feature-extraction ability, and generalization ability of the RNN model so that the network can better understand and distinguish the input sequence, a method for predicting the optimal read voltage based on a separable convolutional network is provided. Deep convolution refers to extracting features in a convolutional neural network (CNN) using multiple convolution layers. The general procedure is as follows: first, the input sequence is passed to the first convolution layer, where it is convolved with a set of learnable convolution kernels (filters). The convolution operation multiplies the filter with different parts of the input data and sums all the products.
Activation function: the result of the convolution operation is input into the activation function to introduce nonlinearity. Common activation functions include ReLU (RECTIFIED LINEAR Unit), sigmoid, and tanh, among others. After some convolution layers, a pooling operation may be applied to reduce the size of the feature map and increase the translational invariance. Common pooling operations include maximum pooling (Max Pooling), average pooling (Average Pooling), and the like. Multiple convolutional layers may be used to construct deeper convolutional networks, as desired. Each convolution layer may extract features at different levels. Full tie layer: after passing through a series of convolution and pooling layers, the feature map is converted into one-dimensional vectors using the fully connected layer and input to the output layer for classification or regression tasks. Output layer: the last layer is the output layer, and an appropriate activation function is selected according to the requirements of a specific task, such as softmax for multi-class classification, sigmoid for two-class classification, and the like.
By stacking multiple convolution layers, deep convolution can progressively extract higher-level features of the input sequence, enabling the network to learn complex patterns and structures. Such deep convolutional networks are widely used in computer vision and have achieved many important results.
In a specific embodiment, the depthwise convolution uses a 3x3 kernel (depthwise convolution, DepthwiseConvolution, is a basic modeling idea that can effectively reduce the computational complexity of a deep neural network). The convolution process can be understood as using a filter (convolution kernel) to scan individual small regions of the input and obtain the feature values of those regions. After the first convolution result is normalized and passed through the activation function, a shallow convolution is completed, realizing convolution along the depth dimension of the pixel matrix.
For each input channel, convolution is performed with one D_K x D_K x 1 convolution kernel; with M such kernels, M operations are performed, yielding M feature maps of size D_F x D_F x 1 (the first output matrix can be regarded as these feature maps). The feature maps are learned from the different input channels independently of one another, and the resulting first output matrix can serve as a reference for the predicted voltage value.
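A direct (unoptimized) sketch of the depthwise convolution just described, with one D_K x D_K kernel per input channel (illustrative; the data layout is an assumption noted in the comments):

```python
import numpy as np

def depthwise_conv(x, kernels):
    """Depthwise convolution: x has shape (H, W, M); kernels has shape
    (k, k, M), one k x k filter per channel. Each of the M channels is
    filtered independently, yielding M separate feature maps."""
    k, _, M = kernels.shape
    H, W, _ = x.shape
    out = np.zeros((H - k + 1, W - k + 1, M))          # 'valid' padding
    for m in range(M):                                  # channels stay independent
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[i, j, m] = np.sum(x[i:i+k, j:j+k, m] * kernels[:, :, m])
    return out
```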
As shown in fig. 9, the embodiment of the present invention further provides an apparatus 500 for predicting an optimal read voltage, including: an initialization unit 501, an acquisition unit 502, an iterative optimization unit 503, a preprocessing unit 504, an analysis unit 505, and a read data unit 506.
An initializing unit 501, configured to initialize parameters of the RNN model, where the parameters include weights and biases;
the collection unit 502 is configured to collect sample data of the solid state disk device, and pre-process the sample data to obtain an input sequence;
The iterative optimization unit 503 is configured to input the input sequence forward into the RNN model by time steps to obtain a predicted read voltage value, calculate the loss between the predicted read voltage value and the corresponding actual read voltage value with a loss function, and iteratively optimize the parameters of the RNN model according to the loss to obtain a trained RNN model;
the preprocessing unit 504 is configured to collect input data from the solid state disk device, and preprocess the input data;
The analysis unit 505 is configured to perform time sequence analysis and space sequence analysis on the input data through a trained RNN model, so as to obtain an optimal read voltage value;
And a read data unit 506, configured to apply a voltage to the flash memory cells of the solid state disk device according to the optimal read voltage value, and to read the stored data according to the conduction state of the flash memory cells.
In one embodiment, as shown in fig. 10, the acquisition unit 502 includes:
A sorting unit 601, configured to sort the sample data according to a time sequence, so as to obtain sorted sample data;
A data processing unit 602, for preprocessing the sample data, wherein the preprocessing includes: normalization processing and denoising processing;
The dividing unit 603 is configured to divide the preprocessed sample data to obtain an input sequence, where the input sequence includes a training set, a test set, and a verification set.
In an embodiment, as shown in fig. 11, the iterative optimization unit 503 includes:
A gradient calculation unit 701 for calculating a gradient of the loss function with respect to parameters of the RNN model according to a back propagation algorithm;
an updating unit 702 for updating parameters of the RNN model according to the calculated gradient using a gradient descent algorithm;
And a loss reduction unit 703, configured to reduce the loss of the predicted read voltage value and the corresponding actual read voltage value by iteratively optimizing parameters of the RNN model, so as to obtain a trained RNN model.
In an embodiment, as shown in fig. 12, the iterative optimization unit 503 further includes:
an input unit 801, configured to sequentially feed the input sequence into the gated recurrent unit of the RNN model by time steps;
a calculating unit 802, configured to calculate, for each time step of the input sequence, the update gate and the reset gate corresponding to the current node of the gated recurrent unit, and to generate the current hidden state by combining the current input and the previous hidden state;
and an output unit 803, configured to input the current hidden state to the next node of the gated recurrent unit, and finally output the predicted read voltage value.
The apparatus analyzes input data from the solid state disk device with the RNN model to obtain an optimal read voltage value; a voltage is then applied to the flash memory cells of the solid state disk device according to the optimal read voltage value, and the stored data are read according to the conduction state of the cells. This significantly improves the quality of data reading, enhances the adaptability of the solid state disk device, and prolongs the service life of the device.
It should be noted that the specific implementation of the foregoing apparatus and each of its units can be clearly understood by those skilled in the art with reference to the corresponding descriptions in the foregoing method embodiment; for convenience and brevity of description, it is not repeated here.
The apparatus for predicting an optimal read voltage described above may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in fig. 13.
Referring to fig. 13, fig. 13 is a schematic block diagram of a computer device according to an embodiment of the present invention. The computer device 900 is a server, and the server may be a stand-alone server or a server cluster formed by a plurality of servers.
With reference to fig. 13, the computer device 900 includes a processor 902, a memory, and a network interface 905, which are connected by a system bus 901, wherein the memory may include a non-volatile storage medium 903 and an internal memory 904.
The non-volatile storage medium 903 may store an operating system 9031 and a computer program 9032. The computer program 9032, when executed, may cause the processor 902 to perform a method of predicting an optimal read voltage.
The processor 902 is operative to provide computing and control capabilities supporting the operation of the entire computer device 900.
The internal memory 904 provides an environment for the execution of a computer program 9032 in the non-volatile storage medium 903, which computer program 9032, when executed by the processor 902, may cause the processor 902 to perform a method of predicting an optimal read voltage.
The network interface 905 is used for network communication, such as transmitting data information. Those skilled in the art will appreciate that the architecture shown in fig. 13 is merely a block diagram of the part of the architecture relevant to the present invention and does not limit the computer device 900 to which the present invention is applied; a particular computer device 900 may include more or fewer components than shown, combine certain components, or arrange the components differently.
Those skilled in the art will appreciate that the embodiment of the computer device shown in fig. 13 does not limit the specific construction of the computer device; in other embodiments, the computer device may include more or fewer components than shown, combine certain components, or arrange the components differently. For example, in some embodiments the computer device may include only a memory and a processor; in such embodiments the structure and function of the memory and the processor are consistent with the embodiment shown in fig. 13 and are not described again.
It should be appreciated that, in an embodiment of the invention, the processor 902 may be a central processing unit (Central Processing Unit, CPU); the processor 902 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or any conventional processor.
In another embodiment of the invention, a computer-readable storage medium is provided. The computer readable storage medium may be a non-volatile computer readable storage medium. The computer readable storage medium stores a computer program, wherein the computer program when executed by a processor implements a method of predicting an optimal read voltage of an embodiment of the invention.
The storage medium is a physical, non-transitory storage medium, and may be, for example, a U-disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus, device and unit described above may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the invention is defined by the appended claims.

Claims (10)

1. A method of predicting an optimal read voltage, comprising:
initializing parameters of an RNN model, wherein the parameters comprise weights and biases;
collecting sample data from a solid state disk device, and preprocessing the sample data to obtain an input sequence;
inputting the input sequence forward into the RNN model by time steps to obtain a predicted read voltage value, calculating the loss between the predicted read voltage value and a corresponding actual read voltage value with a loss function, and iteratively optimizing the parameters of the RNN model according to the loss to obtain a trained RNN model;
collecting input data from the solid state disk device, and preprocessing the input data;
performing time sequence analysis and space sequence analysis on the input data through the trained RNN model to obtain an optimal read voltage value;
and applying a voltage to flash memory cells of the solid state disk device according to the optimal read voltage value, and reading stored data according to the conduction state of the flash memory cells.
2. The method of predicting an optimal read voltage of claim 1, wherein preprocessing the sample data to obtain an input sequence comprises:
sorting the sample data in chronological order to obtain sorted sample data;
preprocessing the sorted sample data, wherein the preprocessing comprises normalization and denoising;
and dividing the preprocessed sample data to obtain an input sequence, wherein the input sequence comprises a training set, a test set, and a validation set.
3. The method of predicting optimal read voltages of claim 1, wherein the sample data comprises a lifecycle of a solid state disk device, a location of a memory chip, erasure information, and a row address.
4. The method of predicting optimal read voltage of claim 1, wherein the sample data comprises read error rate, unit wear level, temperature variation, and frequency of use of a solid state disk device.
5. The method of predicting an optimal read voltage according to claim 1, wherein iteratively optimizing parameters of the RNN model based on the loss results in a trained RNN model, comprising:
Calculating a gradient of the loss function with respect to parameters of the RNN model according to a back propagation algorithm;
Updating parameters of the RNN model according to the calculated gradient by using a gradient descent algorithm;
And reducing the loss between the predicted read voltage value and the corresponding actual read voltage value by iteratively optimizing the parameters of the RNN model, to obtain the trained RNN model.
6. The method of predicting an optimal read voltage according to claim 1, wherein said inputting the input sequence forward into the RNN model by time steps to obtain a predicted read voltage value comprises:
sequentially feeding the input sequence into a gated recurrent unit of the RNN model by time steps;
for each time step of the input sequence, calculating an update gate and a reset gate corresponding to a current node of the gated recurrent unit, and generating a current hidden state by combining the current input and a previous hidden state;
and inputting the current hidden state to the next node of the gated recurrent unit, and finally outputting the predicted read voltage value.
7. The method of predicting an optimal read voltage of claim 1, wherein the RNN model comprises a plurality of hidden layers, each configured to capture dependencies in the input sequence through recurrent connections.
8. An apparatus for predicting an optimal read voltage, comprising:
an initializing unit, configured to initialize parameters of an RNN model, where the parameters include weights and biases;
The acquisition unit is used for acquiring sample data of the solid state disk device and preprocessing the sample data to obtain an input sequence;
The iterative optimization unit is used for inputting the input sequence forward into the RNN model by time steps to obtain a predicted read voltage value, calculating the loss between the predicted read voltage value and a corresponding actual read voltage value with a loss function, and iteratively optimizing the parameters of the RNN model according to the loss to obtain a trained RNN model;
The preprocessing unit is used for collecting input data from the solid state disk device and preprocessing the input data;
the analysis unit is used for carrying out time sequence analysis and space sequence analysis on the input data through a trained RNN model to obtain an optimal read voltage value;
And the read data unit is used for applying a voltage to the flash memory cells of the solid state disk device according to the optimal read voltage value and reading the stored data according to the conduction state of the flash memory cells.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of predicting an optimal read voltage according to any one of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the method of predicting an optimal read voltage according to any one of claims 1 to 7.
CN202410149515.1A 2024-02-02 2024-02-02 Method, device, computer equipment and storage medium for predicting optimal read voltage Pending CN117976018A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410149515.1A CN117976018A (en) 2024-02-02 2024-02-02 Method, device, computer equipment and storage medium for predicting optimal read voltage


Publications (1)

Publication Number Publication Date
CN117976018A true CN117976018A (en) 2024-05-03

Family

Family ID: 90864465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410149515.1A Pending CN117976018A (en) 2024-02-02 2024-02-02 Method, device, computer equipment and storage medium for predicting optimal read voltage

Country Status (1)

Country Link
CN (1) CN117976018A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination