CN117744731A - Model training method, device, medium and equipment based on resistive random access memory - Google Patents


Info

Publication number
CN117744731A
Authority
CN
China
Prior art keywords
network weight
value
random access
access memory
initial network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311686862.XA
Other languages
Chinese (zh)
Inventor
张徽
时拓
高丽丽
崔狮雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202311686862.XA priority Critical patent/CN117744731A/en
Publication of CN117744731A publication Critical patent/CN117744731A/en
Pending legal-status Critical Current

Landscapes

  • Measurement Of Resistance Or Impedance (AREA)

Abstract

The specification discloses a model training method, device, medium and equipment based on a resistive random access memory. The method pre-adjusts each initial network weight into a preset network weight range, and converts each adjusted network weight into a conductance value of the resistive random access memory according to a preset scale parameter and the maximum value among the adjusted network weights. Sample data are converted into voltage values that control the output current of the resistive random access memory. After the memory outputs a current, each network weight in the target model is replaced by the network weight corresponding to each actual conductance value in the memory, and the target model is trained using the prediction result corresponding to the current value of the output current and the actual label corresponding to the sample data. The initial network weights are thereby mapped more reasonably to conductance values in the resistive random access memory, the influence of conductance-value write errors on model training is mitigated, and model training efficiency is improved.

Description

Model training method, device, medium and equipment based on resistive random access memory
Technical Field
The present disclosure relates to the field of neural networks, and in particular, to a method, an apparatus, a medium, and a device for model training based on a resistive random access memory.
Background
With the continuous development of the neural network field, neural-network-based models can predict target data from input data. Depending on the nature of the data a model processes, models can be divided into linear system models for linear data and nonlinear system models for nonlinear data; depending on the type of the model's output value, they can further be divided into classification models and regression models.
However, in existing models, the data used to construct the model and the data to be computed must be transferred back and forth between the model and storage, so that the data can be processed by the constructed model and the model trained on the processed data; such transfer-based model training occupies a large amount of performance resources. The resistive random access memory was developed to address this: the device can simulate the model by writing in the conductance values corresponding to the model's network weights, largely saving the transfer cost of model data. However, since the range of conductance values of a resistive random access memory is often small, accurate simulation of the model is difficult, which lowers the training efficiency of existing resistive-random-access-memory-based models. In particular, when training a regression model, if data are processed by the resistive random access memory, noise appears in the obtained prediction data due to random factors such as conductance-value write errors; training the regression model with noisy prediction data greatly reduces the model's training efficiency.
Therefore, how to effectively improve the model training efficiency is a problem to be solved.
Disclosure of Invention
The present disclosure provides a model training method based on a resistive random access memory, so as to partially solve the above-mentioned problems in the prior art.
The technical scheme adopted in the specification is as follows:
the specification provides a model training method based on a resistive random access memory, which comprises the following steps: acquiring sample data and determining each initial network weight contained in a target model;
for each initial network weight, determining a conductance value corresponding to the initial network weight in the resistive random access memory as the conductance value corresponding to the initial network weight;
writing the conductance value corresponding to each initial network weight into the resistive random access memory, and converting the sample data according to a first conversion relation to obtain a voltage value corresponding to the sample data and used for controlling the resistive random access memory;
controlling the resistive random access memory according to the voltage value to obtain a current value of the current output by the resistive random access memory under the voltage corresponding to the voltage value;
converting the current value according to a second conversion relation to obtain a prediction result corresponding to the sample data;
obtaining gradient information according to the deviation between the actual label corresponding to the sample data and the prediction result, and reading from the resistive random access memory each actual conductance value present when the current was output under the voltage of the voltage value, so as to determine the network weight corresponding to each actual conductance value;
and training the target model according to the network weight corresponding to each actual conductance value and the gradient information.
Optionally, determining, for each initial network weight, a conductance value corresponding to the initial network weight in the resistive random access memory as the conductance value corresponding to the initial network weight specifically includes:
for each initial network weight, adjusting the value corresponding to the initial network weight to fall within a preset network weight range to obtain an adjusted network weight corresponding to the initial network weight;
and determining the conductance value corresponding to each adjusted network weight as the conductance value corresponding to each initial network weight according to the maximum value in each adjusted network weight and the preset scale parameter.
Optionally, the first conversion relation is determined by a conversion scale parameter, and the conversion scale parameter is determined by the sample datum with the largest value across all dimensions of the sample data and a preset proportion parameter.
Optionally, the second conversion relation is determined by the conversion relation between each initial network weight and the conductance value corresponding to each initial network weight, together with the conversion scale parameter.
Optionally, training the target model according to the network weight corresponding to each actual conductance value and the gradient information specifically includes:
for each actual conductance value, determining the network weight corresponding to that actual conductance value;
replacing each initial network weight in the target model by the network weight corresponding to each actual conductance value;
and training the target model according to the replaced network weights and the gradient information.
Optionally, the method further comprises:
determining each trained network weight in the trained model to obtain a conductance value corresponding to each trained network weight;
writing the conductance value corresponding to each trained network weight into the resistive random access memory to obtain an updated resistive random access memory;
receiving data to be processed, and determining a voltage value corresponding to the data to be processed according to the data to be processed;
controlling the updated resistive random access memory according to the voltage value corresponding to the data to be processed to obtain the current value of the current output by the updated resistive random access memory under the voltage of the voltage value corresponding to the data to be processed, wherein the current value is used as the current value corresponding to the data to be processed;
And obtaining a prediction result corresponding to the data to be processed according to the current value corresponding to the data to be processed.
The specification provides a model training device based on a resistive random access memory, including:
the acquisition module is used for acquiring sample data and determining each initial network weight contained in the target model;
the first determining module is used for determining, for each initial network weight, a conductance value corresponding to the initial network weight in the resistive random access memory as the conductance value corresponding to the initial network weight;
the writing module is used for writing the conductance value corresponding to each initial network weight into the resistive random access memory, and converting the sample data according to a first conversion relation to obtain a voltage value corresponding to the sample data and used for controlling the resistive random access memory;
the control module is used for controlling the resistive random access memory according to the voltage value so as to obtain a current value of the current output by the resistive random access memory under the voltage corresponding to the voltage value;
the conversion module is used for converting the current value according to a second conversion relation to obtain a prediction result corresponding to the sample data;
the reading module is used for obtaining gradient information according to the deviation between the actual label corresponding to the sample data and the prediction result, and reading from the resistive random access memory each actual conductance value present when the current was output under the voltage of the voltage value, so as to determine the network weight corresponding to each actual conductance value;
and the training module is used for training the target model according to the network weight corresponding to each actual conductance value and the gradient information.
Optionally, the first determining module is specifically configured to,
for each initial network weight, adjusting the value corresponding to the initial network weight to fall within a preset network weight range to obtain an adjusted network weight corresponding to the initial network weight; and determining the conductance value corresponding to each adjusted network weight as the conductance value corresponding to each initial network weight according to the maximum value in each adjusted network weight and the preset scale parameter.
Optionally, the first conversion relation is determined by a conversion scale parameter, and the conversion scale parameter is determined by the sample datum with the largest value across all dimensions of the sample data and a preset proportion parameter.
Optionally, the second conversion relation is determined by the conversion relation between each initial network weight and the conductance value corresponding to each initial network weight, together with the conversion scale parameter.
Optionally, the training module is specifically configured to,
for each actual conductance value, determining the network weight corresponding to that actual conductance value; replacing each initial network weight in the target model by the network weight corresponding to each actual conductance value; and training the target model according to the replaced network weights and the gradient information.
Optionally, the apparatus further comprises:
the second determining module is used for determining each trained network weight in the trained model to obtain a conductance value corresponding to each trained network weight; writing the conductance value corresponding to each trained network weight into the resistive random access memory to obtain an updated resistive random access memory; receiving data to be processed, and determining a voltage value corresponding to the data to be processed; controlling the updated resistive random access memory according to that voltage value to obtain the current value of the current output by the updated resistive random access memory under the voltage of the voltage value corresponding to the data to be processed, as the current value corresponding to the data to be processed; and obtaining a prediction result corresponding to the data to be processed according to the current value corresponding to the data to be processed.
The present specification provides a computer readable storage medium storing a computer program which when executed by a processor implements the resistive random access memory based model training method described above.
The present specification provides an electronic device comprising a processor and a computer program stored on a memory and executable on the processor, the processor implementing the resistive random access memory based model training method described above when executing the program.
At least one of the technical schemes adopted in this specification can achieve the following beneficial effects:
in the model training method based on the resistive random access memory provided by this specification, sample data are acquired and the initial network weights contained in a target model are determined. The conductance value in the resistive random access memory corresponding to each initial network weight is then determined through preset scale parameters, and the determined conductance values are written into the memory. The sample data are further converted into voltage values for controlling the memory so as to control its output current, and a prediction result corresponding to the sample data is obtained from the current value of the output current. Gradient information is obtained according to the deviation between the actual label corresponding to the sample data and that prediction result. Each initial network weight in the target model is then replaced by the network weight corresponding to each actual conductance value in the memory, so that the target model with the initial network weights replaced is trained according to the obtained gradient information.
According to the method, the initial network weights are pre-adjusted, and the adjusted network weights are converted into conductance values of the resistive random access memory according to the preset scale parameter and the maximum value among the adjusted network weights, so the initial network weights are mapped more reasonably to conductance values in the memory. After the memory outputs a current, the network weights corresponding to the conductance values in the memory replace the network weights in the target model, and the target model with replaced weights is trained through the current output by the memory. This mitigates the influence of the write errors that occur when conductance values are written, and improves the training efficiency of the target model in the resistive-random-access-memory-based model training process.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification, illustrate and explain the exemplary embodiments of the present specification and their description, are not intended to limit the specification unduly. In the drawings:
FIG. 1 is a schematic flow chart of a model training method based on a resistive random access memory provided in the present specification;
FIG. 2 is a flow chart of simulation iteration of a model trained based on a resistive random access memory provided by the present specification;
FIG. 3 is a schematic structural diagram of a model training device based on a resistive random access memory provided in the present specification;
fig. 4 is a schematic structural view of an electronic device corresponding to fig. 1 provided in the present specification.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present specification more apparent, the technical solutions of the present specification will be clearly and completely described below with reference to specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
The execution body of the model training method based on the resistive random access memory provided in the present specification may be a terminal device such as a notebook computer, a desktop computer, or the like, or may be a client installed in the terminal device, or may be a dedicated device for training a model, or may be a server.
In the existing neural network field, training a neural-network-based model often requires a large amount of sample data. In existing model training methods, a computing device must transmit, at high frequency, the large amounts of data used in model training, such as the data for constructing the model, the sample data, and the model's output data, which directly causes the existing model training process to consume a large amount of computing resources.
Based on the above problems, resistive random access memories have emerged, which can map the network weights within a model to conductance values within the device. The sample data are converted into voltages for controlling the resistive random access memory, and the current value of the memory's output current can then be converted into the prediction result of the equivalent model. This significantly reduces the computing resources consumed in transferring large amounts of sample data during model training. However, the resistive random access memory cannot yet be used as a common model training tool. The reason is that the accuracy of the conductance values written into the memory is low, which makes it difficult for the device to accurately map each network weight within the model. In addition, when the conductance value corresponding to each initial network weight in the target model is written into the memory, write errors occur, including conductance write deviation and conductance write failure, i.e. the written conductance value differs from the target conductance value, which also affects the quality of the final prediction result.
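The device operation described above — network weights stored as conductances, inputs applied as voltages, outputs read as currents — is a crossbar matrix-vector multiplication following Ohm's and Kirchhoff's laws. The following is a minimal simulation sketch; the conductance values, voltage values, and noise level are illustrative assumptions, not values from the specification:

```python
import numpy as np

rng = np.random.default_rng(42)

# A 3x2 crossbar: G[i, j] is the conductance at row i, column j
# (illustrative values in arbitrary units).
G_target = np.array([[1.0, 2.0],
                     [0.5, 1.5],
                     [2.0, 0.5]])

# Write error: the programmed conductances deviate from the targets.
G_actual = G_target + rng.normal(0.0, 0.05, G_target.shape)

V = np.array([0.1, 0.2, 0.3])  # row voltages encoding one input sample

# Kirchhoff's current law: each column current sums the per-cell
# Ohm's-law currents, I_j = sum_i G[i, j] * V[i], i.e. I = G^T V.
I_ideal = G_target.T @ V
I_noisy = G_actual.T @ V
```

The gap between `I_ideal` and `I_noisy` is exactly the write-error effect the method below compensates for by reading back the actual conductances.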
As described above, the existing resistive-random-access-memory-based model training method has significant defects: affected by the device's write errors during training, the model cannot obtain high-quality prediction results, which reduces the model's training efficiency.
Based on the above, this specification provides a model training method based on a resistive random access memory. A dedicated device maps each initial network weight in the target model to a conductance value in the memory, and processes sample data through the memory via the conversion relation between sample data and the voltage values used to control the memory, and the conversion relation between the current value of the memory's output current and the prediction data required for training, thereby obtaining a prediction result for the sample data and significantly reducing the computing resources required for data transmission. After gradient information is obtained from the prediction result, and in view of the randomness caused by conductance-value write errors, each initial network weight in the target model is replaced by the network weight corresponding to each actual conductance value in the memory before the target model is trained according to the gradient information; the replaced target model is then updated with the obtained gradient information. This addresses the randomness caused by conductance-value write errors, enhances the prediction accuracy of the target model trained on the resistive random access memory, and improves the target model's training efficiency.
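The training loop just described can be sketched in simulation. To keep the sketch short it works directly in weight space, modelling the write error as additive noise on the weights and skipping the weight-to-conductance and data-to-voltage conversions (covered separately below); the noise level, learning rate, and model size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def write_with_error(w_target, noise=0.02):
    # Programming a device value is imprecise: the stored value
    # deviates randomly from the target (write error).
    return w_target + rng.normal(0.0, noise, w_target.shape)

# Target model: a linear layer y = x @ W, trained with noisy writes.
W = rng.normal(0.0, 0.5, size=(3, 1))        # initial network weights
X = rng.normal(size=(32, 3))                 # sample data
y = X @ np.array([[0.3], [-0.7], [0.5]])     # actual labels

lr = 0.1
for step in range(200):
    # 1. Write the current weights to the (simulated) device.
    W_actual = write_with_error(W)
    # 2. The device's output current corresponds to a forward pass
    #    computed with the *actual* stored weights.
    y_pred = X @ W_actual
    # 3. Gradient of the MSE loss from the deviation between the
    #    actual labels and the prediction result.
    grad = X.T @ (y_pred - y) / len(X)
    # 4. Key step of the method: replace the model weights with the
    #    read-back actual weights before applying the gradient update.
    W = W_actual - lr * grad

mse = float(np.mean((X @ W - y) ** 2))
```

The replacement in step 4 is the point of the method: the gradient is applied to the weights the device actually holds, not to the intended weights, so the update is not corrupted by the write error of the previous step.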
The following will explain the scheme provided in the present specification in detail.
Fig. 1 is a schematic flow chart of a model training method based on a resistive random access memory provided in the present specification, including:
s101: sample data is acquired and initial network weights contained in the target model are determined.
In this specification, the sample data acquired by the dedicated device are typically nonlinear data. Unlike linear data, nonlinear data cannot describe the correspondence within a group of related data through a simple linear relationship; training the target model with nonlinear sample data gives the trained model the ability to predict, from input sample data, the data corresponding to that sample data.
Next, a model training method using nonlinear data as sample data is introduced. In this example, during the heating of air by an electrothermal device, the input voltage u and the air temperature y during the heating period are sampled continuously at fixed time intervals, yielding 1000 sets of corresponding data as initial data. The 604 sets from the stable middle portion are then selected and combined into sample data, where one sample can be represented as follows:
x_n = [y_(n-c), u_(n-b), u_(n-a)], used to predict y_n
where N' = 604 is the number of samples and a = 4, b = 3, c = 1. That is, this sample contains the air temperature acquired at the sampling point immediately before the nth sampling point, the input voltage acquired at the third sampling point before the nth sampling point, and the input voltage acquired at the fourth sampling point before the nth sampling point, and it is used to predict the air temperature acquired at the nth sampling point.
It should be noted that this specification does not limit the specific form of the sample data, which may be determined according to practical requirements. For example, in the above formula, an additional datum such as y_(n-5) could be added to the sample data, or the values of a, b or c could be changed; a model with prediction capability can still be trained from such sample data by the model training method described in this application.
Alternatively, the dedicated device may select the first 300 of the 600 sets as sample data for training the target model; the remaining 300 sets can then be used as test data to test the trained target model.
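The sample construction described above can be sketched as follows; the synthetic `u` and `y` series stand in for the real voltage/temperature recordings, which the specification does not provide:

```python
import numpy as np

def build_samples(u, y, a=4, b=3, c=1):
    """Build samples [y[n-c], u[n-b], u[n-a]] with target y[n].

    Mirrors the example: each sample pairs the most recent temperature
    with two earlier input voltages to predict the current temperature.
    """
    start = max(a, b, c)
    X, t = [], []
    for n in range(start, len(y)):
        X.append([y[n - c], u[n - b], u[n - a]])
        t.append(y[n])
    return np.array(X), np.array(t)

# Synthetic stand-in for the voltage/temperature recordings.
u = np.linspace(0.0, 5.0, 610)
y = 20.0 + 2.0 * np.convolve(u, np.ones(5) / 5.0, mode="same")

X, t = build_samples(u, y)
X_train, t_train = X[:300], t[:300]        # first 300 sets for training
X_test, t_test = X[300:600], t[300:600]    # later sets for testing
```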
After the dedicated device acquires the sample data, it needs to determine each initial network weight contained in the target model to be trained. It should be noted that the initial network weights may be determined before the sample data are acquired, or the sample data and the initial network weights may be acquired simultaneously.
S102: For each initial network weight, determining a conductance value corresponding to the initial network weight in the resistive random access memory as the conductance value corresponding to the initial network weight.
After the initial network weights contained in the target model are determined, directly converting each initial network weight into a conductance value for controlling the resistive random access memory is problematic. Because the memory's precision is poor, the gaps between different writable conductance values are large, the writable conductance range is small, and the differences between some initial network weights are small, many different initial network weights would map to the same conductance value, and the conductance values corresponding to most network weights would sit at the maximum or minimum of the memory's conductance range. If the resistive random access memory directly maps the target model in this state, the training efficiency of the target model that uses the memory as its data-processing device is directly affected.
Based on the above problems, this specification proposes adjusting each initial network weight and then determining the conductance value corresponding to each adjusted network weight according to the maximum value among the adjusted network weights, so as to avoid, as far as possible, an uneven distribution of the corresponding conductance values.
Specifically, for each initial network weight, the corresponding value is clipped to fall within a preset network weight range, yielding the adjusted network weight corresponding to that initial network weight. Taking a preset network weight range of [-1, 1] as an example: if the value of an initial network weight lies within the range, it is unchanged; if it exceeds the range, it is adjusted to the nearest value within the preset range, i.e. an initial network weight of -1.5 is adjusted to -1.
After the adjusted network weights corresponding to all initial network weights are obtained by this adjustment, the conductance value corresponding to each adjusted network weight is determined, according to the maximum value among the adjusted network weights and a preset scale parameter, as the conductance value corresponding to each initial network weight. The preset scale parameter can be determined from the preset network weight range and the conductance range of the resistive random access memory. Its purpose, like that of using the maximum adjusted network weight as a calculation element, is to distribute the computed conductance values more evenly across the writable conductance range of the resistive random access memory.
The formula for calculating the conductance value corresponding to each adjusted network weight is as follows:
G = round(60 · W / W_max, 1) + G_standard
Here the maximum value W_max among the adjusted network weights must be determined first; since network weights can be negative, W_max is taken as the largest of the values of the adjusted network weights. The factor 60 is the preset scale parameter used for magnification, and the round operation keeps the number of decimal places given in the brackets, here 1, i.e. one decimal place is retained. After the calculation with W_max and the preset scale parameter, the results are usually centred around 0; to even out the conductance values, the centre point must be shifted to near the centre of the writable conductance range of the resistive random access memory. That is, a value G_standard, preset according to the central value of the writable conductance range, is added to the calculated result, and the sum is determined as the conductance value corresponding to each adjusted network weight, i.e. the conductance value corresponding to each initial network weight. G_standard can also be set according to specific requirements, for example according to the distribution density of the writable conductance values of the resistive random access memory.
It should be noted that the above method of calculating the conductance value corresponding to each initial network weight is not limited to this formula. The main idea remains to constrain each initial network weight through the writable conductance range of the resistive random access memory and the preset scale parameter, so that after the corresponding conductance values are calculated, the initial network weights are distributed more evenly across the writable conductance range of the memory.
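The clipping and mapping of step S102 can be sketched as below. The formula follows the reconstruction above; the value of `G_STANDARD` is an illustrative assumption (the specification does not fix it numerically), and taking `W_max` as the largest magnitude is one reading of the text:

```python
import numpy as np

SCALE = 60.0        # preset scale parameter from the description
G_STANDARD = 100.0  # assumed centre of the writable conductance range

def weights_to_conductances(weights, w_lo=-1.0, w_hi=1.0):
    """Clip weights to the preset range, then spread them over the
    writable conductance window via G = round(SCALE*W/W_max, 1) + G_STANDARD."""
    w = np.clip(np.asarray(weights, dtype=float), w_lo, w_hi)
    w_max = np.max(np.abs(w))  # largest magnitude after clipping (assumed)
    return np.round(SCALE * w / w_max, 1) + G_STANDARD

g = weights_to_conductances([-1.5, -0.2, 0.4, 0.9])
# -1.5 is clipped to -1.0, which becomes W_max, so the mapped values
# span [G_STANDARD - 60, G_STANDARD + 60] centred on G_STANDARD.
```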
S103: Writing the conductance value corresponding to each initial network weight into the resistive random access memory, and converting the sample data according to a first conversion relation to obtain a voltage value corresponding to the sample data and used for controlling the resistive random access memory.
After determining the conductance value corresponding to each initial network weight, the dedicated device writes these conductance values into the resistive random access memory, so that, under the control of the voltage values converted from the sample data, the memory can output a current to be converted into the prediction result corresponding to the sample data. Once each conductance value has been written into the corresponding cell of the memory, the sample data must be converted into the corresponding voltage values for the memory.
Specifically, the conversion relation between the sample data and the voltage values needs to be determined through a conversion scale parameter, which in turn is determined from the sample datum with the largest magnitude among all sample data and a preset proportional parameter.
The formula for determining the conversion scale parameter and the formula for determining, according to the conversion scale parameter, the voltage value controlling the resistive random access memory that corresponds to the sample data are as follows:
Here, scale is the conversion scale parameter. To compute it, the sample datum with the largest magnitude among all input sample data needs to be determined first. In the present application, with 300 sets of sample data and three data per set, i.e., 900 data in total, the sample datum with the largest magnitude is the maximum over these 900 values. The 0.3 in the formula is a preset proportional parameter, determined according to the maximum input voltage of the resistive random access memory and other related parameters; for example, the proportional parameter may be determined from the maximum input voltage of the resistive random access memory and the range of output current values.
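A plausible reading of this first conversion is sketched below, under the assumption that the conversion scale parameter equals the preset proportional parameter 0.3 divided by the largest-magnitude datum, so that every converted voltage stays within plus or minus 0.3. The exact formula appears only as an image in the original, so this reconstruction is not authoritative.

```python
import numpy as np

# Assumed form of the first conversion relation: scale = 0.3 / max(|x|),
# voltage = x * scale. This is a sketch, not the patent's exact formula.
PROPORTION = 0.3  # preset proportional parameter

def first_conversion(samples):
    x = np.asarray(samples, dtype=float)
    scale = PROPORTION / np.max(np.abs(x))  # conversion scale parameter
    return x * scale                        # voltages controlling the memory

# tiny stand-in for the 300 groups x 3 data mentioned above
v = first_conversion([[1.0, -2.0, 4.0], [0.5, 3.0, -4.0]])
```

Under this assumption, the largest-magnitude datum is mapped exactly to the 0.3 V bound and every other value scales proportionally.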
And after the conversion scale parameters are determined according to the sample data with the largest value in dimension and the preset proportion parameters in the sample data, the first conversion relation can be determined according to the conversion scale parameters. For the resistive random access memory, the control of the resistive random access memory can be realized at a plurality of corresponding interfaces through a plurality of voltage values at the same time.
Continuing with the example above, for the sample data mentioned in step S101, the sample data contains:
If the sample data is converted into voltage values for controlling the resistive random access memory, three voltage values are generated, where y_{n-c}, u_{n-b} and u_{n-a} each correspond to one voltage value. That is, for one sample datum, the three corresponding voltage values, i.e., the three input voltages, can be determined through the conversion scale parameter, and the output current of the resistive random access memory can then be controlled through these three voltage values.
Note that the physical quantities corresponding to the several data contained in one sample datum may differ. For example, in the above example, the quantity denoted y corresponds to temperature and the quantity denoted u corresponds to voltage. In this solution, when the two physical quantities are converted into voltage values for controlling the resistive random access memory, the same preset proportional parameter is used for both, but a separate proportional parameter may instead be set for each physical quantity, so as to improve the prediction accuracy of the trained target model. Of course, the same proportional parameter may also be used for all data in a sample datum; the present scheme does not specifically limit this.
S104: and controlling the resistive random access memory according to the voltage value to obtain a current value of the current output by the resistive random access memory under the voltage corresponding to the voltage value.
S105: and converting the current value according to a second conversion relation to obtain a prediction result corresponding to the sample data. After determining the input voltage values corresponding to each sample datum, the special equipment can control the resistive random access memory through the corresponding number of interfaces according to those voltage values, obtain the current output by the resistive random access memory, and convert the current value into the prediction data corresponding to the sample datum for training the target model.
Continuing the above example, of the three input voltage values corresponding to each set of sample data determined in step S103, each is applied simultaneously through one interface of the resistive random access memory. That is, the resistive random access memory is controlled by three voltage values at the same time and thereby outputs current.
Specifically, before converting the current value, the special device needs to determine a second conversion relation required by converting the current value through the conversion relation between each initial network weight and the conductance value corresponding to each initial network weight and the conversion scale parameter. And then realizing conversion between the current value and the corresponding predicted data according to the second conversion relation.
The conversion relation between the initial network weights and the conductance values corresponding to the initial network weights can be obtained:
W_max in this formula has been described above, and W_scale is the multiplying factor in the conversion between each initial network weight and its corresponding conductance value.
The conversion scale parameter is the same conversion coefficient used when determining the first conversion relation, given by the following formula, and is not described in detail again here:
Through the multiplying factor W_scale in the conversion relation between each initial network weight and its corresponding conductance value, and the conversion scale parameter scale, the second conversion relation required for converting the current value can be determined:
Here, I is the current value of the current output by the resistive random access memory. To improve the efficiency of model training, a value is often added to the prediction data during training to increase the flexibility of the function the trained target model uses when determining prediction results from sample data; this improves the fitting capacity of the neurons in the target model and thereby the training efficiency. Therefore, in the present application, after the above result is computed, a bias is likewise added as an optimization term for model training. The resulting final value, Output, is the prediction result corresponding to the sample data, acquired by the special equipment based on the resistive random access memory, and can be used for model training.
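The second conversion can be sketched as below. The form (divide the current by the product of W_scale and the conversion scale parameter, then add a bias) follows the surrounding description, but the numeric values and the exact formula are illustrative assumptions.

```python
# Assumed form of the second conversion relation: Output = I / (W_scale * scale) + bias.
# All numeric values below are illustrative assumptions, not values from the patent.
W_SCALE = 120.0   # multiplying factor between weights and conductances
SCALE = 0.075     # conversion scale parameter from the first conversion
BIAS = 0.1        # optimization bias added during training

def second_conversion(current):
    return current / (W_SCALE * SCALE) + BIAS

output = second_conversion(0.9)  # current value output by the memory
```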
S106: and obtaining gradient information according to the deviation between the actual label corresponding to the sample data and the prediction result, and reading from the resistance change memory each actual conductance value present when the current is output under the voltage of the voltage value, so as to determine the network weight corresponding to each actual conductance value.
S107: and training the target model according to the network weight corresponding to each actual conductance value and the gradient information.
After the special equipment obtains the prediction results corresponding to the sample data, gradient information can be obtained according to the deviation between the actual label corresponding to the sample data and the prediction results, and the gradient information can be used for training the target model.
However, a writing error may occur when writing conductance values into the resistive random access memory. Writing errors include write offset and write failure. Write offset means there is a deviation between the target conductance value and the actually written conductance value: for example, the determined conductance value is 1, but the value written into the resistive random access memory is 1.1. Write failure means the target conductance value and the actually written value differ too much, or the conductance value does not change at all in response to the write operation: for example, the determined conductance value is 10, but the value in the resistive random access memory is 0.1 or less. These writing errors can cause a discrepancy between the actual conductance values inside the resistive random access memory when it outputs current and the expected conductance values. If the special equipment trained the target model directly with the gradient information, this would directly lower both the efficiency and the accuracy of model training.
Therefore, in order to avoid the problems caused by writing errors, before training the target model with the determined gradient information, it is necessary to read each actual conductance value in the resistive random access memory, and then convert each actual conductance value into a corresponding network weight through the multiplying factor in the conversion between each initial network weight and its corresponding conductance value, and the G_standard preset according to the center value of the writable conductance range of the resistive random access memory.
Each initial network weight in the target model is then replaced with the network weight corresponding to the determined actual conductance value, so that the determined gradient information matches the network weights in the target model. The target model is then trained according to the replaced network weights and the gradient information.
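The read-back-and-replace step can be sketched as follows: actual conductances read from the memory are inverted through the assumed forward mapping (subtract G_standard, divide by W_scale) to recover network weights, which replace the initial weights before the gradient update. All names and numeric values mirror the hypothetical forward mapping above and are assumptions, not the patent's exact formulas.

```python
import numpy as np

# Hypothetical inverse of the assumed weight-to-conductance mapping.
G_STANDARD = 100.0
W_SCALE = 120.0

def conductances_to_weights(actual_g):
    g = np.asarray(actual_g, dtype=float)
    return (g - G_STANDARD) / W_SCALE

def train_step(actual_g, grads, lr=0.01):
    w = conductances_to_weights(actual_g)   # weights matching the hardware state
    return w - lr * np.asarray(grads)       # apply the gradient information

w_new = train_step([40.0, 160.0], [1.0, -1.0])
```

Because the update starts from the weights actually present in the hardware rather than the intended ones, the gradient step is consistent with the current that was really measured.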
After training the target model, each trained network weight in the trained model can be determined, and then the conductance value corresponding to each trained network weight is obtained. And writing the conductance value corresponding to each trained network weight into the resistance random access memory to obtain an updated resistance random access memory, and rapidly processing data through the updated resistance random access memory to save calculation resources.
Therefore, once the trained target model is deployed in the resistive random access memory, if a data prediction task is subsequently received, the received data can be converted into voltage values, rapid data processing can be performed by the resistive random access memory, and the current value of the current it outputs can be converted to obtain the prediction result of the data prediction task. Compared with actual use of an existing model, this greatly reduces the number of transfers of model data and input/output data between storage and compute, significantly reduces computing-resource consumption, and effectively improves the efficiency of the model's data processing.
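The deployment-time prediction path can be illustrated end to end as below: received data are converted to voltages, the memory produces currents, and the currents are converted back into predictions. `rram_read_current` is a software stand-in for the hardware call and is purely hypothetical, as are all parameter values.

```python
# Minimal sketch of the deployment-time prediction path described above.
def rram_read_current(voltages, conductance_columns):
    # ideal crossbar: each output current is the dot product of the input
    # voltages with one column of conductances (Ohm's-law accumulation)
    return [sum(v * g for v, g in zip(voltages, col)) for col in conductance_columns]

def predict(data, conductance_columns, scale, w_scale, bias):
    voltages = [x * scale for x in data]                       # first conversion
    currents = rram_read_current(voltages, conductance_columns)
    return [i / (w_scale * scale) + bias for i in currents]    # second conversion

pred = predict([1.0, 2.0], [[0.5, 1.0]], scale=0.1, w_scale=10.0, bias=0.0)
```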
It should be noted that, after the target model is trained by the method provided in the present specification, the training efficiency of the model may be further improved by iterating through the simulated neural network. In addition, the special equipment can learn the deviation caused by some systematic errors and randomness including the conductance writing errors in the current output process of the resistive random access memory through simulation neural network iteration, enrich the relation between sample data learned by the model and corresponding labels, and improve the prediction accuracy of the target model trained based on the resistive random access memory.
The flow of the simulation iteration described above will be described below with reference to the corresponding flow diagrams.
FIG. 2 is a flow chart of simulation iteration of a model trained based on a resistive random access memory provided in the present specification:
the special model trains the target model once based on the resistance random access memory through the sample data and the initial network weight of the target model, and carries out simulation iteration on the trained model. In the simulation iteration process, for each iteration training, the special equipment inputs sample data into the target model, obtains a prediction result corresponding to the sample data output by the target model, and trains the target model by taking a deviation between the prediction result corresponding to the sample data output by the minimized target model and an actual label corresponding to the sample data as an optimization target.
It should be noted that the model may be trained comprehensively using the two training modes together: for example, 200 simulation training steps are performed, followed by 1 training step based on the resistive random access memory. Treating this as one cycle and repeating it 5 times, each cycle performs 200 + 1 training steps, for 1005 training steps in total. In practical applications, multiple rounds of training of the target model are required. In each round, the network weights contained in the target model at the start of the round are called the initial network weights; these are then adjusted, and the adjusted network weights determine the conductance values to be written into the resistive random access memory for that round. The method of adjusting the initial network weights is described in step S102. The present specification does not specifically limit this.
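The hybrid schedule in this example can be sketched as follows; the step bodies are placeholders, and only the counting structure (5 cycles of 200 simulated steps plus 1 memory-based step, 1005 steps in total) comes from the description.

```python
# Sketch of the hybrid training schedule: 200 simulated-network steps
# followed by 1 resistive-memory-based step, cycled 5 times.
def hybrid_training(cycles=5, sim_steps=200, rram_steps=1):
    log = []
    for _ in range(cycles):
        log.extend(["sim"] * sim_steps)    # simulated training iterations
        log.extend(["rram"] * rram_steps)  # memory-based training iteration
    return log

schedule = hybrid_training()  # 5 * (200 + 1) = 1005 steps
```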
By this method, the special equipment can perform a large amount of data computation through the resistive random access memory during training, significantly reducing the computing-resource consumption of data processing. Before the model is trained with the gradient information, the initial weights of the target model are replaced by the network weights corresponding to the actual conductance values in the resistance change memory. Besides avoiding the drop in training efficiency caused by writing errors, the randomness introduced by those errors enriches the relation the model learns between sample data and the corresponding labels, improving the prediction accuracy of the target model and effectively improving model training efficiency.
Based on the same idea as the one or more resistive-random-access-memory-based model training methods above, the present specification further provides a corresponding apparatus, storage medium, and electronic device.
Fig. 3 is a schematic structural diagram of a model training device based on a resistive random access memory according to an embodiment of the present disclosure, where the device includes:
an obtaining module 301, configured to obtain sample data and determine each initial network weight included in the target model;
a first determining module 302, configured to determine, for each initial network weight, a conductance value corresponding to the initial network weight in the resistive random access memory, as a conductance value corresponding to the initial network weight;
The writing module 303 is configured to write a conductance value corresponding to each initial network weight into the resistive random access memory, and convert the sample data according to a first conversion relationship, so as to obtain a voltage value corresponding to the sample data and used for controlling the resistive random access memory;
the control module 304 is configured to control the resistive random access memory according to the voltage value, so as to obtain a current value of a current output by the resistive random access memory under a voltage corresponding to the voltage value;
the conversion module 305 is configured to convert the current value according to a second conversion relationship, so as to obtain a prediction result corresponding to the sample data;
the reading module 306 is configured to obtain gradient information according to a deviation between an actual tag corresponding to the sample data and the prediction result, and read each actual conductance value when the current is output according to the voltage of the voltage value from the resistive random access memory, so as to determine a network weight corresponding to each actual conductance value;
and the training module 307 is configured to train the target model according to the network weights corresponding to the actual conductance values and the gradient information.
Optionally, the first determining module 302 is specifically configured to,
for each initial network weight, adjusting the value corresponding to the initial network weight to fall within a preset network weight range to obtain an adjusted network weight corresponding to the initial network weight; and determining the conductance value corresponding to each adjusted network weight as the conductance value corresponding to each initial network weight according to the maximum value in each adjusted network weight and the preset scale parameter.
Optionally, the first transformation relation is determined by a first transformation scale parameter, and the first transformation scale parameter is determined by the sample data with the largest value in dimension and a preset proportion parameter in each sample data.
Optionally, the second transformation relationship is determined by a transformation relationship between each initial network weight and a conductance value corresponding to each initial network weight, and a transformation scale parameter.
Optionally, the training module 307 is specifically configured to,
for each actual conductance value, determining a network weight corresponding to the actual conductance value; replacing each initial network weight in the target model by the network weight corresponding to each actual conductance value; and training the target model according to the replaced network weights and the gradient information.
Optionally, the apparatus further comprises:
a second determining module 308, configured to determine each trained network weight in the trained model, and obtain a conductance value corresponding to each trained network weight; writing the conductance value corresponding to each trained network weight into the resistance change memory to obtain an updated resistance change memory; receiving data to be processed, and determining a voltage value corresponding to the data to be processed according to the data to be processed; controlling the updated resistive random access memory according to the voltage value corresponding to the data to be processed to obtain the current value of the current output by the updated resistive random access memory under the voltage of the voltage value corresponding to the data to be processed, wherein the current value is used as the current value corresponding to the data to be processed; and obtaining a prediction result corresponding to the data to be processed according to the current value corresponding to the data to be processed.
The present specification also provides a computer readable storage medium storing a computer program which when executed by a processor is operable to perform the resistive random access memory based model training method provided in fig. 1 above.
Based on the model training method based on the resistive random access memory shown in fig. 1, the embodiment of the present disclosure further provides a schematic structural diagram of the electronic device shown in fig. 4. At the hardware level, as in fig. 4, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile storage, although it may include hardware required for other services. The processor reads the corresponding computer program from the nonvolatile memory into the memory and then runs the computer program to realize the model training method based on the resistive random access memory, which is described in the above figure 1.
Of course, other implementations, such as logic devices or combinations of hardware and software, are not excluded from the present description, that is, the execution subject of the following processing flows is not limited to each logic unit, but may be hardware or logic devices.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). However, with the development of technology, many improvements to method flows today can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without requiring a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code before compiling must also be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many kinds, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner, for example, the controller may take the form of, for example, a microprocessor or processor and a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, application specific integrated circuits (Application Specific Integrated Circuit, ASIC), programmable logic controllers, and embedded microcontrollers, examples of which include, but are not limited to, the following microcontrollers: ARC 625D, atmel AT91SAM, microchip PIC18F26K20, and Silicone Labs C8051F320, the memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller in a pure computer readable program code, it is well possible to implement the same functionality by logically programming the method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc. Such a controller may thus be regarded as a kind of hardware component, and means for performing various functions included therein may also be regarded as structures within the hardware component. Or even means for achieving the various functions may be regarded as either software modules implementing the methods or structures within hardware components.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both non-transitory and non-transitory, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing is merely exemplary of the present disclosure and is not intended to limit the disclosure. Various modifications and alterations to this specification will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present description.

Claims (10)

1. A resistive random access memory-based model training method, comprising:
acquiring sample data and determining each initial network weight contained in a target model;
for each initial network weight, determining the conductance value in the resistive random access memory that corresponds to the initial network weight;
writing the conductance value corresponding to each initial network weight into the resistive random access memory, and converting the sample data according to a first conversion relation to obtain a voltage value corresponding to the sample data and used for controlling the resistive random access memory;
controlling the resistive random access memory according to the voltage value to obtain a current value of the current output by the resistive random access memory under the voltage corresponding to the voltage value;
converting the current value according to a second conversion relation to obtain a prediction result corresponding to the sample data;
obtaining gradient information according to the deviation between the actual label corresponding to the sample data and the prediction result, and reading, from the resistive random access memory, each actual conductance value present when the current is output under the voltage of the voltage value, so as to determine the network weight corresponding to each actual conductance value;
and training the target model according to the network weight corresponding to each actual conductance value and the gradient information.
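As a rough illustration of the flow in claim 1, the following Python sketch simulates one training step on a toy linear layer. The device constants (`G_SCALE`, `V_MAX`), the write-error values, and the mean-squared-error loss are all assumptions for illustration, not taken from the patent.

```python
import numpy as np

# Assumed device parameters (not from the patent): conductance scale
# parameter and maximum safe read voltage of the resistive array.
G_SCALE, V_MAX = 100e-6, 0.2

W = np.array([[0.5, -1.0], [2.0, 0.25]])   # initial network weights
x = np.array([1.0, -0.5])                  # one sample
y_true = np.array([1.0, 1.0])              # its actual label

# Write the weights as conductances; the device adds a write error.
w_scale = np.max(np.abs(W))
G_target = W / w_scale * G_SCALE
write_error = np.array([[1e-6, -1e-6], [2e-6, 0.0]])  # hypothetical errors
G_actual = G_target + write_error

# First conversion relation: sample data -> control voltages.
x_scale = np.max(np.abs(x))
V = x / x_scale * V_MAX

# The array's output currents follow Ohm's law with Kirchhoff summation.
I = G_actual @ V

# Second conversion relation: output currents -> prediction.
y_pred = I / (G_SCALE * V_MAX) * x_scale * w_scale

# Gradient from the deviation between label and prediction (MSE loss),
# applied to weights read back from the ACTUAL written conductances.
grad = np.outer(y_pred - y_true, x)
W_actual = G_actual / G_SCALE * w_scale
W_new = W_actual - 0.1 * grad
```

Because the gradient step is taken on the weights read back from the device rather than on the ideal initial weights, the write error is absorbed into training instead of accumulating as a systematic bias. A real array would represent signed weights with a differential conductance pair; the single signed conductance here is a simplification.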
2. The method of claim 1, wherein, for each initial network weight, determining the conductance value in the resistive random access memory that corresponds to the initial network weight specifically comprises:
for each initial network weight, adjusting the value corresponding to the initial network weight to fall within a preset network weight range to obtain an adjusted network weight corresponding to the initial network weight;
and determining, according to the maximum value among the adjusted network weights and a preset scale parameter, the conductance value corresponding to each adjusted network weight as the conductance value corresponding to each initial network weight.
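A minimal sketch of the mapping in claim 2, assuming a symmetric preset weight range [-1, 1] and a scale parameter of 100 µS (both values are hypothetical):

```python
import numpy as np

def weights_to_conductances(weights, w_min=-1.0, w_max=1.0, g_scale=100e-6):
    """Clip each initial weight into the preset range [w_min, w_max],
    then scale by the maximum adjusted weight and the preset scale
    parameter so the largest weight maps to the largest conductance."""
    adjusted = np.clip(weights, w_min, w_max)   # preset network weight range
    peak = np.max(np.abs(adjusted))             # maximum adjusted weight
    if peak == 0.0:
        return np.zeros_like(adjusted)
    return adjusted / peak * g_scale            # conductance per weight

# The out-of-range weight -2.0 is first adjusted to -1.0, then mapped.
g = weights_to_conductances(np.array([0.5, -2.0, 1.0]))
```

Negative values here stand in for the usual differential encoding, where a weight is stored as the difference of two non-negative conductances.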
3. The method of claim 1, wherein the first conversion relation is determined by a conversion scale parameter, the conversion scale parameter being determined by the largest sample data value among the sample data and a preset scale parameter.
4. The method of claim 1, wherein the second conversion relation is determined by the conversion relation between each initial network weight and the conductance value corresponding to each initial network weight, together with the conversion scale parameter.
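Claims 3 and 4 can be read as a pair of inverse scalings: the first maps data into the device's voltage range, the second undoes both the data and weight scalings so the output currents recover weight-input sums. A hypothetical rendering (the constants are assumptions):

```python
import numpy as np

V_MAX = 0.2       # assumed maximum read voltage
G_SCALE = 100e-6  # assumed preset scale parameter

def data_to_voltages(x, x_scale):
    """First conversion relation: the conversion scale parameter x_scale
    (the largest sample data value) maps the data into read voltages."""
    return np.asarray(x, dtype=float) / x_scale * V_MAX

def currents_to_outputs(i, x_scale, w_scale):
    """Second conversion relation: undo the weight-to-conductance and
    data-to-voltage scalings so currents recover weight * data sums."""
    return np.asarray(i) / (G_SCALE * V_MAX) * x_scale * w_scale
```

Round-tripping through both relations reproduces the ideal matrix-vector product: with `G = W / w_scale * G_SCALE` and `V = data_to_voltages(x, x_scale)`, the rescaled current `currents_to_outputs(G @ V, x_scale, w_scale)` equals `W @ x`.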
5. The method of claim 1, wherein training the target model based on the network weights corresponding to each actual conductance value and the gradient information, specifically comprises:
for each actual conductance value, determining a network weight corresponding to the actual conductance value;
replacing each initial network weight in the target model with the network weight corresponding to each actual conductance value;
and training the target model according to the replaced network weights and the gradient information.
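The substitute-then-update step of claim 5 could look like the following sketch; the learning rate and scaling constants are assumptions.

```python
import numpy as np

def training_step(actual_g, grad, lr=0.01, g_scale=100e-6, w_scale=1.0):
    """Convert the ACTUAL conductances read from the device back into
    network weights, substitute them for the model's initial weights,
    and apply one gradient step computed from the prediction error."""
    read_back_w = actual_g / g_scale * w_scale  # weights as really stored
    return read_back_w - lr * grad              # gradient step on real weights

# A conductance written with ~10% error reads back as weight 0.9,
# and the gradient step is applied to that value, not the ideal 1.0.
w_new = training_step(np.array([90e-6]), np.array([0.5]))
```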
6. The method of claim 1, wherein the method further comprises:
determining each trained network weight in the trained model to obtain a conductance value corresponding to each trained network weight;
writing the conductance value corresponding to each trained network weight into the resistive random access memory to obtain an updated resistive random access memory;
receiving data to be processed, and determining a voltage value corresponding to the data to be processed according to the data to be processed;
controlling the updated resistive random access memory according to the voltage value corresponding to the data to be processed to obtain the current value of the current output by the updated resistive random access memory under the voltage of the voltage value corresponding to the data to be processed, wherein the current value is used as the current value corresponding to the data to be processed;
and obtaining a prediction result corresponding to the data to be processed according to the current value corresponding to the data to be processed.
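Claim 6 reuses the same conversions at inference time on the updated array; a sketch under the same assumed constants:

```python
import numpy as np

def rram_inference(trained_g, x, w_scale, v_max=0.2, g_scale=100e-6):
    """Convert the data to be processed into read voltages, simulate the
    updated array's output currents with Ohm's law, and convert the
    currents into a prediction result."""
    x = np.asarray(x, dtype=float)
    x_scale = np.max(np.abs(x))
    v = x / x_scale * v_max   # voltage value for the data to be processed
    i = trained_g @ v         # current output by the updated array
    return i / (g_scale * v_max) * x_scale * w_scale

# Conductances written from trained weights [[1, 2], [3, 4]] (w_scale = 4).
g_trained = np.array([[1.0, 2.0], [3.0, 4.0]]) / 4.0 * 100e-6
y = rram_inference(g_trained, np.array([1.0, -1.0]), w_scale=4.0)
```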
7. A resistive random access memory-based model training device, comprising:
the acquisition module is used for acquiring sample data and determining each initial network weight contained in the target model;
the first determining module is used for determining, for each initial network weight, the conductance value in the resistive random access memory that corresponds to the initial network weight;
the writing module is used for writing the conductance value corresponding to each initial network weight into the resistive random access memory, and converting the sample data according to a first conversion relation to obtain a voltage value corresponding to the sample data and used for controlling the resistive random access memory;
the control module is used for controlling the resistive random access memory according to the voltage value so as to obtain a current value of the current output by the resistive random access memory under the voltage corresponding to the voltage value;
the conversion module is used for converting the current value according to a second conversion relation to obtain a prediction result corresponding to the sample data;
the reading module is used for obtaining gradient information according to the deviation between the actual label corresponding to the sample data and the prediction result, and reading, from the resistive random access memory, each actual conductance value present when the current is output under the voltage of the voltage value, so as to determine the network weight corresponding to each actual conductance value;
and the training module is used for training the target model according to the network weight corresponding to each actual conductance value and the gradient information.
8. The apparatus of claim 7, wherein the first determining module is specifically configured to:
for each initial network weight, adjust the value corresponding to the initial network weight to fall within a preset network weight range to obtain an adjusted network weight corresponding to the initial network weight; and determine, according to the maximum value among the adjusted network weights and a preset scale parameter, the conductance value corresponding to each adjusted network weight as the conductance value corresponding to each initial network weight.
9. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-6.
10. An electronic device comprising a processor and a computer program stored on a memory and executable on the processor, characterized in that the processor implements the method of any of the preceding claims 1-6 when executing the program.
CN202311686862.XA 2023-12-07 2023-12-07 Model training method, device, medium and equipment based on resistive random access memory Pending CN117744731A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311686862.XA CN117744731A (en) 2023-12-07 2023-12-07 Model training method, device, medium and equipment based on resistive random access memory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311686862.XA CN117744731A (en) 2023-12-07 2023-12-07 Model training method, device, medium and equipment based on resistive random access memory

Publications (1)

Publication Number Publication Date
CN117744731A true CN117744731A (en) 2024-03-22

Family

ID=90276782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311686862.XA Pending CN117744731A (en) 2023-12-07 2023-12-07 Model training method, device, medium and equipment based on resistive random access memory

Country Status (1)

Country Link
CN (1) CN117744731A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023045114A1 (en) * 2021-09-22 2023-03-30 清华大学 Storage and computation integrated chip and data processing method
CN115905546A (en) * 2023-01-06 2023-04-04 之江实验室 Graph convolution network document identification device and method based on resistive random access memory
CN116934981A (en) * 2023-08-10 2023-10-24 西北工业大学 Stripe projection three-dimensional reconstruction method and system based on dual-stage hybrid network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU KECHUAN: "Stochastic Memristor-Based Neural Networks and Their Application in Image Classification", China Master's Theses Full-text Database, Information Science and Technology, 15 March 2020 (2020-03-15), pages 1-75 *

Similar Documents

Publication Publication Date Title
CN116432778B (en) Data processing method and device, storage medium and electronic equipment
CN117390585B (en) Time sequence data prediction method and model training method based on three-dimensional full-connection fusion
CN114997472A (en) Model training method, business wind control method and business wind control device
WO2023113969A1 (en) Methods and apparatus for performing a machine learning operation using storage element pointers
CN116502679A (en) Model construction method and device, storage medium and electronic equipment
CN107038127A (en) Application system and its buffer control method and device
CN116382599B (en) Distributed cluster-oriented task execution method, device, medium and equipment
CN116882767A (en) Risk prediction method and device based on imperfect heterogeneous relation network diagram
CN117744731A (en) Model training method, device, medium and equipment based on resistive random access memory
CN116384505A (en) Data processing method and device, storage medium and electronic equipment
US5799172A (en) Method of simulating an integrated circuit
CN110008112B (en) Model training method and device, service testing method and device
CN117952182B (en) Mixed precision model training method and device based on data quality
CN116109008B (en) Method and device for executing service, storage medium and electronic equipment
CN116149865B (en) Method, device and equipment for executing task in variable frequency manner and readable storage medium
CN116777010B (en) Model training method and task execution method and device
CN117787358B (en) Model quantization method, device and equipment based on resistive random access memory
CN117057162B (en) Task execution method and device, storage medium and electronic equipment
CN118428456A (en) Model deployment method and device, storage medium and electronic equipment
CN117953258A (en) Training method of object classification model, object classification method and device
CN118199767A (en) Model parameter determining method and device and loss predicting method and device
CN116150673A (en) Method for establishing environmental facility and environmental factor influence relation model
CN117933707A (en) Wind control model interpretation method and device, storage medium and electronic equipment
CN117034926A (en) Model training method and device for multi-field text classification model
CN117592998A (en) Wind control method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Shi Tuo

Inventor after: Zhang Hui

Inventor after: Gao Lili

Inventor after: Cui Shiyu

Inventor before: Zhang Hui

Inventor before: Shi Tuo

Inventor before: Gao Lili

Inventor before: Cui Shiyu