CN118210445A - Data storage method, data reading method, and corresponding device, equipment and medium - Google Patents

Info

Publication number
CN118210445A
Authority
CN
China
Prior art keywords
data
storage
neural network
network
state
Prior art date
Legal status
Pending
Application number
CN202410307483.3A
Other languages
Chinese (zh)
Inventor
祝夭龙
吴臻志
Current Assignee
Beijing Lynxi Technology Co Ltd
Original Assignee
Beijing Lynxi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Lynxi Technology Co Ltd
Priority to CN202410307483.3A
Publication of CN118210445A

Landscapes

  • Image Analysis (AREA)

Abstract

The disclosure provides a data storage method, a data reading method, and corresponding devices, equipment and media, and belongs to the technical field of computers. The data storage method comprises the following steps: acquiring first data to be stored and a first storage tag corresponding to the first data; and adjusting, according to the first data and the first storage tag, a first network parameter of a storage neural network in a first state to obtain a second network parameter of the storage neural network in a second state, wherein the second network parameter is used for storing the first data. Embodiments of the present disclosure thus provide a new data storage approach of "computation in place of storage".

Description

Data storage method, data reading method, and corresponding device, equipment and medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data storage method, a data reading method, a data storage device, a data reading device, an electronic device, and a computer readable storage medium.
Background
In the related art, if data is to be stored, the data to be stored is generally stored directly into a storage medium. The stored data is read from the storage medium when needed.
Disclosure of Invention
The present disclosure provides a data storage method, a data reading method, a data storage device, a data reading device, an electronic apparatus, and a computer-readable storage medium.
In a first aspect, the present disclosure provides a data storage method, the data storage method comprising: acquiring first data to be stored and a first storage tag corresponding to the first data; and according to the first data and the first storage label, adjusting a first network parameter of the storage neural network in a first state to obtain a second network parameter of the storage neural network in a second state, wherein the second network parameter is used for storing the first data.
In a second aspect, the present disclosure provides a data reading method, the data reading method comprising: acquiring a second storage tag corresponding to second data to be read; and inputting the second storage label into a preset storage neural network to obtain target read data corresponding to the second data.
In a third aspect, the present disclosure provides a data storage device comprising: the first acquisition module is used for acquiring first data to be stored and a first storage tag corresponding to the first data; and the storage module is used for adjusting the first network parameters of the storage neural network in the first state according to the first data and the first storage label to obtain the second network parameters of the storage neural network in the second state, wherein the second network parameters are used for storing the first data.
In a fourth aspect, the present disclosure provides a data reading apparatus comprising: the second acquisition module is used for acquiring a second storage tag corresponding to second data to be read; and the reading module is used for inputting the second storage tag into a preset storage neural network to obtain target reading data corresponding to the second data.
In a fifth aspect, the present disclosure provides an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores one or more computer programs executable by the at least one processor, the one or more computer programs being executed by the at least one processor to enable the at least one processor to perform the data storage method or the data reading method of any one of the embodiments of the present disclosure.
In a sixth aspect, the present disclosure provides a computer readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor or processing core, implements the data storage method or the data reading method of any of the embodiments of the present disclosure.
In the embodiments provided by the disclosure, first data to be stored and a first storage tag corresponding to the first data are acquired; and according to the first data and the first storage tag, a first network parameter of the storage neural network in the first state is adjusted to obtain a second network parameter of the storage neural network in the second state, wherein the second network parameter is used for storing the first data. In other words, although the data to be stored is the first data, the first data is not actually written into a storage medium. Instead, the storage process is converted into a data calculation process by means of the storage neural network: the first network parameter of the storage neural network in the first state is adjusted according to the first data to obtain the second network parameter of the storage neural network in the second state, so that storage of the first data is realized through the adjustment of the network parameters of the storage neural network. When the stored data needs to be read later, it can be conveniently read through the storage neural network by using the corresponding storage tag. In summary, the embodiments of the present disclosure provide a new data storage approach of "computation in place of storage".
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure, without limitation to the disclosure. The above and other features and advantages will become more readily apparent to those skilled in the art by describing in detail exemplary embodiments with reference to the attached drawings, in which:
Fig. 1 is a flowchart of a data storage method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of a data storage method according to an embodiment of the disclosure.
Fig. 3 is a schematic diagram of a data storage method according to an embodiment of the disclosure.
Fig. 4 is a schematic diagram of a storage neural network according to an embodiment of the disclosure.
Fig. 5 is a flowchart of a data reading method according to an embodiment of the disclosure.
Fig. 6 is a block diagram of a data storage device according to an embodiment of the present disclosure.
Fig. 7 is a block diagram of a data reading apparatus according to an embodiment of the present disclosure.
Fig. 8 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 9 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
For a better understanding of the technical solutions of the present disclosure, exemplary embodiments of the present disclosure will be described below with reference to the accompanying drawings, in which various details of the embodiments of the present disclosure are included to facilitate understanding, and they should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Embodiments of the disclosure and features of embodiments may be combined with each other without conflict.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term "connected" and the like is not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In the related art, if first data is to be stored, it is generally necessary to determine the storage space required for storing the first data and to write the first data into the corresponding storage space, thereby completing the storage of the first data. When the first data is needed, it is read out from the storage space. In this approach, the data actually stored is identical to the data to be stored.
The embodiments of the present disclosure provide a new data storage method of "computation in place of storage", that is, the data storage process is converted into a calculation process by using a storage neural network. In the embodiments of the disclosure, although the data to be stored is the first data, the first data is not actually written into a storage medium. Instead, the storage process is converted into a data calculation process by means of the storage neural network: the first network parameter of the storage neural network in a first state is adjusted according to the first data to obtain the second network parameter of the storage neural network in a second state, so that storage of the first data is realized through the adjustment of the network parameters of the storage neural network. When the stored data needs to be read later, it can be conveniently read through the storage neural network by using the corresponding storage tag.
A first aspect of an embodiment of the present disclosure provides a data storage method.
Fig. 1 is a flowchart of a data storage method according to an embodiment of the present disclosure. Referring to fig. 1, the data storage method may include the following steps.
Step S11, first data to be stored and a first storage tag corresponding to the first data are obtained.
Step S12, according to the first data and the first storage label, adjusting a first network parameter of the storage neural network in the first state to obtain a second network parameter of the storage neural network in the second state, wherein the second network parameter is used for storing the first data.
For example, the first data to be stored may be any one or more of text data, image data, audio data, video data, and the like, to which the embodiments of the present disclosure are not limited.
For example, the first data to be stored may include at least one of: forms, text, numbers, voice, images, voltage signals (e.g., analog voltage signals, digital voltage signals), power signals.
In some alternative implementations, the first data corresponds to at least one storage tag, and the first storage tag is one of the storage tags to which the first data corresponds. A storage tag may be regarded as abstract or summary information of the first data, or as feature information extracted from the first data, which can reflect the content of the first data to some extent.
In some alternative implementations, the storage neural network is a neural network used for storing data, that is, a network provided with a data storage function. After the network structure of the storage neural network is determined, it may be given initial network parameters (e.g., determined by way of pre-training) to form the storage neural network in an initial state. Each process of storing data with the storage neural network can then be regarded as a process of updating the network parameters of the storage neural network based on the data to be stored, and such an update of the network parameters can be characterized by the state of the storage neural network. In other words, each time data is stored, the network parameters of the storage neural network are correspondingly adjusted, and the state of the storage neural network is updated accordingly.
In some optional implementations, according to the first data and the first storage tag, a first network parameter of the storage neural network in the first state can be adjusted to obtain a second network parameter of the storage neural network in the second state, so that the storage of the first data is realized through the second network parameter.
That is, before the first data is stored, the storage neural network is in the first state and its network parameter is the first network parameter; after the first data is stored by the storage neural network in the first state, the storage neural network is adjusted to the second state and its network parameter is adjusted from the first network parameter to the second network parameter. It is through this adjustment of the network parameter that the storage of the first data is realized.
It should be noted that the change of state does not affect the network structure of the storage neural network itself; the state merely characterizes the data storage process of the storage neural network. For example, if the storage neural network is composed of a first convolution layer, a second convolution layer, a first connection layer, a first activation layer and a second activation layer, storing the first data changes the state of the storage neural network but does not affect the structure of any network layer; only the values of the network parameters in the storage neural network are changed (for example, the values of the weights are changed).
In some alternative implementations, the first network parameter and the second network parameter include a learnable parameter of the stored neural network. The learnable parameters comprise parameters which can be adjusted in the neural network.
For example, the first network parameter and the second network parameter may include weights of a stored neural network.
For example, the first network parameter and the second network parameter may include weights and bias parameters for the stored neural network.
It should be noted that the above description is merely exemplary of the first network parameter and the second network parameter, and the embodiments of the present disclosure are not limited thereto.
It should be further noted that the adjustment of the network parameters is essentially realized by data calculation; that is, the storage process of the data is converted into a calculation process of the data, and the storage of the first data is realized by updating the network parameters of the storage neural network. In other words, the disclosed embodiments implement a data storage method of "computation in place of storage".
In summary, in the embodiments of the present disclosure, although the data to be stored is the first data, the first data is not actually written into a storage medium. Instead, the storage process is converted into a data calculation process by means of the storage neural network: the first network parameter of the storage neural network in the first state is adjusted according to the first data to obtain the second network parameter of the storage neural network in the second state, so that storage of the first data is realized through the adjustment of the network parameters of the storage neural network. When the stored data needs to be read later, it can be conveniently read through the storage neural network by using the corresponding storage tag. The embodiments of the present disclosure thus provide a new data storage approach of "computation in place of storage".
The data storage method according to the embodiment of the present disclosure is described below.
In some alternative implementations, the first data is data to be stored, and the first data may correspond to a plurality of storage tags.
Illustratively, the first data corresponds to a plurality of levels of storage tags, and as the levels of storage tags increase, the information characterizing capabilities of the storage tags correspondingly increase.
For example, the first data is text data corresponding to three levels of storage tags: the first-level storage tag is the text title, the second-level storage tag is the text abstract, and the third-level storage tag is the text topic. The information characterization capability of the storage tags increases in order from the text title, to the text abstract, to the text topic.
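By way of a non-limiting illustration, a multi-level storage tag for text data might be organized as follows. The field names and the selection shown are hypothetical and merely sketch one possible representation.

```python
# Hypothetical sketch of multi-level storage tags for one piece of text data.
# The dictionary fields ("level", "content") are illustrative only.
first_data = "Full text of an article about storing data in network parameters ..."

storage_tags = [
    {"level": 1, "content": "A short text title"},                     # first-level tag: text title
    {"level": 2, "content": "A brief abstract summarizing the text"},  # second-level tag: text abstract
    {"level": 3, "content": "topic keywords of the text"},             # third-level tag: text topic
]

# Step S11: one of the tags is selected as the first storage tag.
first_storage_tag = storage_tags[1]   # e.g. the second-level tag (the abstract)
```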
It should be noted that the above is merely an example of the storage tag and the level thereof, and the embodiments of the present disclosure are not limited thereto.
In some optional implementations, in step S11, acquiring the first data to be stored and the first storage tag corresponding to the first data includes: and receiving first data to be stored and a first storage tag, wherein the first storage tag is selected from a plurality of levels of storage tags corresponding to the first data. That is, when the first data has a plurality of memory tags, one of them may be selected as the first memory tag.
After the first data and the first storage tag are acquired, the first data may be stored using a storage neural network in step S12. The storage neural network has a certain network structure and is configured with corresponding network parameters, so that a data storage function can be realized.
In some alternative implementations, the storage neural network is pre-trained so that it has data storage capability. Through pre-training, the storage neural network can acquire various kinds of background knowledge and develop its general reasoning capability, so that it can store various different types of data.
In some alternative implementations, adjusting a first network parameter of the first state storage neural network according to the first data and the first storage tag to obtain a second network parameter of the second state storage neural network includes: inputting the first data and the first storage label into a storage neural network in a first state, performing data calculation by using the storage neural network in the first state, and adjusting the first network parameter to obtain a second network parameter; or inputting the first data and the first storage label into a storage neural network in the first state, performing data calculation by using the storage neural network in the first state, adjusting the first network parameters to obtain the variation parameters in the first network parameters, and determining the second network parameters according to the variation parameters. The variable parameter refers to a parameter with a value changed. For example, for the network parameter a1, the value of the network parameter a1 in the stored neural network in the first state is v1, and after the first data is stored, if the value of the network parameter a1 is changed from v1 to v2, it may be determined that the network parameter a1 belongs to the change parameter.
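By way of a non-limiting illustration, the two alternatives above (returning the full set of adjusted parameters, or only the change parameters) can be sketched as follows; the update routine is a placeholder and not a specific adjustment algorithm of the present disclosure.

```python
def adjust_parameters(first_params: dict, first_data, first_tag, compute_update):
    """Adjust the first network parameters according to the first data and tag.

    compute_update(name, value, data, tag) is a placeholder assumed to return
    the new value of one parameter (e.g. after one or more update steps).
    """
    # Full second network parameters (every parameter, adjusted or not).
    second_params_full = {
        name: compute_update(name, value, first_data, first_tag)
        for name, value in first_params.items()
    }
    # Change parameters only: parameters whose values actually changed.
    change_params = {
        name: new_value
        for name, new_value in second_params_full.items()
        if new_value != first_params[name]
    }
    return second_params_full, change_params
```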
For example, the first data x1 corresponds to the second network parameter W2 (i.e., the second network parameter obtained by storing the first data x1 is W2), the first data x2 corresponds to the second network parameter W3, and the first data x3 corresponds to the second network parameter W4, where W2, W3, and W4 are all full network parameters of the storage neural network.
For example, the first data x1 corresponds to the second network parameter W2, the first data x2 corresponds to the second network parameter W3', and the first data x3 corresponds to the second network parameter W4', where W2 is a full network parameter of the storage neural network, and W3' and W4' are network parameters that differ relative to W2 (i.e., parameters whose values are changed compared with the corresponding values in W2, and which therefore belong to the change parameters).
Illustratively, the initial network parameter of the storage neural network is W1 (i.e., the network parameter before any data has been stored), the first data x1 corresponds to the second network parameter W2', the first data x2 corresponds to the second network parameter W3', and the first data x3 corresponds to the second network parameter W4', where W1 is the full network parameter of the storage neural network, and W2', W3', and W4' are change parameters relative to W1.
It will be appreciated that in determining the second network parameter, the network parameter may be determined that stores the full amount of the neural network, or only the portion of the parameter that changes (i.e., the change parameter) may be determined, which is not limited by embodiments of the present disclosure.
In summary, the storage neural network learns some general logic processing capacity and storage capacity through pre-training to obtain relatively accurate initial network parameters; when the first data are stored by the storage neural network, the network parameters of the storage neural network are correspondingly adjusted, the adjustment of the network parameters can be regarded as fine adjustment of the network parameters, and the storage of the first data can be realized through the fine adjustment of the network parameters.
This method of storing data by fine-tuning the network parameters of the storage neural network corresponds to storing the data in the network parameters themselves. As data is continuously stored and the network parameters of the storage neural network are continually fine-tuned, the storage neural network may "forget" some general knowledge (e.g., general knowledge learned during the pre-training phase). In addition, when the parameter scale of the storage neural network is large and the amount of data to be stored is small, the network may fail to converge quickly and may have difficulty storing the data accurately.
Based on this, in some alternative implementations, the following manner may be adopted when adjusting the first network parameter to obtain the second network parameter. Starting from the initial state of the storage neural network, the original full network parameters are kept unchanged, and a part of the network parameters is selected from the full network parameters; each time data is stored, only the selected part of the network parameters is adjusted, which is equivalent to applying an additional fine-tuning to only that part of the parameters. Accordingly, when reading stored data, the original full network parameters and the adjusted part of the network parameters are used together. Because the original full network parameters are retained, the general knowledge of the storage neural network is not forgotten even as data is continuously stored; and because only part of the network parameters is updated when data is stored, the number of parameters to be updated when storing a small amount of data is small, which facilitates rapid convergence and allows such data to be stored efficiently and accurately.
Illustratively, the original full network parameters of the storage neural network are W0 (W0 includes N network parameters), and n network parameters W00 are selected from the N network parameters as the network parameters to be adjusted when storing first data (i.e., W00 ∈ W0), where n and N are integers greater than or equal to 1 and n < N. When the first data x1 is stored by the storage neural network, the network parameter W00 corresponds to the first network parameter of the storage neural network in the first state; therefore, the first network parameter W00 is adjusted according to the first data x1 and its first storage tag label1 to obtain the second network parameter W01, and the network parameters associated with the storage neural network include W0 and W01. When the first data x2 is stored by the storage neural network, the network parameter W01 corresponds to the first network parameter of the storage neural network in the first state; therefore, the first network parameter W01 is adjusted according to the first data x2 and its first storage tag label2 to obtain the second network parameter W02, and the network parameters associated with the storage neural network include W0 and W02. And so on, multiple data stores may be implemented using the storage neural network.
The full network parameters and the adjusted network parameters can be represented in various ways: they can be represented independently, or by related information such as the value differences between them. For example, if the full network parameter W0 is {1,1,2,4,3} and the network parameters W00 to be adjusted are {1,1,2}, then after storing the first data x1, W0 and W01 can be characterized as W0 = {1,1,2,4,3} and W01 = {2,5,1}; alternatively, W0 and W01 can be characterized together as {1+1, 1+4, 2-1, 4, 3}.
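Using the numbers of the example above, the two ways of characterizing the retained full parameters W0 and the adjusted subset W01 might be sketched as follows (a toy illustration; actual network parameters are floating-point tensors):

```python
# Full network parameters W0, kept unchanged (toy integer values).
W0 = [1, 1, 2, 4, 3]
selected = [0, 1, 2]                      # indices of the selected subset W00
W00 = [W0[i] for i in selected]           # -> [1, 1, 2]

# Adjusted subset W01 obtained after storing the first data x1.
W01 = [2, 5, 1]

# Representation 1: keep W0 and W01 separately.
# Representation 2: keep W0 together with the per-parameter differences.
deltas = [new - old for new, old in zip(W01, W00)]           # -> [1, 4, -1]
effective = [w + (deltas[selected.index(i)] if i in selected else 0)
             for i, w in enumerate(W0)]                       # -> [2, 5, 1, 4, 3]
```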
Typically, neural networks include many dense layers that perform data processing such as matrix multiplication, and the weight matrices in these network layers usually have full rank. Further, a pre-trained model often exhibits a particular property when adapted to specific tasks: a low "intrinsic dimensionality". Fine-tuning of the parameters of a pre-trained model can therefore be achieved by means of low-rank decomposition. In other words, although these models may contain millions to billions of parameters and thus have an extremely high-dimensional parameter space, only a relatively small subset of parameters is required when adapting them to a new, specific task.
Returning to the embodiments of the present disclosure, the storage neural network includes a plurality of full-rank network layers, and the full set of initial network parameters of the storage neural network can be obtained through pre-training. For the first stored data, the full set of initial network parameters corresponds to the first network parameter of the storage neural network. Each data storage process performed by the storage neural network is equivalent to performing a specific task; considering that the storage neural network has the low "intrinsic dimension" property, only a relatively small subset of parameters is required when storing data based on the storage neural network, and the storage of the data can be achieved using that subset of parameters.
In some alternative implementations, the network parameters of the storage neural network include a weight matrix, and when storing data, the storage neural network may be made to perform the data storage task more efficiently by adding a low rank adaptation layer (e.g., a low rank matrix) to the weight matrix of the storage neural network. When the first data is stored, only the low-rank adaptation layer is required to be updated, and meanwhile the original weight of the storage neural network is kept unchanged, so that the storage neural network can be prevented from forgetting to learn general knowledge. In addition, the storage mode concentrates the parameter updating process on the low-rank adaptation layer, so that the parameter adjustment amount is reduced, the parameter adjustment efficiency can be improved, and the calculation and memory overhead can be reduced.
Illustratively, the full initial network parameter of the storage neural network is W0, with W0 ∈ R^(d×k), where d and k denote the two dimensions of W0. Based on low-rank decomposition, the adjusted parameter can be written as W0 + ΔW = W0 + BA, where B and A can be regarded as two low-rank matrices, B ∈ R^(d×r), A ∈ R^(r×k), and the rank r << min(d, k). Accordingly, the process of storing the first data x and adjusting the corresponding network parameters can be characterized as W0·x + ΔW·x = W0·x + BA·x. During storage, W0 is frozen and receives no gradient updates, while A and B are the adjustable network parameters whose values are adjusted to obtain the corresponding second network parameters.
It should be noted that, after the low-rank decomposition, the actual parameter adjustment amount is d×r+r×k, and r is far smaller than d and k, so that the parameter amount required to be adjusted for storing the first data is effectively reduced.
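A minimal sketch of this low-rank adaptation scheme, assuming a PyTorch-style implementation, is given below. The layer keeps the pre-trained weight W0 frozen and trains only the low-rank factors A and B; this is one possible realization of W0 + BA and not the only implementation contemplated here.

```python
import torch
import torch.nn as nn

class LowRankAdaptedLinear(nn.Module):
    """y = x (W0 + B A)^T with W0 frozen; only A and B receive gradient updates."""

    def __init__(self, w0: torch.Tensor, rank: int):
        super().__init__()
        d, k = w0.shape                                        # W0 in R^(d x k)
        self.w0 = nn.Parameter(w0, requires_grad=False)        # frozen full weight, no gradient updates
        self.B = nn.Parameter(torch.zeros(d, rank))            # B in R^(d x r), starts at zero
        self.A = nn.Parameter(torch.randn(rank, k) * 0.01)     # A in R^(r x k)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W0 x + ΔW x = W0 x + B A x, with ΔW = B A of rank at most r
        delta_w = self.B @ self.A
        return x @ (self.w0 + delta_w).T
```

Only d×r + r×k values (the elements of B and A) are trainable in this sketch, matching the parameter adjustment amount noted above.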
In some alternative implementations, the process of storing the first data with the storage neural network is as follows. Assume that the first network parameter of the storage neural network in the first state is W1, the acquired first data is x1, and the first storage tag corresponding to the first data x1 is label1. The first data x1 and the first storage tag label1 are input into the storage neural network in the first state, which performs data calculation based on the first network parameter W1 and the first storage tag label1 and outputs first prediction data x11'. A first storage evaluation value ev11 is then calculated from the first data x1 and the prediction data x11' through a preset storage evaluation function. If the storage evaluation value ev11 is greater than or equal to a preset storage evaluation threshold, the first network parameter W1 is adjusted according to the storage evaluation value ev11 to obtain W11', W11' is taken as the second network parameter, and the network parameter of the storage neural network is updated from W1 to W11'. If the storage evaluation value ev11 is smaller than the preset storage evaluation threshold, the first network parameter W1 is likewise adjusted according to the storage evaluation value ev11 to obtain W11', data calculation is then performed based on W11' and the first storage tag label1, and second prediction data x12' is output.
According to the first data x1 and the prediction data x12', a second storage evaluation value ev12 is calculated through the storage evaluation function. If the storage evaluation value ev12 is greater than or equal to the preset storage evaluation threshold, the network parameter W11' is adjusted according to the storage evaluation value ev12 to obtain W12', W12' is taken as the second network parameter, and the network parameter of the storage neural network is updated from W11' to W12'. If the storage evaluation value ev12 is smaller than the preset storage evaluation threshold, W11' is adjusted according to the storage evaluation value ev12 to obtain W12', data calculation is then performed based on W12' and the first storage tag label1, third prediction data x13' is output, and a third storage evaluation value ev13 is calculated.
The above process is repeated until a storage evaluation value ev1i (i ≥ 1) is greater than or equal to the preset storage evaluation threshold; the corresponding W1i' is determined as the second network parameter, and the network parameter of the storage neural network is updated to W1i'. The storage evaluation function is a function for evaluating the storage effect (e.g., storage accuracy, integrity, etc.) of the storage neural network.
It should be noted that, the above network parameter adjustment manner for the storage neural network is merely illustrative, and the embodiments of the present disclosure are not limited thereto.
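For illustration only, the iterative adjustment described above can be sketched as the following loop. The storage evaluation function and the update rule are placeholders (a similarity-style score is assumed, in which a higher value means better storage, consistent with the "greater than or equal to the threshold" stopping condition); the disclosure does not fix a particular choice.

```python
def store_first_data(storage_net, x1, label1,
                     evaluate_storage, adjust, eval_threshold, max_iters=1000):
    """Adjust the network parameters until the storage evaluation value
    reaches the preset storage evaluation threshold.

    evaluate_storage(x, x_pred): assumed to return a score where higher
        means the prediction matches the first data better.
    adjust(params, score, x, x_pred, tag): assumed to return updated
        parameters (e.g. one gradient-style update step).
    """
    params = storage_net.get_parameters()             # first network parameter W1 (assumed API)
    for i in range(1, max_iters + 1):
        x_pred = storage_net.forward(label1, params)  # prediction data x1i'
        score = evaluate_storage(x1, x_pred)          # storage evaluation value ev1i
        params = adjust(params, score, x1, x_pred, label1)   # W1i'
        if score >= eval_threshold:
            break                                     # W1i' is accepted as the second network parameter
    storage_net.set_parameters(params)                # network enters the second state
    return params
```

After the loop terminates, reading the stored first data back amounts to running the storage neural network on the corresponding storage tag, e.g. x1_read = storage_net.forward(label1, params).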
When the first data is stored by the storage neural network, the correspondence between the first data and the first storage tag can be stored in addition to the first data itself. On this basis, when data is read through the storage neural network, once the second storage tag corresponding to the data to be read is obtained, the accurate data can be read according to the second storage tag and the stored correspondence.
In some alternative implementations, after obtaining the second network parameter of the stored neural network of the second state, the method may further include: updating the first network parameters stored in the first storage space based on the second network parameters to store the first data and the second state of the stored neural network; the second state storage neural network is used for storing new data to be stored and/or reading the stored data.
That is, the first network parameters of the storage neural network are stored in the first storage space in advance; after the first data is stored, the network parameters of the storage neural network are updated to the second network parameters, and accordingly the first network parameters in the first storage space can be updated to the second network parameters. This realizes, on the one hand, the storage of the first data and, on the other hand, the storage of the second state of the storage neural network. The storage neural network in the second state may further store new data to be stored and/or read data that has already been stored.
It should be noted that the first state and the second state are defined relative to the first data currently to be stored; for new data to be stored, the second network parameter obtained by storing the first data is equivalent to a new first network parameter.
For example, the first network parameter of the storage neural network in the first state may be represented as W1, after the first data x1 and the corresponding first storage label1 are acquired, the first network parameter W1 may be adjusted according to the first data x1 and the first storage label1 to obtain the second network parameter W2 of the storage neural network in the second state, that is, after the first data x1 is stored, the current network parameter of the storage neural network is changed to W2.
Further, if the new first data x2 to be stored and the corresponding first storage label2 are obtained, for x2, the first network parameter of the first state storage neural network is W2, and the first network parameter W2 may be adjusted according to the data x2 to be stored and the first storage label2, so as to obtain the second network parameter W3 of the new second state storage neural network. At this time, the current network parameter of the storage neural network is changed to W3. And so on, multiple data stores may be implemented through the storage neural network.
In some alternative implementations, after obtaining the second network parameter of the stored neural network of the second state, the method may further include: and storing the second network parameters in a second storage space to store the first data and a second state of the storage neural network. In other words, the second network parameters may be stored in the second storage space for use (e.g., the corresponding second network parameters may be used when reading the first data), at which time the corresponding second network parameters may be read from the second storage space).
For example, if the first network parameter of the first-state storage neural network is W1, the first data is x1, the corresponding first storage label is label1, after the first data x1 is stored by using the first-state storage neural network, the second network parameter of the second-state storage neural network is W2, and the second network parameter W2 may be stored in the second storage space.
Further, when the new first data x2 to be stored and the corresponding first storage label2 are acquired, the original second-state storage neural network corresponds to the new first-state storage neural network (the corresponding network parameter is W2), after the first data x2 is stored by using the new first-state storage neural network, the second network parameter of the new second-state storage neural network is W3, and the new second network parameter W3 may be stored in the second storage space.
And so on, after each time of storing the first data, the corresponding second network parameters can be stored in the second storage space for standby.
Fig. 2 is a schematic diagram of a data storage method according to an embodiment of the disclosure. Referring to fig. 2, before storing the first data x1, the network parameter of the storage neural network is W1, and W1 is stored in the first storage space. For x1, the current storage neural network corresponds to the storage neural network in the first state, and the first network parameter at this time is W1.
After the first data x1 and the corresponding first storage label1 are acquired, inputting the x1 and the label1 into a first-state storage neural network (the first network parameter is W1), performing data calculation by using the first-state storage neural network, and adjusting the first network parameter W1 to obtain a second network parameter W2 of a second-state storage neural network. Further, the first network parameter W1 stored in the first storage space may also be updated to the second network parameter W2. In addition, W2 may also be stored in the second storage space.
It should be noted that, through the above processing procedure, the storage neural network realizes storage of x1, and the network parameter thereof is updated from W1 to W2. For new data to be stored, the first network parameter of the storage neural network in the first state is updated to W2, and new data can be stored on the basis of the updated first network parameter.
After the first data x2 and the corresponding first storage label2 are acquired, inputting the x2 and the label2 into a first-state storage neural network (the first network parameter at the moment is W2), performing data calculation by using the first-state storage neural network, and adjusting the first network parameter W2 to obtain a second network parameter W3 of a second-state storage neural network. Further, the first network parameter W2 stored in the first storage space may also be updated to the second network parameter W3. In addition, W3 may be stored in the second storage space, where W2 and W3 are stored in the second storage space.
It should be noted that, through the above processing procedure, the storage neural network realizes storage of x2, and the network parameter thereof is updated from W2 to W3. For new data to be stored, the first network parameter of the storage neural network in the first state is updated to W3, and new data can be stored on the basis of the updated first network parameter.
After the first data x3 and the corresponding first storage label3 are acquired, inputting the x3 and the label3 into a first-state storage neural network (the first network parameter at this time is W3), performing data calculation by using the first-state storage neural network, and adjusting the first network parameter W3 to obtain a second network parameter W4 of a second-state storage neural network. Further, the first network parameter W3 stored in the first storage space may also be updated to the second network parameter W4. In addition, W4 may be stored in the second storage space, where W2, W3, and W4 are stored in the second storage space.
It should be noted that, through the above processing procedure, the storage neural network realizes storage of x3, and the network parameter thereof is updated from W3 to W4. For new data to be stored, the first network parameter of the storage neural network in the first state is updated to W4, and new data can be stored on the basis of the updated first network parameter.
And so on, the storage of each first data to be stored can be realized.
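The flow of Fig. 2 (successive states W1 → W2 → W3 → W4, with the first storage space always holding the current network parameters and the second storage space accumulating the second network parameters produced by each store) can be sketched as follows; the storage-space object and the adjust() call are hypothetical.

```python
class ParameterSpaces:
    """Hypothetical bookkeeping for the first and second storage spaces of Fig. 2."""

    def __init__(self, initial_params):
        self.first_space = initial_params    # current network parameters (W1 at the start)
        self.second_space = []               # history of second network parameters

    def store_one(self, storage_net, first_data, first_tag):
        # The storage neural network in the first state uses the parameters in the first space.
        first_params = self.first_space
        second_params = storage_net.adjust(first_params, first_data, first_tag)  # assumed API
        self.first_space = second_params             # W1 -> W2 -> W3 -> W4 ...
        self.second_space.append(second_params)      # keep W2, W3, W4, ... for later use
        return second_params

# Storing x1, x2, x3 in turn reproduces the sequence of Fig. 2:
#   spaces = ParameterSpaces(W1)
#   spaces.store_one(net, x1, label1)   # first space: W2, second space: [W2]
#   spaces.store_one(net, x2, label2)   # first space: W3, second space: [W2, W3]
#   spaces.store_one(net, x3, label3)   # first space: W4, second space: [W2, W3, W4]
```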
In some alternative implementations, the first data may correspond to a plurality of data types, and the storage neural network may have a plurality of storage paths corresponding thereto, each for storing data to be stored of one data type. Wherein the data type includes at least one of: text data, image data, audio data, video data.
It should be noted that the above is merely an example of a data type, and the embodiments of the present disclosure are not limited thereto.
Illustratively, the first data includes N data types, N is greater than or equal to 1 and N is an integer, the storage neural network has a storage path matching each data type, and each storage path includes at least one storage neural network in a first state; accordingly, before adjusting the first network parameter of the storage neural network in the first state according to the first data and the first storage tag to obtain the second network parameter of the storage neural network in the second state, the method may further include: determining a first data type of the first data; a stored neural network of a first state under a storage path that matches a first data type is obtained.
It can be seen that, for the first data to be stored, before storing, a storage path matching the first data type needs to be determined according to the first data type of the first data, and then the storage neural network in the first state under the storage path is acquired, so that the first data of the first data type is stored under the correct storage path.
If a certain piece of first data to be stored includes two or more data types (for example, the first data is text data that also contains picture data), either the data of each data type in the first data may be assigned its own storage path and stored separately, or the storage path of the data type that accounts for the largest share of the content may be selected and the whole first data stored under that path (for example, if text occupies most of the first data, the first data may be stored under the storage path corresponding to text data). When the parts of the first data are stored separately, an association relationship needs to be established among the parts stored under different paths, so that the accurate and complete first data can be read later.
In some alternative implementations, acquiring a stored neural network of a first state under a storage path that matches a first data type includes: and under the condition that the read storage path of the storage neural network is not matched with the first data type, acquiring a second network parameter matched with the first data type from a second storage space, and obtaining the storage neural network in the matched first state according to the matched second network parameter, wherein the second storage space is used for storing the second network parameter. Wherein a read storage neural network can be understood as a storage neural network that has been loaded into the respective processing device.
For example, suppose the first data stored for the i-th time is of the text type, so that the storage neural network read after that storage is the one under the storage path for text data; that is, the storage path of the read storage neural network is the text data storage path. If the first data stored for the (i+1)-th time is of the picture type, the storage path of the currently read storage neural network obviously does not match the picture type; therefore, the second network parameter obtained the last time picture-type data was stored can be acquired from the second storage space, and the storage neural network in the first state matching the (i+1)-th first data can be obtained using that second network parameter.
Fig. 3 is a schematic diagram of a data storage method according to an embodiment of the disclosure. Referring to fig. 3, the first data may include text data, picture data, ..., and audio data, covering N data types; correspondingly, the network parameter of the initial storage neural network is W1, and the storage neural network corresponds to N storage paths, namely a text data storage path, a picture data storage path, ..., and an audio data storage path.
As shown in fig. 3, when the first data is text data (e.g., first data t1, first data t2, first data t3, etc.), that first data is stored through the storage path corresponding to text data, and a first network parameter of the storage neural network in the first state under that storage path is adjusted (e.g., W1 is adjusted to W12, and W12 is adjusted to W13); when the first data is picture data (e.g., first data p1, first data p2, first data p3, etc.), that first data is stored through the storage path corresponding to picture data, and a first network parameter of the storage neural network in the first state under that storage path is adjusted (e.g., W1 is adjusted to W22, and W22 is adjusted to W23); ...; when the first data is audio data (e.g., first data a1, first data a2, first data a3, etc.), that first data is stored through the storage path corresponding to audio data, and a first network parameter of the storage neural network in the first state under that storage path is adjusted (e.g., W1 is adjusted to WN2, and WN2 is adjusted to WN3). N is the sequence number of the storage path, N ≥ 1, and N is an integer.
For example, in the initial case, the read storage neural network is the initial storage neural network, and its network parameter is W1. After the first data t1 to be stored and its first storage tag labelt1 are acquired, since the initial storage neural network has not yet stored any data, the storage paths corresponding to different data types have not yet been split; the storage neural network in the first state under the corresponding storage path therefore does not need to be acquired, and the initial storage neural network is regarded as the storage neural network in the first state under every storage path. Based on this, t1 and labelt1 are input into the initial storage neural network, data calculation is performed using the initial storage neural network, the initial network parameter W1 is adjusted (the initial network parameter W1 corresponds to the first network parameter with respect to t1) to obtain the second network parameter W12, and a storage path for text data is split off. If text data needs to be stored later, processing is carried out on the basis of the second network parameter W12.
Further, if the first data to be stored is the picture data p1 after the first data t1 is stored, the processing manner is similar to that of the first data t1, and the storage path of the picture data may be split. Similarly, if the first data to be stored is the audio data a1, the storage path of the audio data may be split similarly to the processing manner of the first data t1 or p 1.
After the storage neural network stores at least the first data t1 (for example, stores the first data t1, the first data p1, and the first data a 1), if the new first data to be stored is the text data t2, the data type of the first data t2 needs to be determined first. After determining that t2 belongs to the text type, it can be determined that t2 should be stored in a storage path corresponding to the text data, and at this time, it is required to determine whether the read storage path of the storage neural network is the storage path of the text data.
If the first data stored last time is t1, it is determined that the storage path of the read storage neural network is the storage path of the text data, and therefore, t2 storage is simply performed directly on the basis of the read storage neural network. For example, since the second network parameter obtained by storing t1 is W12, W12 is used as the first network parameter for storing t2, t2 and the first storage tag labelt2 thereof are input into the storage neural network having the network parameter of W12, data calculation is performed by using the storage neural network, and the first network parameter W12 is adjusted to obtain the second network parameter W13, thereby realizing storage of t 2.
If the last stored first data was non-text data (e.g., the first data p1 or the first data a1), the storage path of the read storage neural network is also a non-text storage path; therefore, the second network parameter matching the data type of t2 can be acquired from the second storage space, and the matching storage neural network in the first state can be obtained based on the acquired second network parameter. For example, if it is determined that the first data stored last time is p1, the storage path of the read storage neural network is the storage path of picture data; it is determined through the second storage space that the last stored text data is t1 and that its corresponding second network parameter is W12. Based on this, W12 is taken as the first network parameter for storing t2, W12 is read to obtain the storage neural network in the first state under the storage path matching the data type of t2, t2 and its first storage tag labelt2 are input into the storage neural network whose network parameter is W12, data calculation is performed using the storage neural network, and the first network parameter W12 is adjusted to obtain the second network parameter W13, thereby realizing the storage of t2.
Similarly, if first data t3 of the text type is received, it is determined that t3 needs to be stored under the storage path of text data. If the storage path of the currently read storage neural network corresponds to text data, t3 is stored directly on the basis of the read storage neural network. If the storage path of the currently read storage neural network does not correspond to text data, it is determined that the last text data stored by the storage neural network is t2 and that the second network parameter obtained by storing t2 is W13; therefore, W13 is acquired from the second storage space and taken as the first network parameter for storing t3, t3 and its first storage tag labelt3 are input into the storage neural network whose network parameter is W13, data calculation is performed using the storage neural network, and the first network parameter W13 is adjusted to obtain the second network parameter W14, thereby realizing the storage of t3. New text data to be stored is handled in a similar manner.
As for first data of the picture type or the audio type, the storage process is similar to that of first data of the text type and is not described again.
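One possible realization of the path selection of Fig. 3 (keeping, for each data type, the most recent second network parameter in the second storage space and reloading it whenever the currently read path does not match the incoming data) is sketched below; the state dictionary, the type-detection helper and the adjust() call are assumptions.

```python
def store_with_paths(storage_net, first_data, first_tag, state, detect_data_type):
    """state is a hypothetical dict with:
       "loaded_type"    - data type of the currently read storage path
       "loaded_params"  - network parameters of the currently read network
       "second_space"   - per-type latest second network parameters
       "initial_params" - the initial network parameter W1
    """
    data_type = detect_data_type(first_data)          # e.g. "text", "picture", "audio"

    if state["loaded_type"] != data_type:
        # The read storage path does not match: reload the matching second network
        # parameter from the second storage space (or fall back to the initial W1).
        state["loaded_params"] = state["second_space"].get(data_type, state["initial_params"])
        state["loaded_type"] = data_type

    # Adjust the first network parameter of the matching first-state network.
    second_params = storage_net.adjust(state["loaded_params"], first_data, first_tag)  # assumed API
    state["loaded_params"] = second_params
    state["second_space"][data_type] = second_params
    return second_params
```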
In some alternative implementations, a plurality of storage subnetworks may be included in the storage neural network, each storage subnetwork having a particular network structure adapted to store first data of certain particular task types.
Illustratively, the first data includes M task types, M is greater than or equal to 1, M is an integer, the storage neural network includes M storage sub-networks corresponding to the M task types one by one, and the storage sub-networks are used for executing storage of the first data of the corresponding task types.
For example, the storage neural network includes a convolutional sub-network, a cyclic sub-network, a pulse sub-network, and a graph neural sub-network. The convolutional sub-network includes at least one convolution layer and is adapted to store first data of computer vision tasks (e.g., image data in a computer vision task); the cyclic sub-network is constructed based on a recurrent neural network (Recurrent Neural Network, RNN) and is adapted to store first data in tasks such as natural language processing and speech recognition; the pulse sub-network is constructed based on a spiking neural network (Spiking Neural Network, SNN) and is adapted to store first data of time-series tasks; and the graph neural sub-network is constructed based on a graph neural network and is adapted to store first data of relationship-class tasks (such as a knowledge graph).
In some optional implementations, the task type or the task type identifier of the first data may be input into the storage neural network, so that the storage of the first data may be implemented through a corresponding storage sub-network, or the task type may be used as part of the content of the first storage tag, where the storage neural network determines the task type of the first data through the first storage tag, and further stores the first data through the corresponding storage sub-network.
Fig. 4 is a schematic diagram of a storage neural network according to an embodiment of the disclosure. Referring to fig. 4, a plurality of storage sub-networks, such as a convolutional sub-network, a cyclic sub-network, a pulse sub-network, ..., and a graph neural sub-network, are arranged in the storage neural network, and each storage sub-network is used for storing first data of the corresponding task type, so as to preserve the content of the first data as much as possible by utilizing the characteristics of that storage sub-network.
It should be noted that the foregoing description is merely illustrative of a storage subnetwork, and embodiments of the present disclosure are not limited in this regard.
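By way of a non-limiting illustration, dispatching first data to the storage sub-network matching its task type might be sketched as follows; the task-type names and the sub-network class are placeholders for the convolutional, cyclic, pulse and graph sub-networks mentioned above.

```python
class StorageSubNetwork:
    """Placeholder for one storage sub-network serving one task type."""

    def __init__(self, name):
        self.name = name

    def store(self, first_data, first_tag):
        # Placeholder: adjust this sub-network's parameters to store first_data.
        return f"{self.name} adjusted its parameters to store the tagged data"

# Hypothetical mapping from task type to storage sub-network.
SUBNETWORKS = {
    "computer_vision": StorageSubNetwork("convolutional sub-network"),
    "nlp_or_speech":   StorageSubNetwork("cyclic (RNN-based) sub-network"),
    "time_series":     StorageSubNetwork("pulse (SNN-based) sub-network"),
    "relational":      StorageSubNetwork("graph neural sub-network"),
}

def store_by_task_type(first_data, first_tag):
    # The task type may be supplied directly or carried as part of the first storage tag.
    task_type = first_tag["task_type"]
    return SUBNETWORKS[task_type].store(first_data, first_tag)
```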
In some alternative implementations, the first data already stored in the storage neural network may be updated in addition to the first data that may be stored by the storage neural network.
In some optional implementations, the first state of the storage neural network has stored third data, the third data including a plurality of data segments, the first data being updated data for a portion of the third data; correspondingly, according to the first data and the first storage label, adjusting a first network parameter of the storage neural network in the first state to obtain a second network parameter of the storage neural network in the second state, including: comparing the first data with the third data, and determining a first data segment in the first data and a second data segment in the third data, wherein the positions of the first data segment and the second data segment correspond and the data are different; and inputting the first data segment and the first storage label into a storage neural network in the first state, and adjusting a first network parameter corresponding to the second data segment in the storage neural network in the first state to obtain a second network parameter corresponding to the first data segment, wherein the second network parameter corresponding to the first data segment is used for updating the stored third data into the first data.
Therefore, when data is updated, only the network parameters corresponding to the updated data segments need to be adjusted. Compared with adjusting network parameters for the entire updated data, the amount of data to be processed is relatively small, so the data processing pressure can be relieved.
Illustratively, the first data includes data segments s11, s12, s13 and s14, and the third data includes data segments s21, s22, s23 and s24. By comparing the first data with the third data, it is found that s11 is identical to s21, s12 is identical to s22, s13 is different from s23, and s14 is identical to s24. It follows that the first data segment is s13 and the second data segment is s23. On this basis, s13 and the first storage label are input into the storage neural network in the first state, data calculation is performed by using the storage neural network in the first state, and the first network parameter corresponding to s23 is adjusted to obtain the second network parameter corresponding to s13, so that the update of the third data is realized.
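As an illustrative sketch of the segment comparison just described (the segment contents, the per-segment parameter dictionary, and the stand-in "re-fit" step are assumptions for illustration, not the claimed adjustment procedure):

```python
# Sketch of the segment-wise update in the example above: only the parameters
# tied to the differing segment (s13 vs. s23) are adjusted.
first_data = {"s11": [1, 2], "s12": [3, 4], "s13": [9, 9], "s14": [7, 8]}
third_data = {"s21": [1, 2], "s22": [3, 4], "s23": [5, 6], "s24": [7, 8]}

# Compare segments at corresponding positions.
pairs = list(zip(sorted(first_data), sorted(third_data)))
changed = [(a, b) for a, b in pairs if first_data[a] != third_data[b]]
print(changed)  # [('s13', 's23')]

# Only the parameter group bound to the changed segment would be re-fitted; the
# per-segment parameter dictionary below is an illustrative stand-in for that group.
params = {"s21": 0.1, "s22": 0.2, "s23": 0.3, "s24": 0.4}
for new_seg, old_seg in changed:
    # Stand-in for adjusting the first network parameter tied to old_seg so that
    # the stored content becomes first_data[new_seg].
    params[old_seg] = sum(first_data[new_seg]) / len(first_data[new_seg])
print(params)
```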
It should be noted that, whether the first data is stored for the first time or updates data already stored, the adjustment of the network parameters related to the first data may be performed in two ways: in one way, the first network parameter is adjusted or updated directly; in the other way, the first low-rank decomposition matrices corresponding to the first network parameter are adjusted or updated. In either way, the first network parameter can be adjusted to obtain the second network parameter corresponding to the first data, which is used for storing the first data.
In some alternative implementations, the first network parameter corresponds to a plurality of first low rank decomposition matrices and the second network parameter corresponds to a plurality of second low rank decomposition matrices; correspondingly, according to the first data and the first storage label, adjusting a first network parameter of the storage neural network in the first state to obtain a second network parameter of the storage neural network in the second state, including: and inputting the first data and the first storage label into a storage neural network in a first state, performing data calculation by using the storage neural network in the first state, and adjusting matrix elements of a plurality of first low-rank decomposition matrices to obtain a plurality of second low-rank decomposition matrices.
It should be noted that updating the network parameters by adjusting the low-rank decomposition matrices mainly takes into account that, in some implementations, if the network parameter matrix (e.g., the weight matrix) of the storage neural network is relatively large and the data amount of the first data to be stored at a time is relatively small, adjusting a large network parameter matrix with a small amount of data easily causes problems such as over-fitting and severe forgetting of the matrix. Because a low-rank decomposition matrix is smaller in scale than the matrix before decomposition, the first network parameter matrix corresponding to the first network parameter can be decomposed into a plurality of first low-rank decomposition matrices, and a plurality of second low-rank decomposition matrices corresponding to the second network parameter can be obtained by adjusting the matrix elements of the first low-rank decomposition matrices, thereby effectively alleviating problems such as over-fitting and severe forgetting of the matrix.
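A minimal Python/PyTorch sketch of this low-rank adjustment is given below, in the spirit of low-rank adaptation; the rank, matrix shapes, reconstruction loss and optimizer settings are illustrative assumptions and not values from this disclosure.

```python
# Sketch of adjusting low-rank decomposition matrices instead of the full
# first network parameter matrix.
import torch

d, r = 64, 4                      # full dimension and (small) rank, both assumed
W = torch.randn(d, d)             # first network parameter matrix, kept frozen
A = torch.zeros(d, r, requires_grad=True)   # first low-rank decomposition matrices
B = torch.randn(r, d, requires_grad=True)

x = torch.randn(8, d)             # first data, encoded as network input
target = torch.randn(8, d)        # what the network should reproduce for its tag

opt = torch.optim.Adam([A, B], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    y = x @ (W + A @ B)           # effective second network parameter: W + A.B
    loss = torch.nn.functional.mse_loss(y, target)
    loss.backward()
    opt.step()
# A and B now play the role of the second low-rank decomposition matrices;
# the large matrix W itself is never modified.
```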
In some alternative implementations, the storage neural network may also support different storage modes to achieve different storage accuracies, thereby meeting diverse storage requirements.
In some alternative implementations, the storage neural network may also support N storage modes, corresponding to N storage accuracies, respectively. For example, the storage neural network supports 3 storage modes, whose corresponding storage accuracies are respectively: 32-bit floating point number (float32), 16-bit floating point number (float16) and 8-bit integer (int8). That is, for the first data to be stored, one storage mode may be selected and the first data stored based on the storage precision corresponding to that storage mode; in addition, the first data may be divided into a plurality of sub-data, different storage modes may be adopted for different sub-data, and each sub-data may be stored based on the corresponding storage precision.
In some alternative implementations, the first data includes at least one first sub-data corresponding to a first storage mode and at least one second sub-data corresponding to a second storage mode, and the storage precision of the first storage mode is different from the storage precision of the second storage mode; the storage neural network in the first state is used for adjusting first sub-network parameters corresponding to the first sub-data based on the first storage mode according to the first sub-data and the first storage label to obtain second sub-network parameters corresponding to the first sub-data; the storage neural network in the first state is further used for adjusting third sub-network parameters corresponding to the second sub-data based on the second storage mode according to the second sub-data and the first storage label to obtain fourth sub-network parameters corresponding to the second sub-data; the first sub-network parameter and the third sub-network parameter form a first network parameter, and the second sub-network parameter and the fourth sub-network parameter form a second network parameter.
Illustratively, the first storage mode is an accurate storage mode, and the second storage mode is an approximate storage mode. After the first data is acquired, the first sub-data in the first data that adopts the accurate storage mode and the second sub-data in the first data that adopts the approximate storage mode can be determined. The storage neural network in the first state then adjusts, in the accurate storage mode and according to the first sub-data and the first storage label, the first sub-network parameter corresponding to the first sub-data to obtain the second sub-network parameter corresponding to the first sub-data; at the same time, the storage neural network in the first state adjusts, in the approximate storage mode and according to the second sub-data and the first storage label, the third sub-network parameter corresponding to the second sub-data to obtain the fourth sub-network parameter corresponding to the second sub-data.
It should be noted that, there is a certain positive correlation between the storage precision corresponding to the storage mode and the calculation amount required for storage, that is, for the same first data, if the storage precision of the storage mode is higher, the calculation amount required for storing the first data is relatively larger, whereas if the storage precision of the storage mode is lower, the calculation amount required for storing the first data is relatively smaller. The calculation amount required for storing the first data mainly comprises calculation amount generated in the process of adjusting network parameters by the neural network for storing the first data.
In some optional implementation manners, for the part of the data with lower fault tolerance in the first data, an accurate storage mode can be adopted, so that when the part of the data is read later, the reading result is more accurate, and further data processing is not influenced; for the partial data with higher fault tolerance in the first data, an approximate storage mode can be adopted, so that the calculated amount generated by storing the partial data is reduced, and even if a certain error exists between a reading result and the original stored data when the partial data is read later, the further data processing is not influenced.
For example, the first data to be stored is promotion data of a company (the promotion data may be in at least one of text form, voice form, video form, and the like), including a company profile, a company address, a company telephone, a company website, and a company zip code. The company profile may be stored in the approximate storage mode, and the company address, company telephone, company website, and company zip code may be stored in the accurate storage mode.
It should be noted that the foregoing description is only illustrative of each storage mode, and the embodiments of the present disclosure are not limited thereto.
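As a purely illustrative sketch of selecting a storage precision per sub-data (it shows only the precision split, not the network-parameter adjustment itself; the mode-to-dtype mapping and the quantisation step are assumptions introduced here):

```python
# Sketch of choosing a storage mode (precision) per sub-data of the first data.
import numpy as np

MODES = {"accurate": np.float32, "approximate": np.int8}  # assumed mapping

first_data = {
    "company_address": (np.array([116.40, 39.90]), "accurate"),
    "company_profile": (np.array([0.12, 0.56, 0.88]), "approximate"),
}

stored = {}
for name, (values, mode) in first_data.items():
    if MODES[mode] == np.int8:
        # Approximate mode: quantise to 8-bit integers (lossy, but cheaper to fit).
        scale = float(np.abs(values).max()) / 127.0
        stored[name] = (np.round(values / scale).astype(np.int8), scale, mode)
    else:
        # Accurate mode: keep full 32-bit floating point precision.
        stored[name] = (values.astype(np.float32), 1.0, mode)

print(stored["company_profile"])
```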
It should be further noted that, in the case where the first data amount of the first data is larger than the second data amount of the network parameters of the storage neural network, if the first data were stored using the related technology, the required storage space would be equal to or close to the first data amount; in the embodiment of the disclosure, since the calculation process of the storage neural network is used instead of the storage process, the required storage space is equal to or close to the second data amount. Since the second data amount is smaller than the first data amount, the occupation of storage space can be reduced.
In addition, the storage neural network can store data multiple times, and each data storage can be realized simply by adjusting or updating the network parameters of the storage neural network. Therefore, in some alternative implementations, the occupied storage space is always only the storage space occupied by the network parameters of the storage neural network, and when the total amount of data stored over multiple times is large, the occupation of storage space can be effectively reduced. For example, if data is stored 10 times using the storage neural network, the sum of the first data amounts of these data is 100 GB (gigabytes), and the second data amount of the network parameters of the storage neural network is 1 GB, then a storage effect of storing 100 GB of data through 1 GB of storage space can be achieved. Therefore, the data storage method of the embodiment of the disclosure effectively reduces the occupied amount of storage space.
A second aspect of the disclosed embodiments provides a data reading method.
Fig. 5 is a flowchart of a data reading method according to an embodiment of the disclosure. Referring to fig. 5, the data reading method may include the following steps.
Step S51, a second storage tag corresponding to second data to be read is obtained.
And step S52, inputting the second storage label into a preset storage neural network to obtain target read data corresponding to the second data.
In some alternative implementations, the storage neural network may be used to store data, and when the stored data needs to be read, the corresponding data may be read from the storage neural network.
In some alternative implementations, considering that the storage neural network may adopt different storage modes, corresponding to different storage accuracies, when storing data, the second data may or may not be identical to the corresponding target read data; however, the two should have a high similarity, so that subsequent data processing is not affected or is affected only slightly.
In some alternative implementations, the target read data is the same as the second data, or the similarity between the target read data and the second data is greater than a preset similarity threshold. The preset similarity threshold may be set according to experience, statistics, and processing requirements, which is not limited by the embodiments of the present disclosure.
The data reading method according to the embodiment of the present disclosure will be described below.
In some optional implementations, in step S51, acquiring a second storage tag corresponding to second data to be read includes: and receiving a second storage label input by a user or receiving the second storage label sent by the preset terminal.
In some alternative implementations, in step S52, inputting the second storage tag to the preset storage neural network, to obtain target read data corresponding to the second data, including: inputting the second storage label into a storage neural network for the storage neural network to perform data calculation according to the first target network parameter and the second storage label so as to obtain target read data; or, inputting the second storage label into the storage neural network, so that the storage neural network can acquire a second target network parameter corresponding to the second data according to the second storage label, and performing data calculation according to the second target network parameter and the second storage label to acquire target read data; the first target network parameter is a current network parameter of the storage neural network, and the second target network parameter is a network parameter obtained by storing second data in the storage neural network.
It follows that at least two implementations may be employed when reading the second data based on the second storage tag and the storage neural network. In the first implementation, the second storage tag is input to the storage neural network, and the storage neural network directly uses the current network parameter (i.e., the first target network parameter) and the second storage tag to perform data calculation and output the target read data. In the second implementation, after the second storage tag is input into the storage neural network, the storage neural network does not directly perform data calculation based on the current network parameter and the second storage tag, but acquires the network parameter (i.e., the second target network parameter) obtained when the storage neural network stored the second data, performs data calculation using the second target network parameter and the second storage tag, and then outputs the target read data.
For example, the first target network parameter of the preset storage neural network is W1, and the second storage tag corresponding to the second data x1 to be read is label1. When the second data is read by using the storage neural network, label1 can be directly input into the storage neural network, and the storage neural network performs data calculation according to the first target network parameter W1 and label1 to obtain target read data x1'; alternatively, label1 may be input into the storage neural network, and the storage neural network searches the second storage space, according to label1, for the second network parameter W1' generated when x1 was initially stored, obtains the second target network parameter W1', and then performs data calculation according to the second target network parameter W1' and label1 to obtain target read data x1". The second storage space is used for storing the second network parameter.
It should be noted that if the first target network parameter is directly used to read the data, the second target network parameter does not need to be acquired, the reading speed is faster, and if the second target network parameter is used to read the data, it may take a certain time to acquire the second target network parameter, but the accuracy of the read target data may be relatively higher. When reading data, any data reading mode can be selected according to requirements, and the embodiment of the disclosure is not limited to this.
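A minimal sketch of the two read paths just described is given below; the tag-to-parameter lookup standing in for the "second storage space", the tiny linear model, and the function name read are illustrative assumptions.

```python
# Sketch of the two read paths: (1) compute with the current (first target)
# network parameter; (2) look up the second target network parameter that was
# saved when the data was stored, then compute with it.
import torch

def read(tag_vec, current_params, second_storage_space=None, tag_key=None):
    if second_storage_space is not None and tag_key in second_storage_space:
        # Path 2: slower (requires the lookup) but usually more accurate.
        W = second_storage_space[tag_key]
    else:
        # Path 1: use the current network parameter directly, faster.
        W = current_params
    return tag_vec @ W            # inference only; no parameter is changed

W1 = torch.randn(16, 16)                      # current (first target) network parameter
snapshot = {"label1": torch.randn(16, 16)}    # stand-in for the second storage space
tag = torch.randn(1, 16)                      # encoded second storage tag

x1_fast = read(tag, W1)                          # path 1
x1_exact = read(tag, W1, snapshot, "label1")     # path 2
```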
In some optional implementations, the second data corresponds to a plurality of levels of storage tags, and as the level of the storage tag increases, the information characterization capability of the storage tag correspondingly increases, and the multi-level storage tag is stored in the third storage space; the storage neural network is further used for acquiring a new second storage tag from the third storage space according to the second storage tag and acquiring target read data according to the new second storage tag under the condition that the level of the second storage tag is not the highest level; the new second storage label is a storage label corresponding to the second data and having a higher level than the second storage label.
That is, if the second memory tag of the input memory neural network is not the highest-level memory tag of the second data, a new second memory tag having a higher level than the second memory tag may be acquired from the third memory space, and the target read data may be obtained using the new second memory tag.
Illustratively, inputting the second storage tag to a preset storage neural network to obtain target read data corresponding to the second data, including: and inputting the second storage label into a storage neural network, and reading a new second storage label with higher level from a third storage space according to the second storage label under the condition that the storage neural network recognizes that the level of the second storage label is not the highest level, and performing data calculation according to a first target network parameter and the new second storage label to obtain target read data, wherein the first target network parameter is the current network parameter of the storage neural network.
For example, the first target network parameter of the preset storage neural network is W1, the second storage label corresponding to the second data x1 to be read is label11, and the level of label11 is not the highest level. When the second data is read by using the storage neural network, the label11 may be input into the storage neural network, where the storage neural network first identifies whether the level of the label11 is the highest level, and reads out a second storage tag label12 corresponding to the second data x1 and having a higher level from the third storage space when it is identified that the level of the label11 is not the highest level, and then performs data calculation according to the first target network parameter W1 and the label12 to obtain target read data x1'.
Illustratively, inputting the second storage tag to a preset storage neural network to obtain target read data corresponding to the second data, including: inputting the second storage label into a storage neural network, and under the condition that the storage neural network recognizes that the level of the second storage label is not the highest level, reading a new second storage label with higher level from a third storage space according to the second storage label, acquiring a second target network parameter corresponding to second data according to the second storage label, and performing data calculation according to the second target network parameter and the new second storage label to obtain target read data; the second target network parameter is a network parameter obtained by storing second data in the storage neural network.
For example, the first target network parameter of the preset storage neural network is W1, the second storage label corresponding to the second data x1 to be read is label11, and the level of label11 is not the highest level. When the second data is read by using the storage neural network, label11 may be input into the storage neural network; the storage neural network first identifies whether the level of label11 is the highest level, and if it is identified that the level of label11 is not the highest level, reads out from the third storage space a second storage tag label12 that corresponds to the second data x1 and has a higher level, searches the second storage space, according to label11, for the second network parameter W1' generated when x1 was stored to obtain the second target network parameter W1', and then performs data calculation according to the second target network parameter W1' and label12, so as to obtain the target read data x1″.
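As an illustrative sketch of the multi-level tag handling above (the tag table standing in for the third storage space and its level encoding are assumptions introduced for illustration):

```python
# Sketch: if the supplied second storage tag is not of the highest level, a
# higher-level tag for the same second data is fetched before the read.
THIRD_STORAGE_SPACE = {
    # tag -> (level, higher-level tag for the same second data, or None if highest)
    "label11": (1, "label12"),
    "label12": (2, None),
}

def resolve_tag(tag: str) -> str:
    level, higher = THIRD_STORAGE_SPACE[tag]
    while higher is not None:
        tag = higher
        level, higher = THIRD_STORAGE_SPACE[tag]
    return tag

print(resolve_tag("label11"))  # 'label12' is then used for the data calculation
```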
It should be noted that, the process of reading data by using the storage neural network may be understood as an inference process (or a prediction process) of the storage neural network, that is, after the second storage label is input, the storage neural network performs inference calculation based on the first target network parameter or the second target network parameter and the second storage label, and the output result is the target read data corresponding to the second data. In other words, the network parameters of the stored neural network are not typically changed during the reading of the data, but rather the data calculation is performed using their network parameters (first target network parameters or second target network parameters).
In some alternative implementations, after the target read data is read, the target read data may also be checked to determine whether the complete and accurate data is read from the storage neural network.
For example, for the second data adopting the accurate storage mode, the target read data corresponding to the second data can be verified based on a hard verification mode; for the second data adopting the approximate storage mode, the target read data corresponding to the second data can be verified based on a soft verification mode. The hard verification mode comprises a verification mode based on verification information such as verification codes, and the soft verification mode comprises a verification mode based on verification information such as similarity and space distance.
In some alternative implementations, the second data corresponds to the first verification information; correspondingly, after the second storage tag is input to the preset storage neural network to obtain the target read data corresponding to the second data, the method may further include: determining second check information corresponding to the target read data; and determining a first check result of the target read data according to the first check information and the second check information. The first check information and the second check information may be check codes obtained by respectively processing the second data and the target read data based on a preset check algorithm, where the check algorithm includes parity check, exclusive-or check, cyclic redundancy check, MD5 (Message-Digest Algorithm 5) check, digital signature, Hamming code check, and the like, which is not limited by the embodiments of the disclosure.
Illustratively, a first check Code1 of the second data is calculated in advance according to a check algorithm, and the first check Code1 is stored in a preset space for standby. After the target read data corresponding to the second data is obtained, calculating a second check Code2 of the target read data according to the check algorithm, reading a first check Code1 of the second data from a preset space, comparing the first check Code1 with the second check Code2, if the first check Code1 and the second check Code2 are consistent, the target read data is identical or similar to the second data, the target read data passes the data check, otherwise, if the target read data and the second data are inconsistent, the target read data is different, the difference is large, and the target read data does not pass the data check.
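A minimal sketch of this hard verification path follows; MD5 is taken from the algorithms listed above, while the byte content of the data and the variable names are illustrative assumptions.

```python
# Sketch of the hard verification path: a check code of the second data is
# computed in advance, and the same algorithm is applied to the target read
# data after reading; the two codes are then compared.
import hashlib

def check_code(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

second_data = b"company address: 88 Example Road"
code1 = check_code(second_data)            # computed and stored in advance

target_read_data = b"company address: 88 Example Road"
code2 = check_code(target_read_data)       # computed after reading

print("passed" if code1 == code2 else "failed")
```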
In some optional implementations, after inputting the second storage tag to the preset storage neural network to obtain the target read data corresponding to the second data, the method may further include: determining the overall similarity between the second data and the target read data; and determining a second check result of the target read data according to the overall similarity.
After the target read data corresponding to the second data is obtained, the overall similarity between the second data and the target read data is calculated according to a similarity algorithm. If the overall similarity is greater than or equal to a preset similarity threshold, the target read data is identical or similar to the second data, and the target read data passes the data verification; otherwise, if the overall similarity is less than the preset similarity threshold, the target read data differs considerably from the second data, and the target read data does not pass the data verification. The similarity algorithm includes the cosine similarity algorithm, the Jaccard similarity coefficient algorithm, the Pearson correlation coefficient algorithm, and the like, which is not limited by the embodiments of the disclosure.
After obtaining the target read data corresponding to the second data, the spatial distance between the second data and the target read data is calculated according to a distance algorithm. If the spatial distance is smaller than or equal to a preset distance threshold, the target read data is identical or similar to the second data, and the target read data passes the data verification; otherwise, if the spatial distance is greater than the preset distance threshold, the target read data differs considerably from the second data, and the target read data fails the data verification. The distance algorithm includes the Euclidean distance algorithm, the Manhattan distance algorithm, the Chebyshev distance algorithm, and the like, which is not limited by the embodiments of the present disclosure.
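A brief sketch of this soft verification path, using overall cosine similarity and Euclidean distance; the vectors and the thresholds are illustrative assumptions.

```python
# Sketch of the soft verification path: compare second data with target read
# data by overall similarity and/or spatial distance against preset thresholds.
import numpy as np

second_data = np.array([0.2, 0.5, 0.9, 0.1])
target_read_data = np.array([0.21, 0.48, 0.88, 0.12])

cos_sim = float(np.dot(second_data, target_read_data) /
                (np.linalg.norm(second_data) * np.linalg.norm(target_read_data)))
distance = float(np.linalg.norm(second_data - target_read_data))

SIM_THRESHOLD, DIST_THRESHOLD = 0.95, 0.1     # assumed thresholds
passed = cos_sim >= SIM_THRESHOLD and distance <= DIST_THRESHOLD
print(cos_sim, distance, passed)
```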
In some alternative implementations, the target read data may also be verified in a piecewise verification manner.
In some optional implementations, the second data includes at least one third data segment corresponding to the first storage mode and at least one fourth data segment corresponding to the second storage mode, the third data segment corresponding to the first data segment check information, the third check result of the target read data includes a first check sub-result corresponding to the first storage mode and a second check sub-result corresponding to the second storage mode, and the storage precision of the first storage mode is different from the storage precision of the second storage mode; correspondingly, after the second storage tag is input to the preset storage neural network to obtain the target read data corresponding to the second data, the method may further include: dividing the target read data into at least one fifth data segment corresponding to the first storage mode and at least one sixth data segment corresponding to the second storage mode; determining second data segment verification information corresponding to each fifth data segment; obtaining a first syndrome result related to the fifth data segment according to the first data segment verification information and the second data segment verification information with the corresponding relation; determining the segment similarity between the fourth data segment and the sixth data segment with the corresponding relation; and obtaining a second syndrome result about the sixth data segment according to the segment similarity. The first syndrome result is used for representing whether the fifth data segment passes the data verification or not, and the second syndrome result is used for representing whether the sixth data segment passes the data verification or not.
Illustratively, the first storage mode is an accurate storage mode, and the second storage mode is an approximate storage mode. The second data includes one third data segment s1 corresponding to the accurate storage mode, and two fourth data segments s2 and s3 corresponding to the approximate storage mode; the third data segment s1 corresponds to a first data segment check code scode1, which is obtained based on a preset check algorithm. The target read data includes one fifth data segment s4 corresponding to the accurate storage mode, and two sixth data segments s5 and s6 corresponding to the approximate storage mode, where s1 and s4 have a correspondence, s2 and s5 have a correspondence, and s3 and s6 have a correspondence.
Further, the second data segment check code scode2 of the fifth data segment s4 is calculated according to the preset check algorithm, and the first data segment check code scode1 is compared with the second data segment check code scode2. If the two are identical, the fifth data segment s4 is identical or similar to the third data segment s1, and the fifth data segment s4 passes the data check; otherwise, if the two are inconsistent, the fifth data segment s4 differs considerably from the third data segment s1, and the fifth data segment s4 fails the data check.
The segment similarity between the fourth data segment s2 and the sixth data segment s5 and the segment similarity between the fourth data segment s3 and the sixth data segment s6 are calculated according to a preset similarity algorithm. If both segment similarities are greater than or equal to a preset segment similarity threshold, the fourth data segment s2 is identical or similar to the sixth data segment s5, the fourth data segment s3 is identical or similar to the sixth data segment s6, and the sixth data segments s5 and s6 pass the data verification. If both segment similarities are smaller than the preset segment similarity threshold, the fourth data segment s2 differs considerably from the sixth data segment s5, the fourth data segment s3 differs considerably from the sixth data segment s6, and therefore the sixth data segments s5 and s6 fail the data verification. In addition, if only one of the two segment similarities is smaller than the preset segment similarity threshold, only the sixth data segment whose similarity is smaller than the threshold fails the data verification, while the other sixth data segment passes the data verification.
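As a sketch of this segment-wise verification (the segment contents, the use of MD5 for the accurate-mode segment, cosine similarity for the approximate-mode segments, and the threshold are all illustrative assumptions):

```python
# Sketch: accurate-mode segments are checked with a check code (first syndrome
# result); approximate-mode segments with a segment similarity (second syndrome
# result).
import hashlib
import numpy as np

def md5(b: bytes) -> str:
    return hashlib.md5(b).hexdigest()

def cosine(a, b) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# (segment of second data, segment of target read data, storage mode)
segments = [
    (b"exact-part",         b"exact-part",          "accurate"),     # s1 vs. s4
    (np.array([0.4, 0.6]),  np.array([0.41, 0.6]),  "approximate"),  # s2 vs. s5
    (np.array([0.1, 0.9]),  np.array([0.12, 0.9]),  "approximate"),  # s3 vs. s6
]

THRESHOLD = 0.99                                     # assumed segment similarity threshold
for original, read_back, mode in segments:
    if mode == "accurate":
        ok = md5(original) == md5(read_back)
    else:
        ok = cosine(original, read_back) >= THRESHOLD
    print(mode, "passed" if ok else "failed")
```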
In some optional implementations, a plurality of data may be selected from the second data as the first reference feature point, after the target read data corresponding to the second data is obtained, the second reference feature point is obtained by selecting the data at the corresponding position from the target read data, and the similarity between the second data and the target read data is determined by comparing the first reference feature point and the second reference feature point, so as to determine whether the target read data passes the data verification.
For example, the second data is a piece of binary data 1100001011001011, from which the 3rd to 5th bits, 000, are selected as one first reference feature point, and the 10th to 12th bits, 100, are selected as another first reference feature point. After the corresponding target read data is read, the 3rd to 5th bits are intercepted from the target read data to obtain one second reference feature point, whose value is 000, and the 10th to 12th bits are intercepted to obtain another second reference feature point, whose value is 101. Since the reference feature points cover 6 bits in total and only the 12th bit differs in value, the similarity between the second data and the target read data can be approximated as (5 ÷ 6) × 100% ≈ 83%. If the preset similarity threshold is 80%, it can be determined that the target read data passes the data verification because the similarity is greater than the similarity threshold.
It should be noted that the above calculation of the reference feature points and the similarity thereof is merely illustrative, and the embodiments of the present disclosure are not limited thereto.
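A compact sketch of the reference-feature-point comparison in the example above follows; the bit strings mirror the example, while the concrete target read data and the threshold handling are illustrative assumptions.

```python
# Sketch: compare selected bit positions (reference feature points) of the
# second data with the same positions of the target read data.
second_data      = "1100001011001011"
target_read_data = "1100001011011011"   # assumed read-back, differing at bit 12

# 1-indexed reference feature points: bits 3-5 and bits 10-12.
positions = list(range(3, 6)) + list(range(10, 13))

matches = sum(second_data[p - 1] == target_read_data[p - 1] for p in positions)
similarity = matches / len(positions)    # 5 / 6, roughly 0.83

SIM_THRESHOLD = 0.80                     # assumed preset similarity threshold
print(f"{similarity:.0%}", "passed" if similarity >= SIM_THRESHOLD else "failed")
```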
It should be further noted that, if the target read data is identical to the corresponding second data or has a high similarity to it, replacing the second data with the target read data for corresponding data processing will have no or little effect on the processing result, so a verification result indicating that the verification passes can be obtained; if the target read data differs considerably from the corresponding second data or has a low similarity to it, replacing the second data with the target read data may greatly affect the corresponding data processing result, so a verification result indicating that the verification fails can be obtained. After the verification result is obtained, the verification result may be transmitted to an external device or provided to a corresponding user. The external device or the user may then, with reference to the accuracy requirement of the data processing or the like, determine based on the verification result whether to perform the corresponding data processing with the target read data.
For example, if the target read data is identical to or has a high similarity to the corresponding second data, the data processing with a high accuracy requirement may be performed based on the target read data, and if the target read data has a low similarity to the corresponding second data, the data processing with a low accuracy requirement may be performed based on the target read data, or the target read data may not be used for subsequent data processing, which is not limited in the embodiments of the present disclosure.
It will be appreciated that the above embodiments mentioned in the present disclosure may be combined with each other to form combined embodiments without departing from the principle logic; for brevity, detailed descriptions of such combinations are omitted. It will be appreciated by those skilled in the art that, in the above-described methods of the embodiments, the specific order of execution of the steps and the arrangement of functional modules should be determined by their functions and possible inherent logic.
A third aspect of an embodiment of the present disclosure provides a data storage device.
Fig. 6 is a block diagram of a data storage device according to an embodiment of the present disclosure. Referring to fig. 6, the data storage device 600 may include the following modules.
A first obtaining module 601, configured to obtain first data to be stored and a first storage tag corresponding to the first data;
The storage module 602 is configured to adjust a first network parameter of the storage neural network in the first state according to the first data and the first storage tag, to obtain a second network parameter of the storage neural network in the second state, where the second network parameter is used to store the first data.
According to the embodiment provided by the disclosure, first data to be stored and a first storage tag corresponding to the first data are acquired through a first acquisition module; and adjusting a first network parameter of the storage neural network in the first state according to the first data and the first storage label through the storage module to obtain a second network parameter of the storage neural network in the second state, wherein the second network parameter is used for storing the first data. It can be known that, in the embodiment of the present disclosure, although the data to be stored is first data, the first data is not actually stored in the storage medium, but the storage process of the data is converted into the data calculation process by means of the storage neural network, the first network parameter of the storage neural network in the first state is adjusted by the first data, and the second network parameter of the storage neural network in the second state is obtained, so that the storage of the first data is realized by the adjustment of the network parameter of the storage neural network; and when the stored data is required to be read later, the stored data can be read conveniently through the storage neural network by utilizing the corresponding storage tag. In summary, the embodiments of the present disclosure provide a new data storage method with "compute substitute storage".
A fourth aspect of the disclosed embodiments provides a data reading apparatus.
Fig. 7 is a block diagram of a data reading apparatus according to an embodiment of the present disclosure. Referring to fig. 7, the data reading apparatus 700 may include the following modules.
A second obtaining module 701, configured to obtain a second storage tag corresponding to second data to be read;
And the reading module 702 is configured to input the second storage tag to a preset storage neural network, so as to obtain target read data corresponding to the second data.
According to the embodiment provided by the disclosure, a second storage tag corresponding to second data to be read is acquired through the second acquisition module; and the second storage tag is input into a preset storage neural network through the reading module to obtain target read data corresponding to the second data. Therefore, in the embodiment of the disclosure, data can be stored in advance through the storage neural network, the storage being realized by adjusting the network parameters of the storage neural network according to the data to be stored and its storage tag, which belongs to a data storage method of using calculation to replace storage; correspondingly, the stored data can be conveniently read through the storage neural network by using the corresponding storage tag.
Furthermore, the disclosure also provides an electronic device and a computer readable storage medium.
Fig. 8 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Referring to fig. 8, an embodiment of the present disclosure provides an electronic device including: at least one processor 801; at least one memory 802, and one or more I/O interfaces 803, coupled between the processor 801 and the memory 802; the memory 802 stores one or more computer programs executable by the at least one processor 801, and the one or more computer programs are executed by the at least one processor 801 to enable the at least one processor 801 to perform a data storage method or a data reading method of an embodiment of the present disclosure.
Fig. 9 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Referring to fig. 9, an embodiment of the present disclosure provides an electronic device including a plurality of processing cores 901 and a network-on-chip 902, wherein the plurality of processing cores 901 are each connected to the network-on-chip 902, and the network-on-chip 902 is configured to interact data between the plurality of processing cores and external data.
Wherein one or more processing cores 901 have one or more instructions stored therein that are executed by the one or more processing cores 901 to enable the one or more processing cores 901 to perform a data storage method or a data reading method of an embodiment of the present disclosure.
In some embodiments, the electronic device may be a brain-like chip. Since the brain-like chip may adopt a vectorized computing manner, parameters such as the weight information of the neural network model need to be loaded from an external memory, for example, a Double Data Rate (DDR) synchronous dynamic random access memory. Therefore, running in a batch-processing manner achieves relatively high operation efficiency in the embodiments of the disclosure.
The present disclosure also provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the data storage method or the data reading method of the embodiments of the present disclosure. The computer readable storage medium may be a volatile or nonvolatile computer readable storage medium.
The present disclosure also provides a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when executed in a processor of an electronic device, performs a data storage method or a data reading method of the embodiments of the disclosure.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer-readable storage media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable program instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), static random access memory (SRAM), flash memory or other memory technology, portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable program instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and may include any information delivery media.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
The computer program instructions for performing the operations of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of computer readable program instructions, which can execute the computer readable program instructions.
The computer program product described herein may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purpose of limitation. In some instances, it will be apparent to one skilled in the art that features, characteristics, and/or elements described in connection with a particular embodiment may be used alone or in combination with other embodiments unless explicitly stated otherwise. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the disclosure as set forth in the appended claims.

Claims (24)

1. A method of data storage, comprising:
Acquiring first data to be stored and a first storage tag corresponding to the first data;
And according to the first data and the first storage label, adjusting a first network parameter of the storage neural network in a first state to obtain a second network parameter of the storage neural network in a second state, wherein the second network parameter is used for storing the first data.
2. The method of claim 1, wherein after the obtaining the second network parameter of the stored neural network of the second state, the method further comprises:
updating the first network parameters stored in a first storage space based on the second network parameters to store the first data and the second state of the stored neural network;
The storage neural network in the second state is used for storing new data to be stored and/or reading the stored data.
3. The method according to claim 1 or 2, wherein after the obtaining of the second network parameter of the stored neural network of the second state, the method further comprises:
and storing the second network parameters in a second storage space to store the first data and the second state of the storage neural network.
4. The method of claim 1, wherein adjusting the first network parameter of the first state storage neural network based on the first data and the first storage tag to obtain the second network parameter of the second state storage neural network comprises:
Inputting the first data and the first storage label into the first-state storage neural network, performing data calculation by using the first-state storage neural network, and adjusting the first network parameter to obtain the second network parameter; or alternatively
Inputting the first data and the first storage label into the first-state storage neural network, performing data calculation by using the first-state storage neural network, adjusting the first network parameters to obtain variation parameters in the first network parameters, and determining the second network parameters according to the variation parameters.
5. The method of claim 1 or 4, wherein the first data comprises N data types, N is greater than or equal to 1 and N is an integer, the storage neural network has a storage path matching each of the data types, and each storage path includes at least one first state storage neural network;
Before the first network parameter of the first state storage neural network is adjusted according to the first data and the first storage label, and the second network parameter of the second state storage neural network is obtained, the method further comprises:
Determining a first data type of the first data;
And acquiring a storage neural network of a first state under a storage path matched with the first data type.
6. The method of claim 5, wherein the retrieving the stored neural network of the first state in the storage path matching the first data type comprises:
and under the condition that the read storage path of the storage neural network is not matched with the first data type, acquiring a second network parameter matched with the first data type from a second storage space, and obtaining the storage neural network in the matched first state according to the matched second network parameter, wherein the second storage space is used for storing the second network parameter.
7. The method of claim 1 or 4, wherein the first data includes M task types, M is equal to or greater than 1, and M is an integer, the storage neural network includes M storage sub-networks corresponding to the M task types one to one, and the storage sub-networks are used to perform storage of the first data of the corresponding task types.
8. The method of claim 1, wherein the first state of the stored neural network has stored third data, the third data comprising a plurality of data segments, the first data being updated data for a portion of the third data;
And according to the first data and the first storage label, adjusting a first network parameter of the storage neural network in a first state to obtain a second network parameter of the storage neural network in a second state, including:
Comparing the first data with the third data, and determining a first data segment in the first data and a second data segment in the third data, wherein the first data segment corresponds to the second data segment in position and is different in data;
And inputting the first data segment and the first storage label into the first-state storage neural network, and adjusting a first network parameter corresponding to the second data segment in the first-state storage neural network to obtain a second network parameter corresponding to the first data segment, wherein the second network parameter corresponding to the first data segment is used for updating the stored third data into the first data.
9. The method of claim 1, wherein the first network parameter and the second network parameter comprise a learnable parameter of the stored neural network.
10. The method of claim 1, wherein the first network parameter corresponds to a plurality of first low rank decomposition matrices and the second network parameter corresponds to a plurality of second low rank decomposition matrices;
And according to the first data and the first storage label, adjusting a first network parameter of the storage neural network in a first state to obtain a second network parameter of the storage neural network in a second state, including:
And inputting the first data and the first storage label into the storage neural network in the first state, performing data calculation by using the storage neural network in the first state, and adjusting matrix elements of the plurality of first low-rank decomposition matrices to obtain the plurality of second low-rank decomposition matrices.
11. The method of claim 1, wherein the first data comprises at least one first sub-data corresponding to a first storage mode and at least one second sub-data corresponding to a second storage mode, and wherein the storage accuracy of the first storage mode is different from the storage accuracy of the second storage mode;
The storage neural network in the first state is used for adjusting a first sub-network parameter corresponding to the first sub-data based on the first storage mode according to the first sub-data and the first storage label to obtain a second sub-network parameter corresponding to the first sub-data; and
The storage neural network in the first state is further used for adjusting a third sub-network parameter corresponding to the second sub-data based on the second storage mode according to the second sub-data and the first storage label to obtain a fourth sub-network parameter corresponding to the second sub-data;
wherein the first and third sub-network parameters constitute the first network parameter, and the second and fourth sub-network parameters constitute the second network parameter.
12. The method of claim 1, wherein the first data corresponds to a plurality of levels of storage tags, and as the levels of the storage tags increase, the information characterization capabilities of the storage tags correspondingly increase;
the obtaining the first data to be stored and the first storage tag corresponding to the first data includes:
and receiving the first data to be stored and the first storage tag, wherein the first storage tag is selected from a plurality of levels of storage tags corresponding to the first data.
13. The method of claim 1, wherein the storage neural network is pre-trained and the storage neural network has data storage capabilities.
14. A data reading method, comprising:
acquiring a second storage tag corresponding to second data to be read;
and inputting the second storage label into a preset storage neural network to obtain target read data corresponding to the second data.
15. The method of claim 14, wherein inputting the second storage tag into a preset storage neural network results in target read data corresponding to the second data, comprising:
inputting the second storage label into the storage neural network to enable the storage neural network to perform data calculation according to a first target network parameter and the second storage label so as to obtain the target read data; or,
Inputting the second storage label into the storage neural network to enable the storage neural network to acquire a second target network parameter corresponding to the second data according to the second storage label, and performing data calculation according to the second target network parameter and the second storage label to acquire the target read data;
The first target network parameter is a current network parameter of the storage neural network, and the second target network parameter is a network parameter obtained by storing the second data by the storage neural network.
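The two read paths of claim 15 might look roughly as follows, reusing the LowRankStorageLayer sketch and assuming that parameter snapshots taken at storage time are kept in an ordinary dictionary; this is an interpretation, not the patent's implementation.

```python
import torch

def read_with_current_params(layer, tag: torch.Tensor) -> torch.Tensor:
    """First path: compute the read data with the network's current parameters."""
    with torch.no_grad():
        return layer(tag)

def read_with_stored_params(layer, tag: torch.Tensor,
                            snapshots: dict[str, dict], key: str) -> torch.Tensor:
    """Second path: restore the second target network parameters (the snapshot
    captured when the second data was stored), then compute the read data."""
    layer.load_state_dict(snapshots[key])   # snapshot previously saved via layer.state_dict()
    with torch.no_grad():
        return layer(tag)
```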
16. The method of claim 14 or 15, wherein the second data corresponds to a plurality of levels of storage tags, and as the level of the storage tags increases, the information characterization capability of the storage tags correspondingly increases, and a plurality of levels of the storage tags are stored in a third storage space;
in a case where the level of the second storage tag is not the highest level, the storage neural network is further configured to obtain a new second storage tag from the third storage space according to the second storage tag, and obtain the target read data according to the new second storage tag;
wherein the new second storage tag is a storage tag corresponding to the second data and having a higher level than the second storage tag.
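One simple policy for the fallback of claim 16, with the third storage space modeled as a plain lookup table from (data identifier, level) to tag and the reader jumping straight to the highest level; all of this structure is assumed for illustration.

```python
import torch

def read_with_fallback(layer, data_id: str, tag_level: int, tag: torch.Tensor,
                       third_storage_space: dict[tuple[str, int], torch.Tensor],
                       highest_level: int = 3) -> torch.Tensor:
    """If the supplied tag is not of the highest level, fetch the higher-level
    tag for the same data from the third storage space and read with that."""
    if tag_level < highest_level:
        # new second storage tag: same data, higher level
        tag = third_storage_space[(data_id, highest_level)]
    with torch.no_grad():
        return layer(tag)
```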
17. The method of claim 14, wherein the target read data is the same as the second data or a similarity between the target read data and the second data is greater than a preset similarity threshold.
18. The method according to claim 14 or 17, wherein the second data corresponds to first verification information;
after the second storage tag is input to the preset storage neural network to obtain the target read data corresponding to the second data, the method further comprises:
Determining second verification information corresponding to the target read data;
and determining a first check result of the target read data according to the first verification information and the second verification information.
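The verification information of claim 18 is most naturally a checksum or hash of the data; the minimal sketch below uses CRC32, which is an assumption rather than something the patent specifies.

```python
import zlib

def verification_info(payload: bytes) -> int:
    """Verification information for a blob of data (CRC32 chosen as an example)."""
    return zlib.crc32(payload)

def check_read_data(first_verification_info: int, target_read_data: bytes) -> bool:
    """First check result: does the read-back data carry the same checksum
    as the data that was originally stored?"""
    second_verification_info = verification_info(target_read_data)
    return second_verification_info == first_verification_info
```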
19. The method according to claim 14 or 17, wherein after the second storage tag is input to a preset storage neural network to obtain target read data corresponding to the second data, the method further comprises:
Determining an overall similarity between the second data and the target read data;
And determining a second check result of the target read data according to the overall similarity.
20. The method according to claim 14 or 17, wherein the second data comprises at least one third data segment corresponding to a first storage mode and at least one fourth data segment corresponding to a second storage mode, the third data segment corresponding to first data segment verification information, a third check result of the target read data comprising a first check sub-result corresponding to the first storage mode and a second check sub-result corresponding to the second storage mode, the storage accuracy of the first storage mode being different from the storage accuracy of the second storage mode;
after the second storage tag is input to the preset storage neural network to obtain the target read data corresponding to the second data, the method further comprises:
dividing the target read data into at least one fifth data segment corresponding to the first storage mode and at least one sixth data segment corresponding to the second storage mode;
determining second data segment verification information corresponding to each fifth data segment;
Obtaining the first check sub-result related to the fifth data segment according to the first data segment verification information and the second data segment verification information having the corresponding relationship;
determining a segment similarity between the fourth data segment and the sixth data segment having the corresponding relationship;
And obtaining the second check sub-result related to the sixth data segment according to the segment similarity.
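Putting claims 19 and 20 together, a hedged sketch of segment-wise verification: exact checksum comparison for segments stored in the high-accuracy mode, and a similarity threshold for segments stored in the lossy mode. Segment boundaries, the CRC32/cosine-similarity choices and the threshold are all assumptions of this sketch.

```python
import zlib
import torch

def cosine_similarity(a: torch.Tensor, b: torch.Tensor) -> float:
    """Scalar cosine similarity between two tensors of the same size."""
    return torch.nn.functional.cosine_similarity(a.flatten(), b.flatten(), dim=0).item()

def verify_segments(read_exact_segments, stored_segment_crcs,
                    original_lossy_segments, read_lossy_segments,
                    similarity_threshold: float = 0.99):
    """First check sub-results: CRC of each read-back exact-mode segment vs the
    stored per-segment verification information. Second check sub-results:
    similarity of each read-back lossy-mode segment to the original segment."""
    first_sub_results = [zlib.crc32(seg) == crc
                         for seg, crc in zip(read_exact_segments, stored_segment_crcs)]
    second_sub_results = [cosine_similarity(orig, read) >= similarity_threshold
                          for orig, read in zip(original_lossy_segments, read_lossy_segments)]
    return first_sub_results, second_sub_results
```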
21. A data storage device, comprising:
the first acquisition module is used for acquiring first data to be stored and a first storage tag corresponding to the first data;
And the storage module is used for adjusting the first network parameters of the storage neural network in the first state according to the first data and the first storage label to obtain the second network parameters of the storage neural network in the second state, wherein the second network parameters are used for storing the first data.
22. A data reading apparatus, comprising:
The second acquisition module is used for acquiring a second storage tag corresponding to second data to be read;
And the reading module is used for inputting the second storage tag into a preset storage neural network to obtain target reading data corresponding to the second data.
23. An electronic device, comprising:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores one or more computer programs executable by the at least one processor to enable the at least one processor to perform the data storage method of any one of claims 1-13 or the data reading method of any one of claims 14-20.
24. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the data storage method according to any one of claims 1-13 or the data reading method according to any one of claims 14-20.
CN202410307483.3A 2024-03-18 2024-03-18 Data storage method, data reading method, and corresponding device, equipment and medium Pending CN118210445A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410307483.3A CN118210445A (en) 2024-03-18 2024-03-18 Data storage method, data reading method, and corresponding device, equipment and medium

Publications (1)

Publication Number Publication Date
CN118210445A (en) 2024-06-18

Family

ID=91454956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410307483.3A Pending CN118210445A (en) 2024-03-18 2024-03-18 Data storage method, data reading method, and corresponding device, equipment and medium

Country Status (1)

Country Link
CN (1) CN118210445A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination