CN108053029A - Training method of a neural network based on a storage array - Google Patents

Training method of a neural network based on a storage array

Info

Publication number
CN108053029A
CN108053029A (application CN201711446484.2A)
Authority
CN
China
Prior art keywords
modification
storage array
discrete value
value
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711446484.2A
Other languages
Chinese (zh)
Other versions
CN108053029B (en)
Inventor
Zhang Rui (张睿)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Hill Electronic Technology Co Ltd
Original Assignee
Ningbo Hill Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Hill Electronic Technology Co Ltd filed Critical Ningbo Hill Electronic Technology Co Ltd
Priority to CN201711446484.2A priority Critical patent/CN108053029B/en
Publication of CN108053029A publication Critical patent/CN108053029A/en
Application granted granted Critical
Publication of CN108053029B publication Critical patent/CN108053029B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Read Only Memory (AREA)
  • Semiconductor Memories (AREA)

Abstract

The present invention provides a training method for a neural network based on a storage array. When the parameters of the connection weights of each storage array are modified, the input data forward-propagated through the storage array and the error data back-propagated through the storage array are each discretized, so as to obtain an input discrete value and an error discrete value respectively, and the modification condition of the connection weight is determined from the input discrete value and the error discrete value. In this method, the connection weights are adjusted according to preset modification conditions, so the modification amplitude is effectively random rather than following a specific target weight-modification value. In this way, the gap between what the neural network algorithm actually requires of each modification and the peculiar characteristics of the memory devices can be bridged; through repeated training and random modification, the output results converge and a satisfactory training result is obtained.

Description

Training method of a neural network based on a storage array
Technical field
The present invention relates to the field of artificial neural network integrated circuit design, and in particular to a training method for a neural network based on a storage array.
Background art
A neural network (NN) is an algorithmic mathematical model that imitates the behavioral characteristics of animal neural networks and performs distributed, parallel information processing. Such models are widely used in artificial intelligence fields such as speech recognition, image recognition and autonomous driving.
A neural network achieves its information-processing purpose by adjusting, according to the complexity of the system, the interconnections among a large number of internal nodes. To realize the algorithm, the neural network must be trained so that a convergent neural network algorithm is obtained. Referring to Fig. 1, during training, in forward propagation the information of the input layer is transferred layer by layer through the hidden layers to the output layer; the processing result of the output layer is compared with the answer label to produce an error. In back propagation, the produced error is returned layer by layer through the hidden layers to the input layer, and the parameters of the connection weights of each layer are modified, until the algorithm converges.
In the processing of a neural network algorithm, the data processing between layers is a matrix operation. As the functions of neural networks become more complex and their scale grows, the amount of matrix computation expands drastically, and realizing the computation with a CPU (central processing unit) or a dedicated processor plus memory consumes a great deal of processing time and requires expensive equipment. It has therefore been proposed to realize the matrix operations with a storage array composed of nonvolatile memories: owing to the storage characteristics of nonvolatile memory, the data stored in a memory can characterize the parameter of a connection weight, thereby realizing the matrix operation between layers. This approach can effectively increase the computing scale and processing speed of a neural network. During training, however, the connection weights are modified by write and erase operations on the memories, and owing to the intrinsic characteristics of the memory devices, the modification actually obtained under a specific target weight-modification value is often randomly distributed, making it difficult to obtain a satisfactory training result.
Summary of the invention
In view of this, an object of the present invention is to provide a training method for a neural network based on a storage array that overcomes the randomness of the weight-modification values of the storage array and obtains a satisfactory training result.
To achieve the above object, the present invention provides the following technical solution:
A training method for a neural network based on a storage array, in which the training of the neural network is carried out using multiple storage arrays: each storage array is used for the matrix operation between two layers of the neural network, each storage array is composed of storage units that include nonvolatile memories, and the data stored in the storage array characterize the connection weights between the layers. The training method includes:
performing sample training multiple times, until the output error converges;
wherein, in each sample training, the parameter modification of the connection weights of each storage array includes:
according to a preset mapping relation between first continuous intervals and first discrete values, discretizing the input data forward-propagated through the storage array to obtain an input discrete value;
according to a preset mapping relation between second continuous intervals and second discrete values, discretizing the error data back-propagated through the storage array to obtain an error discrete value, at least one of the first continuous intervals and the second continuous intervals comprising at least three continuous intervals;
according to the rule that the weight change is proportional to the opposite of the product of the forward-propagated input data and the back-propagated error data, determining the modification condition of the connection weight from the input discrete value and the error discrete value, the modification condition being a preset erase-operation bias, a preset write-operation bias, or a no-operation bias;
according to the modification condition, biasing the corresponding nonvolatile memory.
Optionally, the first continuous intervals include one zero interval and at least one positive interval, and the value of the first discrete value increases with the value of the corresponding first continuous interval and carries the sign of the corresponding interval; the second continuous intervals include at least one positive interval, one zero interval and at least one negative interval, and the value of the second discrete value increases with the value of the corresponding second continuous interval and carries the sign of the corresponding interval.
Optionally, the first continuous intervals further include at least one negative interval.
Optionally, determining the modification condition of the connection weight from the input discrete value and the error discrete value includes:
determining the modification condition of the connection weight from the opposite of the product of the input discrete value and the error discrete value.
Optionally, the erase-operation bias and the write-operation bias each include multiple grades, a higher grade corresponding to a larger modification amplitude; then,
determining the modification condition of the connection weight from the opposite of the product of the input discrete value and the error discrete value includes:
determining the type of the modification condition of the connection weight from the opposite of the product of the input discrete value and the error discrete value;
selecting the grade within the type of the modification condition according to the absolute value of the product of the input discrete value and the error discrete value, a larger absolute value corresponding to a higher grade.
Optionally, different grades of the erase-operation bias correspond to different erase voltage pulse amplitudes and/or different erase voltage pulse durations and/or different numbers of erase voltage pulses; different grades of the write-operation bias correspond to different write voltage pulse amplitudes and/or different write voltage pulse durations and/or different numbers of write voltage pulses.
Optionally, the modification condition is such that the modification amount is less than one tenth of the total conductance variation range of the nonvolatile memory.
Optionally, biasing the corresponding nonvolatile memory according to the modification condition includes:
according to the modification condition, simultaneously biasing the nonvolatile memories in the storage array that have the same modification condition.
Optionally, in each storage array, the first source/drain terminals of the nonvolatile memories along a first direction are electrically connected to a first electrical line, the second source/drain terminals of the nonvolatile memories along a second direction are electrically connected to a second electrical line, and the gates of the nonvolatile memories along the first direction or the second direction are electrically connected to a third electrical line;
the first electrical line is used to load the input signal in forward propagation, and the second electrical line is used to output the output signal in the forward propagation; the second electrical line is used to load the input signal in back propagation, and the first electrical line is used to output the output signal in the back propagation.
Optionally, the storage unit further includes a MOS device; the first source/drain terminal of the nonvolatile memory is electrically connected to the second source/drain terminal of the MOS device, the first source/drain terminal of the MOS device is electrically connected to the first electrical line, and the gates of the field-effect transistors along the first direction or the second direction are electrically connected to a fourth electrical line.
Optionally, the storage unit further includes a MOS device that shares a channel with the nonvolatile memory, and the gates of the MOS devices along the first direction or the second direction are electrically connected to a fourth electrical line.
In the training method for a neural network based on a storage array provided by embodiments of the present invention, when the parameters of the connection weights of each storage array are modified, the input data forward-propagated through the storage array and the error data back-propagated through the storage array are each discretized, so as to obtain an input discrete value and an error discrete value respectively, and the modification condition of the connection weight is determined from the input discrete value and the error discrete value. In this method, the connection weights are adjusted according to preset modification conditions, so the modification amplitude is effectively random rather than following a specific target weight-modification value. In this way, the gap between what the neural network algorithm actually requires of each modification and the peculiar characteristics of the memory devices can be bridged; through repeated training and random modification, the output results converge and a satisfactory training result is obtained.
Description of the drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of the neural network training process;
Fig. 2 is a schematic diagram of the neural network algorithm;
Fig. 3 is a schematic diagram of the forward propagation and back propagation of the neural network;
Fig. 4 is a structural diagram of a storage array according to an embodiment of the present invention;
Fig. 5 is a flow diagram of weight modification according to an embodiment of the present invention;
Figs. 6-9 are structural diagrams of storage arrays according to different embodiments of the present invention.
Specific embodiments
To make the above objects, features and advantages of the present invention clearer and easier to understand, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Many specific details are set forth in the following description to facilitate a thorough understanding of the present invention, but the present invention can also be implemented in other ways different from those described here, and those skilled in the art can make similar generalizations without departing from the spirit of the present invention; the present invention is therefore not limited by the specific embodiments disclosed below.
Next, the present invention is described in detail with reference to schematic diagrams. When the embodiments of the present invention are described, for convenience of explanation, the sectional views showing the device structure may be partially enlarged out of general scale; the schematic diagrams are only examples and should not limit the protection scope of the present invention. In addition, the three dimensions of length, width and depth should be included in actual fabrication.
As described in the background, a solution has been proposed that uses a storage array to carry out the matrix operations in a neural network. In this solution, the storage array is composed of nonvolatile memories; owing to the storage characteristics of nonvolatile memory, the data stored in a memory can characterize the parameter of a connection weight, thereby realizing the matrix operation between layers. During neural network training, however, the connection weights are modified by write and erase operations on the memories, and owing to the intrinsic characteristics of the memory devices, the modification actually obtained under a specific target weight-modification value is often randomly distributed, making it difficult to obtain a satisfactory training result. For this purpose, the applicant proposes a training method for a neural network based on a storage array that achieves a training result satisfying the neural network algorithm.
For a better understanding of the technical solution and technical effects of the present invention, the neural network algorithm, its training process and its basic computations are described first. Referring to Fig. 1, which is a schematic diagram of neural network training: in each round of training, a group of input data representing a training sample is fed into the neural network algorithm and computed, and a computation result is output. The computation result is compared with the answer label; if the output error after the comparison has not converged, the output error is fed back to the neural network algorithm, the error value of each layer is obtained, and the weight values between the layers of the neural network algorithm are modified according to these error values. This training process is repeated until the output error converges, that is, until the computation result is close to the answer label. In each round of training, the process from input to output is called forward propagation, and the process of feeding the output error back into the neural network is called back propagation.
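To make the loop just described concrete, the following is a minimal Python sketch of the training cycle of Fig. 1, assuming the forward pass, the backward pass and the weight modification are supplied as functions; the names forward, backward and apply_modification and the squared-error convergence test are illustrative assumptions, not details fixed by the patent.

```python
import numpy as np

def train(samples, labels, forward, backward, apply_modification,
          tolerance=1e-3, max_rounds=10000):
    """Repeat sample training until the output error converges (Fig. 1)."""
    for round_index in range(max_rounds):
        total_error = 0.0
        for x, label in zip(samples, labels):
            y = forward(x)                       # forward propagation
            e = y - label                        # compare with the answer label
            total_error += float(np.sum(e ** 2))
            apply_modification(backward(x, e))   # back propagation + weight modification
        if total_error < tolerance:              # output error has converged
            return round_index
    return max_rounds
```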
Referring to Fig. 2, the neural network algorithm is described using a three-layer network as an example. In this example, the network includes an input layer, one hidden layer and an output layer, whose node counts are m, n and k respectively, and the activation function of the nodes is θ. In forward propagation, for any two adjacent layers, the input of each node of the current layer is the weighted sum of the output vector of the previous layer with the connection weights of the current layer; this weighted summation is the matrix operation in forward propagation. After this input passes through the activation of each node of the current layer, the output vector of the current layer is obtained, and the output result follows from computing layer by layer. As shown in Fig. 2, the previous layer of the hidden layer is the input layer, whose output vector is the input vector (x1, x2, ..., xm). The input of hidden node j is the weighted sum of the input vector xi with the hidden-layer connection weights W1ij, i.e. netj = Σi W1ij·xi for j from 1 to n; this weighted summation is one matrix operation in forward propagation. After activation at each hidden node, the hidden output vector hj = θ(netj) is obtained. The input of output node q is the weighted sum of the hidden output vector hj with the output-layer connection weights W2jq, i.e. netq = Σj W2jq·hj for q from 1 to k; this weighted summation is another matrix operation in forward propagation. After activation at each output node, the output result yq = θ(netq) is obtained, which completes forward propagation. After the output results yq are compared with the answer labels, the output errors are e1, e2, ..., ek. If the output error does not converge, it is propagated back from the output layer to each of the other layers. Back propagation differs from forward propagation only in the propagation direction and the input data: in back propagation the input vector is the error (e1, e2, ..., ek). After the error eq is transferred back through the output node, the error δq is output; the errors δq are weighted and summed with the connection weights W2jq between the output layer and the hidden layer and, after transfer through the hidden nodes, the errors δj are output. By computing layer by layer, the error is propagated to every node, and the connection weights are then modified according to the error at each node; the concrete calculation of the weight modification can be determined by gradient descent.
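The computations above can be written out directly. Below is a minimal NumPy sketch of one forward and one backward pass through the three-layer network, using the notation of this section (W1, W2, θ) and compatible with the loop sketch above; the choice of tanh as θ and the use of its derivative in back propagation are standard gradient-descent assumptions rather than details spelled out in the patent.

```python
import numpy as np

theta = np.tanh                                   # activation function θ (assumed)
def theta_prime(a):                               # its derivative (assumed)
    return 1.0 - np.tanh(a) ** 2

m, n, k = 4, 5, 3                                 # node counts of the three layers
rng = np.random.default_rng(0)
W1 = rng.normal(size=(n, m))                      # input -> hidden connection weights W1ij
W2 = rng.normal(size=(k, n))                      # hidden -> output connection weights W2jq

def forward(x):
    net_h = W1 @ x                                # matrix operation: Σi W1ij·xi
    h = theta(net_h)                              # hidden-layer output vector hj
    net_o = W2 @ h                                # matrix operation: Σj W2jq·hj
    return theta(net_o)                           # output result yq

def backward(x, e):
    # recompute the intermediate quantities of the forward pass
    net_h = W1 @ x
    h = theta(net_h)
    net_o = W2 @ h
    delta_o = e * theta_prime(net_o)              # δq after the output node
    delta_h = (W2.T @ delta_o) * theta_prime(net_h)  # δj after the hidden nodes
    # weight change is proportional to -(input × error): ΔWij ∝ -(Xi×δj)
    return -np.outer(delta_h, x), -np.outer(delta_o, h)
```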
To facilitate understanding of the forward propagation, back propagation and matrix operations in the neural network, refer to Fig. 3, a schematic diagram of the forward propagation and back propagation of the network in Fig. 2, in which the circles represent the nodes of the layers and, between the layers, the matrices formed by the connection weights. It can be seen that between every two adjacent layers there is a matrix formed by the connection weights, used for the weighted summation of the input vector in forward propagation and the weighted summation of the error vector in back propagation. As shown in Fig. 3, this example includes a first weight matrix W1 from the input layer to the hidden layer and a second weight matrix W2 from the hidden layer to the output layer; a weight matrix serves both the matrix operation on the forward-propagated input data and the matrix operation on the back-propagated error data. Taking the second weight matrix W2 as an example: in forward propagation, the input of the matrix operation is the output hj of the hidden-layer nodes, which, after the weighted-summation matrix operation, is transferred to the nodes of the output layer; in back propagation, the input of the matrix operation is the output δq of the output-layer nodes, which, after the weighted-summation matrix operation, is transferred to the nodes of the hidden layer. That is, in sample training the same weight matrix carries out the weighted summation of the forward-propagated input data and of the back-propagated error data; the input data is the output data of the preceding layer's nodes, and the error data is the error data of the following layer's nodes.
The matrix operations above can be realized by a storage array composed of storage units that include nonvolatile memories, one storage array realizing the matrix operations between each pair of layers. Referring to Fig. 4, the nonvolatile memory in each storage unit is used to store a connection weight Wij. A nonvolatile memory may be a single memory formed by one nonvolatile storage device or a composite memory formed by several nonvolatile storage devices, and the value of the connection weight Wij is equivalent to the conductance value of the memory or a combination of conductance values. Each storage array is then used for the matrix operation between adjacent layers, including the weighted summation of the forward-propagated input vector and the weighted summation of the back-propagated error vector; for ease of description these weighted summations are called matrix operations, one end of the matrix being used to load the forward-propagated data signals Xi and the other end being used to load the back-propagated error data δj. It will be understood that, besides the matrix operations, other computations such as activation are also needed when training the neural network algorithm; these can be realized with other devices, and the present invention does not limit the realization of these devices or their connection to the storage array.
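As an idealized illustration of how one such array computes both weighted sums, the sketch below models the connection weights Wij as a conductance matrix G: driving one side of the array with the forward data signals Xi yields the column sums, and driving the other side with the back-propagated error data δj yields the transposed row sums. This is a minimal model assuming ideal devices; it ignores the non-idealities discussed in this text.

```python
import numpy as np

# G[i, j] models the conductance of the nonvolatile memory at row i,
# column j; its value (or a combination of values) encodes Wij.
G = np.abs(np.random.default_rng(1).normal(size=(8, 4)))

def forward_mac(x):
    # Data signals Xi are loaded on one end of the matrix; the summed
    # currents on the other end realize the weighted sum Σi Wij·Xi.
    return G.T @ x          # x has length 8 (rows)

def backward_mac(err):
    # Error data δj are loaded on the opposite end; the same array then
    # realizes the transposed weighted sum Σj Wij·δj.
    return G @ err          # err has length 4 (columns)
```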
Based on the above storage array, when the neural network is trained, if the output error has not converged after a sample training, the weights must be modified; that is, the data stored in the memory devices of the storage array are rewritten by write/erase operations to change the conductance values of the memories and thereby change the connection weights. Under a specific target modification value of a connection weight, however, it is difficult to modify the conductance accurately to the required value by adjusting the stored data, so it is difficult to achieve a training result satisfying the neural network algorithm.
For this purpose, based on the above storage array, the present invention provides a training method for a neural network. The method is carried out on the above storage arrays: multiple storage arrays are used for the training of the neural network, each storage array is used for the matrix operation between two layers of the neural network, each storage array is composed of storage units that include nonvolatile memories, and the data stored in the storage array characterize the connection weights between the layers. The training method includes the following steps (a code sketch follows the list):
performing sample training multiple times, until the output error converges;
wherein, in each sample training, the parameter modification of the connection weights of each storage array includes:
according to a preset mapping relation between first continuous intervals and first discrete values, discretizing the input data forward-propagated through the storage array to obtain an input discrete value;
according to a preset mapping relation between second continuous intervals and second discrete values, discretizing the error data back-propagated through the storage array to obtain an error discrete value, at least one of the first continuous intervals and the second continuous intervals comprising at least three continuous intervals;
according to the rule that the weight change is proportional to the opposite of the product of the forward-propagated input data and the back-propagated error data, determining the modification condition of the connection weight from the input discrete value and the error discrete value, the modification condition being a preset erase-operation bias, a preset write-operation bias, or a no-operation bias;
according to the modification condition, biasing the corresponding nonvolatile memory.
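The steps above are sketched below as a per-array routine for one sample training. The helpers to_discrete(), condition_of() and apply_bias() stand for the discretization, condition determination and biasing discussed at the corresponding steps further on; they are assumptions of this sketch rather than names from the patent.

```python
def modify_array(forward_inputs, backward_errors,
                 map_in, map_err, to_discrete, condition_of, apply_bias):
    """Parameter modification of one storage array in one sample training."""
    for i, x_data in enumerate(forward_inputs):
        x = to_discrete(x_data, map_in)          # step 1: input discrete value
        for j, e_data in enumerate(backward_errors):
            eps = to_discrete(e_data, map_err)   # step 2: error discrete value
            cond = condition_of(x, eps)          # step 3: erase / write / no-operation bias
            apply_bias(i, j, cond)               # step 4: bias memory at row i, column j
```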
In this training method, when the parameters of the connection weights of each storage array are modified, the input data forward-propagated through the storage array and the error data back-propagated through the storage array are each discretized, so that an input discrete value and an error discrete value are obtained respectively, and the modification condition of the connection weight is determined from the input discrete value and the error discrete value. In this method, the connection weights are adjusted according to preset modification conditions, so the modification amplitude is effectively random rather than following a specific target weight-modification value. In this way, the gap between what the neural network algorithm actually requires of each modification and the peculiar characteristics of the memory devices can be bridged, and through repeated training and random modification the output results converge and a satisfactory training result is obtained.
In the training process of the neural network algorithm, sample training must be carried out multiple times until the output error converges. Each sample training includes steps such as sample input, forward propagation, outputting the result, comparison to obtain the output error, back propagation, and modification of the connection weights. In this application, the matrix operations in forward propagation and back propagation are carried out in the storage arrays, and the modification of the connection weights is likewise realized by adjusting the data stored in the corresponding memories of the storage arrays; the other computations in forward propagation, back propagation and output-error evaluation are not limited here and can be realized with suitable devices and in suitable ways as needed.
It will be understood that a neural network may include multiple storage arrays, each used for the matrix operation between adjacent layers. In each sample training, the method of modifying the connection-weight parameters is the same for every storage array; the parameter modification of the connection weights of each storage array in each sample training is described in detail below with reference to specific embodiments.
Referring to Fig. 5, in step S01, according to the preset mapping relation between the first continuous intervals and the first discrete values, the input data forward-propagated through the storage array is discretized to obtain the input discrete value.
In step S02, according to the preset mapping relation between the second continuous intervals and the second discrete values, the error data back-propagated through the storage array is discretized to obtain the error discrete value, at least one of the first continuous intervals and the second continuous intervals comprising at least three continuous intervals.
For a given storage array, what is carried out is the matrix operation between two adjacent layers, including the matrix operation in forward propagation and the matrix operation in back propagation. With reference to Figs. 3 and 4, for ease of description and understanding, of the two adjacent layers one is called the current layer and the other the next layer; then, for this storage array, the forward-propagated input data is the output data of the current layer's nodes, and the back-propagated error data is the error data output by the next layer's nodes. In one example, referring to Fig. 4, for the storage array between the first hidden layer and the output layer, the output data hj of the hidden-layer nodes serves as the input data in forward propagation, denoted Xi for ease of description, and the error data δq output by the output-layer nodes serves as the error data in back propagation, denoted δj for ease of description. For this storage array, the input data in forward propagation comes from the output data Xi of the current layer's nodes, and the error data in back propagation comes from the error data δj of the next layer's nodes.
In these steps, the input data forward-propagated through the storage array and the error data back-propagated through the storage array must first be discretized. The discretization is carried out according to the preset mapping relations between continuous intervals and discrete values, at least one of the first continuous intervals and the second continuous intervals comprising at least three continuous intervals; the number of intervals, the division of the intervals and the sizes of the corresponding discrete values can be determined as needed, so long as the input discrete value and error discrete value obtained after discretization suffice to determine the modification condition of the connection weight.
In the discretization, according to the preset mapping relations, the input data Xi forward-propagated through the storage array is discretized to obtain the input discrete value xi, and the error data δj back-propagated through the storage array is discretized to obtain the error discrete value εj. The concrete input data and error data are thus converted into a few definite values, so that these definite values can subsequently determine the modification condition for the weight data ΔWij of each memory in the storage array.
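A minimal sketch of such an interval-to-discrete-value mapping follows. The boundaries are taken from table one later in this text and are illustrative; a mapping with more intervals (for example discrete values -2 to 2, as in a later example) simply uses a longer interval list.

```python
# Each (low, high, discrete) entry maps one continuous interval to one
# discrete value. The closed zero interval of table one is listed first
# so that the shared boundary points ±0.18 map to 0, as in the table.
FIRST_MAPPING = [(-0.18, 0.18, 0),    # zero interval [-0.18, 0.18] ->  0
                 (0.18, 1.0, 1),      # positive interval (0.18, 1) ->  1
                 (-1.0, -0.18, -1)]   # negative interval (-1, -0.18) -> -1

def to_discrete(value, intervals):
    for low, high, discrete in intervals:
        if low <= value <= high:
            return discrete
    raise ValueError(f"{value} lies outside every preset interval")
```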
In step S03, according to the rule that the weight change is proportional to the opposite of the product of the forward-propagated input data and the back-propagated error data, the modification condition of the connection weight is determined from the input discrete value and the error discrete value, the modification condition being a preset erase-operation bias, a preset write-operation bias, or a no-operation bias.
In each training, the weight modification amount ΔWij can be considered proportional to the opposite of the product of the forward-propagated input data Xi and the back-propagated error data δj, expressed by the formula ΔWij ∝ -(Xi×δj). According to this relation, the modification condition of the connection weight Wij can be determined from the discretized input discrete value xi and error discrete value εj; the modification condition is a preset erase-operation bias, a preset write-operation bias, or a no-operation bias. Under these modification conditions, on the basis of the existing connection weight, an increasing modification, a decreasing modification, or no modification can be realized. The modification condition is a fixed condition, not a condition corresponding to the exact value that actually needs to be modified.
Since the input discrete value xi and the error discrete value εj correspond to different numerical intervals, the concrete input discrete value xi and error discrete value εj identify the intervals in which the actual values lie, and the weight change is proportional to the opposite of the product of the actual values of the input data and error data. From the relation between the discrete values and the numerical intervals, the concrete modification condition of the weight can therefore be determined to be an erase operation, a write operation, or no operation. For example, for an input discrete value xi from one interval combined with an error discrete value εj from one interval, the corresponding connection weight Wij should not change, and the modification condition is the no-operation bias; for an input discrete value xi and error discrete value εj from other intervals, the modification direction of the corresponding connection weight Wij is an increase, and the modification condition is the preset erase-operation bias.
In some preferred embodiments, the first continuous intervals may include a zero interval and a positive interval, or may include a negative interval, a zero interval and a positive interval, each interval corresponding to one first discrete value; the second continuous intervals may include a negative interval, a zero interval and a positive interval, each interval corresponding to one second discrete value. The zero interval refers to the interval covering the numerical range near zero, and there may be one or more negative intervals and positive intervals; the first and second discrete values can be set as needed. In this way, the combination of the first discrete value and the second discrete value identifies the different cases of the intervals from which the input data and error data come, and hence the corresponding weight-modification condition, that is, the condition under which the weight data increases, decreases or stays unchanged.
Preferably, the modification condition is such that the modification amount is less than one tenth of the total conductance variation range of the nonvolatile memory; that is to say, the modification amplitude is sufficiently small. In each weight modification, the amplitude of the rewrite is not the exact connection-weight modification value obtained in each training but a specific, sufficiently small value. Equivalently, each training only guarantees the direction of the modification, while the modification amplitude is random; in this way, after a sufficient number of trainings, the output error can converge gradually, and a satisfactory training result is obtained.
In more preferred embodiments, according to different needs, the first continuous intervals may include a zero interval and at least one positive interval, or may further include at least one negative interval, a zero interval and at least one positive interval; the value of the first discrete value increases with the value of the corresponding first continuous interval and carries the sign of the corresponding interval. The second continuous intervals include at least one positive interval, a zero interval and at least one negative interval; the value of the second discrete value increases with the value of the corresponding second continuous interval and carries the sign of the corresponding interval. Here the zero interval refers to the interval covering the numerical range near zero; the values in a negative interval are negative and its sign is negative; the values in a positive interval are positive and its sign is positive; the sign of the zero interval may be positive or negative; and each discrete value has the same sign as its corresponding interval.
According to specific needs, the numbers of intervals and the interval divisions of the first continuous intervals and the second continuous intervals may be the same or different, and the first and second discrete values corresponding to the intervals may be the same or different. With this setting, the sign and value of a discrete value reflect the numerical variation of its interval, so the modification direction and modification amplitude of the weight can be determined directly from the sign and the absolute value of the product of the input discrete value xi and the error discrete value εj. In a specific application, when -xi×εj is less than 0, the modification direction of the corresponding connection weight Wij is considered to be a decrease, and the modification condition is determined to be the preset write-operation bias; when -xi×εj is greater than 0, the modification direction of the corresponding connection weight Wij is considered to be an increase, and the modification condition is determined to be the preset erase-operation bias; when -xi×εj is close to 0, the corresponding connection weight Wij is considered not to change, and the modification condition is determined to be the preset no-operation bias.
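The sign rule just described fits in a few lines. The sketch below follows the convention of the example tables later in this text, where a write bias lowers the conductance (weight decreases) and an erase bias raises it (weight increases); which physical operation raises or lowers conductance depends on the memory device and is an assumption here.

```python
ERASE_BIAS, WRITE_BIAS, NO_OP_BIAS = "erase", "write", "none"

def condition_of(x, eps):
    """Modification condition from the discrete values; ΔWij ∝ -(xi×εj)."""
    p = -x * eps
    if p > 0:
        return ERASE_BIAS    # weight should increase -> preset erase-operation bias
    if p < 0:
        return WRITE_BIAS    # weight should decrease -> preset write-operation bias
    return NO_OP_BIAS        # weight unchanged -> no-operation bias
```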
In specific applications, the erase-operation bias and the write-operation bias include one or more grades. When there is only one grade, the erase-operation bias and the write-operation bias correspond to one erase voltage and one write voltage respectively. When there are multiple grades, a higher grade corresponds to a larger modification amplitude: the type of the modification condition of the connection weight, that is, whether the modification is specifically a write operation, an erase operation or no operation, is determined from the opposite of the product of the input discrete value and the error discrete value, -xi×εj; then the grade within the type of the modification condition is selected according to the absolute value of the product xi×εj, a larger absolute value selecting a higher grade.
Specifically, different grades of the erase-operation bias may correspond to different erase voltage pulse amplitudes and/or different erase voltage pulse durations and/or different numbers of erase voltage pulses; different grades of the write-operation bias may correspond to different write voltage pulse amplitudes and/or different write voltage pulse durations and/or different numbers of write voltage pulses. A larger voltage pulse amplitude, a longer pulse duration or a larger number of pulses corresponds to a larger modification amplitude.
In step S04, according to the modification condition, the corresponding nonvolatile memory is biased.
After the modification condition is determined, the corresponding nonvolatile memory is biased accordingly. If the input data corresponds to the rows of the array and the error data corresponds to the columns of the array, then the nonvolatile memory corresponding to the input discrete value xi and the error discrete value εj is the memory at row i, column j.
The modification of the connection weight is realized by changing the data stored in the nonvolatile memory: a corresponding erase or write voltage is applied to the memory, and its conductance value increases or decreases accordingly, so that the connection weight it characterizes increases or decreases; or a non-erase/write voltage is applied, keeping its conductance value unchanged so that the connection weight it characterizes stays unchanged.
In the concrete adjustment, the memories in the array can be modified one by one or in parallel. In parallel modification, the nonvolatile memories in the storage array that have the same modification condition are biased at the same time; in this way the modification of all connection-weight parameters in the whole array can be completed in just a few modification operations, realizing parallel modification and improving the computational efficiency of the neural network.
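A sketch of this parallel modification: cells are grouped by modification condition, and each group is biased simultaneously, so the whole array is updated in at most as many bias operations as there are distinct conditions. The driver function apply_bias_to_group is an assumed stand-in for the array-level bias circuitry.

```python
from collections import defaultdict

def modify_array_parallel(conditions, apply_bias_to_group):
    """conditions maps (row, column) -> modification condition."""
    groups = defaultdict(list)
    for cell, cond in conditions.items():
        groups[cond].append(cell)
    for cond, cells in groups.items():
        apply_bias_to_group(cells, cond)   # same condition -> biased at the same time
```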
At this point, the parameter modification of the connection weights of each storage array in one sample training has been achieved; the modification is repeated many times until the output error converges.
To facilitate understanding of the technical solution and technical effects of the application, a specific example is given below. In this specific example, the mapping relation between the first continuous intervals and the first discrete values is shown in table one below, and the mapping relation between the second continuous intervals and the second discrete values is shown in table two below. The first continuous intervals and the second continuous intervals each comprise three continuous intervals, with one positive interval and one negative interval each; the positive interval, zero interval and negative interval correspond to the discrete values 1, 0 and -1 respectively.
Table one
First continuous interval | First discrete value
Positive interval (0.18, 1) | 1
Zero interval [-0.18, 0.18] | 0
Negative interval (-1, -0.18) | -1
Table two
Second continuous interval | Second discrete value
Positive interval (0.1, 1) | 1
Zero interval [-0.1, 0.1] | 0
Negative interval (-1, -0.1) | -1
According to the mapping relations in table one and table two, after the input data and error data are discretized, -xi×εj can fall into the following cases; for each case, since the weight modification follows ΔWij ∝ -(Xi×δj), the corresponding modification condition of Wij can be determined, as shown in table three below.
Table three
Case | Input discrete value | Error discrete value | Weight modification basis | Modification condition | Conductance change
1 | xi=1 | εj=1 | ΔWij∝-Xi×δj<0 | Write voltage 10V | Decreases
2 | xi=-1 | εj=-1 | ΔWij∝-Xi×δj<0 | Write voltage 10V | Decreases
3 | xi=1 | εj=-1 | ΔWij∝-Xi×δj>0 | Erase voltage -10V | Increases
4 | xi=-1 | εj=1 | ΔWij∝-Xi×δj>0 | Erase voltage -10V | Increases
5 | xi=0 | εj=1 or -1 | 0 | No write/erase voltage | Unchanged
6 | xi=1 or -1 | εj=0 | 0 | No write/erase voltage | Unchanged
7 | xi=0 | εj=0 | 0 | No write/erase voltage | Unchanged
In this specific example, there are multiple combinations of the input discrete value xi and the error discrete value εj, and each combination corresponds to a type of modification condition, that is, an erase-operation bias, a write-operation bias or a no-operation bias; a bias condition can include the voltage pulse amplitude, duration and/or number of pulses of the bias. Since the input discrete value xi and the error discrete value εj correspond respectively to the intervals in which the input data Xi and the error data δj lie, the corresponding modification condition can be determined from the combination of the input discrete value xi and the error discrete value εj. Of course, in this specific example the sign and value of the input discrete value xi and error discrete value εj reflect the numerical variation of the intervals, so the modification condition of the connection weight can be determined directly from the opposite of the product of the input discrete value and the error discrete value. Under the modification conditions of erase-operation bias, write-operation bias and no-operation bias, the conductance value, that is, the connection weight, respectively increases, decreases or stays unchanged; in this example, the write operation and the erase operation each have one grade.
In another example, when the input data and/or the error data are discretized into more discrete values, for example -2, -1, 0, 1, 2, a larger discrete value corresponding to a larger data interval, different write voltages and erase voltages can be set, different voltages corresponding to different bias voltage amplitudes and/or bias durations, to realize modifications of different amplitudes. See table four below, which takes the write operation as an example.
Table four
Case | Input discrete value | Error discrete value | Weight modification basis | Modification condition | Conductance change
1 | xi=1 or 2 | εj=1 | ΔWij∝-Xi×δj<0 | Write voltage 10V | Decreases
2 | xi=-1 or -2 | εj=-1 | ΔWij∝-Xi×δj<0 | Write voltage 10V | Decreases
3 | xi=1 | εj=1 or 2 | ΔWij∝-Xi×δj<0 | Write voltage 10V | Decreases
4 | xi=-1 | εj=-1 or -2 | ΔWij∝-Xi×δj<0 | Write voltage 10V | Decreases
5 | xi=2 | εj=2 | ΔWij∝-Xi×δj<0 | Write voltage 12V | Decreases
6 | xi=-2 | εj=-2 | ΔWij∝-Xi×δj<0 | Write voltage 12V | Decreases
It can be seen that in this example, all the modification conditions cause the conductance value to decrease; that is, the type of the modification condition is always the write operation, and the write-operation bias has two grades, 10V and 12V. The cases with the largest absolute value of the product xi×εj determine the write-operation bias of 12V, and the other cases determine the write-operation bias of 10V. In other examples, more grades can also be set; for example, for table four the write-operation bias could have three grades, such as 10V, 12V and 15V, with absolute values of the product xi×εj of 1, 2 and 4 corresponding to write-operation biases of 10V, 12V and 15V respectively. This is only an example; other grade types and grade determination methods are possible in other examples.
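Grade selection under the three-grade variant just mentioned can be sketched as follows; the table of |xi×εj| values to voltages (1 to 10V, 2 to 12V, 4 to 15V) is the example's, and the fallback to the highest grade not exceeding the magnitude is an illustrative choice.

```python
WRITE_GRADES = {1: 10.0, 2: 12.0, 4: 15.0}   # |xi×εj| -> write voltage (V)

def write_bias_voltage(x, eps):
    magnitude = abs(x * eps)
    usable = [key for key in WRITE_GRADES if key <= magnitude]
    if not usable:
        return None                          # magnitude 0: no-operation bias
    return WRITE_GRADES[max(usable)]         # larger |xi×εj| -> higher grade
```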
In a specific application, when a weight is modified, the memory at the corresponding row i, column j is biased according to the determined modification condition, thereby realizing the weight modification in one sample training. For example, under the 10V write operation, the two control terminals of the memory at the corresponding row i, column j can be set to voltages of 5V and -5V respectively, so that the conductance value of the memory decreases.
The above storage array can take different structures according to the specific design. In an embodiment of the present invention, referring to Fig. 6, it includes:
multiple storage units 100 arranged in an array, each storage unit 100 including a nonvolatile memory 101;
in the storage array, the first source/drain terminals DS1 of the nonvolatile memories along a first direction X are electrically connected to a first electrical line AL, the second source/drain terminals DS2 of the nonvolatile memories along a second direction Y are electrically connected to a second electrical line BL, and the gates G of the nonvolatile memories along the first direction X or the second direction Y are electrically connected to a third electrical line CL;
the first electrical line AL is used to load the input signal in forward propagation, and the second electrical line BL is used to output the output signal in the forward propagation; the second electrical line BL is used to load the input signal in back propagation, and the first electrical line AL is used to output the output signal in the back propagation.
In embodiments of the present invention, the first direction X and the second direction Y are the two directions of the array arrangement. Arrays are usually arranged in rows and columns; in concrete implementations a suitable array arrangement can be used as needed, referring to Fig. 6: for example a neatly aligned row/column arrangement, or a staggered row/column arrangement in which a storage unit of the following row sits between two storage units of the preceding row. In specific embodiments, if the first direction X is the row direction, then the second direction Y is the column direction; correspondingly, if the first direction X is the column direction, then the second direction Y is the row direction. Along the row direction, "each" refers to each row; along the column direction, "each" refers to each column.
It should be noted that in the figures of the embodiments of the present invention, only the storage units of the first row and the first column of the storage array are shown, and the storage units of the other parts are omitted from the illustration; in practice the other parts are also provided with storage units.
In embodiments of the present invention, the first source/drain terminal DS1 and the second source/drain terminal DS2 are the source or drain terminals of the memory or MOS device: when the first source/drain terminal DS1 is the source, the second source/drain terminal DS2 is the drain, and correspondingly, when the first source/drain terminal DS1 is the drain, the second source/drain terminal DS2 is the source. Each storage unit includes at least a nonvolatile memory 101, which retains data even when powered down; the storage array is designed with this characteristic for the matrix computations of the neural network. The nonvolatile memory 101 can be, for example, a memristor, a phase-change memory, a ferroelectric memory, a spin magnetic-moment-coupled memory, a floating-gate field-effect transistor, or a SONOS (silicon-oxide-nitride-oxide-silicon) field-effect device. Furthermore, each storage unit can also include a MOS device (metal-oxide-semiconductor field-effect transistor).
In each storage unit, the MOS device assists in controlling the state of the nonvolatile memory; the gate G2 of the MOS device and the gate G1 of the memory are controlled separately. In some embodiments, with reference to Figs. 7 and 8, each storage unit 200 in the storage array includes a nonvolatile memory 101 and a MOS device 102 connected in series with the nonvolatile memory 101; that is, the first source/drain terminal DS1 of the MOS device 102 is electrically connected to the second source/drain terminal DS2 of the nonvolatile memory 101. In concrete realizations this electrical connection can be direct or indirect: for example, the series connection can be realized by the MOS device and the nonvolatile memory sharing a source/drain region, or through an interconnect line or doped region. In these embodiments, the first source/drain terminal DS1 of the memory 101 is electrically connected to one electrical line BL, and the other source/drain terminal DS2 is connected through the MOS device 102 to another electrical line AL. The gate G1 of the nonvolatile memory 101 is connected along the first direction X or the second direction Y to a third electrical line CL, and the gate G2 of the MOS device 102 is connected along the first direction X or the second direction Y to a fourth electrical line DL; preferably, the directions of the third electrical line CL and the fourth electrical line DL are mutually orthogonal.
In other embodiments, referring to Fig. 9, each storage unit 300 in the storage array includes a nonvolatile memory 101 and a MOS device 103, the MOS device 103 sharing a channel with the nonvolatile memory 101; the source/drain terminal DS1 of the MOS device 103 is also the source/drain terminal DS2 of the nonvolatile memory 101. The gate G1 of the nonvolatile memory 101 is connected along the first direction X or the second direction Y to a third electrical line CL, and the gate G2 of the MOS device 103 is connected along the first direction X or the second direction Y to a fourth electrical line DL; preferably, the directions of the third electrical line CL and the fourth electrical line DL are mutually orthogonal. The arrangement of the memory modules can refer to the preceding figures; only the device connections within the storage unit differ.
In the storage array of the embodiments of the present invention, one source/drain terminal DS1 of each nonvolatile memory along one direction is electrically connected to one electrical line BL, and the other source/drain terminal DS2 of each nonvolatile memory along the other direction is electrically connected to another electrical line AL; the gate G of the nonvolatile memory can be connected to an electrical line along the row or column direction as needed. Owing to the storage characteristics of nonvolatile memory, the value stored in a memory is embodied as the conductance value between the source and drain terminals of the memory.
Based on the above storage array, a neural network based on storage arrays is realized by further providing other devices and making the connections between the storage arrays. The other devices, such as amplifiers and integrators, further process the output signals of the storage arrays and realize the other computations in forward and back propagation. This is described here to give an overall understanding of the technical solution; the present invention does not specifically limit this part.
Based on this storage array, when a weight value is concretely modified, a write voltage or erase voltage can be loaded through the first electrical line AL or the second electrical line BL, or through the fourth electrical line DL and the third electrical line CL. If the connection-weight data needs to increase, the memory can be loaded with the preset erase-operation voltage so that the memory carries out an erase operation; if the connection-weight data needs to decrease, the memory can be loaded with the preset write-operation voltage so that the memory carries out a write operation.
The above are only preferred embodiments of the present invention. Although the present invention has been disclosed above through preferred embodiments, they are not intended to limit it. Any person skilled in the art can, without departing from the scope of the technical solution of the present invention, use the methods and technical content disclosed above to make many possible changes and modifications to the technical solution of the present invention, or revise it into equivalent embodiments of equivalent variation. Therefore, any simple amendment, equivalent change or modification made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution of the present invention, still falls within the protection scope of the technical solution of the present invention.

Claims (11)

1. A training method for a neural network based on a storage array, wherein the training of the neural network is carried out using multiple storage arrays, each storage array is used for the matrix operation between two layers of the neural network, each storage array is composed of storage units that include nonvolatile memories, and the data stored in the storage array characterize the connection weights between the layers, characterized in that the training method includes:
performing sample training multiple times, until the output error converges;
wherein, in each sample training, the parameter modification of the connection weights of each storage array includes:
according to a preset mapping relation between first continuous intervals and first discrete values, discretizing the input data forward-propagated through the storage array to obtain an input discrete value;
according to a preset mapping relation between second continuous intervals and second discrete values, discretizing the error data back-propagated through the storage array to obtain an error discrete value, at least one of the first continuous intervals and the second continuous intervals comprising at least three continuous intervals;
according to the rule that the weight change is proportional to the opposite of the product of the forward-propagated input data and the back-propagated error data, determining the modification condition of the connection weight from the input discrete value and the error discrete value, the modification condition being a preset erase-operation bias, a preset write-operation bias, or a no-operation bias;
according to the modification condition, biasing the corresponding nonvolatile memory.
2. The training method according to claim 1, characterized in that the first continuous intervals comprise a zero interval and at least one positive-value interval, the numerical value of the first discrete value increasing as the interval values of the first continuous intervals increase, its sign being the sign of the corresponding interval; and the second continuous intervals comprise at least one positive-value interval, a zero interval and at least one negative-value interval, the numerical value of the second discrete value increasing as the interval values of the second continuous intervals increase, its sign being the sign of the corresponding interval.
3. The training method according to claim 2, characterized in that the first continuous intervals further comprise at least one negative-value interval.
4. The training method according to claim 2 or 3, characterized in that determining the modification condition of a connection weight from the input discrete value and the error discrete value comprises:
determining the modification condition of the connection weight from the negative of the product of the input discrete value and the error discrete value.
5. The training method according to claim 4, characterized in that the erase-operation bias and the write-operation bias each comprise multiple grades, a higher grade corresponding to a larger modification amplitude; and determining the modification condition of the connection weight from the negative of the product of the input discrete value and the error discrete value comprises:
determining the type of the modification condition of the connection weight from the negative of the product of the input discrete value and the error discrete value;
selecting a grade within the type of the modification condition according to the absolute value of the product of the input discrete value and the error discrete value, a larger absolute value corresponding to a higher grade.
6. The training method according to claim 4, characterized in that different grades of the erase-operation bias correspond to different erase-operation voltage pulse amplitudes and/or different erase-operation voltage pulse durations and/or different erase-operation voltage pulse counts; and different grades of the write-operation bias correspond to different write-operation voltage pulse amplitudes and/or different write-operation voltage pulse durations and/or different write-operation voltage pulse counts.
7. The training method according to claim 1, characterized in that the modification condition is such that the modification amount is less than one tenth (1/10) of the total conductance variation range of the nonvolatile memory.
8. The training method according to claim 1, characterized in that biasing the corresponding nonvolatile memory according to the modification condition comprises:
biasing simultaneously, according to the modification condition, the nonvolatile memories in the storage array that have the same modification condition.
9. The training method according to claim 1, characterized in that, in each storage array, a first source/drain terminal of each nonvolatile memory in a first direction is electrically connected to a first electrical line, a second source/drain terminal of each nonvolatile memory in a second direction is electrically connected to a second electrical line, and the gate of each nonvolatile memory in the first direction or the second direction is electrically connected to a third electrical line;
the first electrical line is used to apply the input signal in forward propagation, and the second electrical line is used to output the output signal in forward propagation; the second electrical line is used to apply the input signal in backward propagation, and the first electrical line is used to output the output signal in backward propagation.
10. The training method according to claim 9, characterized in that each storage cell further comprises a MOS device, the first source/drain terminal of the nonvolatile memory being electrically connected to a second source/drain terminal of the MOS device, a first source/drain terminal of the MOS device being electrically connected to the first electrical line, and the gate of each MOS device in the first direction or the second direction being electrically connected to a fourth electrical line.
11. The training method according to claim 9, characterized in that each storage cell further comprises a MOS device sharing a channel with the nonvolatile memory, the gate of each MOS device in the first direction or the second direction being electrically connected to a fourth electrical line.
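To make the claimed update rule concrete, the following sketch implements the discretization of claims 1-3 and the condition/grade selection of claims 4-5. The interval edges, discrete levels, and two-grade scheme are invented for illustration; the claims only constrain their general shape:

```python
import numpy as np

def discretize(x, edges, levels):
    """Map a continuous value to a discrete level via interval edges."""
    return levels[np.searchsorted(edges, x)]

# Hypothetical intervals/levels. The input mapping has a zero interval plus
# positive intervals; the error mapping has negative, zero, and positive
# intervals, as claim 2 describes.
IN_EDGES,  IN_LEVELS  = [0.33, 0.66], [0.0, 0.5, 1.0]            # inputs assumed in [0, 1]
ERR_EDGES, ERR_LEVELS = [-0.5, -0.1, 0.1, 0.5], [-1.0, -0.5, 0.0, 0.5, 1.0]

def modification_condition(x, err):
    """Return (bias type, grade) for one cell from -(input * error)."""
    p = -discretize(x, IN_EDGES, IN_LEVELS) * discretize(err, ERR_EDGES, ERR_LEVELS)
    if p == 0:
        return ("no-op", 0)                  # no-operation bias
    grade = 1 if abs(p) < 0.5 else 2         # larger |p| -> higher grade
    return ("write", grade) if p > 0 else ("erase", grade)

print(modification_condition(0.8, -0.3))     # weight should grow  -> write bias
print(modification_condition(0.8,  0.3))     # weight should shrink -> erase bias
```

Because the discrete values take only a few levels, many cells share the same (type, grade) pair, which is what lets claim 8 bias all cells with an identical modification condition simultaneously.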
CN201711446484.2A 2017-12-27 2017-12-27 Neural network training method based on storage array Active CN108053029B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711446484.2A CN108053029B (en) 2017-12-27 2017-12-27 Neural network training method based on storage array

Publications (2)

Publication Number Publication Date
CN108053029A (en) 2018-05-18
CN108053029B CN108053029B (en) 2021-08-27

Family

ID=62128238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711446484.2A Active CN108053029B (en) 2017-12-27 2017-12-27 Neural network training method based on storage array

Country Status (1)

Country Link
CN (1) CN108053029B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5299285A (en) * 1992-01-31 1994-03-29 The United States Of America As Represented By The Administrator, National Aeronautics And Space Administration Neural network with dynamically adaptable neurons
CN103917992A (en) * 2011-11-09 2014-07-09 高通股份有限公司 Method and apparatus for using memory in probabilistic manner to store synaptic weights of neural network
US20150379396A1 (en) * 2012-07-30 2015-12-31 International Business Machines Corporation Providing transposable access to a synapse array using a recursive array layout
CN106873903A (en) * 2016-12-30 2017-06-20 北京联想核芯科技有限公司 Date storage method and device
CN107341541A (en) * 2016-04-29 2017-11-10 北京中科寒武纪科技有限公司 A kind of apparatus and method for performing full articulamentum neural metwork training

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TAKAYUKI MORISHITA et al.: "A BiCMOS analog neural network with dynamically updated weights", IEICE Trans. Electron. *
ZHANG Ni: "Research on hardware implementation of the BP algorithm", China Master's Theses Full-text Database, Information Science and Technology. *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110543937A (en) * 2018-05-28 2019-12-06 厦门半导体工业技术研发有限公司 Neural network, operation method and neural network information processing system
CN110543937B (en) * 2018-05-28 2022-09-30 厦门半导体工业技术研发有限公司 Neural network, operation method and neural network information processing system
CN110766130B (en) * 2018-07-28 2022-06-14 华中科技大学 BP neural network learning circuit
CN110766130A (en) * 2018-07-28 2020-02-07 华中科技大学 BP neural network learning circuit
WO2020024608A1 (en) * 2018-08-02 2020-02-06 北京知存科技有限公司 Analog vector-matrix multiplication circuit
US11379673B2 (en) 2018-08-02 2022-07-05 Beijing Zhicun Witin Technology Corporation Limited Analog vector-matrix multiplication circuit
WO2020147142A1 (en) * 2019-01-16 2020-07-23 华为技术有限公司 Deep learning model training method and system
CN109886393A (en) * 2019-02-26 2019-06-14 杭州闪亿半导体有限公司 It is a kind of to deposit the calculation method for calculating integrated circuit and neural network
CN109886393B (en) * 2019-02-26 2021-02-09 上海闪易半导体有限公司 Storage and calculation integrated circuit and calculation method of neural network
CN112215340A (en) * 2019-07-11 2021-01-12 富士通株式会社 Arithmetic processing device, control method, and computer-readable recording medium
CN111461340B (en) * 2020-03-10 2023-03-31 北京百度网讯科技有限公司 Weight matrix updating method and device and electronic equipment
CN111461340A (en) * 2020-03-10 2020-07-28 北京百度网讯科技有限公司 Weight matrix updating method and device and electronic equipment
WO2021255569A1 (en) * 2020-06-18 2021-12-23 International Business Machines Corporation Drift regularization to counteract variation in drift coefficients for analog accelerators
GB2611681A (en) * 2020-06-18 2023-04-12 Ibm Drift regularization to counteract variation in drift coefficients for analog accelerators
WO2022090980A1 (en) * 2020-11-02 2022-05-05 International Business Machines Corporation Weight repetition on rpu crossbar arrays
GB2614687B (en) * 2020-11-02 2024-02-21 Ibm Weight repetition on RPU crossbar arrays
GB2614687A (en) * 2020-11-02 2023-07-12 Ibm Weight repetition on RPU crossbar arrays
CN112801274A (en) * 2021-01-29 2021-05-14 清华大学 Artificial intelligence processing device, weight parameter reading and writing method and device
CN112801274B (en) * 2021-01-29 2022-12-06 清华大学 Artificial intelligence processing device, weight parameter reading and writing method and device
WO2022243766A1 (en) * 2021-05-20 2022-11-24 International Business Machines Corporation Signing and authentication of digital images and other data arrays
US11720991B2 (en) 2021-05-20 2023-08-08 International Business Machines Corporation Signing and authentication of digital images and other data arrays
CN113487020B (en) * 2021-07-08 2023-10-17 中国科学院半导体研究所 Ragged storage structure for neural network calculation and neural network calculation method
CN113487020A (en) * 2021-07-08 2021-10-08 中国科学院半导体研究所 Stagger storage structure for neural network calculation and neural network calculation method
CN114861911A (en) * 2022-05-19 2022-08-05 北京百度网讯科技有限公司 Deep learning model training method, device, system, equipment and medium
CN115456149A (en) * 2022-10-08 2022-12-09 鹏城实验室 Method, device, terminal and storage medium for learning pulse neural network accelerator

Also Published As

Publication number Publication date
CN108053029B (en) 2021-08-27

Similar Documents

Publication Publication Date Title
CN108053029A A training method for a neural network based on storage arrays
US10885429B2 (en) On-chip training of memristor crossbar neuromorphic processing systems
US11521054B2 (en) Analog neuromorphic circuit implemented using resistive memories
US20210117769A1 (en) Monolithic multi-bit weight cell for neuromorphic computing
US20190213234A1 (en) Vector-by-matrix multiplier modules based on non-volatile 2d and 3d memory arrays
US11461620B2 (en) Multi-bit, SoC-compatible neuromorphic weight cell using ferroelectric FETs
CN108038542A A memory module, module and data processing method based on a neural network
US20190019538A1 (en) Non-volatile (nv) memory (nvm) matrix circuits employing nvm matrix circuits for performing matrix computations
Wang et al. Predicting house price with a memristor-based artificial neural network
KR20200076571A (en) Nand block architecture for in-memory multiply-and-accumulate operations
CN110047540A (en) Product item and accelerator array
Lee et al. Operation scheme of multi-layer neural networks using NAND flash memory as high-density synaptic devices
CN1094831A Neural network with spatially distributed functions
TW201935851A (en) Sum-of-products array for neuromorphic computing system
WO2016068953A1 (en) Double bias memristive dot product engine for vector processing
JPH04503270A (en) Artificial neural network structure
CN111052154A (en) Neural network operation circuit using nonvolatile semiconductor memory element
CN108073984A A storage unit and storage module based on a neural network
CN111128279A (en) Memory computing chip based on NAND Flash and control method thereof
CN211016545U (en) Memory computing chip based on NAND Flash, memory device and terminal
KR20200062278A (en) Mathematical problem solving circuit including resistive elements
US10643694B1 (en) Partial-polarization resistive electronic devices, neural network systems including partial-polarization resistive electronic devices and methods of operating the same
Cao et al. Parasitic-aware modelling for neural networks implemented with memristor crossbar array
Zhao et al. Adaptive Weight Mapping Strategy to Address the Parasitic Effects for ReRAM-based Neural Networks
JPH03174679A (en) Synapse cell

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310018 Room 202, Building 17, 57 Baiyang Street Science Park Road, Hangzhou Economic and Technological Development Zone, Zhejiang Province

Applicant after: Hangzhou Shanyi Semiconductor Co., Ltd.

Address before: 315832 Room 221, Office Building 21, Meishan Avenue Business Center, Beilun District, Ningbo City, Zhejiang Province

Applicant before: Ningbo Hill Electronic Technology Co., Ltd.

CB02 Change of applicant information

Address after: Room 607-a, 6/F, Block A, Building 1, 800 Naxian Road, China (Shanghai) Pilot Free Trade Zone, 200120

Applicant after: Shanghai Shanyi Semiconductor Co., Ltd.

Address before: 310018 Room 202, Building 17, 57 Baiyang Street Science Park Road, Hangzhou Economic and Technological Development Zone, Zhejiang Province

Applicant before: Hangzhou Shanyi Semiconductor Co., Ltd.

GR01 Patent grant