CN111931924A - Memristor neural network chip architecture compensation method based on online migration training - Google Patents


Publication number
CN111931924A (application); CN111931924B (granted)
Authority
CN
China
Prior art keywords
memristor
neural network
compensation
training
tile
Prior art date
Legal status
Granted
Application number
CN202010755929.0A
Other languages
Chinese (zh)
Other versions
CN111931924B (en)
Inventor
高滨 (Bin Gao)
刘宇一 (Yuyi Liu)
唐建石 (Jianshi Tang)
吴华强 (Huaqiang Wu)
钱鹤 (He Qian)
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202010755929.0A
Publication of CN111931924A
Application granted
Publication of CN111931924B
Status: Active

Classifications

    • G06N3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/045: Combinations of networks
    • G06N3/084: Backpropagation, e.g. using gradient descent


Abstract

The invention provides a memristor neural network chip architecture compensation method based on online migration training, belonging to the field of compute-in-memory chip applications. First, offline training is performed on the neural network to be mapped onto the memristor neural network chip, yielding the neural network weights of each layer. Then, a compensation fully-connected-layer Tile is added before the last fully-connected-layer Tile, realizing architecture compensation of the chip; the architecture-compensated chip is trained by an online migration training method to obtain the trained memristor neural network chip. With this method, the online migration training accuracy of the memristor neural network chip can reach an acceptable level even under limited device conditions, and the chip's online training recognition accuracy is improved.

Description

Memristor neural network chip architecture compensation method based on online migration training
Technical Field
The invention belongs to the field of compute-in-memory chip applications and provides a memristor neural network chip architecture compensation method based on online migration training.
Background
In recent years, research on and application of artificial intelligence have made significant breakthroughs. Both general-purpose processors (GPUs) and special-purpose processors (TPUs) are clearly effective at accelerating neural network training and inference. However, CMOS circuits consume large amounts of energy on complex tasks and are energy-inefficient. Compared with CMOS-based neural network chips, memristor neural network chips offer on-chip weight storage, online learning, and scalability to larger arrays. They can rapidly perform the core neural network computation, matrix-vector multiplication, greatly improving computational energy efficiency, and are one of the key technologies for realizing future high-performance artificial-intelligence chips. A great deal of research on inference, training, and related functions has already been carried out on memristor neural network chips. However, memristor devices have intrinsic non-ideal characteristics that affect the online training of such chips.
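The core computation mentioned above, matrix-vector multiplication on a memristor crossbar, follows from two circuit laws: each device contributes a current G·V (Ohm's law), and each column wire sums its device currents (Kirchhoff's current law), so the whole multiply happens in one analog read. A minimal NumPy model (the conductance and voltage values are illustrative, not from the patent):

```python
import numpy as np

def crossbar_mvm(G, v):
    """Analog matrix-vector multiply on a memristor crossbar.

    G : (rows, cols) conductance matrix in siemens
    v : (rows,) input voltage vector in volts
    Returns the (cols,) vector of column output currents in amperes:
    each column current is the voltage-weighted sum of its conductances.
    """
    return G.T @ v

# A 3x2 crossbar: the output currents equal the matrix-vector product.
G = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]]) * 1e-6   # microsiemens-scale conductances
v = np.array([0.1, 0.2, 0.3])       # read voltages
I = crossbar_mvm(G, v)              # column currents
```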
The device used in a memristor neural network chip is an analog memristor (analog RRAM), a memristor whose conductance changes continuously as the number of applied pulses increases. However, because of the non-ideal characteristics of analog memristors, such as limited yield and device-to-device variation, even if offline training of different neural networks achieves the same accuracy, the offline-trained weights cannot be mapped completely and accurately onto the memristor arrays, so the recognition accuracy achievable by on-chip online training is much lower than the software offline training accuracy.
At present, online training of memristor neural network chips uses migration (transfer) training: only the weights of the fully-connected layers are trained, while the convolutional-layer weights are left untrained. Consequently, for large-scale deep neural networks, the non-idealities of analog memristors have a larger influence on online training and the accuracy loss is more severe; if the network is small, the accuracy drop during online training is also large.
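Migration training as described here, updating only the fully-connected weights while the convolutional weights stay fixed, can be sketched in a few lines. The two-matrix network, learning rate, and gradient values below are hypothetical stand-ins for the chip, used only to show which weights the update touches:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-stage stand-in for the chip: a frozen feature
# extractor (the convolutional layers) and a trainable fully-connected
# classifier. In migration training only the FC weights are updated.
W_conv = rng.normal(size=(8, 16))   # mapped once, then frozen
W_fc = rng.normal(size=(16, 4))     # updated online

def online_step(x, grad_out, lr=0.01):
    """One migration-training update: backpropagation reaches W_fc
    only; W_conv is deliberately left untouched."""
    global W_fc
    h = np.maximum(x @ W_conv, 0.0)          # frozen feature extraction
    W_fc = W_fc - lr * np.outer(h, grad_out)  # FC-layer update only

x = rng.normal(size=8)
conv_before = W_conv.copy()
fc_before = W_fc.copy()
online_step(x, np.ones(4))   # the conv weights do not change
```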
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a memristor neural network chip architecture compensation method based on online migration training. With this method, the online migration training accuracy of the memristor neural network chip can reach an acceptable level even under limited device conditions, and the chip's online training recognition accuracy can be improved.
The invention provides a memristor neural network chip architecture compensation method based on online migration training, characterized by comprising the following steps:
1) performing offline training on the neural network to be mapped onto the memristor neural network chip, and obtaining the neural network weights of each layer after training;
the architecture of the memristor neural network chip comprises N mapped convolutional-layer Tiles and M mapped fully-connected-layer Tiles connected in sequence;
2) performing architecture compensation on the memristor neural network chip: relative to the structure of the original chip, a compensation fully-connected-layer Tile is added before the last fully-connected-layer Tile;
3) training the architecture-compensated memristor neural network chip by an online migration training method to obtain the trained memristor neural network chip; the specific steps are as follows:
3-1) updating, through iterative training, the conductance values of the memristor array in the compensation fully-connected-layer Tile of the architecture-compensated chip, and obtaining the trained conductance values of that array; the specific steps are as follows:
3-1-1) assigning the neural network weights of each layer obtained by the offline training in step 1) in sequence to the memristor arrays in the corresponding Tiles of the architecture-compensated chip, each layer's weights being represented by the conductance values of the memristor array in that layer's Tile, these conductance values serving as the arrays' initial conductance values; at this point the conductance values of the memristor array in the compensation fully-connected-layer Tile are all in the high-resistance state, and the conductance value corresponding to the high-resistance state serves as that array's initial conductance value;
3-1-2) training the compensation fully-connected layer independently and iteratively: after the architecture-compensated chip performs one inference computation, the conductance-update values of the memristor array in the compensation fully-connected-layer Tile are computed by the backpropagation algorithm, and the updates are applied to obtain that array's conductance values after the iteration;
3-1-3) after each update of the conductance values of the memristor array in the compensation fully-connected-layer Tile, performing one more inference computation to obtain the recognition rate of the current iteration, thereby completing one iteration;
3-1-4) when the variance of the recognition rate over the last K consecutive iterations is smaller than a set recognition-rate threshold, training of the conductance values of the memristor array in the compensation fully-connected-layer Tile is finished, yielding the trained conductance values of that array;
3-2) updating the conductance values of the memristor arrays in all fully-connected-layer Tiles through iterative training to obtain the trained architecture-compensated memristor neural network chip; the specific steps are as follows:
3-2-1) taking the trained conductance values of the memristor array in the compensation fully-connected-layer Tile obtained in step 3-1) as that array's initial conductance values, and iteratively training the conductance values of the memristor arrays in every mapped fully-connected-layer Tile and in the compensation fully-connected-layer Tile;
in each training iteration, after the architecture-compensated chip performs one inference computation, the conductance-update values of the memristor array in each fully-connected-layer Tile are computed by the backpropagation algorithm, and the updates are applied to obtain each array's conductance values after the iteration;
3-2-2) after each update of the conductance values of the memristor arrays in all fully-connected-layer Tiles, performing one more inference computation to obtain the recognition rate of the current iteration, thereby completing one iteration;
3-2-3) when the variance of the recognition rate over the last K consecutive iterations is smaller than the set recognition-rate threshold, training of the conductance values of the memristor arrays in all fully-connected-layer Tiles is finished, and the online migration training of the architecture-compensated memristor neural network chip is complete; the maximum recognition rate reached during the iterative training is the training accuracy of the architecture-compensated memristor neural network chip.
The invention has the characteristics and beneficial effects that:
the method ensures the online migration training precision, performs architecture compensation when designing the memristor neural network chip, increases the hardware Tile of a compensation full-connection layer, and reduces the online training precision loss caused by the non-ideal characteristics of devices. After compensation, the redundancy overhead of the memristor array is 2% -10% of that of the original network, and the problem that the online training precision of the memristor neural network chip is reduced is solved through less hardware overhead. The memristor neural network chip can be applied to the fields of artificial intelligence such as image recognition and voice recognition, and has the advantages of high energy efficiency, low power consumption and the like. The method solves the problem of accuracy reduction of the memristor neural network chip, and enables the chip to be more reliable in application in the field of artificial intelligence.
Drawings
FIG. 1 is an overall flow diagram of the method of the present invention;
FIG. 2 is a schematic of the memristor neural network chip hardware architecture compensation in the present invention;
FIG. 3 is a schematic of the memristor neural network chip hardware architecture compensation of an embodiment of the present invention;
FIG. 4 shows preliminary simulation results of the memristor neural network chip architecture compensation method based on online migration training according to the embodiment of the present invention.
Detailed Description
The invention provides a memristor neural network chip architecture compensation method based on online migration training, which is described in further detail below in conjunction with the accompanying drawings and specific embodiments.
The overall flow of the method is shown in FIG. 1, and the method comprises the following steps:
1) Offline training is performed on the neural network to be mapped onto the memristor neural network chip. Offline training means training the network weights with a standard learning algorithm on an external computer; when it finishes, the neural network weights of each layer are obtained.
Fig. 2(a) shows the structure of the memristor neural network chip before modification, used for offline training of the neural network; it comprises N mapped convolutional-layer Tiles and M mapped fully-connected-layer Tiles connected in sequence (N and M are positive integers and need not be equal). A Tile is the basic hardware module onto which a neural network layer is mapped. In this embodiment, both the convolutional-layer Tiles and the fully-connected-layer Tiles adopt the structure of Extended Data Fig. 5 in P. Yao et al., "Fully hardware-implemented memristor convolutional neural network," Nature, vol. 577, no. 7792, pp. 641-646, 2020; a Tile comprises a memristor array, WL/SL/BL drivers, DACs/ADCs (digital-to-analog/analog-to-digital converters), shift-and-add units, and so on.
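A rough software model of one Tile read might wrap the analog matrix-vector multiply in uniform quantizers standing in for the DAC and ADC. The bit widths and full-scale ranges below are illustrative assumptions, not values from the patent or the cited paper:

```python
import numpy as np

def quantize(x, bits, lo, hi):
    """Uniform quantizer standing in for a Tile's DAC/ADC."""
    levels = 2 ** bits - 1
    x = np.clip(x, lo, hi)
    return np.round((x - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

def tile_forward(G, v, dac_bits=8, adc_bits=8):
    """One Tile read: the DAC quantizes the input voltages, the
    crossbar performs the analog matrix-vector multiply, and the ADC
    digitizes the column currents."""
    v_q = quantize(v, dac_bits, -1.0, 1.0)     # DAC on the inputs
    currents = G.T @ v_q                       # analog MVM in the array
    scale = np.abs(currents).max() + 1e-12     # assumed ADC full scale
    return quantize(currents / scale, adc_bits, -1.0, 1.0) * scale

G = np.eye(2)
v = np.array([0.5, -0.25])
out = tile_forward(G, v)   # close to G.T @ v, up to quantization error
```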
2) Architecture compensation is performed on the memristor neural network chip: relative to the original hardware neural network architecture, a compensation fully-connected-layer Tile is added before the last fully-connected-layer Tile.
Fig. 2 is a schematic of the compensation method for the online-migration-trained hardware network according to the invention. Fig. 2(a) shows the memristor neural network chip architecture before modification and Fig. 2(b) the architecture after modification. To give the hardware network weights a certain redundancy ratio while avoiding excessive compensation overhead, the added compensation fully-connected-layer Tile is placed before the last mapped fully-connected-layer Tile; the compensation Tile may use the same Tile design as the other layers. For neural networks of different depths, the redundancy ratio of the compensated hardware network weights is about 2%-10%. Adding the compensation fully-connected layer, on the one hand, raises the redundancy of the hardware network weights so that the influence of some device non-idealities is offset; on the other hand, it can be viewed as adding a classification layer that reduces the influence of device non-idealities through further nonlinear fitting.
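The architecture-compensation step, splicing a compensation fully-connected layer fc_p in before the last fully-connected layer, can be sketched on weight matrices. Making fc_p square (matching the 128 x 128 Tile of the embodiment) keeps the mapped weights of the last layer dimensionally valid; initializing it to zero is a software stand-in for the high-resistance-state start of its Tile:

```python
import numpy as np

def add_compensation_layer(fc_weights):
    """Splice a compensation FC layer fc_p in before the last FC layer.

    fc_weights : list of (in, out) weight matrices for the FC stack.
    fc_p is square (n x n), so the original last layer is untouched.
    The zero start mirrors the high-resistance-state initialization of
    the compensation Tile (a modelling choice, not hardware behavior).
    """
    n = fc_weights[-1].shape[0]
    fc_p = np.zeros((n, n))                     # compensation layer fc_p
    return fc_weights[:-1] + [fc_p] + fc_weights[-1:]

fc_stack = [np.ones((16, 8)), np.ones((8, 4))]  # fc1, fc2
compensated = add_compensation_layer(fc_stack)  # fc1, fc_p, fc2
```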
3) The architecture-compensated memristor neural network chip is trained by the online migration training method to obtain the trained memristor neural network chip; the specific steps are as follows:
3-1) The conductance values of the memristor array in the compensation fully-connected-layer Tile of the architecture-compensated chip are updated through iterative training, and the trained conductance values of that array are obtained. The specific steps are as follows:
3-1-1) The neural network weights of each layer obtained by the offline training in step 1) are assigned in sequence to the memristor arrays in the corresponding Tiles of the architecture-compensated chip; each layer's weights are represented by the conductance values of the memristor array in that layer's Tile, and these serve as the arrays' initial conductance values. The compensation fully-connected-layer Tile has no offline-trained weights to map, so its memristor-array conductance values are all in the high-resistance state (the high-resistance state is determined by the device's dynamic window; for example, if the dynamic window of the memristor array is 2 μS to 20 μS, the conductance value corresponding to the high-resistance state is 2 μS), and this high-resistance conductance value serves as the initial conductance value of the memristor array in the compensation fully-connected-layer Tile.
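The initial conductance assignment can be sketched under an assumed linear weight-to-conductance map; the patent only requires that weights be represented by conductances, and the 2 μS to 20 μS window is the example given in the text:

```python
import numpy as np

G_MIN, G_MAX = 2e-6, 20e-6   # the 2 uS - 20 uS dynamic window example

def weights_to_conductances(W):
    """Linearly map offline-trained weights into the device window
    (one simple mapping choice; the patent does not fix the scheme)."""
    w_lo, w_hi = W.min(), W.max()
    return G_MIN + (W - w_lo) / (w_hi - w_lo + 1e-12) * (G_MAX - G_MIN)

def init_compensation_tile(shape):
    """The compensation Tile starts wholly in the high-resistance
    state, i.e. at the bottom of the dynamic window (2 uS here)."""
    return np.full(shape, G_MIN)

G_fc1 = weights_to_conductances(np.array([[-1.0, 0.0], [0.5, 1.0]]))
G_fcp = init_compensation_tile((128, 128))   # fc_p, as in the embodiment
```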
3-1-2) The compensation fully-connected layer is trained independently and iteratively: after the architecture-compensated chip performs one inference computation, the conductance-update values of the memristor array in the compensation fully-connected-layer Tile are computed by the backpropagation algorithm, and the updates are applied to obtain that array's conductance values after the iteration. This stage of iterative training updates only the conductance values of the memristor array in the compensation fully-connected layer.
3-1-3) After each update of the conductance values of the memristor array in the compensation fully-connected-layer (fc_p) Tile, one more inference computation is performed to obtain the recognition rate of the current iteration, thereby completing one iteration.
3-1-4) When the variance of the recognition rate over the last K consecutive iterations is smaller than the set recognition-rate threshold (in this embodiment K = 10 and the threshold is 0.06), i.e. the recognition rate of the iteratively trained network has stabilized, training of the conductance values of the memristor array in the compensation fully-connected-layer Tile is finished, yielding the trained conductance values of that array.
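The stopping rule of steps 3-1-3) and 3-1-4), variance of the last K recognition rates below a threshold, is straightforward to express. K = 10 and the 0.06 threshold are the embodiment's values; the sample accuracy histories are made up to exercise the rule:

```python
import numpy as np

K = 10            # window length from the embodiment
THRESHOLD = 0.06  # recognition-rate variance threshold from the embodiment

def converged(history, k=K, threshold=THRESHOLD):
    """Stop training once the variance of the last k recognition rates
    drops below the threshold, i.e. the accuracy curve has flattened."""
    if len(history) < k:
        return False
    return float(np.var(history[-k:])) < threshold

climbing = [0.10, 0.30, 0.50, 0.60, 0.62]          # still improving
stable = climbing + [0.70, 0.71, 0.70, 0.72, 0.71,
                     0.70, 0.71, 0.72, 0.71, 0.70]  # stabilized
```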
3-2) The conductance values of the memristor arrays in all fully-connected-layer Tiles (including the compensation fully-connected layer) are updated through iterative training to obtain the trained architecture-compensated memristor neural network chip. The specific steps are as follows:
3-2-1) Building on the independent training of the compensation layer, the trained conductance values of the memristor array in the compensation fully-connected-layer Tile obtained in step 3-1) are taken as that array's initial conductance values; the initial conductance values of the memristor arrays in the remaining Tiles are still those determined in step 3-1-1). The conductance values of the memristor arrays in every mapped fully-connected-layer Tile and the compensation fully-connected-layer Tile are then trained iteratively.
In each training iteration, after the architecture-compensated chip performs one inference computation, the conductance-update values of the memristor array in each fully-connected-layer Tile are computed by the backpropagation algorithm, and the conductance values of the arrays in the M mapped fully-connected-layer Tiles and the compensation fully-connected-layer (fc_p) Tile after the iteration are obtained from these updates.
3-2-2) After each update of the conductance values of the memristor arrays in all fully-connected-layer Tiles, one more inference computation is performed to obtain the recognition rate of the current iteration, thereby completing one iteration.
3-2-3) When the variance of the recognition rate over the last K consecutive iterations is smaller than the set recognition-rate threshold (in this embodiment, when the variance over the last 10 consecutive iterations is smaller than 0.06), the training of the conductance values of the memristor arrays in all fully-connected-layer Tiles is finished, and the online migration training of the architecture-compensated chip is complete. The maximum recognition rate produced by the inference computations during the iterative training is the training accuracy of the architecture-compensated memristor neural network chip.
After the compensation fully-connected layer is introduced, the online migration training method itself also needs adjustment. The original migration training method, after mapping the offline-trained weights to the network layers in sequence, trains only the fully-connected-layer weights and leaves the convolutional-layer weights untrained, in order to reduce training time and power consumption. The invention improves this algorithm. Because the compensation fully-connected layer has no offline-trained weights to map, the newly introduced layer is first trained alone: in the initial iterative training, after the weight updates of the fully-connected layers are computed by backpropagation, only the conductance values of the memristor array in the compensation fully-connected layer fc_p are updated, which saves power. After a certain number of iterations, when the network's recognition accuracy has stabilized, all fully-connected layers (including the compensation layer) are trained further: after the weight-update value of each fully-connected layer is computed by backpropagation, the conductance values of the memristor array in each fully-connected layer (including the compensation layer) are changed according to that layer's update. After further iterations, when the recognition accuracy stabilizes again, the architecture compensation and its online migration training are complete.
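The adjusted two-phase schedule described above can be written as a small driver loop. The chip object and the training and convergence callbacks below are placeholders for the hardware interface; only the control flow (train fc_p alone, then all fully-connected layers, each phase until the accuracy stabilizes) comes from the text:

```python
import numpy as np

def online_migration_training(chip, train_phase, has_stabilized):
    """Two-phase online migration training, as a control-flow sketch.

    Phase 1 updates only the compensation layer fc_p; phase 2 updates
    all fully-connected layers. Each phase runs until the recognition
    rate stabilizes according to `has_stabilized`.
    """
    history = []
    for phase in ("fc_p_only", "all_fc_layers"):
        while True:
            acc = train_phase(chip, phase)   # one update + one inference
            history.append(acc)
            if has_stabilized(history):
                history.clear()              # fresh window for next phase
                break
    return chip

# Toy stand-ins that merely exercise the control flow: accuracy climbs
# by 0.01 per iteration and saturates at 0.8.
def fake_train(chip, phase):
    chip[phase] = chip.get(phase, 0) + 1
    return min(0.5 + 0.01 * chip[phase], 0.8)

def fake_stable(h):
    return len(h) >= 3 and float(np.var(h[-3:])) < 1e-6

result = online_migration_training({}, fake_train, fake_stable)
```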
The memristor neural network architecture compensation method based on online migration training improves the online training accuracy to a certain extent relative to the original uncompensated network, and alleviates the drop in online-training recognition accuracy caused by memristor-device non-idealities.
The technical solution of the present invention is further described in detail with reference to a specific example.
In this embodiment, the original memristor neural network chip is a convolutional neural network for classifying the Cifar-10 dataset; its hardware implementation comprises six mapped convolutional-layer Tiles and two mapped fully-connected-layer Tiles (Fig. 3(a)), with the network parameters listed in Table 1. "Mapping" means that the offline-trained neural network weights are programmed in sequence to the memristor arrays in each layer's Tile, the network's weight values being expressed by the memristors' device conductance values.
TABLE 1 parameter table of memristor neural network chip in this embodiment
To compensate for the influence of device non-idealities on online migration training, a compensation fully-connected layer is added to the hardware architecture as a Tile in this example (Fig. 3(b)): Tile fc_p is inserted before Tile fc2 of the original network, with a memristor-array size of 128 × 128.
The compensated hardware neural network chip is then trained with online migration training, which proceeds in two steps. First, the newly added compensation fully-connected layer fc_p is trained with the backpropagation algorithm, the conductance values of the memristor array in Tile fc_p being changed according to the weight updates produced by the algorithm. After the recognition rate stabilizes, all fully-connected layers, namely fc1, fc_p, and fc2, are trained, the conductance values of the memristor arrays in Tiles fc1, fc_p, and fc2 being changed according to the backpropagation weight updates; when the recognition rate stabilizes again, the hardware neural network architecture compensation and online migration training are complete.
Preliminary results from simulating the original and the compensated network architectures show that the memristor neural network chip architecture compensation based on online migration training improves the online-training recognition accuracy. The left plot in FIG. 4 shows the online-training recognition accuracy curve of the original chip before compensation; the right plot shows that of the compensated chip. According to these results, compared with the original network architecture, the architecture compensation improves the test-set online-training accuracy by 3% and the training-set recognition accuracy by 4%.

Claims (1)

1. A memristor neural network chip architecture compensation method based on online migration training is characterized by comprising the following steps:
1) performing off-line training on a neural network to be mapped onto a memristor neural network chip, and acquiring neural network weights corresponding to each layer in the neural network after the training is finished;
the architecture of the memristor neural network chip comprises an N-layer mapping convolution layer Tile and an M-layer mapping full-connection layer Tile which are sequentially connected;
2) performing architecture compensation on the memristor neural network chip; for the structure of the original chip, a compensation full-connection layer Tile is added before the last full-connection layer Tile;
3) training the memristor neural network chip after the framework compensation through an online migration training method to obtain the trained memristor neural network chip; the method comprises the following specific steps
3-1) updating the memristor array conductance values in the full-connection layer Tile in the memristor neural network chip after the framework compensation through iterative training to obtain the memristor array conductance values in the compensated full-connection layer Tile after the training is finished; the method comprises the following specific steps:
3-1-1) sequentially assigning the neural network weights corresponding to the layers obtained by the off-line training in the step 1) to the memristor arrays in the corresponding layers Tile of the memristor neural network chip after the framework compensation, wherein the neural network weights of the layers are represented by the conductance values of the memristor arrays in the layers Tile, and the conductance values are used as the initial conductance values of the memristor arrays in the layers Tile; at the moment, the conductance values of the memristor arrays in the compensation fully-connected layer Tile are all in a high-resistance state, and the conductance value corresponding to the high-resistance state is used as the initial conductance value of the memristor arrays in the compensation fully-connected layer Tile;
3-1-2) carrying out independent iterative training on the compensation fully-connected layer, carrying out one-time reasoning calculation on the memristor neural network chip after framework compensation, calculating a conductance update value of the memristor array in the compensation fully-connected layer Tile through a back propagation algorithm, and updating according to the update value to obtain the conductance value of the memristor array in the compensation fully-connected layer Tile after the iteration;
3-1-3) after the conductance value of the memristor array in the compensation full-connection layer Tile is updated and compensated each time, carrying out reasoning calculation once again to obtain the identification rate corresponding to the current iteration, and completing the iteration once;
3-1-4) when the variance of the recognition rate of the latest continuous K iterations is smaller than a set recognition rate threshold, finishing training of the memristor array conductance value in the compensation fully-connected layer Tile to obtain the trained memristor array conductance value in the compensation fully-connected layer Tile;
3-2) updating the conductance values of the memristor arrays in all the fully-connected layers Tile through iterative training to obtain trained memristor neural network chips after framework compensation; the method comprises the following specific steps:
3-2-1) Take the trained conductance values of the memristor array in the compensation fully-connected layer Tile obtained in step 3-1) as the initial conductance values of that layer's Tile, then iteratively train the conductance values of the memristor arrays in every mapped fully-connected layer Tile and in the compensation fully-connected layer Tile.
In each training iteration, run one inference pass on the architecture-compensated memristor neural network chip, compute the conductance update values for the memristor array in each fully-connected layer Tile with the back-propagation algorithm, and apply the updates to obtain each array's conductance values after the iteration;
3-2-2) After each update of the conductance values of the memristor arrays in all fully-connected layer Tiles, run inference once more to obtain the recognition rate for the current iteration, which completes one iteration;
3-2-3) When the variance of the recognition rate over the most recent K consecutive iterations falls below a set recognition-rate threshold, training of the conductance values of the memristor arrays in all fully-connected layer Tiles is complete, and the online migration training of the architecture-compensated memristor neural network chip is finished. The maximum recognition rate reached during iterative training is the training precision of the architecture-compensated memristor neural network chip.
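Steps 3-2-1) through 3-2-3) extend the previous phase to joint training: all fully-connected layers (mapped and compensation) are now updated together, and the best recognition rate seen is reported as the training precision. The sketch below is a self-contained illustration under the same assumed residual-style compensation placement; the gradient of the mapped layer is obtained by back-propagating through the compensation layer. All names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
D, C, N = 16, 4, 200
X = rng.normal(size=(N, D))
y = np.argmax(X @ rng.normal(size=(D, C)), axis=1)

W_fc = rng.normal(scale=0.1, size=(D, C))  # mapped fully-connected layer
W_comp = np.zeros((C, C))                  # compensation layer (pre-trained start)

def forward(x):
    h = x @ W_fc
    return h + h @ W_comp

K, VAR_THRESHOLD, LR = 5, 1e-6, 0.05
history = []
for it in range(1000):
    # One inference pass over the whole network (step 3-2-1).
    h = X @ W_fc
    logits = h + h @ W_comp
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(N), y] -= 1.0              # softmax cross-entropy gradient
    grad_logits = p / N

    # Back-propagate through the compensation layer into the mapped layer,
    # producing conductance update values for every fully-connected Tile.
    grad_comp = h.T @ grad_logits
    grad_h = grad_logits + grad_logits @ W_comp.T
    grad_fc = X.T @ grad_h
    W_comp -= LR * grad_comp
    W_fc -= LR * grad_fc

    # Inference after each update gives the iteration's recognition rate
    # (step 3-2-2); stop on low variance over the last K rates (step 3-2-3).
    acc = float(np.mean(np.argmax(forward(X), axis=1) == y))
    history.append(acc)
    if len(history) >= K and np.var(history[-K:]) < VAR_THRESHOLD:
        break

# The maximum recognition rate during training is the training precision.
training_precision = max(history)
```

Initializing the compensation layer from its separately trained state (rather than from scratch) means the joint phase starts near a good operating point, so fewer on-chip update pulses are needed before the recognition rate stabilizes.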
CN202010755929.0A 2020-07-31 2020-07-31 Memristor neural network chip architecture compensation method based on online migration training Active CN111931924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010755929.0A CN111931924B (en) 2020-07-31 2020-07-31 Memristor neural network chip architecture compensation method based on online migration training

Publications (2)

Publication Number Publication Date
CN111931924A true CN111931924A (en) 2020-11-13
CN111931924B CN111931924B (en) 2022-12-13

Family

ID=73315877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010755929.0A Active CN111931924B (en) 2020-07-31 2020-07-31 Memristor neural network chip architecture compensation method based on online migration training

Country Status (1)

Country Link
CN (1) CN111931924B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114330688A (en) * 2021-12-23 2022-04-12 厦门半导体工业技术研发有限公司 Model online migration training method, device and chip based on resistive random access memory
WO2023201773A1 (en) * 2022-04-22 2023-10-26 浙江大学 Neural network retraining and gradient sparsification method based on memristor aging perception
WO2024032220A1 (en) * 2022-08-10 2024-02-15 华为技术有限公司 In-memory computing circuit-based neural network compensation method, apparatus and circuit

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063826A (en) * 2018-03-19 2018-12-21 重庆大学 A kind of convolutional neural networks implementation method based on memristor
CN109657787A (en) * 2018-12-19 2019-04-19 电子科技大学 A kind of neural network chip of two-value memristor
CN109800870A (en) * 2019-01-10 2019-05-24 华中科技大学 A kind of Neural Network Online learning system based on memristor
CN110796241A (en) * 2019-11-01 2020-02-14 清华大学 Training method and training device of neural network based on memristor
WO2020115746A1 (en) * 2018-12-04 2020-06-11 Technion Research & Development Foundation Limited Delta-sigma modulation neurons for high-precision training of memristive synapses in deep neural networks




Similar Documents

Publication Publication Date Title
CN111931924B (en) Memristor neural network chip architecture compensation method based on online migration training
CN109460817B (en) Convolutional neural network on-chip learning system based on nonvolatile memory
CN111507464B (en) Equation solver based on memristor array and operation method thereof
CN109146070A (en) A kind of peripheral circuit and system of neural network training of the support based on RRAM
CN110297490B (en) Self-reconstruction planning method of heterogeneous modular robot based on reinforcement learning algorithm
CN110852429B (en) 1T 1R-based convolutional neural network circuit and operation method thereof
CN108647184B (en) Method for realizing dynamic bit convolution multiplication
CN116224794A (en) Reinforced learning continuous action control method based on discrete-continuous heterogeneous Q network
CN115689070A (en) Energy prediction method for optimizing BP neural network model based on imperial butterfly algorithm
Wei et al. A relaxed quantization training method for hardware limitations of resistive random access memory (ReRAM)-based computing-in-memory
Wu et al. Bulk‐Switching Memristor‐Based Compute‐In‐Memory Module for Deep Neural Network Training
Geng et al. An on-chip layer-wise training method for RRAM based computing-in-memory chips
CN113807040A (en) Optimal design method for microwave circuit
CN112199234A (en) Neural network fault tolerance method based on memristor
CN115879530B (en) RRAM (remote radio access m) memory-oriented computing system array structure optimization method
CN109359734B (en) Neural network synaptic structure based on memristor unit and adjusting method thereof
CN116468090A (en) Hardware convolutional neural network model based on memristor realization
CN115906976A (en) Full-analog vector matrix multiplication memory computing circuit and application thereof
CN116303229A (en) Calculation method of storage computing system for grouping forward gradient regression
de Lima et al. Quantization-aware in-situ training for reliable and accurate edge ai
Doevenspeck et al. Noise tolerant ternary weight deep neural networks for analog in-memory inference
CN110829434B (en) Method for improving expansibility of deep neural network tidal current model
Park et al. Recognition Accuracy Enhancement using Interface Control with Weight Variation-Lowering in Analog Computation-in-Memory
CN113359471B (en) Self-adaptive dynamic programming optimal control method and system based on collaborative state assistance
CN114399037B (en) Memristor-based convolutional neural network accelerator core simulation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant