CN111950718B - Method for realizing progressive CNN operation by using storage and computation integrated chip - Google Patents


Info

Publication number: CN111950718B
Authority: CN (China)
Prior art keywords: chip, characteristic, feature, sequence, unit
Legal status: Active (an assumption, not a legal conclusion)
Application number: CN201910407923.1A
Other languages: Chinese (zh)
Other versions: CN111950718A
Inventor: 王绍迪
Current assignee: Hangzhou Zhicun Computing Technology Co ltd
Original assignee: Beijing Witinmem Technology Co ltd
Application filed by Beijing Witinmem Technology Co ltd; priority to CN201910407923.1A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons, using electronic means


Abstract

An embodiment of the invention provides a method for realizing a progressive CNN operation with a storage-computation integrated chip. According to the convolution kernel of the current convolutional layer, the method judges whether the feature units cached in the on-chip memory contain the feature units currently to be operated on; if so, an input feature sequence is obtained from those feature units; the input feature sequence is input into a flash memory cell sequence in which a weight sequence is pre-stored, so that the matrix multiply-add result of the weight sequence and the input feature sequence is obtained at the output of the flash memory cell sequence. The number of elements of the input feature sequence equals the number of elements of the weight sequence. Thus, for the current operation the on-chip memory only needs to cache the feature units currently to be operated on before the matrix multiply-add operation can be performed; the required on-chip memory capacity is small, and the convolutional neural network operation can be realized with an existing storage-computation integrated chip.

Description

Method for realizing progressive CNN operation by using storage and computation integrated chip
Technical Field
The invention relates to the technical field of semiconductor integrated circuit application, in particular to a method for realizing progressive CNN operation by using a storage and computation integrated chip.
Background
With the introduction of deep learning theory and the improvement of digital computing equipment, convolutional neural networks (CNNs) have developed rapidly and are widely applied in fields such as computer vision and natural language processing. Convolutional neural networks are a class of feedforward neural networks that contain convolution calculations and have a deep structure, and they are among the representative algorithms of deep learning.
The main computation in a convolutional neural network is concentrated in the convolutional layers, whose function is to extract features from the input data. A convolutional layer may contain one or more convolution kernels, and each element of a convolution kernel corresponds to a weight coefficient and a bias vector. During the operation of the convolutional neural network, assume that the input feature map size is (W, H, K), the convolution kernel size of the convolutional layer is (w, h, k), the number of convolution kernels is C, and the moving steps in the X and Y directions are (x, y). The output feature map size of the convolutional layer is then [1 + (W − w)/x, 1 + (H − h)/y, C], where X denotes the horizontal direction, Y the vertical direction, W the width of the input feature map (the number of feature units in the X direction), H the height of the input feature map (the number of feature units in the Y direction), and K the number of channels. The input feature map thus contains W × H feature units in the X and Y directions, each small square representing one feature unit. During convolution, the kernel slides over the input feature map by the step lengths in the X and Y directions; at each position, the values at corresponding positions are multiplied and accumulated to give the value at the current position of the output feature map, i.e. a matrix multiply-add operation. Fig. 2 shows a convolution process in which the input feature map has size (W=5, H=5, K=3), the C=2 convolution kernels both have size (w=3, h=3, k=3), and the moving step is (x=1, y=1); convolving the input feature map with convolution kernel 1 and convolution kernel 2 respectively yields two corresponding output feature maps.
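As a sanity check on the output-size formula and the sliding multiply-add described above, the following sketch (NumPy; function and variable names are illustrative, not from the patent) computes a valid convolution directly:

```python
import numpy as np

def conv_output_size(W, H, w, h, x, y, C):
    # Output feature map size [1 + (W - w)/x, 1 + (H - h)/y, C]
    return (1 + (W - w) // x, 1 + (H - h) // y, C)

def conv2d(fmap, kernels, x=1, y=1):
    # fmap: (W, H, K); kernels: (C, w, h, k) with k == K; valid convolution
    W, H, K = fmap.shape
    C, w, h, k = kernels.shape
    assert k == K
    Wo, Ho, _ = conv_output_size(W, H, w, h, x, y, C)
    out = np.zeros((Wo, Ho, C))
    for c in range(C):
        for i in range(Wo):
            for j in range(Ho):
                window = fmap[i * x:i * x + w, j * y:j * y + h, :]
                out[i, j, c] = np.sum(window * kernels[c])  # multiply-add
    return out

# Example matching Fig. 2: 5x5x3 input, two 3x3x3 kernels, step (1, 1)
out = conv2d(np.ones((5, 5, 3)), np.ones((2, 3, 3, 3)))
print(out.shape)  # (3, 3, 2)
```

With all-ones inputs, every output value is the window size 3 × 3 × 3 = 27, which makes the multiply-add easy to verify by hand.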
In recent years, the storage-computation integrated (compute-in-memory) chip architecture has attracted wide attention and research. A storage-computation integrated chip can realize analog vector-matrix multiplication and is well suited to the field of artificial intelligence (AI). However, in a conventional convolutional neural network operation, all input feature maps must be read into the on-chip memory before the operation is performed, so the required on-chip memory capacity is large and the operation complexity is high; existing storage-computation integrated chips therefore cannot realize convolutional neural network operation.
Disclosure of Invention
In view of this, the present invention provides a method for implementing a progressive CNN operation using a storage-computation integrated chip, and such a chip, which can realize convolutional neural network operation with an existing storage-computation integrated chip while reducing the required on-chip memory capacity.
In order to achieve the purpose, the invention adopts the following technical scheme:
in a first aspect, a method for implementing a progressive CNN operation using a storage-computation integrated chip is provided, where the CNN operation is performed on an input feature map, the CNN includes a plurality of convolutional layers, the input feature map includes a plurality of feature units arranged in an array, and the storage-computation integrated chip includes: a flash memory cell array for performing matrix multiply-add operations, and an on-chip memory for caching part of the feature units of the input feature map;
the method for realizing the progressive CNN by using the storage and calculation integrated chip comprises the following steps:
judging, according to the convolution kernel of the current convolutional layer, whether the feature units cached in the on-chip memory contain the feature units currently to be operated on;
if yes, obtaining an input feature sequence from the feature units currently to be operated on;
inputting the input feature sequence into a flash memory cell sequence in which a weight sequence is pre-stored, so as to obtain the matrix multiply-add result of the weight sequence and the input feature sequence at the output of the flash memory cell sequence;
wherein the number of elements of the input feature sequence equals the number of elements of the weight sequence.
Further, the storage-computation integrated chip also includes a reading circuit for reading the feature units of the input feature map from an external storage device,
and the method for realizing the progressive CNN using the storage-computation integrated chip further includes:
controlling the reading circuit to read the feature units of the input feature map from the external storage device and cache them in the on-chip memory.
Further, the storage-computation integrated chip also includes a programming circuit for controlling the weight of each flash memory cell in the flash memory cell array, and the method further includes:
obtaining the weight sequence from the convolution kernel;
and controlling the programming circuit to write the weight sequence into the flash memory cell sequence.
Further, after the input feature sequence has been input into the flash memory cell sequence in which the weight sequence is pre-stored, the method further includes:
deleting, from the feature units cached in the on-chip memory, the garbage feature units that have already been operated on.
Further, the method also includes:
caching the matrix multiply-add result in the on-chip memory as a feature unit of the next convolutional layer.
Further, with the input feature map size being (W, H, K), the convolution kernel size being (w, h, k), and the number of convolution kernels being C,
the judging, according to the convolution kernel of the CNN operation, whether the feature units cached in the on-chip memory contain the feature units currently to be operated on includes:
judging whether the number of feature units cached in the on-chip memory is greater than p;
p = [W × (h − 1) + w] × k.
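The threshold p can be read as the number of buffered feature values needed before the first w × h × k window is complete, assuming the feature units arrive row by row (this traversal order is an assumption; the patent only requires a fixed preset order). A minimal sketch:

```python
def units_needed_for_first_window(W, h, w, k):
    # p = [W*(h-1) + w] * k : the first h-1 full rows of width W, plus
    # w units of row h, each carrying k channel values, must be buffered
    # before the first w x h x k convolution window is complete.
    return (W * (h - 1) + w) * k

def ready(buffered_count, W, h, w, k):
    # Mirrors the patent's check: "greater than p"
    return buffered_count > units_needed_for_first_window(W, h, w, k)

# For the Fig. 2 example: W=5, h=3, w=3, k=3
print(units_needed_for_first_window(5, 3, 3, 3))  # 39
```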
further, the obtaining the weight sequence from the convolution kernel includes:
taking the elements of the convolution kernel in a preset order to obtain the weight sequence.
Further, the obtaining an input feature sequence from the feature units currently to be operated on includes:
taking the feature units currently to be operated on in the same preset order to obtain the input feature sequence.
In a second aspect, a storage-computation integrated chip is provided, comprising a flash memory cell array for performing matrix multiply-add operations and an on-chip memory for caching part of the feature units of the input feature map, the chip being configured to execute the above method for implementing a progressive CNN operation.
An embodiment of the invention provides a method for implementing a progressive CNN operation using a storage-computation integrated chip, together with the chip itself. The CNN operation is performed on an input feature map; the CNN includes a plurality of convolutional layers; the input feature map includes a plurality of feature units arranged in an array; and the storage-computation integrated chip includes a flash memory cell array for performing matrix multiply-add operations and an on-chip memory for caching part of the feature units of the input feature map. The method includes: judging, according to the convolution kernel of the current convolutional layer, whether the feature units cached in the on-chip memory contain the feature units currently to be operated on; if yes, obtaining an input feature sequence from those feature units; and inputting the input feature sequence into a flash memory cell sequence in which a weight sequence is pre-stored, so as to obtain the matrix multiply-add result of the weight sequence and the input feature sequence at the output of the flash memory cell sequence, the number of elements of the input feature sequence being equal to the number of elements of the weight sequence.
That is, for the current operation the on-chip memory only needs to cache the feature units currently to be operated on before the matrix multiply-add operation can be performed. Because it is unnecessary to read all the input feature maps into the on-chip memory before computing, the required on-chip memory capacity is small and the operation complexity is low, and the convolutional neural network operation can be realized with an existing storage-computation integrated chip.
Furthermore, after the input feature sequence has been input into the flash memory cell sequence in which the weight sequence is pre-stored, the garbage feature units that have already been operated on are deleted from the feature units cached in the on-chip memory, releasing storage space and further reducing the capacity requirement of the on-chip memory.
In order to make the aforementioned and other objects, features and advantages of the invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort. In the drawings:
FIG. 1 is a schematic diagram of a convolutional neural network operation;
FIG. 2 is a schematic diagram of the operation of a convolutional neural network;
FIG. 3 is a block diagram of a computing integrated chip according to an embodiment of the present invention;
FIG. 4 is a first flowchart of a method for implementing a progressive CNN operation by using a storage and computation integrated chip according to an embodiment of the present invention;
FIG. 5 is a flowchart II of a method for implementing a progressive CNN operation by using a storage-computation-integrated chip according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a method for implementing a progressive CNN operation by using a storage-computation-integrated chip according to an embodiment of the present invention;
FIG. 7a is a diagram of a CNN operation performed on a 5 × 5 feature map using a 3 × 3 convolution kernel;
fig. 7b shows the weight sequence obtained from the convolution kernel when the method for implementing the progressive CNN operation by using the memory-integrated chip in the embodiment of the present invention is used.
Fig. 7c shows that the input feature number sequence is obtained according to the feature unit to be operated currently when the method for implementing the progressive CNN operation by using the memory-integrated chip in the embodiment of the present invention is used.
Fig. 7d shows a circuit diagram of a progressive CNN operation implemented by using a memory integrated chip according to an embodiment of the present invention.
Fig. 8a to 8c are schematic diagrams illustrating a method for implementing a progressive CNN operation by using a storage-computation-integrated chip according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the description and claims of this application and the above-described drawings, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Because, in conventional convolutional neural network operation, all input feature maps must be read into the on-chip memory before the operation is performed, the required on-chip memory capacity is large and the operation complexity is high, so existing storage-computation integrated chips cannot realize convolutional neural network operation.
To solve these problems in the prior art, embodiments of the present invention provide a method for implementing a progressive CNN operation using a storage-computation integrated chip, and the chip itself. When performing the current operation, the on-chip memory only needs to cache the feature units currently to be operated on for the matrix multiply-add operation to proceed. Since it is unnecessary to read all the input feature maps into the on-chip memory before computing, the required on-chip memory capacity is small and the operation complexity is low, and a convolutional neural network operation can be realized with an existing storage-computation integrated chip. In addition, after the input feature sequence has been input into the flash memory cell sequence in which the weight sequence is pre-stored, the garbage feature units that have already been operated on, or the feature units that have been cached the longest, are deleted from the on-chip memory, releasing storage space and further reducing the capacity requirement of the on-chip memory.
FIG. 3 is a block diagram of a storage-computation integrated chip according to an embodiment of the present invention. As shown in fig. 3, the chip includes: an input register 1 for registering the data to be operated on; a DAC 2 for converting the registered data into analog data; a flash memory cell array 3 for computing on the analog data in a computation mode (e.g. matrix multiply operation); an analog processing module 4 for preprocessing the computation result of the flash memory cell array 3; an ADC 5 for converting the processing result of the analog processing module 4 into digital data; a post-processing module 10 for further processing the digital data; an output register 6 for registering and outputting the post-processed data; a programming circuit 7 for programming the flash memory cell array 3; a reading circuit (not shown) for reading the data (also called weights) stored in each flash memory cell of the array; a controller 8 for controlling the operation of the programming circuit 7 and the reading circuit; and an on-chip memory 9 for storing data.
In an optional embodiment, the NOR-flash-based storage-computation integrated chip may further include a row-column decoder, which is connected to the flash memory cell array 3 and the controller 8 and performs row and column decoding for the flash memory cell array 3 under the control of the controller 8.
The flash memory cell array 3 is composed of flash memory cells with adjustable threshold voltages. Since the threshold voltage of each flash memory cell is adjustable, its transconductance is adjustable, which is equivalent to each cell storing an adjustable analog weight; the cells of the array thus form an analog data array in which each datum can be freely adjusted. By Ohm's law and Kirchhoff's current law, the output current of a flash memory cell equals the input analog value multiplied by its analog weight, and the combined output current of several flash memory cells equals the sum of the individual output currents. The analog vector-matrix multiplication is therefore realized directly in the flash memory cell array.
The flash memory cell can be implemented by a floating gate transistor.
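In an idealized model, the Ohm's-law/Kirchhoff's-law argument above reduces to a dot product on each shared output line. The sketch below is illustrative only, not the patent's circuit: each cell's conductance g encodes a weight, the input voltage v encodes a feature value, and the summed line current is the multiply-add result.

```python
def column_current(voltages, conductances):
    # Kirchhoff's current law: the line current is the sum of the
    # per-cell currents i = g * v (Ohm's law) of all cells on the column.
    assert len(voltages) == len(conductances)
    return sum(v * g for v, g in zip(voltages, conductances))

# A column storing the weight sequence [1, 2, 3] driven by features [4, 5, 6]
print(column_current([4, 5, 6], [1, 2, 3]))  # 32 = 4*1 + 5*2 + 6*3
```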
One end of the on-chip memory 9 is connected to the input interface of the storage-computation integrated chip, and the other end to the input of the input register 1. The output of the input register 1 is connected to the input of the DAC 2; the output of the DAC 2 to the input of the flash memory cell array 3; the output of the flash memory cell array 3 to the input of the analog processing module 4; the output of the analog processing module 4 to the input of the ADC 5; the output of the ADC 5 to the input of the post-processing module 10; the output of the post-processing module 10 to the input of the output register 6; and the output of the output register 6 to the output interface of the chip and to one end of the on-chip memory 9. The programming circuit 7 is connected to the flash memory cell array 3, and the controller 8 controls the operation of the modules in the chip so that they cooperate with each other.
The programming circuit 7 is connected to the source, the gate and/or the substrate of each flash memory cell in the flash memory cell array, and is used for regulating and controlling the threshold voltage of the flash memory cell. Wherein, the programming circuit 7 may include: a voltage generating circuit for generating a program voltage or an erase voltage, and a voltage control circuit for applying the program voltage to a selected flash memory cell.
Specifically, the programming circuit applies a high voltage (i.e., a programming voltage) to the source of the flash memory cell using a hot electron injection effect to accelerate channel electrons to a high speed to increase the threshold voltage of the flash memory cell, thereby implementing programming.
The programming circuit uses tunneling effect to apply a high voltage (i.e., an erase voltage) to the gate or substrate of the flash memory cell, thereby reducing the threshold voltage of the flash memory cell and enabling erase.
The controller 8 controls the programming circuit 7 to program the flash memory cell array, i.e. to adjust the threshold voltage of each flash memory cell; the data to be processed are registered, converted to analog form by the DAC, and input into the flash memory cell array 3 to carry out the computation.
The controller 8 is also connected to a read circuit 11, and controls the operation of the read circuit.
The controller 8 is configured to execute the method for implementing the progressive CNN operation by using a storage and computation integrated chip according to an embodiment of the present invention, and referring to fig. 4, the method for implementing the progressive CNN operation by using a storage and computation integrated chip may include the following steps:
step S200: judging, according to the convolution kernel of the current convolutional layer, whether the feature units cached in the on-chip memory contain the feature units currently to be operated on;
if yes, go to step S300; if not, controlling the reading circuit to read feature units of the input feature map from the external storage device and cache them in the on-chip memory.
Specifically, with the input feature map size being (W, H, K), the convolution kernel size being (w, h, k), and the number of convolution kernels being C, the judging, according to the convolution kernel of the CNN operation, whether the feature units cached in the on-chip memory contain the feature units currently to be operated on includes:
judging whether the number of the characteristic units cached in the on-chip memory is greater than p;
p=[W×(h-1)+w]×k。
step S300: obtaining an input feature sequence from the feature units currently to be operated on.
Specifically, the feature units currently to be operated on are taken in a preset order to obtain the input feature sequence.
Step S400: inputting the input feature sequence into a flash memory cell sequence in which a weight sequence is pre-stored, so as to obtain the matrix multiply-add result of the weight sequence and the input feature sequence at the output of the flash memory cell sequence;
wherein the number of elements of the input feature sequence equals the number of elements of the weight sequence.
In an alternative embodiment, the method for implementing the progressive CNN operation by using the memory integrated chip may further include:
step S100: controlling the reading circuit to read the feature units of the input feature map from the external storage device and cache them in the on-chip memory.
Specifically, when the operation of the first convolutional layer is executed, the reading circuit is controlled to read the feature units of the input feature map from the external storage device and cache them in the on-chip memory; the operation result is stored directly in the on-chip memory and used as the input feature map of the next convolutional layer.
As those skilled in the art will appreciate, when the on-chip memory capacity is limited and cannot hold all the operation results of the previous convolutional layer, those results are stored in the external memory; when the convolution operation of the current convolutional layer is performed, the reading circuit is controlled to read the feature units of the current layer's input feature map from the external storage device and cache them in the on-chip memory.
As the above technical solution shows, with the method for implementing a progressive CNN operation using a storage-computation integrated chip and the chip provided by the embodiments of the present invention, the on-chip memory only needs to cache the feature units currently to be operated on for the current matrix multiply-add operation to proceed. Because it is unnecessary to read all the input feature maps into the on-chip memory before computing, the required on-chip memory capacity is small and the operation complexity is low, and the convolutional neural network operation can be realized with an existing storage-computation integrated chip.
Fig. 5 is a flowchart of a method for implementing a progressive CNN operation by using a storage-integration chip according to an embodiment of the present invention. As shown in fig. 5, the method for implementing the progressive CNN operation by using the integrative memory chip may further include the following steps based on the method for implementing the progressive CNN operation by using the integrative memory chip shown in fig. 4:
step S10: obtaining the weight sequence from the convolution kernel.
Specifically, the elements of the convolution kernel are taken in a preset order to obtain the weight sequence.
Briefly, the elements of the convolution kernel are converted, according to a preset rule (e.g. left to right, top to bottom), into a sequence containing all the elements of the kernel.
It should be noted that this preset order is the same as the preset order used in step S300 to obtain the input feature sequence from the feature units currently to be operated on, which ensures that the feature units correspond one to one with the elements of the convolution kernel and thus guarantees the correctness of the convolution operation.
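A sketch of this flattening (the left-to-right, top-to-bottom, channel-by-channel order below is an assumed instance of the "preset sequence"; any fixed order works as long as weights and features use the same one). Flattening the kernel and the current feature window with the same rule makes the in-column multiply-add equal the convolution value at that position:

```python
import numpy as np

def flatten_in_preset_order(block):
    # Flatten a (w, h, k) kernel or feature window into a sequence:
    # left to right, top to bottom, channel by channel (assumed order).
    w, h, k = block.shape
    return [float(block[i, j, c]) for j in range(h) for i in range(w) for c in range(k)]

kernel = np.arange(27).reshape(3, 3, 3)   # source of the weight sequence
window = np.ones((3, 3, 3))               # current feature units to operate on
weights = flatten_in_preset_order(kernel)
features = flatten_in_preset_order(window)
# The multiply-add of the two sequences equals the convolution value here
print(sum(wt * ft for wt, ft in zip(weights, features)))  # 351.0
```

With an all-ones window the result is simply the sum of the kernel elements, 0 + 1 + … + 26 = 351, which makes the correspondence easy to check.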
Step S20: and controlling the programming circuit to write the weight number sequence into the flash memory unit sequence.
Specifically, each element in the weight number series is written into each flash memory cell in a column of flash memory cells in sequence, and each flash memory cell stores a weight.
It should be noted that, at the beginning of the convolution operation by the convolution kernel, or when the convolution kernel needs to be updated, or when the weights in the flash memory cells are greatly deviated due to leakage or the like, it is necessary to sequentially write each element in the corresponding weight sequence into each flash memory cell in one column of flash memory cells by the convolution kernel.
Those skilled in the art will understand that steps S10 and S20 may be executed before step S100, after step S100, or synchronously with steps S100 and S200; the embodiment of the present invention is not limited in this respect, as long as steps S10 and S20 are executed before step S300.
Fig. 6 is a flowchart of a method for implementing a progressive CNN operation by using a storage and computation integrated chip in an embodiment of the present invention. As shown in fig. 6, on the basis of the method shown in fig. 5, the method may further include the following steps:
step S500: deleting the already-operated-on (garbage) feature units among the feature units cached in the on-chip memory, or deleting the feature units that have been cached longest, thereby releasing storage space.
After the input feature sequence has been input into the flash memory cell sequence in which the weight sequence is pre-stored, the garbage feature units that have already been operated on, or the feature units that have been cached longest, are deleted from the on-chip memory. This releases storage space and effectively reduces the capacity requirement of the on-chip memory.
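The "longest caching time" eviction policy of step S500 can be illustrated with a small FIFO buffer. This is a hypothetical Python sketch, not the chip's actual cache controller:

```python
from collections import deque

class FeatureCache:
    """Bounded on-chip buffer sketch: when full, evict the feature
    unit that has been cached longest (FIFO order)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buf = deque()

    def push(self, unit):
        if len(self.buf) == self.capacity:
            self.buf.popleft()  # delete the longest-cached feature unit
        self.buf.append(unit)

cache = FeatureCache(capacity=3)
for unit in [10, 11, 12, 13, 14]:
    cache.push(unit)
print(list(cache.buf))  # [12, 13, 14]
```

The alternative policy of deleting feature units that have already been operated on would instead track which cached units no longer fall inside any future receptive field.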
Step S600: and caching the matrix multiplication and addition operation result as a characteristic unit of the next convolution layer to the on-chip memory.
When the feature units of the next convolution layer cached in the on-chip memory include the feature units to be operated on that correspond to the convolution kernel of the next convolution layer, the next convolution layer performs its convolution operation on those feature units according to that convolution kernel, and so on.
Those skilled in the art will understand that the execution sequence of step S500 and step S600 is not limited, either of them may be executed first, or both of them may be executed synchronously, and the embodiment of the present invention is not limited thereto.
Those skilled in the art will understand that, because the columns of the flash memory cell array in the storage and computation integrated chip share the same inputs, the weight sequences corresponding to several convolution kernels can be stored in different columns. After an input feature sequence is applied to the array, each column performs the convolution operation of its own kernel, so multiple convolution operations corresponding to multiple convolution kernels are carried out synchronously on the same input feature map, which gives high operation efficiency, a high operation speed, and a large saving in operation time.
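Functionally, storing one weight sequence per column and driving all columns with the same input feature sequence amounts to a matrix-vector product. A minimal NumPy sketch, assuming two 3 × 3 kernels (the second kernel is invented here purely for illustration):

```python
import numpy as np

k1 = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 1]])  # kernel from fig. 7b
k2 = np.ones((3, 3), dtype=int)                   # assumed second kernel
# Each column of the flash array holds one flattened weight sequence:
w_cols = np.stack([k1.flatten(), k2.flatten()], axis=1)  # shape (9, 2)

# One input feature sequence drives every column at the same time,
# so a single read yields one convolution result per kernel:
features = np.array([1, 1, 0, 2, 2, 2, 0, 0, 2])
outputs = features @ w_cols
print(outputs.tolist())  # [5, 10]
```

The first output reproduces the single-kernel result of the example in figs. 7a to 7c; the second is what the invented all-ones kernel would produce on the same receptive field.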
FIG. 7a is a diagram of a CNN operation performed on a 5 × 5 feature map using a 3 × 3 convolution kernel. As shown in fig. 7a, the input feature map is a 5 × 5 feature map containing 25 feature units in total, the size of the convolution kernel is 3 × 3, and the convolution operation, assuming a step size of (x = 1, y = 1), proceeds as follows:
First, the weight sequence corresponding to the convolution kernel is obtained from the 3 × 3 convolution kernel. Referring to fig. 7b, reading the convolution kernel
[0 1 0]
[0 1 0]
[0 1 1]
from left to right and from top to bottom converts it into the sequence [0, 1, 0, 0, 1, 0, 0, 1, 1].
The programming circuit is controlled to write the weight sequence into 9 flash memory cells of one flash memory cell column; the weights stored in the 9 flash memory cells are 0, 1, 0, 0, 1, 0, 0, 1, 1 respectively.
Then, according to the convolution kernel
[0 1 0]
[0 1 0]
[0 1 1]
it is judged whether the feature units cached in the on-chip memory contain the feature units currently to be operated on.
Referring to FIG. 7a, the feature units currently to be operated on are the shaded portion of the drawing,
[1 1 0]
[2 2 2]
[0 0 2]
from which the input feature sequence [1, 1, 0, 2, 2, 2, 0, 0, 2] is obtained, see fig. 7b.
Finally, the input feature sequence [1, 1, 0, 2, 2, 2, 0, 0, 2] is input into the 9 flash memory cells into which the weight sequence 0, 1, 0, 0, 1, 0, 0, 1, 1 has been written. Referring to fig. 7c, the outputs of the 9 flash memory cells are connected together so that their currents sum, and the output current obtained at the output terminal is I = 0×1 + 1×1 + 0×0 + 0×2 + 1×2 + 0×2 + 0×0 + 1×0 + 1×2 = 5. The operation result 5 then participates in the operation as one feature unit of the input feature map of the next convolution layer, after which the next feature units to be operated on are processed.
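The current summation in this example is simply the dot product of the weight sequence and the input feature sequence, which can be checked numerically (an illustrative Python check using the values from figs. 7b and 7c):

```python
import numpy as np

weights  = np.array([0, 1, 0, 0, 1, 0, 0, 1, 1])  # stored in the flash cells
features = np.array([1, 1, 0, 2, 2, 2, 0, 0, 2])  # input feature sequence
# Summing the per-cell output currents amounts to a dot product:
current = int(weights @ features)
print(current)  # 5
```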
When a plurality of convolution operations based on different convolution kernels need to be performed on the input feature map, it is only necessary to write the weight sequences corresponding to the different convolution kernels into different flash memory cell sequences in order, and then apply the input feature sequence to the input terminals of those flash memory cell sequences.
It is worth noting that one input feature map corresponds to a plurality of input feature sequences, each representing the feature units in the current receptive field as the convolution kernel sweeps across the input feature map.
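The correspondence between receptive-field positions and input feature sequences can be sketched as a sliding window. The function below is an illustrative Python helper, assuming a single-channel feature map and the same left-to-right, top-to-bottom flattening order as the weight sequence:

```python
import numpy as np

def feature_sequences(fmap: np.ndarray, kh: int, kw: int, stride: int = 1):
    """Yield one input feature sequence per receptive-field position."""
    H, W = fmap.shape
    for y in range(0, H - kh + 1, stride):
        for x in range(0, W - kw + 1, stride):
            yield fmap[y:y + kh, x:x + kw].flatten().tolist()

fmap = np.arange(25).reshape(5, 5)       # a 5x5 single-channel map
seqs = list(feature_sequences(fmap, 3, 3))
print(len(seqs))  # 9 receptive fields for a 3x3 kernel at stride 1
```

Each yielded sequence is exactly what would be applied to the input terminals of a flash memory cell column in one matrix multiply-add step.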
Fig. 8a to 8c are schematic diagrams illustrating a method for implementing a progressive CNN operation by using a storage and computation integrated chip according to an embodiment of the present invention. As shown in figures 8a to 8c:
First, assume that the input feature map to be computed is stored entirely in the off-chip memory and that convolution kernel 1 is a 3 × 3 convolution kernel. The corresponding weight sequence is obtained from convolution kernel 1, and the programming circuit is then controlled to write the weight sequence into a flash memory cell column. Of course, those skilled in the art will understand that these steps may also be performed at any time before the first convolution operation.
(1) The feature units of the input feature map are read sequentially from the off-chip memory into the on-chip memory.
(2) When the number of feature units in the on-chip memory equals (W×(h-1)+w)×k, referring to time 1 in fig. 8a, the on-chip memory already contains the feature units to be operated on in the first convolution of the first convolution layer, and the first convolution operation of the first convolution layer can then be started. Specifically:
step 1: obtaining an input feature sequence according to the feature units to be operated on in the first convolution;
step 2: inputting the input feature sequence into a flash memory cell sequence in which the weight sequence is pre-stored, so as to obtain, at the output terminal of the flash memory cell sequence, the result of the matrix multiply-add operation of the weight sequence and the input feature sequence;
step 3: storing the matrix multiply-add operation result in the on-chip memory as a feature unit of the input feature map of the second convolution layer.
(3) Once the input feature sequence corresponding to the feature units operated on in the first convolution has been input into the flash memory cell sequence, the feature unit stored earliest in the on-chip memory can be deleted, thereby releasing storage space.
(4) The next feature unit is read in sequence from the off-chip memory and stored in the on-chip memory. As shown at time 2 in fig. 8a, the on-chip memory then already contains the feature units to be operated on in the second convolution, and the second convolution operation can be started; the specific steps are the same as those of the first convolution operation and are not repeated here.
(5) The above steps are executed cyclically until all feature units of the input feature map have undergone the convolution operation of the first convolution layer.
It should be noted that, referring to fig. 8b, at time N, after the Nth convolution has been performed, N convolution operation results have been obtained, which are equivalent to the first N feature units of the second convolution layer. These first N feature units already include the feature units needed for the first convolution of the second convolution layer, so the second convolution layer starts its first convolution, with the obtained result serving as a feature unit of the input feature map of the third convolution layer, and so on until the on-chip memory is used up, at which time the output feature map is stored in the off-chip memory, referring to fig. 8c.
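The progressive schedule of figs. 8a to 8c can be sketched end to end for one convolution layer. The sketch below is a hypothetical single-channel (k = 1), stride-1 Python simulation: it streams feature units one by one from an "off-chip" array into a cache and starts each convolution as soon as the cache holds the p = (W×(h-1)+w)×k units its receptive field needs, rather than waiting for the whole feature map:

```python
import numpy as np

def progressive_conv(fmap: np.ndarray, kernel: np.ndarray) -> list:
    """Simulate the progressive schedule for one convolution layer."""
    H, W = fmap.shape
    h, w = kernel.shape
    weights = kernel.flatten()
    p = W * (h - 1) + w            # readiness threshold (k = 1 channel)
    cache, out = [], []
    pos = 0                        # next output position, row-major
    for unit in fmap.flatten():    # read units from "off-chip" one by one
        cache.append(unit)
        while True:                # convolve every newly completed window
            y, x = divmod(pos, W - w + 1)
            first = y * W + x      # index of the window's first unit
            if y > H - h or first + p > len(cache):
                break
            window = np.concatenate(
                [cache[first + r * W: first + r * W + w] for r in range(h)])
            out.append(int(window @ weights))
            pos += 1
    return out

results = progressive_conv(np.arange(25).reshape(5, 5),
                           np.ones((3, 3), dtype=int))
print(len(results))  # 9 outputs, produced while the input was streaming
```

A real implementation would also evict cache entries that no future window needs (step S500) and forward each result into the next layer's cache (step S600); those are omitted here to keep the sketch short.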
Therefore, with the method for implementing the progressive CNN operation by using a storage and computation integrated chip, and the chip itself, provided by the embodiments of the present invention, the matrix multiply-add operation can proceed as soon as the on-chip memory has cached the feature units currently to be operated on. Because the entire input feature map does not need to be read into the on-chip memory before the operation starts, only a small on-chip memory capacity is required, the operation complexity is low, and the convolutional neural network operation can be implemented with an existing storage and computation integrated chip.
On the other hand, after the input feature sequence has been input into the flash memory cell sequence in which the weight sequence is pre-stored, the garbage feature units that have already been operated on, or the feature units that have been cached longest, are deleted from the on-chip memory, thereby releasing storage space and effectively reducing the capacity requirement of the on-chip memory.
Moreover, the operation can be carried out while the input is still arriving; it is not necessary to wait until all feature units have been input, which improves the operation speed and saves waiting time. Furthermore, the next convolution layer does not have to wait for the current layer to finish completely, but starts its operation once the current layer has reached a certain progress, so that parallel operation of multiple convolution layers can be realized.
Finally, for simplicity of explanation, each convolution layer above corresponds to only one convolution kernel. When a convolution layer corresponds to a plurality of convolution kernels, the embodiment of the present invention can perform convolution operations on the same input feature map with the plurality of convolution kernels in parallel, effectively improving the operation speed and efficiency.
Embodiments of the present invention also provide an electronic device, which may include the above-mentioned storage and computation integrated chip. Specifically, the electronic device may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a controller, it implements the above method for implementing the progressive CNN operation by using a storage and computation integrated chip. The computer program may be downloaded and installed from a network, and/or installed from a removable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (9)

1. A method for implementing a progressive CNN operation by using a storage and computation integrated chip, used for performing the CNN operation on an input feature map, wherein the CNN comprises a plurality of convolution layers, the input feature map comprises a plurality of feature units arranged in an array, and the storage and computation integrated chip comprises: a flash memory cell array for performing matrix multiply-add operations, and an on-chip memory for caching part of the feature units of the input feature map;
the method for implementing the progressive CNN operation by using the storage and computation integrated chip comprises:
judging, according to the convolution kernel of the current convolution layer, whether the feature units cached in the on-chip memory contain the feature units currently to be operated on;
if yes, obtaining an input feature sequence according to the feature units currently to be operated on;
inputting the input feature sequence into a flash memory cell sequence in which a weight sequence is pre-stored, so as to obtain, at an output terminal of the flash memory cell sequence, the result of the matrix multiply-add operation of the weight sequence and the input feature sequence;
wherein the number of elements of the input feature sequence is equal to the number of elements of the weight sequence;
if not, controlling a reading circuit to read feature units of the input feature map from an external storage device and cache them to the on-chip memory;
when the feature units in the on-chip memory already contain the feature units currently to be operated on, the operation can be started; the next feature unit is read in sequence from the off-chip memory and stored in the on-chip memory for subsequent operation;
when the number of feature units in the on-chip memory equals (W×(h-1)+w)×k, the feature units cached in the on-chip memory contain the feature units currently to be operated on, and the current convolution operation of the current convolution layer can then be started; when the input feature sequence corresponding to the feature units to be operated on has been input into the flash memory cell sequence, the feature unit initially stored in the on-chip memory is deleted;
in addition, the method for implementing the progressive CNN operation by using the storage and computation integrated chip further comprises: reading the next feature unit from the off-chip memory in sequence and storing it in the on-chip memory, so that the on-chip memory contains the feature units to be operated on in the next convolution, whereupon the next convolution operation can be started;
wherein the input feature map size is (W, H, K) and the convolution kernel size is (w, h, k).
2. The method for implementing a progressive CNN operation using a storage and computation integrated chip according to claim 1, wherein the storage and computation integrated chip further comprises: a reading circuit for reading the feature units of the input feature map from an external storage device,
the method for realizing the progressive CNN by using the storage and computation integrated chip further comprises the following steps:
and controlling the reading circuit to read the characteristic units of the input characteristic diagram from an external storage device and cache the characteristic units to the on-chip memory.
3. The method for implementing a progressive CNN operation using a storage and computation integrated chip according to claim 1, wherein the storage and computation integrated chip further comprises: a programming circuit for controlling the weight of each flash memory cell in the flash memory cell array, and the method further comprises:
obtaining the weight sequence according to the convolution kernel;
and controlling the programming circuit to write the weight sequence into the flash memory cell sequence.
4. The method of claim 1, wherein after inputting the input feature number sequence into a flash memory cell sequence pre-stored with a weight number sequence, the method further comprises:
and deleting the operated garbage feature unit in the feature unit cached in the on-chip memory.
5. The method for implementing a progressive CNN operation using a memory-and-computation-integrated chip according to claim 1, further comprising:
and caching the matrix multiplication and addition operation result as a characteristic unit of the next convolution layer to the on-chip memory.
6. The method for implementing the progressive CNN operation by using the storage and computation integrated chip as claimed in claim 1, wherein the input feature map size is (W, H, K), the convolution kernel size is (w, h, k), and the number of convolution kernels is C;
the judging whether the feature unit cached in the on-chip memory contains the current feature unit to be operated according to the convolution kernel of the CNN operation includes:
judging whether the number of the characteristic units cached in the on-chip memory is greater than p;
p=[W×(h-1)+w]×k。
7. the method of claim 3, wherein the obtaining the weight sequence according to the convolution kernel comprises:
and acquiring elements of the convolution kernel according to a preset sequence to obtain the weight series.
8. The method of claim 7, wherein the obtaining an input feature sequence according to the feature unit to be operated currently comprises:
and acquiring the current feature units to be operated according to the preset sequence to obtain the input feature number sequence.
9. A storage and computation integrated chip, comprising: a flash memory cell array for performing matrix multiply-add operations, and an on-chip memory for caching part of the feature units of the input feature map, the chip being configured to perform the steps of the method for implementing a progressive CNN operation by using a storage and computation integrated chip according to any one of claims 1 to 8.
CN201910407923.1A 2019-05-16 2019-05-16 Method for realizing progressive CNN operation by using storage and computation integrated chip Active CN111950718B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910407923.1A CN111950718B (en) 2019-05-16 2019-05-16 Method for realizing progressive CNN operation by using storage and computation integrated chip

Publications (2)

Publication Number Publication Date
CN111950718A CN111950718A (en) 2020-11-17
CN111950718B true CN111950718B (en) 2021-12-07

Family

ID=73335888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910407923.1A Active CN111950718B (en) 2019-05-16 2019-05-16 Method for realizing progressive CNN operation by using storage and computation integrated chip

Country Status (1)

Country Link
CN (1) CN111950718B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112989268B (en) * 2021-02-06 2024-01-30 江南大学 Memory operation-oriented fully-unfolded non-orthogonal wiring memory array design method
CN112989273B (en) * 2021-02-06 2023-10-27 江南大学 Method for carrying out memory operation by utilizing complementary code coding
CN113222107A (en) * 2021-03-09 2021-08-06 北京大学 Data processing method, device, equipment and storage medium
CN114723044B (en) * 2022-04-07 2023-04-25 杭州知存智能科技有限公司 Error compensation method, device, chip and equipment for in-memory computing chip
CN115204380B (en) * 2022-09-15 2022-12-27 之江实验室 Data storage and array mapping method and device of storage and calculation integrated convolutional neural network
CN115660058B (en) * 2022-12-13 2023-04-14 至讯创新科技(无锡)有限公司 Method for realizing multi-bit data convolution operation by NAND flash memory

Citations (4)

Publication number Priority date Publication date Assignee Title
US20150199266A1 (en) * 2014-01-16 2015-07-16 Carnegie Mellon University 3dic memory chips including computational logic-in-memory for performing accelerated data processing
CN106066783A (en) * 2016-06-02 2016-11-02 华为技术有限公司 The neutral net forward direction arithmetic hardware structure quantified based on power weight
CN108009640A (en) * 2017-12-25 2018-05-08 清华大学 The training device and its training method of neutral net based on memristor
CN108416434A (en) * 2018-02-07 2018-08-17 复旦大学 The circuit structure accelerated with full articulamentum for the convolutional layer of neural network

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN108229645B (en) * 2017-04-28 2021-08-06 北京市商汤科技开发有限公司 Convolution acceleration and calculation processing method and device, electronic equipment and storage medium
CN109190756B (en) * 2018-09-10 2022-02-18 中国科学院计算技术研究所 Arithmetic device based on Winograd convolution and neural network processor comprising same

Non-Patent Citations (3)

Title
CMP-PIM: An Energy-Efficient Comparator-based Processing-In-Memory Neural Network Accelerator; Shaahin Angizi et al.; 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC); 2018-09-20; pp. 1-6 *
Approximate Computation of Deep Convolutional Neural Networks Based on a Memristor PIM Architecture; Li Chuxi et al.; Journal of Computer Research and Development; 2017-06-30; vol. 54, no. 6; pp. 1367-1380 *
Deep Learning Hardware Acceleration Technology Based on General-Purpose Vector DSP; Wang Huili et al.; Scientia Sinica Informationis; 2019-03-19; vol. 49, no. 3; pp. 256-276 *

Also Published As

Publication number Publication date
CN111950718A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
CN111950718B (en) Method for realizing progressive CNN operation by using storage and computation integrated chip
US11574031B2 (en) Method and electronic device for convolution calculation in neural network
US11244225B2 (en) Neural network processor configurable using macro instructions
US20190188237A1 (en) Method and electronic device for convolution calculation in neutral network
KR101959376B1 (en) Systems and methods for a multi-core optimized recurrent neural network
KR101298393B1 (en) Training convolutional neural networks on graphics processing units
EP3373210A1 (en) Transposing neural network matrices in hardware
JP7325158B2 (en) Data Representation for Dynamic Accuracy in Neural Network Cores
KR20180109619A (en) Convolutional neural network processing method and apparatus
CN113344170B (en) Neural network weight matrix adjustment method, write-in control method and related device
KR20210036715A (en) Neural processing apparatus and method for processing pooling of neural network thereof
US11693627B2 (en) Contiguous sparsity pattern neural networks
US11803360B2 (en) Compilation method, apparatus, computing device and medium
CN112703511B (en) Operation accelerator and data processing method
CN111128279A (en) Memory computing chip based on NAND Flash and control method thereof
CN111914991A (en) Computing device and method for training artificial neural network model and memory system
CN111587439A (en) Pulse width modulation multiplier
US11822900B2 (en) Filter processing device and method of performing convolution operation at filter processing device
KR20210079785A (en) Method and apparatus for processing convolution operation of neural network
CN111164687A (en) Digitally supported flash memory refresh
CN109902821B (en) Data processing method and device and related components
KR20230081697A (en) Method and apparatus for accelerating dilatational convolution calculation
US20230136021A1 (en) Method and system for three-dimensional modeling
US20230053696A1 (en) Method of predicting characteristics of semiconductor device and computing device performing the same
US20220179923A1 (en) Information processing apparatus, information processing method, and computer-readable recording medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 213-175, 2nd Floor, Building 1, No. 180 Kecheng Street, Qiaosi Street, Linping District, Hangzhou City, Zhejiang Province, 311100

Patentee after: Hangzhou Zhicun Computing Technology Co.,Ltd.

Country or region after: China

Address before: 1707 shining building, 35 Xueyuan Road, Haidian District, Beijing 100083

Patentee before: BEIJING WITINMEM TECHNOLOGY Co.,Ltd.

Country or region before: China

CP03 Change of name, title or address