CN111967582A - CNN convolutional layer operation method and CNN convolutional layer operation accelerator - Google Patents

CNN convolutional layer operation method and CNN convolutional layer operation accelerator

Info

Publication number
CN111967582A
CN111967582A (application number CN202010791455.5A)
Authority
CN
China
Prior art keywords
image
matrix
read
cnn
rows
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010791455.5A
Other languages
Chinese (zh)
Other versions
CN111967582B (en)
Inventor
杨继林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202010791455.5A priority Critical patent/CN111967582B/en
Publication of CN111967582A publication Critical patent/CN111967582A/en
Application granted granted Critical
Publication of CN111967582B publication Critical patent/CN111967582B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a CNN convolutional layer operation method and a CNN convolutional layer operation accelerator, both of which can: read in the convolution kernel used for performing the CNN convolutional layer operation on a feature image to be processed and convert the read convolution kernel into a weight matrix H(h_pq); read in a block image of the feature image to be processed according to a preset image size threshold and calculate, according to the weight matrix H(h_pq), the local CNN convolutional layer operation result corresponding to the currently read block image; and judge whether the whole feature image to be processed has been read: if so, arrange the obtained local CNN convolutional layer operation results according to the relative position relationship between the block images and splice them to obtain the CNN convolutional layer operation result corresponding to the whole feature image to be processed; if not, continue reading the next block image. The method reduces the complexity of the CNN convolution operation, relieves storage bandwidth pressure, and lowers the cost of completing the CNN convolutional layer operation.

Description

CNN convolutional layer operation method and CNN convolutional layer operation accelerator
Technical Field
The invention relates to the field of convolution operation acceleration, in particular to a CNN convolution layer operation method and a CNN convolution layer operation accelerator.
Background
With the continuous development of CNNs (Convolutional Neural Networks), CNNs are applied more and more widely in the fields of image classification, image recognition, and the like.
The CNN convolutional layer performs a two-dimensional convolution. A common implementation uses a sliding window: a dedicated control module extracts, from the feature map that requires convolutional layer computation, a two-dimensional window of the same size as the k × k convolution kernel; this window slides over the feature map, and at each position a multiply-and-add operation is performed between the window and the corresponding points of the convolution kernel. The sliding-window approach is intuitive, and as long as the correct two-dimensional feature-map window can be obtained, the subsequent computation is relatively simple. However, the control module that generates the two-dimensional window is relatively complex to implement. In addition, for a k × k convolution kernel applied to a feature map that is streamed in online for convolutional layer computation, an extra k-1 lines of storage are usually required, which increases cost. Moreover, a conventional CNN convolutional layer operation repeatedly re-reads the weights of the convolution kernel many times, which adds a certain amount of storage bandwidth pressure.
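To make the sliding-window scheme concrete, the following is a minimal software sketch of the prior-art approach described above (NumPy, the 'valid' output size, and the function name are illustrative assumptions; the prior art realizes this with a dedicated hardware control module):

```python
import numpy as np

def sliding_window_conv2d(feature_map, kernel):
    """Prior-art style 2D convolution: slide a k x k window over the
    feature map and multiply-and-add it with the kernel at each position."""
    m, n = feature_map.shape
    k = kernel.shape[0]
    out = np.zeros((m - k + 1, n - k + 1), dtype=np.result_type(feature_map, kernel))
    for i in range(m - k + 1):
        for j in range(n - k + 1):
            window = feature_map[i:i + k, j:j + k]  # k x k feature-map window
            out[i, j] = np.sum(window * kernel)     # point-wise multiply and add
    return out
```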
Therefore, the present invention provides a CNN convolutional layer operation method and a CNN convolutional layer operation accelerator, which are used to solve the above problems.
Disclosure of Invention
In view of the above deficiencies of the prior art, the present invention provides a CNN convolutional layer operation method and a CNN convolutional layer operation accelerator that reduce the complexity of the CNN convolution operation, relieve storage bandwidth pressure, and lower the cost of completing the CNN convolutional layer operation.
In a first aspect, the present invention provides a CNN convolutional layer operation method, including the steps of:
S1, reading in the convolution kernel used for the CNN convolutional layer operation on the feature image to be processed, and converting the read convolution kernel into a weight matrix H(h_pq), where the convolution kernel is a k × k convolution kernel and h_pq is the (p, q) element of the weight matrix H(h_pq), with p = 0, 1, 2, …, k-1 and q = 0, 1, 2, …, k-1;
S2, reading in a block image of the feature image to be processed according to a preset image size threshold, and calculating, according to the weight matrix H(h_pq), the local CNN convolutional layer operation result corresponding to the currently read block image;
S3, judging whether the whole feature image to be processed has been read completely; if so, continuing with step S4, otherwise repeating step S2;
S4, arranging the local CNN convolutional layer operation results obtained in step S2 according to the relative position relationship between the block images, and splicing them to obtain the CNN convolutional layer operation result corresponding to the whole feature image to be processed;
wherein, in step S2, the local CNN convolutional layer operation result corresponding to the currently read block image is calculated according to the weight matrix H(h_pq) as follows:
P1, converting the currently read block image into an image matrix A(a_ij), where the block image is a digital image of m × n pixels and a_ij is the (i, j) element of the image matrix A(a_ij), with i = 0, 1, 2, …, m-1 and j = 0, 1, 2, …, n-1;
P2, reading each element h_pq of the weight matrix H(h_pq), obtaining for each element h_pq the product matrix formed by all elements of the image matrix A(a_ij) that need to be multiplied by that element, and multiplying each element h_pq by its corresponding product matrix to obtain the local matrix corresponding to each element h_pq;
P3, calculating the sum of the obtained local matrices, where the sum is the local CNN convolutional layer operation result corresponding to the currently read block image;
in step S2, the block images read each time are different from each other;
in step S2, a block image on the feature image to be processed is read according to a preset image size threshold, and the reading method includes:
when reading the block image for the first time, reading a block image meeting the requirement of the image size threshold from the characteristic image to be processed according to a preset reading initial position;
when the block images are read again, each read block image contains k-1 rows or k-1 columns of pixels of its respective adjacent block image.
Further, the product matrix corresponding to the element h_pq involved in step P2 includes the following cases:
when p = 0 and q = 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by all rows and columns of the image matrix A(a_ij) that remain after removing columns n-k+1, n-k+2, n-k+3, …, n-1 and rows m-k+1, m-k+2, m-k+3, …, m-1;
when p = 0 and q ≠ 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns 0, 1, 2, …, q-1 and n-k+q+1, n-k+q+2, n-k+q+3, …, n-1 and removing rows m-k+1, m-k+2, m-k+3, …, m-1;
when p ≠ 0 and q = 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns n-k+1, n-k+2, n-k+3, …, n-1 and removing rows 0, 1, 2, …, p-1 and m-k+p+1, m-k+p+2, m-k+p+3, …, m-1;
when p ≠ 0 and q ≠ 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns 0, 1, 2, …, q-1 and n-k+q+1, n-k+q+2, n-k+q+3, …, n-1 and removing rows 0, 1, 2, …, p-1 and m-k+p+1, m-k+p+2, m-k+p+3, …, m-1.
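In all four cases above, the rows and columns that remain are exactly rows p through m-k+p and columns q through n-k+q of the image matrix, i.e. the (m-k+1) × (n-k+1) window of A(a_ij) whose top-left element is a_pq. A minimal NumPy sketch of this extraction (the function name and the use of NumPy are illustrative assumptions, not part of the claimed method):

```python
import numpy as np

def product_matrix(A, p, q, k):
    """Product matrix for weight h_pq: the (m-k+1) x (n-k+1) sub-matrix of A
    whose top-left element is A[p, q], equivalent to the row/column-removal
    description above."""
    m, n = A.shape
    return A[p:p + m - k + 1, q:q + n - k + 1]
```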
Further, the weight matrix H(h_pq) obtained by the conversion in step S1 is stored in a cache, and the image matrix A(a_ij) obtained by the conversion in step P1 is stored in a cache.
Further, the CNN convolutional layer operation method is realized based on an FPGA.
Further, in step P2, a multiplier array is used to multiply each element h_pq by its corresponding product matrix to obtain the local matrix corresponding to each element h_pq.
In another aspect, the present invention provides a CNN convolutional layer arithmetic accelerator, including:
the first data pre-reading module is used for reading in the convolution kernel used for performing the CNN convolutional layer operation on the feature image to be processed and converting the read convolution kernel into a weight matrix H(h_pq), where the convolution kernel is a k × k convolution kernel and h_pq is the (p, q) element of the weight matrix H(h_pq), with p = 0, 1, 2, …, k-1 and q = 0, 1, 2, …, k-1;
the second data pre-reading module is used for reading a block image on the characteristic image to be processed according to a preset image size threshold;
a local operation module for calculating, according to the weight matrix H(h_pq), the local CNN convolutional layer operation result corresponding to the block image currently read in by the second data pre-reading module;
the judging module is used for judging whether the whole characteristic image to be processed is read completely;
the convolutional layer operation result output module is used for arranging the local CNN convolutional layer operation results obtained by the local operation module according to the relative position relationship between the block images when the judging module judges that the whole feature image to be processed has been read, and then splicing them to obtain and output the CNN convolutional layer operation result corresponding to the whole feature image to be processed;
the calling module is used for calling the data pre-reading module to continue executing when the judging module judges that the whole characteristic image to be processed is not read;
wherein, the local operation module comprises:
an image matrix conversion unit for converting the currently read block image into an image matrix A(a_ij), where the block image is a digital image of m × n pixels and a_ij is the (i, j) element of the image matrix A(a_ij), with i = 0, 1, 2, …, m-1 and j = 0, 1, 2, …, n-1;
a local matrix acquisition unit for reading each element h_pq of the weight matrix H(h_pq), obtaining for each element h_pq the product matrix formed by all elements of the image matrix A(a_ij) that need to be multiplied by that element, and multiplying each element h_pq by its corresponding product matrix to obtain the local matrix corresponding to each element h_pq;
the local operation result acquisition unit is used for calculating the sum of all the obtained local matrixes, wherein the sum is the local operation result of the CNN convolution layer corresponding to the currently read block image;
the second data pre-reading module reads different block images of the feature image to be processed each time;
the second data pre-reading module reads a block image on the characteristic image to be processed according to a preset image size threshold, and the reading method comprises the following steps:
when reading the block image for the first time, reading a block image meeting the requirement of the image size threshold from the characteristic image to be processed according to a preset reading initial position;
when the block images are read again, each read block image contains k-1 rows or k-1 columns of pixels of its respective adjacent block image.
Further, the product matrix corresponding to the element h_pq involved in the local matrix acquisition unit includes the following cases:
when p = 0 and q = 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by all rows and columns of the image matrix A(a_ij) that remain after removing columns n-k+1, n-k+2, n-k+3, …, n-1 and rows m-k+1, m-k+2, m-k+3, …, m-1;
when p = 0 and q ≠ 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns 0, 1, 2, …, q-1 and n-k+q+1, n-k+q+2, n-k+q+3, …, n-1 and removing rows m-k+1, m-k+2, m-k+3, …, m-1;
when p ≠ 0 and q = 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns n-k+1, n-k+2, n-k+3, …, n-1 and removing rows 0, 1, 2, …, p-1 and m-k+p+1, m-k+p+2, m-k+p+3, …, m-1;
when p ≠ 0 and q ≠ 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns 0, 1, 2, …, q-1 and n-k+q+1, n-k+q+2, n-k+q+3, …, n-1 and removing rows 0, 1, 2, …, p-1 and m-k+p+1, m-k+p+2, m-k+p+3, …, m-1.
Furthermore, the CNN convolution layer operation accelerator also comprises a cache;
the weight matrix H (H) converted by the first data pre-reading modulepq) Storing in a cache;
the image matrix A (a) converted by the image matrix conversion unitij) Stored in a cache.
Further, the CNN convolutional layer operation accelerator is realized based on an FPGA.
Further, the local matrix acquisition unit uses a multiplier array to multiply each element h_pq by its corresponding product matrix to obtain the local matrix corresponding to each element h_pq.
The beneficial effects of the invention are as follows:
(1) The CNN convolutional layer operation method and the CNN convolutional layer operation accelerator provided by the invention avoid the use of a feature-map two-dimensional window as in the prior art, and therefore avoid the control module used in the prior art to generate that window, which reduces the complexity of the CNN convolution operation to a certain extent and makes the scheme convenient to implement.
(2) The CNN convolutional layer operation method and the CNN convolutional layer operation accelerator provided by the invention take each weight in the convolution kernel (corresponding to element h_pq) as the starting point and directly obtain, for each weight, the product matrix formed by all feature points of the block image that need to be multiplied by the read weight (that is, the corresponding elements of the image matrix A(a_ij) converted from the block image). Each weight in the convolution kernel is then multiplied by its corresponding product matrix to obtain the local matrix corresponding to that weight, and the local matrices corresponding to all weights in the convolution kernel are added together to obtain the local CNN convolutional layer operation result corresponding to each block image. This reduces, to a certain extent, the number of times each weight in the convolution kernel needs to be read, helping to relieve storage bandwidth pressure.
(3) The CNN convolutional layer operation method and the CNN convolutional layer operation accelerator provided by the invention store the weight matrix H(h_pq) and the image matrix A(a_ij) required during the operation in a cache, which avoids extra storage, helps reduce the cost of completing the CNN convolutional layer operation to a certain extent, reduces the number of times the data required for the CNN convolutional layer operation has to be read from external storage, and helps increase the speed of the CNN convolutional layer operation to a certain extent.
In addition, the invention has a reliable design principle and a simple structure, and has very broad application prospects.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below; it is obvious that other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
FIG. 1 is a schematic flow diagram of a method of one embodiment of the invention.
Fig. 2 is a schematic diagram of the relative position of the block image F1 in the feature image to be processed in the present invention.
Fig. 3 is a schematic diagram of the relative position of the block image F2 in the feature image to be processed in the present invention.
Fig. 4 is a schematic diagram of the relative position of the block image F3 in the feature image to be processed in the present invention.
Fig. 5 is a schematic diagram of the relative position of the block image F4 in the feature image to be processed in the present invention.
FIG. 6 is a schematic diagram of the arrangement positions of the matrix C1, the matrix C2, the matrix C3 and the matrix C4 in the present invention.
FIG. 7 is a schematic block diagram of a system of one embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments; it is obvious that the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of a CNN convolutional layer operation method according to an embodiment of the present invention.
As shown in fig. 1, the CNN convolutional layer operation method includes:
Step S1, reading in the convolution kernel used for the CNN convolutional layer operation on the feature image to be processed, and converting the read convolution kernel into a weight matrix H(h_pq), where the convolution kernel is a k × k convolution kernel and h_pq is the (p, q) element of the weight matrix H(h_pq), with p = 0, 1, 2, …, k-1 and q = 0, 1, 2, …, k-1;
Step S2, reading in a block image of the feature image to be processed according to a preset image size threshold, and calculating, according to the weight matrix H(h_pq), the local CNN convolutional layer operation result corresponding to the currently read block image;
Step S3, judging whether the whole feature image to be processed has been read completely; if so, continuing with step S4, otherwise repeating step S2;
Step S4, arranging the local CNN convolutional layer operation results obtained in step S2 according to the relative position relationship between the block images, and splicing them to obtain the CNN convolutional layer operation result corresponding to the whole feature image to be processed.
Wherein, in step S2, the local CNN convolutional layer operation result corresponding to the currently read block image is calculated according to the weight matrix H(h_pq) as follows:
Step P1, converting the currently read block image into an image matrix A(a_ij), where the block image is a digital image of m × n pixels and a_ij is the (i, j) element of the image matrix A(a_ij), with i = 0, 1, 2, …, m-1 and j = 0, 1, 2, …, n-1;
Step P2, reading each element h_pq of the weight matrix H(h_pq), obtaining for each element h_pq the product matrix formed by all elements of the image matrix A(a_ij) that need to be multiplied by that element, and multiplying each element h_pq by its corresponding product matrix to obtain the local matrix corresponding to each element h_pq;
Step P3, calculating the sum of the obtained local matrices, where the sum is the local CNN convolutional layer operation result corresponding to the currently read block image.
In step S2, the block images of the feature image to be processed read each time are different from each other;
in step S2, a block image on the feature image to be processed is read according to a preset image size threshold, where the reading method includes:
when reading the block image for the first time, reading a block image meeting the requirement of the image size threshold from the characteristic image to be processed according to a preset reading initial position;
when the block images are read again, each read block image contains k-1 rows or k-1 columns of pixels of its respective adjacent block image.
Alternatively, as an embodiment of the present invention, the product matrix corresponding to the element h_pq involved in step P2 includes the following cases:
when p = 0 and q = 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by all rows and columns of the image matrix A(a_ij) that remain after removing columns n-k+1, n-k+2, n-k+3, …, n-1 and rows m-k+1, m-k+2, m-k+3, …, m-1;
when p = 0 and q ≠ 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns 0, 1, 2, …, q-1 and n-k+q+1, n-k+q+2, n-k+q+3, …, n-1 and removing rows m-k+1, m-k+2, m-k+3, …, m-1;
when p ≠ 0 and q = 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns n-k+1, n-k+2, n-k+3, …, n-1 and removing rows 0, 1, 2, …, p-1 and m-k+p+1, m-k+p+2, m-k+p+3, …, m-1;
when p ≠ 0 and q ≠ 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns 0, 1, 2, …, q-1 and n-k+q+1, n-k+q+2, n-k+q+3, …, n-1 and removing rows 0, 1, 2, …, p-1 and m-k+p+1, m-k+p+2, m-k+p+3, …, m-1.
Alternatively, as an embodiment of the present invention, the weight matrix H(h_pq) obtained by the conversion in step S1 is stored in a cache, and the image matrix A(a_ij) obtained by the conversion in step P1 is stored in a cache.
Optionally, as an embodiment of the present invention, the CNN convolutional layer operation method is implemented based on an FPGA.
Alternatively, as an embodiment of the present invention, in step P2 a multiplier array is used to multiply each element h_pq by its corresponding product matrix to obtain the local matrix corresponding to each element h_pq.
To facilitate understanding of the present invention, the CNN convolutional layer operation method provided by the present invention is further described below according to its principle and in combination with the process of performing the CNN convolutional layer operation on a feature image to be processed in an embodiment.
Specifically, the CNN convolutional layer operation method includes:
Step L1, reading in the convolution kernel used for the CNN convolutional layer operation on the feature image to be processed, and converting the read convolution kernel into a weight matrix H(h_pq), where the convolution kernel is a k × k convolution kernel and h_pq is the (p, q) element of the weight matrix H(h_pq), with p = 0, 1, 2, …, k-1 and q = 0, 1, 2, …, k-1.
The characteristic image to be processed is an image which needs to be subjected to CNN convolutional layer operation.
The feature image to be processed and the convolution kernel required for CNN convolution layer operation are both stored in an external DDR (Double Data Rate) in advance.
In the present embodiment, for convenience of description, k = 3 is taken as an example. Correspondingly, the convolution kernel used for the CNN convolutional layer operation on the feature image to be processed in this embodiment is a 3 × 3 convolution kernel, and the corresponding weight matrix H(h_pq) is a third-order matrix, specifically:
H(h_pq) =
[ h_00  h_01  h_02 ]
[ h_10  h_11  h_12 ]
[ h_20  h_21  h_22 ]
where p, q = 0, 1, 2.
To make the weight matrix H(h_pq) convenient to read, the third-order matrix obtained by the conversion in step L1 is stored in a buffer; whenever an element (i.e., a weight) of the weight matrix H(h_pq) is needed, it can be read directly from the buffer.
Step L2, reading in a block image of the feature image to be processed according to the preset image size threshold, and calculating, according to the weight matrix H(h_pq), the local CNN convolutional layer operation result corresponding to the currently read block image.
In step L2, the block images of the feature image to be processed read each time are different from each other.
In step L2, a block image on the feature image to be processed is read according to a preset image size threshold, and the reading method includes:
when reading the block image for the first time, reading a block image meeting the requirement of the image size threshold from the characteristic image to be processed according to a preset reading initial position;
when the block images are read again, each of the read block images contains two rows or two columns of pixels of its respective adjacent block image.
Therefore, in this embodiment, when the size of the feature image to be processed is smaller than or equal to the image size threshold, a block image that meets the requirement of the image size threshold on the feature image to be processed, which is read for the first time according to the preset image size threshold in step L2, is the feature image to be processed itself; when the size of the feature image to be processed is larger than the image size threshold, a block image which meets the requirement of the image size threshold on the feature image to be processed and is read for the first time according to the preset image size threshold in step L2 is a local image of the feature image to be processed.
In this embodiment, the size of the feature image to be processed is larger than the image size threshold, the preset image size threshold is 10 × 9 pixels, and the feature image to be processed is 15 × 12 pixels.
In this embodiment, step L2 first reads, according to the preset image size threshold, a block image F1 with a size of 10 × 9 pixels from the feature image to be processed (the reading start position may be preset as the pixel at the upper left corner of the feature image to be processed). The position of this block image F1 on the feature image to be processed (15 × 12 pixels) is shown in Fig. 2. The broken-line portion in Fig. 2 represents the 15 × 12 pixel feature image to be processed, each broken-line box represents one pixel of the feature image to be processed, and the portion framed by the black rectangular box in Fig. 2 represents the block image F1.
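As a rough software sketch of the block reads described above, the feature image can be tiled in row-major order so that consecutive block images share k-1 rows or k-1 columns; the tiling order, the generator form, and the parameter names are illustrative assumptions rather than the patent's specified control logic:

```python
import numpy as np

def read_block_images(feature, block_h, block_w, k):
    """Yield block images of at most block_h x block_w pixels such that
    neighbouring blocks overlap by k-1 rows (vertically) or k-1 columns
    (horizontally), so every k x k window lies entirely inside some block."""
    M, N = feature.shape
    step_h, step_w = block_h - (k - 1), block_w - (k - 1)
    for top in range(0, M - (k - 1), step_h):
        for left in range(0, N - (k - 1), step_w):
            yield feature[top:top + block_h, left:left + block_w]

# For a 15 x 12 feature image, a 10 x 9 size threshold and k = 3 this yields
# four overlapping block images, matching the four reads in this embodiment.
```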
In step L2, the local CNN convolutional layer operation result corresponding to the currently read block image F1 is calculated according to the weight matrix H(h_pq) as follows:
Step L21, converting the currently read block image F1 into an image matrix A(a_ij), where a_ij is the (i, j) element of the image matrix A(a_ij), with i = 0, 1, 2, …, m-1 and j = 0, 1, 2, …, n-1; the block image F1 in the present embodiment corresponds to m = 7, n = 6; specifically, this time:
the image matrix
A1 =
[ a_00  a_01  a_02  a_03  a_04  a_05 ]
[ a_10  a_11  a_12  a_13  a_14  a_15 ]
[ a_20  a_21  a_22  a_23  a_24  a_25 ]
[ a_30  a_31  a_32  a_33  a_34  a_35 ]
[ a_40  a_41  a_42  a_43  a_44  a_45 ]
[ a_50  a_51  a_52  a_53  a_54  a_55 ]
[ a_60  a_61  a_62  a_63  a_64  a_65 ]
is denoted as image matrix A1.
The image matrix A(a_ij) obtained by the conversion in step L21 is stored in a buffer; when the image matrix A(a_ij) is needed, the data can be taken directly from the cache.
Step L22, reading each element h_pq of the weight matrix H(h_pq), obtaining for each element h_pq the product matrix formed by all elements of the image matrix A(a_ij) that need to be multiplied by that element, and multiplying each element h_pq by its corresponding product matrix to obtain the local matrix corresponding to each element h_pq.
Wherein, the product matrix corresponding to the element h_pq involved in step L22 includes the following cases:
when p = 0 and q = 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by all rows and columns of the image matrix A(a_ij) that remain after removing columns n-k+1, n-k+2, n-k+3, …, n-1 and rows m-k+1, m-k+2, m-k+3, …, m-1;
when p = 0 and q ≠ 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns 0, 1, 2, …, q-1 and n-k+q+1, n-k+q+2, n-k+q+3, …, n-1 and removing rows m-k+1, m-k+2, m-k+3, …, m-1;
when p ≠ 0 and q = 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns n-k+1, n-k+2, n-k+3, …, n-1 and removing rows 0, 1, 2, …, p-1 and m-k+p+1, m-k+p+2, m-k+p+3, …, m-1;
when p ≠ 0 and q ≠ 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns 0, 1, 2, …, q-1 and n-k+q+1, n-k+q+2, n-k+q+3, …, n-1 and removing rows 0, 1, 2, …, p-1 and m-k+p+1, m-k+p+2, m-k+p+3, …, m-1.
In the present embodiment, when the element h_pq read from the weight matrix H(h_pq) is h_00, since p = 0 and q = 0, the product matrix corresponding to the element h_00 is:
the (m-k+1) × (n-k+1) matrix formed by all rows and columns of the image matrix A1 that remain after removing columns n-k+1, n-k+2, n-k+3, …, n-1 and rows m-k+1, m-k+2, m-k+3, …, m-1, i.e. the 5 × 4 matrix formed by all rows and columns of A1 that remain after removing columns 4 and 5 and rows 5 and 6, recorded as product matrix 00; specifically:
product matrix 00 =
[ a_00  a_01  a_02  a_03 ]
[ a_10  a_11  a_12  a_13 ]
[ a_20  a_21  a_22  a_23 ]
[ a_30  a_31  a_32  a_33 ]
[ a_40  a_41  a_42  a_43 ]
In the present embodiment, when the element h_pq read from the weight matrix H(h_pq) is h_01, since p = 0 and q = 1 ≠ 0, the product matrix corresponding to the element h_01 is:
the 5 × 4 matrix formed by splicing all rows and columns of the image matrix A1 that remain after removing columns 0 and 5 and rows 5 and 6, recorded as product matrix 01; specifically:
product matrix 01 =
[ a_01  a_02  a_03  a_04 ]
[ a_11  a_12  a_13  a_14 ]
[ a_21  a_22  a_23  a_24 ]
[ a_31  a_32  a_33  a_34 ]
[ a_41  a_42  a_43  a_44 ]
In the present embodiment, when the element h_pq read from the weight matrix H(h_pq) is h_10, since q = 0 and p = 1 ≠ 0, the product matrix corresponding to the element h_10 is:
the 5 × 4 matrix formed by splicing all rows and columns of the image matrix A1 that remain after removing columns 4 and 5 and rows 0 and 6, recorded as product matrix 10; specifically:
product matrix 10 =
[ a_10  a_11  a_12  a_13 ]
[ a_20  a_21  a_22  a_23 ]
[ a_30  a_31  a_32  a_33 ]
[ a_40  a_41  a_42  a_43 ]
[ a_50  a_51  a_52  a_53 ]
In the present embodiment, when the element h_pq read from the weight matrix H(h_pq) is h_11, since p = q = 1 ≠ 0, the product matrix corresponding to the element h_11 is:
the 5 × 4 matrix formed by splicing all rows and columns of the image matrix A1 that remain after removing columns 0 and 5 and rows 0 and 6, recorded as product matrix 11; specifically:
product matrix 11 =
[ a_11  a_12  a_13  a_14 ]
[ a_21  a_22  a_23  a_24 ]
[ a_31  a_32  a_33  a_34 ]
[ a_41  a_42  a_43  a_44 ]
[ a_51  a_52  a_53  a_54 ]
when the weight matrix H (H) is readpq) Other element of (a) hpqIn time, the corresponding product matrix can be obtained by referring to the above manner.
For the image matrix A1, after each element h_pq of the weight matrix H(h_pq) has been read and the product matrix corresponding to each element h_pq has been obtained, each element h_pq of the weight matrix H(h_pq) is multiplied by its corresponding product matrix to obtain the local matrix corresponding to each element h_pq.
It can be seen that the present invention takes each weight in the convolution kernel (corresponding to element h_pq) as the starting point and directly obtains the product matrix formed by all feature points in the corresponding block image that need to be multiplied by the read weight, which reduces, to a certain extent, the number of times each weight in the convolution kernel is read from external storage (DDR) and thus helps relieve storage bandwidth pressure.
Step L23, calculating the sum of the local matrices obtained in step L22, where the sum is the local CNN convolutional layer operation result corresponding to the currently read block image F1.
The sum of the local matrices is calculated by addition of the matrices.
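The per-block computation of steps L21 to L23 can thus be sketched as follows (NumPy and the function name are illustrative assumptions); its output coincides with a 'valid' sliding-window convolution of the same block, e.g. the sliding_window_conv2d sketch given earlier:

```python
import numpy as np

def block_conv_result(A, H):
    """Local CNN convolutional layer result for one block image.

    A: (m, n) image matrix of the block; H: (k, k) weight matrix.
    Sums the k*k local matrices h_pq * (product matrix of h_pq)."""
    m, n = A.shape
    k = H.shape[0]
    out_h, out_w = m - k + 1, n - k + 1
    result = np.zeros((out_h, out_w), dtype=np.result_type(A, H))
    for p in range(k):
        for q in range(k):
            product_pq = A[p:p + out_h, q:q + out_w]  # product matrix of h_pq
            result += H[p, q] * product_pq            # local matrix of h_pq
    return result
```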
Step L3, judging whether the whole feature image to be processed has been read completely; if so, continuing with step L4, otherwise repeating step L2.
In this embodiment, it is obvious that after the block image F1 is read, the entire feature image to be processed is not completely read, and it is necessary to continue reading the block images of the feature image to be processed.
In this embodiment, in step L2, after the whole feature image to be processed is completely read according to the preset image size threshold (10 × 9 pixels), the total separately read block images include, in addition to the block image F1, a block image F2, a block image F3, and a block image F4, wherein the schematic position diagrams of the block image F2, the block image F3, and the block image F4 on the whole feature image to be processed are shown in fig. 3, 4, and 5 in sequence.
For each of the block images F2, F3 and F4, the corresponding local CNN convolutional layer operation result (which is also a matrix) is obtained in the same way as for the block image F1 in step L2.
It should be noted that, before each new block image is read, the last stored block image may be cleared.
Step L4, arranging the local CNN convolutional layer operation results obtained in step L2 according to the relative position relationship between the block images, and splicing them to obtain the CNN convolutional layer operation result corresponding to the whole feature image to be processed.
For example, for the 15 × 12 pixel feature image to be processed described above, the whole feature image needs to be read four times, corresponding to 4 block images; in reading order these are the block image F1, the block image F2, the block image F3 and the block image F4, and their relative positions in the feature image to be processed are shown in Figs. 2 to 5.
The local CNN convolutional layer operation results corresponding to the block image F1, the block image F2, the block image F3 and the block image F4 are denoted in turn as matrix C1, matrix C2, matrix C3 and matrix C4; a schematic diagram of the arrangement positions of the matrix C1, the matrix C2, the matrix C3 and the matrix C4 is shown in Fig. 6. The matrix C1, the matrix C2, the matrix C3 and the matrix C4 are spliced according to their arrangement positions to form a spliced matrix, which is the CNN convolutional layer operation result corresponding to the whole feature image to be processed in this embodiment. The CNN convolutional layer operation result is then converted into an image for output, yielding the image corresponding to the whole feature image to be processed after the CNN convolutional layer operation.
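Assuming the four local results C1 to C4 are laid out as a 2 × 2 grid (C1 top-left, C2 top-right, C3 bottom-left, C4 bottom-right, which is an assumption about the Fig. 6 layout made only for illustration), the splicing step can be sketched as:

```python
import numpy as np

def splice_results(C1, C2, C3, C4):
    """Splice the four local CNN convolutional layer results into the
    operation result for the whole feature image (assumed 2 x 2 layout)."""
    return np.vstack((np.hstack((C1, C2)),
                      np.hstack((C3, C4))))
```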
The CNN convolutional layer operation method in this embodiment is implemented based on an FPGA.
In summary, the CNN convolutional layer operation method of the present invention stores the weight matrix H(h_pq) and the image matrix A(a_ij) required during the CNN convolutional layer operation in a cache, which avoids extra storage, helps reduce the cost of completing the CNN convolutional layer operation to a certain extent, reduces the number of times the data required for the CNN convolutional layer operation has to be read from external storage (DDR), and helps increase the speed of the CNN convolutional layer operation to a certain extent.
In addition, the CNN convolution layer operation method also avoids the use of a feature map two-dimensional window in the prior art, further avoids the use of a control module for generating the feature map two-dimensional window in the prior art, reduces the complexity of CNN convolution operation to a certain extent, and is convenient to implement.
FIG. 7 is a diagram of an embodiment of a CNN convolutional layer arithmetic accelerator according to the present invention.
As shown in fig. 7, the CNN convolutional layer arithmetic accelerator 100 includes:
a first data pre-reading module 101, configured to read in the convolution kernel used for performing the CNN convolutional layer operation on the feature image to be processed, and to convert the read convolution kernel into a weight matrix H(h_pq), where the convolution kernel is a k × k convolution kernel and h_pq is the (p, q) element of the weight matrix H(h_pq), with p = 0, 1, 2, …, k-1 and q = 0, 1, 2, …, k-1;
the second data pre-reading module 102 is configured to read a block image on the feature image to be processed according to a preset image size threshold;
a local operation module 103, configured to calculate, according to the weight matrix H(h_pq), the local CNN convolutional layer operation result corresponding to the block image currently read in by the second data pre-reading module 102;
the judging module 104 is used for judging whether the whole characteristic image to be processed is read completely;
a convolutional layer operation result output module 105, configured to arrange, according to the relative position relationship between the block images, the local operation results of the CNN convolutional layers obtained by the local operation module 103 when the determination module 104 determines that the entire feature image to be processed has been read, and then splice to obtain and output the CNN convolutional layer operation result corresponding to the entire feature image to be processed;
the calling module 106 is configured to call the data pre-reading module to continue executing when the judging module 104 judges that the whole feature image to be processed is not read;
wherein the local operation module 103 includes:
an image matrix converting unit 1031, configured to convert the currently read block image into an image matrix A(a_ij), where the block image is a digital image of m × n pixels and a_ij is the (i, j) element of the image matrix A(a_ij), with i = 0, 1, 2, …, m-1 and j = 0, 1, 2, …, n-1;
a local matrix acquisition unit 1032, configured to read each element h_pq of the weight matrix H(h_pq), obtain for each element h_pq the product matrix formed by all elements of the image matrix A(a_ij) that need to be multiplied by that element, and multiply each element h_pq by its corresponding product matrix to obtain the local matrix corresponding to each element h_pq;
a local operation result obtaining unit 1033, configured to calculate a sum of the obtained local matrices, where the sum is a local operation result of the CNN convolution layer corresponding to the currently read block image.
The second data pre-reading module reads different block images each time.
The second data pre-reading module 102 reads a block image on the feature image to be processed according to a preset image size threshold, and the reading method includes:
when reading the block image for the first time, reading a block image meeting the requirement of the image size threshold from the characteristic image to be processed according to a preset reading initial position;
when the block images are read again, each read block image contains k-1 rows or k-1 columns of pixels of its respective adjacent block image.
Optionally, as an embodiment of the present invention, the product matrix corresponding to the element h_pq involved in the local matrix acquisition unit 1032 includes the following cases:
when p = 0 and q = 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by all rows and columns of the image matrix A(a_ij) that remain after removing columns n-k+1, n-k+2, n-k+3, …, n-1 and rows m-k+1, m-k+2, m-k+3, …, m-1;
when p = 0 and q ≠ 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns 0, 1, 2, …, q-1 and n-k+q+1, n-k+q+2, n-k+q+3, …, n-1 and removing rows m-k+1, m-k+2, m-k+3, …, m-1;
when p ≠ 0 and q = 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns n-k+1, n-k+2, n-k+3, …, n-1 and removing rows 0, 1, 2, …, p-1 and m-k+p+1, m-k+p+2, m-k+p+3, …, m-1;
when p ≠ 0 and q ≠ 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns 0, 1, 2, …, q-1 and n-k+q+1, n-k+q+2, n-k+q+3, …, n-1 and removing rows 0, 1, 2, …, p-1 and m-k+p+1, m-k+p+2, m-k+p+3, …, m-1.
Optionally, as an embodiment of the present invention, the CNN convolutional layer arithmetic accelerator further includes a cache;
the first data pre-reading module 101 converts the weight matrix into a weight matrix H (H)pq) Storing in a cache;
the image matrix A (a) converted by the image matrix conversion unit 1031ij) Stored in a cache.
Optionally, as an embodiment of the present invention, the CNN convolutional layer arithmetic accelerator is implemented based on an FPGA.
Optionally, as an embodiment of the present invention, a block image that meets the requirement of the image size threshold on the read to-be-processed feature image is:
when the size of the part which is not read on the feature image to be processed is larger than the image size threshold, reading a local image which is equal to the image size threshold in size on the part which is not read on the feature image to be processed;
and when the size of the part which is not read on the characteristic image to be processed is smaller than or equal to the image size threshold, reading all the images which are not read on the characteristic image to be processed.
In particular implementations, the local matrix acquisition unit 1032 may employ a multiplier array to multiply each element h_pq by its corresponding product matrix to obtain the local matrix corresponding to each element h_pq; the local operation result acquisition unit 1033 may use an accumulator array to compute the sum of the resulting local matrices.
The same and similar parts in the various embodiments in this specification may be referred to each other.
In the embodiments provided in the present invention, it should be understood that the disclosed system and method can be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the modules and the units is only one logical function division, and there may be other division ways in actual implementation.
Although the present invention has been described in detail with reference to the drawings and in connection with the preferred embodiments, the present invention is not limited thereto. Various equivalent modifications or substitutions can be made to the embodiments of the present invention by those skilled in the art without departing from the spirit and scope of the present invention, and such modifications or substitutions fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A CNN convolutional layer operation method is characterized by comprising the following steps:
S1, reading in the convolution kernel used for the CNN convolutional layer operation on the feature image to be processed, and converting the read convolution kernel into a weight matrix H(h_pq), where the convolution kernel is a k × k convolution kernel and h_pq is the (p, q) element of the weight matrix H(h_pq), with p = 0, 1, 2, …, k-1 and q = 0, 1, 2, …, k-1;
S2, reading in a block image of the feature image to be processed according to a preset image size threshold, and calculating, according to the weight matrix H(h_pq), the local CNN convolutional layer operation result corresponding to the currently read block image;
S3, judging whether the whole feature image to be processed has been read completely; if so, continuing with step S4, otherwise repeating step S2;
S4, arranging the local CNN convolutional layer operation results obtained in step S2 according to the relative position relationship between the block images, and splicing them to obtain the CNN convolutional layer operation result corresponding to the whole feature image to be processed;
wherein, in step S2, the local CNN convolutional layer operation result corresponding to the currently read block image is calculated according to the weight matrix H(h_pq) as follows:
P1, converting the currently read block image into an image matrix A(a_ij), where the block image is a digital image of m × n pixels and a_ij is the (i, j) element of the image matrix A(a_ij), with i = 0, 1, 2, …, m-1 and j = 0, 1, 2, …, n-1;
P2, reading each element h_pq of the weight matrix H(h_pq), obtaining for each element h_pq the product matrix formed by all elements of the image matrix A(a_ij) that need to be multiplied by that element, and multiplying each element h_pq by its corresponding product matrix to obtain the local matrix corresponding to each element h_pq;
P3, calculating the sum of the obtained local matrices, where the sum is the local CNN convolutional layer operation result corresponding to the currently read block image;
in step S2, the block images of the feature image to be processed read each time are different from each other;
in step S2, a block image on the feature image to be processed is read according to a preset image size threshold, where the reading method includes:
when reading the block image for the first time, reading a block image meeting the requirement of the image size threshold from the characteristic image to be processed according to a preset reading initial position;
when the block images are read again, each read block image contains k-1 rows or k-1 columns of pixels of its respective adjacent block image.
2. The CNN convolutional layer operation method of claim 1, wherein the product matrix corresponding to the element h_pq involved in step P2 includes the following cases:
when p = 0 and q = 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by all rows and columns of the image matrix A(a_ij) that remain after removing columns n-k+1, n-k+2, n-k+3, …, n-1 and rows m-k+1, m-k+2, m-k+3, …, m-1;
when p = 0 and q ≠ 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns 0, 1, 2, …, q-1 and n-k+q+1, n-k+q+2, n-k+q+3, …, n-1 and removing rows m-k+1, m-k+2, m-k+3, …, m-1;
when p ≠ 0 and q = 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns n-k+1, n-k+2, n-k+3, …, n-1 and removing rows 0, 1, 2, …, p-1 and m-k+p+1, m-k+p+2, m-k+p+3, …, m-1;
when p ≠ 0 and q ≠ 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by splicing all rows and columns of the image matrix A(a_ij) that remain after removing columns 0, 1, 2, …, q-1 and n-k+q+1, n-k+q+2, n-k+q+3, …, n-1 and removing rows 0, 1, 2, …, p-1 and m-k+p+1, m-k+p+2, m-k+p+3, …, m-1.
3. The CNN convolutional layer operation method of claim 1 or 2,
the weight matrix H (H) obtained by conversion in step S1pq) Storing in a cache;
the image matrix A (a) to be converted in step P1ij) Stored in a cache.
4. The CNN convolutional layer operation method of claim 1 or 2, wherein the CNN convolutional layer operation method is implemented based on an FPGA.
5. The CNN convolutional layer operation method of claim 1 or 2, wherein in step P2 a multiplier array is used to multiply each element h_pq by its corresponding product matrix to obtain the local matrix corresponding to each element h_pq.
6. A CNN convolutional layer arithmetic accelerator, comprising:
the first data pre-reading module is used for reading in the convolution kernel used for performing the CNN convolutional layer operation on the feature image to be processed and converting the read convolution kernel into a weight matrix H(h_pq), where the convolution kernel is a k × k convolution kernel and h_pq is the (p, q) element of the weight matrix H(h_pq), with p = 0, 1, 2, …, k-1 and q = 0, 1, 2, …, k-1;
the second data pre-reading module is used for reading a block image on the characteristic image to be processed according to a preset image size threshold;
a local operation module for calculating, according to the weight matrix H(h_pq), the local CNN convolutional layer operation result corresponding to the block image currently read in by the second data pre-reading module;
the judging module is used for judging whether the whole characteristic image to be processed is read completely;
the convolutional layer operation result output module is used for arranging the local CNN convolutional layer operation results obtained by the local operation module according to the relative position relationship between the block images when the judging module judges that the whole feature image to be processed has been read, and then splicing them to obtain and output the CNN convolutional layer operation result corresponding to the whole feature image to be processed;
the calling module is used for calling the data pre-reading module to continue executing when the judging module judges that the whole characteristic image to be processed is not read;
wherein, the local operation module comprises:
an image matrix conversion unit for converting the currently read block image into an image matrix A(a_ij), where the block image is a digital image of m × n pixels and a_ij is the (i, j) element of the image matrix A(a_ij), with i = 0, 1, 2, …, m-1 and j = 0, 1, 2, …, n-1;
a local matrix acquisition unit for reading each element h_pq of the weight matrix H(h_pq), obtaining for each element h_pq the product matrix formed by all elements of the image matrix A(a_ij) that need to be multiplied by that element, and multiplying each element h_pq by its corresponding product matrix to obtain the local matrix corresponding to each element h_pq;
the local operation result acquisition unit is used for calculating the sum of all the obtained local matrixes, wherein the sum is the local operation result of the CNN convolution layer corresponding to the currently read block image;
the second data pre-reading module reads different block images each time;
the second data pre-reading module reads a block image on the characteristic image to be processed according to a preset image size threshold, and the reading method comprises the following steps:
when reading the block image for the first time, reading a block image meeting the requirement of the image size threshold from the characteristic image to be processed according to a preset reading initial position;
when the block images are read again, each read block image contains k-1 rows or k-1 columns of pixels of its respective adjacent block image.
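Because consecutive block images share k-1 rows or columns, the (rows-k+1) × (cols-k+1) local results of neighbouring blocks abut exactly and can be spliced into the full-image result without gaps or double counting. The NumPy sketch below illustrates this tiling and splicing under the assumption that the image size threshold is at least k in each dimension; the function names are illustrative, and it is not the claimed FPGA implementation:

    import numpy as np

    def read_blocks(image, m, n, k):
        """Yield (row, col, block) tiles of at most m x n pixels; consecutive
        tiles re-read k-1 rows / columns of their neighbour, so every k x k
        window of the full image falls entirely inside some block."""
        R, C = image.shape
        r = 0
        while True:
            rows = min(m, R - r)
            c = 0
            while True:
                cols = min(n, C - c)
                yield r, c, image[r:r + rows, c:c + cols]
                if c + cols >= C:
                    break
                c += cols - (k - 1)      # step back k-1 columns (overlap)
            if r + rows >= R:
                break
            r += rows - (k - 1)          # step back k-1 rows (overlap)

    def local_result(block, H):
        """Per-block 'valid' result via the per-element product matrices."""
        m, n = block.shape
        k = H.shape[0]
        out = np.zeros((m - k + 1, n - k + 1))
        for p in range(k):
            for q in range(k):
                out += H[p, q] * block[p:p + m - k + 1, q:q + n - k + 1]
        return out

    def conv_by_blocks(image, H, m, n):
        """Splice the local results according to the blocks' relative positions."""
        k = H.shape[0]
        R, C = image.shape
        out = np.zeros((R - k + 1, C - k + 1))
        for r, c, block in read_blocks(image, m, n, k):
            br, bc = block.shape
            out[r:r + br - k + 1, c:c + bc - k + 1] = local_result(block, H)
        return out

    # Check block-wise splicing against processing the whole image at once.
    img = np.random.rand(10, 13)
    H = np.random.rand(3, 3)
    assert np.allclose(conv_by_blocks(img, H, 6, 7), local_result(img, H))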
7. The CNN convolutional layer arithmetic accelerator of claim 6, wherein in the local matrix acquisition unit the product matrix corresponding to the element h_pq includes the following cases:
when p = 0 and q = 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by the rows and columns of the image matrix A(a_ij) that remain after removing columns n-k+1, n-k+2, n-k+3, …, n-1 and rows m-k+1, m-k+2, m-k+3, …, m-1;
when p = 0 and q ≠ 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by the rows and columns of the image matrix A(a_ij) that remain after removing columns 0, 1, 2, …, q-1 and columns n-k+q+1, n-k+q+2, …, n-1, and removing rows m-k+1, m-k+2, m-k+3, …, m-1;
when p ≠ 0 and q = 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by the rows and columns of the image matrix A(a_ij) that remain after removing columns n-k+1, n-k+2, n-k+3, …, n-1, and removing rows 0, 1, 2, …, p-1 and rows m-k+p+1, m-k+p+2, …, m-1;
when p ≠ 0 and q ≠ 0, the product matrix corresponding to the element h_pq is the (m-k+1) × (n-k+1) matrix formed by the rows and columns of the image matrix A(a_ij) that remain after removing columns 0, 1, 2, …, q-1 and columns n-k+q+1, n-k+q+2, …, n-1, and removing rows 0, 1, 2, …, p-1 and rows m-k+p+1, m-k+p+2, …, m-1.
8. The CNN convolutional layer arithmetic accelerator of claim 6 or 7, further comprising a cache, wherein:
the weight matrix H(h_pq) obtained by the conversion of the first data pre-reading module is stored in the cache; and
the image matrix A(a_ij) obtained by the conversion of the image matrix conversion unit is stored in the cache.
9. The CNN convolutional layer arithmetic accelerator of claim 6 or 7, wherein the CNN convolutional layer arithmetic accelerator is implemented based on an FPGA.
10. The CNN convolutional layer arithmetic accelerator of claim 6 or 7, wherein the local matrix acquisition unit uses a multiplier array to multiply each element h_pq by its corresponding product matrix, so as to obtain the local matrix corresponding to each element h_pq.
CN202010791455.5A 2020-08-07 2020-08-07 CNN convolutional layer operation method and CNN convolutional layer operation accelerator Active CN111967582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010791455.5A CN111967582B (en) 2020-08-07 2020-08-07 CNN convolutional layer operation method and CNN convolutional layer operation accelerator

Publications (2)

Publication Number Publication Date
CN111967582A true CN111967582A (en) 2020-11-20
CN111967582B CN111967582B (en) 2022-07-08

Family

ID=73365849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010791455.5A Active CN111967582B (en) 2020-08-07 2020-08-07 CNN convolutional layer operation method and CNN convolutional layer operation accelerator

Country Status (1)

Country Link
CN (1) CN111967582B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950656A (en) * 2021-03-09 2021-06-11 北京工业大学 Block convolution method for pre-reading data according to channel based on FPGA platform
CN112966729A (en) * 2021-02-26 2021-06-15 成都商汤科技有限公司 Data processing method and device, computer equipment and storage medium
CN115860080A (en) * 2023-02-15 2023-03-28 苏州浪潮智能科技有限公司 Computing core, accelerator, computing method, device, equipment, medium and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108549931A (en) * 2018-04-25 2018-09-18 济南浪潮高新科技投资发展有限公司 A kind of accelerator and method of convolutional neural networks
CN110321996A (en) * 2018-03-28 2019-10-11 华为技术有限公司 A kind of method and apparatus of the image procossing based on convolutional neural networks
CN110399591A (en) * 2019-06-28 2019-11-01 苏州浪潮智能科技有限公司 Data processing method and device based on convolutional neural networks
US20200118638A1 (en) * 2018-10-11 2020-04-16 International Business Machines Corporation Kernel sets normalization with capacitor charge sharing

Also Published As

Publication number Publication date
CN111967582B (en) 2022-07-08

Similar Documents

Publication Publication Date Title
CN111967582B (en) CNN convolutional layer operation method and CNN convolutional layer operation accelerator
WO2019184657A1 (en) Image recognition method, apparatus, electronic device and storage medium
WO2019192316A1 (en) Image related processing method and apparatus, device and storage medium
US20190303731A1 (en) Target detection method and device, computing device and readable storage medium
EP3855367A1 (en) Operation accelerator, processing method, and related device
CN107633297B (en) Convolutional neural network hardware accelerator based on parallel fast FIR filter algorithm
WO2020258491A1 (en) Universal character recognition method, apparatus, computer device, and storage medium
WO2022037257A1 (en) Convolution calculation engine, artificial intelligence chip, and data processing method
WO2022042124A1 (en) Super-resolution image reconstruction method and apparatus, computer device, and storage medium
CN105761233A (en) FPGA-based real-time panoramic image mosaic method
WO2018113224A1 (en) Picture reduction method and device
CN110428382A (en) A kind of efficient video Enhancement Method, device and storage medium for mobile terminal
CN112686377B (en) Method and device for carrying out deconvolution processing on feature data by utilizing convolution hardware
CN113962861A (en) Image reconstruction method and device, electronic equipment and computer readable medium
CN116434039B (en) Target detection method based on multiscale split attention mechanism
US20220343468A1 (en) Image processing method, electronic device and computer-readable storage medium
CN111667401A (en) Multi-level gradient image style migration method and system
CN105160622A (en) Field programmable gate array (FPGA) based implementation method for image super resolution
CN116012588A (en) Novel feature up-sampling method for semantic segmentation
US20220130012A1 (en) Demosaicing method and demosaicing device
CN110399881B (en) End-to-end quality enhancement method and device based on binocular stereo image
US20230252600A1 (en) Image size adjustment structure, adjustment method, and image scaling method and device based on streaming architecture
Rubio-Ibáñez et al. Efficient VHDL Implementation of an Upscaling Function for Real Time Video Applications
JP7418517B2 (en) Text recognition methods, devices, electronic devices, storage media and computer programs
JP4156538B2 (en) Matrix operation unit

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant