CN107679619B - Construction method and device of convolution-like artificial neural network - Google Patents
- Publication number
- CN107679619B (application CN201710952743.2A)
- Authority
- CN
- China
- Prior art keywords
- convolution
- layer
- neural network
- artificial neural
- channel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the field of data processing, and in particular to a construction method and a device for a convolution-like artificial neural network. The disclosed construction method comprises the following steps: performing a single class-convolution-layer operation of the convolution-like artificial neural network on a plurality of channel feature input images; and using the output of the previous class convolution layer as the input of the next class convolution layer. The disclosed construction device comprises: an operation module for performing the single class-convolution-layer operation of the convolution-like artificial neural network on the plurality of channel feature input images; and an output-input module for using the output of the previous class convolution layer as the input of the next class convolution layer. The method and device can shorten the training time of the convolution-based artificial neural network and reduce its application energy consumption.
Description
Technical Field
The invention relates to the field of data processing, in particular to a construction method and a device of a convolution-like artificial neural network.
Background
Artificial neural networks, particularly convolutional artificial neural networks, are currently in large-scale use in the field of data processing. However, the time consumed in training a convolutional artificial neural network is considerable, which in turn significantly increases the energy consumption of artificial neural network applications. Reducing the application energy consumption of convolution-based artificial neural networks in an effective way is therefore of undoubted significance for efficient large-scale data processing. In the convolutional artificial neural networks widely used at present, when the feature images input on a plurality of channels are convolved, the convolution results of the different channels are directly accumulated.
Disclosure of Invention
The invention aims to overcome the defect of excessive training time of the convolutional artificial neural network, and provides a construction method and a device for a convolution-like artificial neural network, which can shorten the training time of the convolution-based artificial neural network and reduce its application energy consumption.
In order to achieve the purpose, the invention adopts the following technical scheme:
a construction method of a convolution-like artificial neural network comprises the following steps:
step 1: carrying out single class convolution layer operation of a class convolution artificial neural network on the plurality of channel characteristic input images;
step 2: the output of the convolutional layer of the previous type is used as the input of the convolutional layer of the next type.
Preferably, the step 1 comprises:
step 1.1: calculating a two-dimensional convolution result of the plurality of channel feature input maps and the convolution kernel:
for a single class convolution layer L = {k_j}, j ∈ {1, ..., n}, the two-dimensional convolution results of the plurality of channel feature input images and the convolution kernels are calculated: standard two-dimensional convolution is adopted to compute the convolution result Conv2d(I_i, k_j) of each channel feature input image in I = {I_i}, i ∈ {1, ..., m}, with each convolution kernel k_j;
Step 1.2: weighting the two-dimensional convolution results, and taking the weighted results as the class-convolution output images of this layer:
the two-dimensional convolution results of the plurality of channels are weighted with the weight parameters Λ = {λ_i} to obtain the output convolution images O = {O_j}, which serve as the output maps of this class convolution layer; O_j is calculated as follows:
O_j = Σ_{i=1}^{m} λ_i · Conv2d(k_j, I_i)
wherein O_j is the result of applying the jth convolution kernel to the multi-channel input images; i indexes the channels; m is the number of channels; λ_i is the weight corresponding to channel i; k_j is the jth convolution kernel; I_i is the feature input image of the ith channel; and Conv2d(k_j, I_i) denotes the two-dimensional convolution operation of the jth convolution kernel with the feature input image of the ith channel.
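As a concrete illustration (not part of the patent text), the single class-convolution-layer operation described above can be sketched in Python with NumPy and SciPy; the function name and the use of 'valid' convolution mode are assumptions made for this example:

```python
import numpy as np
from scipy.signal import convolve2d

def class_conv_layer(inputs, kernels, lam):
    """Sketch of one class convolution layer (illustrative, not the patented code).

    inputs : list of m 2-D channel feature maps I_i
    kernels: list of n 2-D convolution kernels k_j
    lam    : length-m channel weights lambda_i
    Returns the n output maps O_j = sum_i lambda_i * Conv2d(k_j, I_i).
    """
    outputs = []
    for k in kernels:                              # one output map per kernel
        acc = sum(l * convolve2d(I, k, mode="valid")   # per-channel 2-D convolution
                  for l, I in zip(lam, inputs))        # weighted by lambda_i
        outputs.append(acc)
    return outputs
```

With λ_i fixed to 1 this reduces to the standard multi-channel convolution that sums channel results directly.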
Preferably, after the step 2, the method further comprises the following steps:
constructing a multilayer convolution-like artificial neural network, wherein the multilayer convolution-like artificial neural network comprises: an input layer, class convolution layers, a fully connected layer and a classifier layer; the construction comprises: connecting the input layer, the class convolution layers and the fully connected layer in series, and constructing a classifier.
A construction device of a convolution-like artificial neural network comprises:
an operation module, used for performing a single class-convolution-layer operation of the convolution-like artificial neural network on the plurality of channel feature input images;
and the output-input module is used for taking the output of the convolution layer of the previous type as the input of the convolution layer of the next type.
Preferably, the method further comprises the following steps:
a building module, configured to build a multilayer convolution-like artificial neural network, the multilayer convolution-like artificial neural network including: an input layer, class convolution layers, a fully connected layer and a classifier layer; the construction comprises: connecting the input layer, the class convolution layers and the fully connected layer in series, and constructing a classifier.
Preferably, the operation module comprises:
a calculation module for calculating a two-dimensional convolution result of the plurality of channel feature input images and the convolution kernel:
for a single class convolution layer L = {k_j}, j ∈ {1, ..., n}, the two-dimensional convolution results of the plurality of channel feature input images and the convolution kernels are calculated: standard two-dimensional convolution is adopted to compute the convolution result Conv2d(I_i, k_j) of each channel feature input image in I = {I_i}, i ∈ {1, ..., m}, with each convolution kernel k_j;
and a weighting module, used for weighting the two-dimensional convolution results and taking the weighted results as the class-convolution output images of this layer:
the two-dimensional convolution results of the plurality of channels are weighted with the weight parameters Λ = {λ_i} to obtain the output convolution images O = {O_j}, which serve as the output maps of this class convolution layer; O_j is calculated as follows:
O_j = Σ_{i=1}^{m} λ_i · Conv2d(k_j, I_i)
wherein O_j is the result of applying the jth convolution kernel to the multi-channel input images; i indexes the channels; m is the number of channels; λ_i is the weight corresponding to channel i; k_j is the jth convolution kernel; I_i is the feature input image of the ith channel; and Conv2d(k_j, I_i) denotes the two-dimensional convolution operation of the jth convolution kernel with the feature input image of the ith channel.
Compared with the prior art, the invention has the following beneficial effects:
the method calculates the two-dimensional convolution results of the plurality of channel feature input images and the convolution kernels, weights the two-dimensional convolution results, takes the weighted results as the class-convolution output images of the layer, and uses the output of the previous class convolution layer as the input of the next class convolution layer. By redesigning the multi-channel convolution operation in the convolutional artificial neural network, the invention provides a construction method for a convolution-like artificial neural network which shortens the training time of the convolution-based artificial neural network without reducing (and in some cases slightly improving) its performance, thereby reducing the application energy consumption of the convolution-based artificial neural network.
Drawings
Fig. 1 is a basic flow diagram of a method for constructing a convolutional-like artificial neural network according to the present invention.
FIG. 2 is a second schematic flow chart of a method for constructing a convolutional-like artificial neural network according to the present invention.
FIG. 3 is a schematic diagram of a single convolutional layer-like operation process of a convolutional-like artificial neural network according to the method for constructing a convolutional-like artificial neural network of the present invention.
FIG. 4 is a structural diagram of a complete multi-layer convolution-like artificial neural network according to the method for constructing a convolution-like artificial neural network of the present invention.
FIG. 5 is a schematic structural diagram of a convolutional artificial neural network-like constructing apparatus according to the present invention.
FIG. 6 is a second schematic structural diagram of the convolution-like artificial neural network construction apparatus according to the present invention.
Detailed Description
The invention is further illustrated by the following examples in conjunction with the accompanying drawings:
the first embodiment is as follows:
as shown in fig. 1, the method for constructing a convolutional-like artificial neural network of the present invention comprises the following steps:
step S11: and carrying out single convolution-like layer operation of the convolution-like artificial neural network on the plurality of channel characteristic input images.
Step S12: the output of the convolutional layer of the previous type is used as the input of the convolutional layer of the next type.
Example two:
as shown in figs. 2-4, another construction method of a convolution-like artificial neural network of the present invention comprises the following steps:
step S21: performing a single convolutional-like layer operation of a convolutional-like artificial neural network on the plurality of channel feature input images:
Step S211: for a single class convolution layer L = {k_j}, j ∈ {1, ..., n}, the two-dimensional convolution results of the plurality of channel feature input images and the convolution kernels are calculated: standard two-dimensional convolution is adopted to compute the convolution result Conv2d(I_i, k_j) of each channel feature input image in I = {I_i}, i ∈ {1, ..., m}, with each convolution kernel k_j;
Step S212: the two-dimensional convolution results of the plurality of channels are weighted with the weight parameters Λ = {λ_i} to obtain the output convolution images O = {O_j}, which serve as the output maps of this class convolution layer; O_j is calculated as follows:
O_j = Σ_{i=1}^{m} λ_i · Conv2d(k_j, I_i)
wherein O_j is the result of applying the jth convolution kernel to the multi-channel input images; i indexes the channels; m is the number of channels; λ_i is the weight corresponding to channel i; k_j is the jth convolution kernel; I_i is the feature input image of the ith channel; and Conv2d(k_j, I_i) denotes the two-dimensional convolution operation of the jth convolution kernel with the feature input image of the ith channel;
as an implementable manner, the weight parameters Λ are obtained by training with the standard back-propagation algorithm, and stochastic gradient descent is adopted as the optimization algorithm for Λ.
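Because O_j is linear in λ_i, the gradient of a loss with respect to λ_i is simply the inner product of the upstream gradient dL/dO_j with the per-channel convolution Conv2d(k_j, I_i) already computed in the forward pass. A hedged sketch of one stochastic-gradient-descent update of the channel weights for a single kernel follows; the function name, learning rate, and 'valid' convolution mode are assumptions for illustration:

```python
import numpy as np
from scipy.signal import convolve2d

def sgd_step_lambda(inputs, kernel, lam, grad_out, lr=0.01):
    """One illustrative SGD update of the channel weights lambda for one kernel.

    Since O = sum_i lambda_i * C_i with C_i = Conv2d(kernel, I_i),
    dL/dlambda_i = <dL/dO, C_i>, so the forward-pass per-channel
    convolutions can be reused when computing the update.
    """
    C = [convolve2d(I, kernel, mode="valid") for I in inputs]  # C_i per channel
    grads = np.array([np.sum(grad_out * Ci) for Ci in C])      # dL/dlambda_i
    return lam - lr * grads                                    # SGD step
```

The kernel parameters themselves are trained by ordinary back-propagation, exactly as in a standard convolutional network.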
Step S22: the output of the convolutional layer of the previous type is used as the input of the convolutional layer of the next type.
Step S23: constructing a multilayer convolution-like artificial neural network, wherein the multilayer convolution-like artificial neural network comprises: an input layer, a class convolution layer, a full link layer and a classifier layer; the method comprises the following steps: serially connecting the input layer, the class convolution layer and the full connection layer, and making a classifier; wherein the input layer, the full connection layer and the classifier layer are consistent with the construction method of the convolution artificial neural network.
As shown in FIG. 3, the input to the class convolution layer is a multi-channel 2-dimensional map containing m channels, the class convolution layer containing n convolution kernels. For each convolution kernel, firstly calculating the convolution kernel and standard 2-dimensional convolution results input by m channels, and then carrying out linear weighting on the results of the m channels by using corresponding parameters to obtain an output image corresponding to the convolution kernel.
As shown in FIG. 4, the input to the network is a multi-channel image I as input to a convolutional-like layer of layer 0, and the Lth0Output of layer as Lth1And inputting the layers. And so on, Ll(L1, 2.,) the operation result of the convolution-like layer of the layer is used as the lth network layerl+1And inputting the layers. The input-to-output mapping calculation process for each convolution layer is as shown in FIG. 3Shown in the figure. And finally, establishing a classifier through serial connection of the multilayer type convolution layer, the pooling layer and the full-connection layer, thereby constructing and obtaining the complete multilayer type convolution artificial neural network. The input layer, the class convolution layer, the full link layer, the classifier layer and the possible pooling layer of the network are connected in the same way as the convolutional artificial neural network, and are shown as crossed dotted lines in the figure.
It is worth noting that a standard convolutional neural network does not account for differences in the importance of the different channels of a convolutional layer, so λ_i is fixed to 1 in its computation. In the present invention, λ_i is a trainable parameter of the convolution-like artificial neural network, used to simulate the differences between different pathways in a biological neural network. Mathematically, introducing the parameter λ_i can be seen as decoupling the convolution kernel parameters and the channel weight parameters in the class convolution layer.
As an implementable mode, experiments are carried out on a handwritten character data set MNIST and other more general data sets, and results show that when the handwritten character data set MNIST and the other more general data sets have the same network structure conditions and the number of convolution layers is 2, the time of the convolution-like method is shortened by 10% -12% compared with that of a standard convolution neural network training time, and the network classification accuracy is improved by 0.3% at most. Meanwhile, when the number of the similar convolution layers in the network is increased, the training time of the whole similar convolution network parameters is further shortened, and the construction method of the similar convolution artificial neural network has strong universality.
Example three:
as shown in fig. 5, the convolutional artificial neural network-like construction apparatus of the present invention includes:
an operation module 31, configured to perform a single class-convolution-layer operation of the convolution-like artificial neural network on the plurality of channel feature input images.
And an output-input module 32, configured to use an output of the previous convolutional layer as an input of the next convolutional layer.
Example four:
as shown in fig. 6, another kind of convolutional artificial neural network construction device of the present invention includes:
an operation module 41, configured to perform a single class-convolution-layer operation of the convolution-like artificial neural network on the plurality of channel feature input images.
And an output-input module 42, configured to use an output of the previous convolutional layer as an input of the next convolutional layer.
A building module 43, configured to build a multilayer convolution-like artificial neural network, where the multilayer convolution-like artificial neural network includes: an input layer, a class convolution layer, a full link layer and a classifier layer; the method comprises the following steps: the input layer, the class convolution layer and the full link layer are connected in series, and a classifier is formulated.
The operation module 41 includes:
a calculating module 411, configured to calculate a two-dimensional convolution result of the multiple channel feature input images and the convolution kernel:
for a single class convolution layer L = {k_j}, j ∈ {1, ..., n}, the two-dimensional convolution results of the plurality of channel feature input images and the convolution kernels are calculated: standard two-dimensional convolution is adopted to compute the convolution result Conv2d(I_i, k_j) of each channel feature input image in I = {I_i}, i ∈ {1, ..., m}, with each convolution kernel k_j;
a weighting module 412, configured to weight the two-dimensional convolution results and take the weighted results as the class-convolution output images of this layer:
the two-dimensional convolution results of the plurality of channels are weighted with the weight parameters Λ = {λ_i} to obtain the output convolution images O = {O_j}, which serve as the output maps of this class convolution layer; O_j is calculated as follows:
O_j = Σ_{i=1}^{m} λ_i · Conv2d(k_j, I_i)
wherein O_j is the result of applying the jth convolution kernel to the multi-channel input images; i indexes the channels; m is the number of channels; λ_i is the weight corresponding to channel i; k_j is the jth convolution kernel; I_i is the feature input image of the ith channel; and Conv2d(k_j, I_i) denotes the two-dimensional convolution operation of the jth convolution kernel with the feature input image of the ith channel.
The above describes only preferred embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements shall also fall within the protection scope of the present invention.
Claims (4)
1. A construction method of a convolution-like artificial neural network is characterized by comprising the following steps:
step 1: carrying out single class convolution layer operation of a class convolution artificial neural network on the plurality of channel characteristic input images;
the step 1 comprises the following steps:
step 1.1: calculating a two-dimensional convolution result of the plurality of channel feature input maps and the convolution kernel:
for a single class convolution layer L = {k_j}, j ∈ {1, ..., n}, the two-dimensional convolution results of the plurality of channel feature input images and the convolution kernels are calculated: standard two-dimensional convolution is adopted to compute the convolution result Conv2d(I_i, k_j) of each channel feature input image in I = {I_i}, i ∈ {1, ..., m}, with each convolution kernel k_j;
Step 1.2: weighting the two-dimensional convolution result, and taking the weighted result as a local layer type convolution output image:
using a weighting parameter Λ ═ λ for two-dimensional convolution results for multiple channelsiWeighting to obtain an output convolution image O ═ OjAs an output map of the convolutional layer of this type, OjThe calculation formula of (a) is as follows:
wherein, OjApplying the jth convolution kernel to the convolution result of the multi-channel input image, i indicating the channel, m indicating the number of channels, λiIs the weight, k, corresponding to channel ijIs the jth convolution kernel, IiThe feature input image for the ith channel, Conv2d (k)j,Ii) Representing a 2-dimensional convolution operation on the feature input image of the jth convolution kernel and the ith channel;
step 2: taking the output of the convolution layer of the previous type as the input of the convolution layer of the next type;
experiments were carried out on the handwritten character data set MNIST; under the same network structure conditions with two class convolution layers, the training time of the method is shortened by 10% to 12% compared with that of a standard convolutional neural network.
2. The method for constructing a convolutional-like artificial neural network as claimed in claim 1, further comprising:
constructing a multilayer convolution-like artificial neural network, wherein the multilayer convolution-like artificial neural network comprises: an input layer, a class convolution layer, a full link layer and a classifier layer; the method comprises the following steps: the input layer, the class convolution layer and the full link layer are connected in series, and a classifier is formulated.
3. A construction device of a convolution-like artificial neural network based on the construction method of any one of claims 1 to 2, comprising:
an operation module, used for performing a single class-convolution-layer operation of the convolution-like artificial neural network on the plurality of channel feature input images;
the operation module comprises:
a calculation module for calculating a two-dimensional convolution result of the plurality of channel feature input images and the convolution kernel:
for a single class convolution layer L = {k_j}, j ∈ {1, ..., n}, the two-dimensional convolution results of the plurality of channel feature input images and the convolution kernels are calculated: standard two-dimensional convolution is adopted to compute the convolution result Conv2d(I_i, k_j) of each channel feature input image in I = {I_i}, i ∈ {1, ..., m}, with each convolution kernel k_j;
and a weighting module, used for weighting the two-dimensional convolution results and taking the weighted results as the class-convolution output images of this layer:
the two-dimensional convolution results of the plurality of channels are weighted with the weight parameters Λ = {λ_i} to obtain the output convolution images O = {O_j}, which serve as the output maps of this class convolution layer; O_j is calculated as follows:
O_j = Σ_{i=1}^{m} λ_i · Conv2d(k_j, I_i)
wherein O_j is the result of applying the jth convolution kernel to the multi-channel input images; i indexes the channels; m is the number of channels; λ_i is the weight corresponding to channel i; k_j is the jth convolution kernel; I_i is the feature input image of the ith channel; and Conv2d(k_j, I_i) denotes the two-dimensional convolution operation of the jth convolution kernel with the feature input image of the ith channel;
and the output-input module is used for taking the output of the convolution layer of the previous type as the input of the convolution layer of the next type.
4. The convolutional artificial neural network-like construction apparatus as claimed in claim 3, further comprising:
a building module configured to build a multilayer convolution-like artificial neural network, the multilayer convolution-like artificial neural network including: an input layer, a class convolution layer, a full link layer and a classifier layer; the method comprises the following steps: the input layer, the class convolution layer and the full link layer are connected in series, and a classifier is formulated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710952743.2A CN107679619B (en) | 2017-10-13 | 2017-10-13 | Construction method and device of convolution-like artificial neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710952743.2A CN107679619B (en) | 2017-10-13 | 2017-10-13 | Construction method and device of convolution-like artificial neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107679619A CN107679619A (en) | 2018-02-09 |
CN107679619B true CN107679619B (en) | 2020-04-24 |
Family
ID=61140838
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710952743.2A Active CN107679619B (en) | 2017-10-13 | 2017-10-13 | Construction method and device of convolution-like artificial neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107679619B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109063824B (en) * | 2018-07-25 | 2023-04-07 | 深圳市中悦科技有限公司 | Deep three-dimensional convolutional neural network creation method and device, storage medium and processor |
CN110363151B (en) * | 2019-07-16 | 2023-04-18 | 中国人民解放军海军航空大学 | Radar target detection method based on controllable false alarm of two-channel convolutional neural network |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104281858A (en) * | 2014-09-15 | 2015-01-14 | 中安消技术有限公司 | Three-dimensional convolutional neutral network training method and video anomalous event detection method and device |
CN104809426A (en) * | 2014-01-27 | 2015-07-29 | 日本电气株式会社 | Convolutional neural network training method and target identification method and device |
CN106462802A (en) * | 2014-11-14 | 2017-02-22 | 谷歌公司 | Generating natural language descriptions of images |
CN106503799A (en) * | 2016-10-11 | 2017-03-15 | 天津大学 | Deep learning model and the application in brain status monitoring based on multiple dimensioned network |
CN106709532A (en) * | 2017-01-25 | 2017-05-24 | 京东方科技集团股份有限公司 | Image processing method and device |
CN106874956A (en) * | 2017-02-27 | 2017-06-20 | 陕西师范大学 | The construction method of image classification convolutional neural networks structure |
US9740966B1 (en) * | 2016-02-05 | 2017-08-22 | International Business Machines Corporation | Tagging similar images using neural network
- 2017-10-13: application CN201710952743.2A filed in China, granted as patent CN107679619B (active)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104809426A (en) * | 2014-01-27 | 2015-07-29 | 日本电气株式会社 | Convolutional neural network training method and target identification method and device |
CN104281858A (en) * | 2014-09-15 | 2015-01-14 | 中安消技术有限公司 | Three-dimensional convolutional neutral network training method and video anomalous event detection method and device |
CN106462802A (en) * | 2014-11-14 | 2017-02-22 | 谷歌公司 | Generating natural language descriptions of images |
US9740966B1 (en) * | 2016-02-05 | 2017-08-22 | International Business Machines Corporation | Tagging similar images using neural network |
CN106503799A (en) * | 2016-10-11 | 2017-03-15 | 天津大学 | Deep learning model and the application in brain status monitoring based on multiple dimensioned network |
CN106709532A (en) * | 2017-01-25 | 2017-05-24 | 京东方科技集团股份有限公司 | Image processing method and device |
CN106874956A (en) * | 2017-02-27 | 2017-06-20 | 陕西师范大学 | The construction method of image classification convolutional neural networks structure |
Non-Patent Citations (1)
Title |
---|
"一种易于初始化的类卷积神经网络视觉跟踪算法" (A convolution-like neural network visual tracking algorithm that is easy to initialize); 李寰宇 (Li Huanyu) et al.; 《电子与信息学报》 (Journal of Electronics & Information Technology); 2016-01-31; pp. 1-7 *
Also Published As
Publication number | Publication date |
---|---|
CN107679619A (en) | 2018-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111684473B (en) | Improving performance of neural network arrays | |
JP6714690B2 (en) | Information processing system, method of operating information processing system, and machine learning computing unit | |
US10445638B1 (en) | Restructuring a multi-dimensional array | |
CN107403141B (en) | Face detection method and device, computer readable storage medium and equipment | |
CN110059710B (en) | Apparatus and method for image classification using convolutional neural network | |
CN107229598B (en) | Low-power-consumption voltage-adjustable convolution operation module for convolution neural network | |
CN108416327B (en) | Target detection method and device, computer equipment and readable storage medium | |
CN107358575A (en) | A kind of single image super resolution ratio reconstruction method based on depth residual error network | |
WO2021018163A1 (en) | Neural network search method and apparatus | |
CN109271990A (en) | A kind of semantic segmentation method and device for RGB-D image | |
CN107506828A (en) | Computing device and method | |
CN109191455A (en) | A kind of field crop pest and disease disasters detection method based on SSD convolutional network | |
CN107229942A (en) | A kind of convolutional neural networks rapid classification method based on multiple graders | |
CN110263925A (en) | A kind of hardware-accelerated realization framework of the convolutional neural networks forward prediction based on FPGA | |
CN112163601B (en) | Image classification method, system, computer device and storage medium | |
CN106203625A (en) | A kind of deep-neural-network training method based on multiple pre-training | |
CN110689118A (en) | Improved target detection method based on YOLO V3-tiny | |
Xia et al. | Fully dynamic inference with deep neural networks | |
CN107103285A (en) | Face depth prediction approach based on convolutional neural networks | |
CN110163358A (en) | A kind of computing device and method | |
US10733498B1 (en) | Parametric mathematical function approximation in integrated circuits | |
CN107679619B (en) | Construction method and device of convolution-like artificial neural network | |
CN114004847A (en) | Medical image segmentation method based on graph reversible neural network | |
CN108647184A (en) | A kind of Dynamic High-accuracy bit convolution multiplication Fast implementation | |
CN108171328A (en) | A kind of convolution algorithm method and the neural network processor based on this method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||