JP2020027399A5 - Google Patents
- Publication number
- JP2020027399A5
- Authority
- JP
- Japan
- Prior art keywords
- computer system
- pooling
- subsystems
- layer
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Claims (11)
1. A computer system that executes a convolutional neural network on a graph, the computer system comprising:
one or more processors; and
one or more storage devices,
wherein the convolutional neural network on the graph includes one or more convolutional layers and one or more pooling layers,
the one or more storage devices store kernel weight data of the one or more convolutional layers, and
the one or more processors:
in each convolutional layer, update the value of each node by a convolution operation based on a kernel whose size is a predetermined number of hops; and
in each pooling layer, update the value of each node by a pooling process based on the value of that node and the values of the nodes within a pooling range of a predetermined number of hops from that node,
wherein the kernel size of a convolutional layer following a pooling layer is larger than the kernel size of the convolutional layer preceding that pooling layer.
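As an illustration of the hop-based graph convolution in claim 1, the following sketch updates each node from its k-hop neighborhood with one scalar weight per hop distance. The toy graph, the per-hop scalar kernel, and all names are our own simplifications, not the patented implementation:

```python
import numpy as np

def hops_from(adj, node, k):
    """BFS: map every node within k hops of `node` to its hop distance."""
    dist = {node: 0}
    frontier = {node}
    for h in range(1, k + 1):
        frontier = {m for n in frontier for m in adj[n]} - dist.keys()
        for m in frontier:
            dist[m] = h
    return dist

def graph_conv(values, adj, kernel, k):
    """One convolutional-layer update: each node's new value is the sum of
    neighborhood values weighted by hop distance (kernel[h] for hop h)."""
    new = np.zeros_like(values)
    for n in range(len(values)):
        for m, h in hops_from(adj, n, k).items():
            new[n] += kernel[h] * values[m]
    return new

# Toy 4-node path graph: 0 - 1 - 2 - 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
values = np.array([1.0, 2.0, 3.0, 4.0])
kernel = np.array([0.5, 0.25])   # weight for hop 0 and hop 1
print(graph_conv(values, adj, kernel, k=1))
```

A larger `k` (a bigger kernel, as required after each pooling layer) simply widens the BFS neighborhood and adds one weight per extra hop.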
2. The computer system according to claim 1, wherein the pooling range of a pooling layer following a convolutional layer is wider than the pooling range of the pooling layer preceding that convolutional layer.
3. The computer system according to claim 1, wherein the one or more processors perform the pooling process over the pooling range of the predetermined number of hops by repeating a one-hop pooling process a number of times equal to the predetermined number of hops.
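The decomposition in claim 3 is easiest to see with max pooling, where k repeated one-hop poolings reproduce a k-hop pooling exactly. A minimal sketch (the toy graph and the choice of max pooling are ours, for illustration only):

```python
import numpy as np

def one_hop_max_pool(values, adj):
    """Replace each node's value with the max over itself and its 1-hop neighbors."""
    return np.array([max([values[n]] + [values[m] for m in adj[n]])
                     for n in range(len(values))])

def k_hop_pool(values, adj, k):
    """k-hop pooling realized as k successive 1-hop poolings (claim 3)."""
    for _ in range(k):
        values = one_hop_max_pool(values, adj)
    return values

# Path graph 0 - 1 - 2 - 3: the peak at node 3 reaches node 1 after 2 rounds
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(k_hop_pool(np.array([0., 0., 0., 9.]), adj, k=2))  # → [0. 9. 9. 9.]
```

Each repetition only needs one hop of communication, which matches the neighbor-only communication of the subsystems in the later claims.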
4. The computer system according to claim 1, wherein, in backpropagation training of the convolutional neural network on the graph, the one or more processors include a regularization term for the one or more convolutional layers in the error function.
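The regularized error function of claim 4 can be sketched as follows. The choice of squared error, an L2 penalty, and the coefficient `lam` are all illustrative assumptions; the claim only requires that the error function include a regularization term for the convolutional layers:

```python
import numpy as np

def regularized_error(predictions, targets, kernels, lam=1e-3):
    """Data error plus an L2 penalty over the convolution-kernel weights.
    Squared error, L2, and lam are illustrative choices, not the patent's."""
    err = np.mean((predictions - targets) ** 2)
    reg = lam * sum(np.sum(w ** 2) for w in kernels)
    return err + reg

# Two toy kernels of all-ones weights; perfect prediction leaves only the penalty
kernels = [np.ones(2), np.ones(3)]
print(regularized_error(np.array([1.0]), np.array([1.0]), kernels))  # → 0.005
```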
5. The computer system according to claim 1, wherein, in the pooling process, the one or more processors apply a smooth maximum to the input values and then perform average pooling.
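A smooth maximum is a differentiable surrogate for max; the softmax-weighted average below is one standard form. How claim 5 composes it with average pooling is not fully specified here, so the second function shows only one plausible reading (smooth max per 1-hop neighborhood, then average pooling over the same neighborhood); `beta` and the toy graph are our assumptions:

```python
import numpy as np

def smooth_max(x, beta=5.0):
    """Softmax-weighted average: a differentiable stand-in for max(x).
    beta controls sharpness (beta -> infinity recovers the true max)."""
    w = np.exp(beta * (x - np.max(x)))  # shift by max for numerical stability
    return float(np.sum(w * x) / np.sum(w))

def smooth_max_then_average_pool(values, adj):
    """One plausible reading of claim 5: per node, take the smooth maximum
    over its 1-hop neighborhood, then average those results over the same
    neighborhood."""
    nbhd = lambda i: [i] + adj[i]
    sm = np.array([smooth_max(values[nbhd(i)]) for i in range(len(values))])
    return np.array([sm[nbhd(i)].mean() for i in range(len(values))])

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(smooth_max_then_average_pool(np.array([2., 2., 2., 2.]), adj))  # → [2. 2. 2. 2.]
```

Unlike a hard max, every input receives gradient through `smooth_max`, which is the usual motivation for such a construction in backpropagation training.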
6. The computer system according to claim 1, wherein the one or more processors perform global average pooling after the one or more convolutional layers and the one or more pooling layers.
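Global average pooling, the readout in claim 6, is a standard technique: the per-node feature vectors are averaged over all nodes, producing one graph-level vector whose size is independent of the number of nodes. The toy features below are our own:

```python
import numpy as np

# Toy per-node features: 4 nodes, 3 feature channels
node_features = np.array([[1., 2., 3.],
                          [3., 2., 1.],
                          [0., 0., 0.],
                          [4., 4., 4.]])

# Global average pooling: one embedding for the whole graph
graph_embedding = node_features.mean(axis=0)
print(graph_embedding)  # → [2. 2. 2.]
```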
7. The computer system according to claim 1, wherein
the computer system includes a plurality of subsystems connected by a network,
each of the plurality of subsystems communicates only with its adjacent subsystems among the plurality of subsystems, and
each of the plurality of subsystems computes a column of neurons at the same position in the convolutional neural network on the graph.
8. The computer system according to claim 7, wherein each of the plurality of subsystems:
holds an internal state vector representing the values of its column;
obtains internal state vectors from other subsystems, including adjacent subsystems; and
computes its column using its own internal state vector and the internal state vectors of the other subsystems.
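The subsystem of claim 8 can be sketched as an object that holds its column's internal state vector and updates it from neighbors' vectors. The specific update rule below (mean followed by ReLU) is a placeholder of our own; the claim only fixes the hold/obtain/compute structure:

```python
import numpy as np

class Subsystem:
    """Claim-8-style subsystem sketch: keeps the internal state vector of
    its column and recomputes it from its own and other subsystems' vectors."""

    def __init__(self, state):
        self.state = np.asarray(state, dtype=float)

    def compute_column(self, neighbor_states):
        # Stack own vector with the vectors received from other subsystems,
        # then apply an illustrative update rule (mean + ReLU placeholder).
        stacked = np.vstack([self.state] + list(neighbor_states))
        self.state = np.maximum(stacked.mean(axis=0), 0.0)
        return self.state

s = Subsystem([1., -1.])
print(s.compute_column([np.array([3., 1.])]))  # → [2. 0.]
```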
9. The computer system according to claim 7, wherein the plurality of subsystems compute the average of the column values in a distributed manner after the one or more convolutional layers and the one or more pooling layers.
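Averaging with only neighbor-to-neighbor communication, as claim 9 requires, can be done by gossip-style iteration. The patent does not name an algorithm; the sketch below assumes Metropolis weights, which are doubly stochastic and therefore drive every subsystem's value to the exact global mean on a connected graph:

```python
import numpy as np

def metropolis_average(values, adj, rounds=200):
    """Each subsystem repeatedly mixes its value with its neighbors' values
    using Metropolis weights 1/(1+max(deg_i, deg_j)); the mixing matrix is
    doubly stochastic, so all values converge to the global mean with only
    local communication."""
    x = np.asarray(values, dtype=float).copy()
    deg = {i: len(adj[i]) for i in adj}
    for _ in range(rounds):
        new = x.copy()
        for i in adj:
            for j in adj[i]:
                new[i] += (x[j] - x[i]) / (1 + max(deg[i], deg[j]))
        x = new
    return x

# Path of 4 subsystems holding [0, 0, 0, 4]; all entries approach the mean 1.0
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(metropolis_average([0., 0., 0., 4.], adj))
```

Each round preserves the sum of the values, so the common limit is exactly the average of the initial column values.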
10. The computer system according to claim 7, wherein each of the plurality of subsystems:
updates the weights of the one or more convolutional layers by backpropagation training independently of the other subsystems; and
computes the average of the updated weights by communicating with the other subsystems.
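The independent-update-then-average scheme of claim 10 resembles federated averaging. The sketch below stands in plain gradient descent for backpropagation and uses toy per-subsystem objectives; the learning rate, objectives, and names are our illustrative assumptions:

```python
import numpy as np

def local_step(w, grad_fn, lr=0.1):
    """Each subsystem updates its own copy of the kernel weights
    independently (stand-in for a backpropagation training step)."""
    return w - lr * grad_fn(w)

def average_weights(local_weights):
    """After independent updates, subsystems communicate to form the
    element-wise mean of their weight copies (claim 10)."""
    return np.mean(local_weights, axis=0)

# Toy: 3 subsystems each minimizing (w - t)^2 with different local targets t
targets = [1.0, 2.0, 3.0]
ws = [np.array([0.0]) for _ in targets]
for _ in range(100):
    ws = [local_step(w, lambda w, t=t: 2 * (w - t)) for w, t in zip(ws, targets)]
ws_avg = average_weights(ws)
print(ws_avg)  # the mean of the three local optima, ≈ [2.0]
```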
11. A method by which a computer system executes a convolutional neural network on a graph, wherein
the computer system includes one or more processors and one or more storage devices,
the convolutional neural network on the graph includes one or more convolutional layers and one or more pooling layers, and
the one or more storage devices store kernel weight data of the one or more convolutional layers,
the method comprising:
in each convolutional layer, updating the value of each node by a convolution operation based on a kernel whose size is a predetermined number of hops; and
in each pooling layer, updating the value of each node by a pooling process based on the value of that node and the values of the nodes within a pooling range of a predetermined number of hops from that node,
wherein the kernel size of a convolutional layer following a pooling layer is larger than the kernel size of the convolutional layer preceding that pooling layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018151323A JP7036689B2 (en) | 2018-08-10 | 2018-08-10 | Computer system |
Publications (3)
Publication Number | Publication Date |
---|---|
JP2020027399A (en) | 2020-02-20 |
JP2020027399A5 (en) | 2021-04-22 |
JP7036689B2 (en) | 2022-03-15 |
Family
ID=69622172
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2018151323A Active JP7036689B2 (en) | 2018-08-10 | 2018-08-10 | Computer system |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP7036689B2 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102192348B1 (en) * | 2020-02-24 | 2020-12-17 | 한국과학기술원 | Electronic device for integrated trajectory prediction for unspecified number of surrounding vehicles and operating method thereof |
JP7412283B2 (en) * | 2020-06-16 | 2024-01-12 | 株式会社Nttドコモ | Prediction model generation system and prediction system |
CN112560953B (en) * | 2020-12-16 | 2023-08-15 | 中国平安财产保险股份有限公司 | Private car illegal operation identification method, system, equipment and storage medium |
CN115791640B (en) * | 2023-02-06 | 2023-06-02 | 杭州华得森生物技术有限公司 | Tumor cell detection equipment and method based on spectroscopic spectrum |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6897446B2 (en) * | 2017-09-19 | 2021-06-30 | 富士通株式会社 | Search method, search program and search device |
US20190122111A1 (en) * | 2017-10-24 | 2019-04-25 | Nec Laboratories America, Inc. | Adaptive Convolutional Neural Knowledge Graph Learning System Leveraging Entity Descriptions |