JPH027154A - Neural network circuit - Google Patents

Neural network circuit

Info

Publication number
JPH027154A
JPH027154A JP63158536A JP15853688A
Authority
JP
Japan
Prior art keywords
layer
unit
input
output
neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP63158536A
Other languages
Japanese (ja)
Other versions
JPH0736181B2 (en)
Inventor
Koji Akiyama
浩二 秋山
Tetsu Ogawa
小川 鉄
Hiroshi Tsutsui
博司 筒井
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Holdings Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd filed Critical Matsushita Electric Industrial Co Ltd
Priority to JP63158536A priority Critical patent/JPH0736181B2/en
Publication of JPH027154A publication Critical patent/JPH027154A/en
Publication of JPH0736181B2 publication Critical patent/JPH0736181B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Landscapes

  • Image Analysis (AREA)

Abstract

PURPOSE: To improve the degree of convergence by constructing a neural network circuit under the conditions y^0.5 ≤ x_k ≤ 2y and z ≤ y, where the input layer contains 100 or more unit neural elements (y), each intermediate layer contains x_k unit neural elements, and the output layer contains z.

CONSTITUTION: The neural network circuit consists of an input layer 11, an intermediate layer 12, and an output layer 13; each layer consists of plural unit neural elements 14, and the unit neural elements 14 are linked between the layers by connections 15. The number of unit neural elements 14 in the input layer is set at 100 or more. The number x_k of unit neural elements 14 in each intermediate layer satisfies y^0.5 ≤ x_k ≤ 2y, and the number z of unit neural elements 14 in the output layer satisfies z ≤ y. Further, the proportion expressing from how many unit neural elements (p) a given unit neural element 14 receives input, out of the total number of unit neural elements (q) belonging to the preceding layer, is set at 30 to 85%.

Description

DETAILED DESCRIPTION OF THE INVENTION

Field of Industrial Application

The present invention relates to a neural network circuit that performs input/output operations similar to those of the nervous system, such as pattern recognition, speech recognition, associative memory, and parallel arithmetic processing.

Prior Art

Computers currently in use are von Neumann machines, which process information serially according to an algorithm. Since such a machine executes only one instruction at a time as a rule, problems for which no efficient calculation method or solution is known can degenerate into an exhaustive search of all possibilities. It is clear that serial information processing limits the range of problems that can be handled.

In contrast to such computers, the living brain forms a network of nerve cells to process information, and when handling the large volume of information arriving from the outside world it processes that information in parallel through interactions between the nerve cells. This approach is not particularly good at large volumes of mechanical calculation or advanced logical operations, but it excels at dealing flexibly with complex and ambiguous situations and at quickly producing appropriate solutions. In other words, performing pattern recognition or speech recognition on a conventional computer is very difficult, requiring excessive processing time and lacking flexibility, whereas the brain does it easily and without effort. Furthermore, while a computer achieves versatility by exchanging programs, the brain has the characteristic of improving its own performance through learning and self-organization and of adapting itself to the information structure of its environment.

To acquire these characteristics of the brain artificially, it is necessary to realize in technology a parallel information processing principle similar to that of the brain. In response to this demand, the perceptron, which has a hierarchical structure as shown in FIG. 5, has been proposed.

As a supervised learning method for a three-layer perceptron such as that shown in FIG. 5, the backpropagation learning method is currently the most effective. In backpropagation learning, every time an input pattern is presented to the input layer 50, a teacher inspects the outputs of the cells in the output layer 51; if an output is incorrect, the coupling coefficients with the cells in the intermediate layer 52 are corrected so that the correct output is produced.
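Backpropagation is prior art cited by this patent rather than the claimed circuit, but as a reader's aid the following minimal Python sketch shows the teacher-driven correction just described; the layer sizes, sigmoid activation, and learning rate are assumptions of this example, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy sizes: input layer y, intermediate layer x, output layer z.
y, x, z = 16, 8, 4
W1 = rng.normal(0.0, 0.1, (x, y))  # input -> intermediate coupling coefficients
W2 = rng.normal(0.0, 0.1, (z, x))  # intermediate -> output coupling coefficients
lr = 0.5                           # assumed learning rate

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def train_step(inp, target):
    """One presentation: forward pass, then correct the couplings."""
    global W1, W2
    h = sigmoid(W1 @ inp)    # intermediate-layer outputs
    out = sigmoid(W2 @ h)    # output-layer outputs
    # The "teacher" compares the outputs with the desired pattern...
    d_out = (out - target) * out * (1.0 - out)
    # ...and the error is propagated back to the earlier couplings.
    d_hid = (W2.T @ d_out) * h * (1.0 - h)
    W2 -= lr * np.outer(d_out, h)
    W1 -= lr * np.outer(d_hid, inp)
    return np.sum((out - target) ** 2)

# One step on a random input pattern with a one-hot target.
print(train_step(rng.random(y), np.eye(z)[0]))
```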

The effectiveness of this learning method has been demonstrated by NETtalk, a network system for learning to read text aloud, developed in research by Dr. T. J. Sejnowski of Johns Hopkins University and Dr. C. R. Rosenberg of Princeton University.

Problems to Be Solved by the Invention

When pattern recognition or speech recognition is performed using the backpropagation learning method, it has not yet been clarified how many nerve cells the intermediate layer should contain, or what proportion of the intermediate-layer nerve cells should be connected to the output cells, in order to learn most efficiently without reducing the number of recognizable patterns or the recognition rate.

The present invention solves these problems of the prior art and provides a neural network circuit that can learn efficiently and has a high recognition rate.

Means for Solving the Problems

The neural network circuit of the present invention is constructed by creating a plurality of groups of unit neural elements, each element receiving a plurality of inputs and producing an output value that is a nonlinear function of the sum of those inputs, to serve as an input layer, at least one intermediate layer, and an output layer. The input layer contains 100 or more unit neural elements. The outputs of the unit neural elements of the input layer are connected to the inputs of the unit neural elements of an intermediate layer, and the outputs of the unit neural elements of an intermediate layer are connected to the inputs of the output layer, forming a hierarchical network. The circuit is characterized in that the number of unit neural elements in each intermediate layer (denoted x_k, where k = 1, 2, 3, ..., n and n is the number of intermediate layers), the number of unit neural elements in the input layer (denoted y), and the number of unit neural elements in the output layer (denoted z) satisfy y^0.5 ≤ x_k ≤ 2y and z ≤ y.
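Purely as an illustration (the function name and the sample sizes are assumptions, not from the patent), the claimed conditions can be checked mechanically:

```python
import math

def satisfies_conditions(y: int, xs: list[int], z: int) -> bool:
    """True if y >= 100, sqrt(y) <= x_k <= 2*y for every intermediate
    layer size x_k, and z <= y, as required by the invention."""
    return (y >= 100
            and all(math.sqrt(y) <= x <= 2 * y for x in xs)
            and z <= y)

# Assumed sample: 144 inputs, two intermediate layers, 60 outputs.
print(satisfies_conditions(144, [200, 100], 60))  # True (12 <= x_k <= 288)
print(satisfies_conditions(144, [300, 4], 60))    # False (300 > 288, 4 < 12)
```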

Operation

The present inventors applied the problem of pattern recognition, as one example of an input/output operation similar to that of the nervous system, to a hierarchical neural network circuit consisting of an input layer, one or more intermediate layers, and an output layer, and investigated the relationship between the number of unit neural elements contained in each layer and both the learning efficiency and the recognition rate. As a result, when the backpropagation learning method was used as the learning algorithm and the input layer contained 100 or more unit neural elements, learning converged easily, the number of learning iterations could be reduced, and the best state was obtained without significantly reducing the recognition rate, provided that the number of unit neural elements in the intermediate layer (x), the number in the input layer (y), and the number in the output layer (z) satisfied y^0.5 ≤ x ≤ 2y and z ≤ y. Moreover, even when there are several (n) intermediate layers, the same effect was obtained by satisfying the above conditions for the number of unit neural elements in every intermediate layer (x_k, k = 1, 2, 3, ..., n). Why the best state is obtained when these conditions are satisfied has not yet been analyzed mathematically, but the inventors' rough view is as follows.

In a network circuit with a comparatively large number of neural elements, such as one whose input layer contains 100 or more unit neural elements, if the numbers of unit neural elements and of connections in the intermediate or output layers are made too large, the number of shallow local minima in the energy of the network circuit increases. As a result, the state of the circuit becomes trapped in such shallow minima and has difficulty settling into a stable, deep minimum. Learning therefore converges poorly, and the recognition rate does not improve much.

Conversely, if the number of unit neural elements and the number of connections in the intermediate layer are made too small, the number of recognizable patterns cannot be increased.

From the above, it appears that when the stated conditions are satisfied, the convergence of learning is improved and the number of learning iterations is reduced without significantly lowering the recognition rate.

Embodiments

Embodiments of the present invention will be described below with reference to the drawings.

FIG. 1 shows an example of the neural network circuit of the present invention.

As shown in this figure, the circuit consists of an input layer 11, an intermediate layer 12, and an output layer 13. Unit neural elements 14 are connected to one another between the layers; each connection 15 corresponds to a synaptic connection between nerve cells and has a certain connection strength. The number of unit neural elements 14 contained in the input layer is set to 100 or more.

The number of unit neural elements 14 in the intermediate layer 12 (denoted x) satisfies y^0.5 ≤ x ≤ 2y with respect to the number of unit neural elements 14 in the input layer 11 (denoted y), and the number of unit neural elements 14 in the output layer 13 (denoted z) satisfies z ≤ y.

Although the intermediate layer 12 is a single layer in this example, two or more intermediate layers may be used according to the complexity of the problem to be processed. In that case, however, the number of unit neural elements in each intermediate layer must always satisfy the above conditions. Furthermore, the convergence of learning is improved still further if the number of unit neural elements in any other intermediate layer is set so as not to exceed the number in the intermediate layer connected to the input layer.

In addition, by setting the average value of the ratio (p/q) between the number of inputs received by a single unit neural element in an intermediate layer or the output layer (denoted p) and the total number of unit neural elements in the layer that supplies those inputs (denoted q) to between 30% and 85%, the convergence of the circuit during learning can be improved without significantly lowering the recognition rate. In other words, p/q is the proportion expressing from how many unit neural elements (p) a given unit neural element receives input, out of all the unit neural elements (q) in the preceding layer. The average value of p/q is preferably 45% to 80%, and most preferably 55% to 80%.
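As a sketch of how a target p/q could be expressed in software (the patent realizes the connections in hardware wiring; the mask-based formulation, function name, and sizes below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_mask(q: int, units: int, ratio: float) -> np.ndarray:
    """Connect each of `units` elements to a random subset of the q
    elements of the preceding layer, so that the average p/q equals
    `ratio` (e.g. somewhere in the preferred 0.55-0.80 range)."""
    p = round(ratio * q)                 # inputs received per element
    mask = np.zeros((units, q), dtype=bool)
    for i in range(units):
        mask[i, rng.choice(q, size=p, replace=False)] = True
    return mask

mask = sparse_mask(q=256, units=120, ratio=0.70)
print(mask.sum(axis=1).mean() / 256)     # average p/q, here ~0.70
```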

The input/output characteristic of a unit neural element is nonlinear, as in a real nerve cell. For example, as shown in FIGS. 2(a) and (b), it may be an S-shaped characteristic whose derivative is positive and converges to zero as the input approaches infinity, with the curve shifting along the horizontal axis according to the threshold value.

Alternatively, it may be a step-like characteristic as shown in FIGS. 2(c) and (d).

Specific embodiments are described below.

Embodiment 1

As an embodiment of the present invention, a neural network circuit as shown in FIG. 3 was fabricated. FIGS. 3(a), (b), and (c) show, respectively, the structure of the circuit as a whole, a plan view of a connecting portion between neural elements, and a cross-sectional view of that portion. The amplifier 30 corresponds to the unit neural element 14 of FIG. 1 and has the input/output characteristic of FIG. 2(a) or (b). The number of amplifiers 30 is 256 in the input layer 31, 120 in the intermediate layer 32, and 80 in the output layer 33.

The method of fabricating this network circuit is described below.

A conductive wiring pattern 35 of aluminum, chromium, or the like is formed on an insulating substrate 34, and an insulating layer 36 of an insulating material such as silicon oxide, silicon nitride, or polyimide is deposited over it. The insulating layer 36 is removed only where synaptic connections are to be formed, and a thin film of a photoconductive material such as amorphous silicon or amorphous silicon-germanium is embedded in these openings as a photoconductive layer 37; this portion corresponds to the connection 15 in FIG. 1. A conductive wiring pattern 38 of SnO2, ITO, gold, or the like is then formed crossing the first pattern, yielding the neural network circuit A shown in FIGS. 3(a) to (c). The circuit was designed so that the average ratio of the number of input-layer amplifiers 30 to which one amplifier 30 of the intermediate layer 32 is connected, relative to the total number of amplifiers 30 in the input layer 31, is 85%, and the average ratio of the number of intermediate-layer amplifiers 30 to which one amplifier 30 of the output layer 33 is connected, relative to the total number of amplifiers 30 in the intermediate layer 32, is 70%.

Separately from this neural network circuit A, a neural network circuit B was also fabricated under the same conditions except that the number of amplifiers in the intermediate layer 32 was 550 and the number in the output layer 33 was 320.
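For concreteness, the following short script (an illustration added here, not part of the patent text) checks circuits A and B against the size conditions stated above:

```python
import math

def check(name, y, x, z):
    # Claimed conditions: sqrt(y) <= x <= 2y and z <= y.
    ok_x = math.sqrt(y) <= x <= 2 * y
    ok_z = z <= y
    print(f"{name}: sqrt({y})={math.sqrt(y):.0f} <= {x} <= {2 * y}? {ok_x}; "
          f"{z} <= {y}? {ok_z}")

check("Circuit A", y=256, x=120, z=80)   # both conditions hold
check("Circuit B", y=256, x=550, z=320)  # 550 > 512 and 320 > 256: both fail
```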

These neural network circuits A and B were applied to pattern recognition as an example of an input/output operation similar to that of the nervous system.

As the learning method for recognition, the intensity of the light 39 irradiating the photoconductive layer 37 of each connection was varied to control the resistance value corresponding to the connection strength. For circuit A, the degree of convergence during learning was excellent: the number of learning iterations varied with the pattern shape but averaged 4 to 5, and the recognition rate was 95% or higher. For circuit B, on the other hand, convergence during learning was poor, more than 20 learning iterations were required on average, and the recognition rate was only about 85%.

Embodiment 2

The operation of a neural network circuit consisting of four layers as shown in FIG. 4, namely an input layer 40, a first intermediate layer 41, a second intermediate layer 42, and an output layer 43, was verified by computer simulation. The number y of unit neural elements 44 in the input layer 40 was varied from 100 to 10,000, the number x1 in the first intermediate layer 41 from 30 to 20,000, the number x2 in the second intermediate layer 42 from 20 to 15,000, and the number z in the output layer 43 from 50 to 5,000. The input/output characteristic of the unit neural elements 44 was expressed by the following equation.

v = (tanh(ku) + 1)/2, where u is the input, v is the output, and k is a constant.
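As a reader's aid only, this characteristic transcribes directly into Python; the value of k and the sample inputs below are assumptions chosen for illustration.

```python
import numpy as np

def unit_output(u, k=1.0):
    """S-shaped input/output characteristic v = (tanh(k*u) + 1) / 2.
    v rises from 0 to 1, and its derivative is positive and tends to
    zero as the input grows large, matching the description of FIG. 2(a)."""
    return (np.tanh(k * u) + 1.0) / 2.0

u = np.linspace(-5.0, 5.0, 11)
print(unit_output(u, k=2.0))  # ~0 near u=-5, 0.5 at u=0, ~1 near u=+5
```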

In each of the first intermediate layer 41, the second intermediate layer 42, and the output layer 43, the average number of unit neural elements in the preceding layer connected to one unit neural element 44 was set to 30% to 85% of the total number of unit neural elements in that preceding layer.

When this circuit was trained, it was confirmed that learning converged easily when y^0.5 ≤ x_n ≤ 2y (n = 1, 2) and z ≤ y were satisfied, and that convergence improved further when x2 ≤ x1 was satisfied at the same time.

Effects of the Invention

As described above, the neural network circuit according to the present invention can learn efficiently and has a high recognition rate.

[Brief Description of the Drawings]

FIG. 1 is a diagram showing an embodiment of the neural network circuit of the present invention; FIGS. 2(a), (b), (c), and (d) are diagrams each showing an example of the input/output characteristic of a unit neural element; FIGS. 3(a), (b), and (c) are, respectively, a circuit diagram showing the overall structure of an embodiment of the neural network circuit of the present invention, a plan view of a connecting portion between neural elements, and a cross-sectional view thereof; FIG. 4 is a diagram of a neural network circuit in another embodiment of the present invention; and FIG. 5 is a diagram showing a conventional neural network circuit.

11: input layer; 12: intermediate layer; 13: output layer; 14: unit neural element; 15: connection; 30: amplifier; 31: input layer; 32: intermediate layer; 33: output layer; 38: conductive wiring pattern; 40: input layer; 41: first intermediate layer; 42: second intermediate layer; 43: output layer; 44: unit neural element; 50: input layer; 51: output layer; 52: intermediate layer.

Agent: Toshio Nakao, Patent Attorney, and one other.

Claims (3)

[Claims]

(1) A neural network circuit in which a plurality of groups of unit neural elements, each receiving a plurality of inputs and producing an output value in a nonlinear relationship to the sum of the input values, are created to form an input layer, at least one intermediate layer, and an output layer; the number of unit neural elements included in the input layer is 100 or more; the outputs of the unit neural elements of the input layer are connected to the inputs of unit neural elements belonging to one of the intermediate layers, and the outputs of unit neural elements belonging to one of the intermediate layers are connected to the inputs of the output layer, forming a hierarchical network; characterized in that the number of unit neural elements belonging to each intermediate layer (denoted x_k, where k = 1, 2, 3, ..., n and n is the number of intermediate layers), the number of unit neural elements belonging to the input layer (denoted y), and the number of unit neural elements belonging to the output layer (denoted z) satisfy y^0.5 ≤ x_k ≤ 2y (k = 1, 2, 3, ..., n) and z ≤ y.
(2) The neural network circuit according to claim 1, having a plurality of intermediate layers, characterized in that the number of unit neural elements belonging to any other intermediate layer does not exceed the number of unit neural elements belonging to the intermediate layer connected to the input layer.
(3) The neural network circuit according to claim 1, characterized in that the average value of the ratio (p/q) of the number of inputs received by a unit neural element belonging to an intermediate layer or the output layer (denoted p) to the total number of unit neural elements belonging to the layer containing the unit neural elements that output those inputs (denoted q) is 30% or more and 85% or less.
JP63158536A 1988-06-27 1988-06-27 Neural network circuit Expired - Fee Related JPH0736181B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP63158536A JPH0736181B2 (en) 1988-06-27 1988-06-27 Neural network circuit

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP63158536A JPH0736181B2 (en) 1988-06-27 1988-06-27 Neural network circuit

Publications (2)

Publication Number Publication Date
JPH027154A true JPH027154A (en) 1990-01-11
JPH0736181B2 JPH0736181B2 (en) 1995-04-19

Family

ID=15673863

Family Applications (1)

Application Number Title Priority Date Filing Date
JP63158536A Expired - Fee Related JPH0736181B2 (en) 1988-06-27 1988-06-27 Neural network circuit

Country Status (1)

Country Link
JP (1) JPH0736181B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0581227A (en) * 1990-03-16 1993-04-02 Hughes Aircraft Co Neuron system network signal processor and method of processing signal
US6031484A (en) * 1996-11-19 2000-02-29 Daimlerchrysler Ag Release device for passenger restraint systems in a motor vehicle

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0581227A (en) * 1990-03-16 1993-04-02 Hughes Aircraft Co Neuron system network signal processor and method of processing signal
US6031484A (en) * 1996-11-19 2000-02-29 Daimlerchrysler Ag Release device for passenger restraint systems in a motor vehicle

Also Published As

Publication number Publication date
JPH0736181B2 (en) 1995-04-19

Similar Documents

Publication Publication Date Title
WO2022134391A1 (en) Fusion neuron model, neural network structure and training and inference methods therefor, storage medium, and device
Hung et al. A parallel genetic/neural network learning algorithm for MIMD shared memory machines
Lin et al. Canonical piecewise-linear networks
Akai-Kasaya et al. Evolving conductive polymer neural networks on wetware
Tsai et al. Color filter polishing optimization using ANFIS with sliding-level particle swarm optimizer
Zilouchian Fundamentals of neural networks
JPH027154A (en) Neural network circuit
Neftci Stochastic neuromorphic learning machines for weakly labeled data
Stamova et al. Artificial intelligence in the digital age
Wilamowski Neural networks and fuzzy systems for nonlinear applications
US20200125940A1 (en) Fixed-weighting-code learning device
JPH05197701A (en) Information processor using neural network
Nazari et al. Novel systematic mathematical computation based on the spiking frequency gate (SFG): Innovative organization of spiking computer
Aarts et al. Computations in massively parallel networks based on the Boltzmann machine: A review
Russo Distributed fuzzy learning using the MULTISOFT machine
JPH04237388A (en) Neuro processor
Lee et al. Finding knight's tours on an M/spl times/N chessboard with O (MN) hysteresis McCulloch-Pitts neurons
Wan et al. Introducing cost-sensitive neural networks
Noda et al. A learning method for recurrent networks based on minimization of finite automata
Guo et al. Pulse coding off-chip learning algorithm for memristive artificial neural network
Xiong et al. A functions localized neural network with branch gates
JP2654686B2 (en) neural network
DeFigueiredo The OI, OS, OMNI, and OSMAN networks as best approximations of nonlinear systems under training data constraints
Bohossian et al. On neural networks with minimal weights
Shi et al. Approach to controlling robot by artificial brain based on parallel evolutionary neural network

Legal Events

Date Code Title Description
LAPS Cancellation because of no payment of annual fees