JPH06195488A - Neural network for fuzzy inference - Google Patents
Neural network for fuzzy inference

Info
- Publication number
- JPH06195488A JP4356860A JP35686092A
- Authority
- JP
- Japan
- Prior art keywords
- cell
- cells
- output
- neural network
- limit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Complex Calculations (AREA)
- Feedback Control In General (AREA)
Abstract
Description
[0001]

[Field of Industrial Application] The present invention relates to a neural network (hereinafter referred to as NN) for fuzzy inference having cells that output the limited result of the addition of their input signals.
[0002]

[Description of the Related Art] Neurocomputers focus on neurons (nerve cells), the basic elements of the brain, and aim to achieve brain-like functions by taking as a hint the NNs that result from interconnecting them. The characteristic features of these NNs are parallel information processing among the neurons and the ability to learn. An NN has a hierarchical structure consisting of several layers, each composed of an appropriate number of cells; there are no connections within a layer, and the connections between layers run in one direction, from the input layer (first layer) toward the output layer (final layer). Each cell in every layer except the input layer receives weighted inputs from the cells of the preceding layer, computes their sum, and outputs the result of applying a suitable function f to that sum. Input/output functions used for NN cells include the threshold function, the piecewise linear function, the logistic function, and the identity function (reference: Hideki Aso, "Neural Network Information Processing," Sangyo Tosho, p. 13).
[0003]

[Problems to be Solved by the Invention] If a piecewise linear function is used as the input/output function of the cells forming an NN, membership functions composed of straight-line segments can be created within the NN. Such straight-line membership functions have the advantage that they are easy to realize in hardware. The present invention was made in view of the above circumstances, and its object is to provide an NN for fuzzy inference having cells that output the limited result of the addition of their input signals.
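To see why straight-line functions suffice for fuzzy membership, note that a triangular membership function with feet at $a$ and $c$ and peak at $b$ can be assembled from two piecewise-linear ramp (limit) functions. This construction is a sketch of the general idea and is not given in the patent itself:

$$\mu(x) = L_{[a,b]}(x) - L_{[b,c]}(x), \qquad L_{[p,q]}(x) = \begin{cases} 0, & x < p \\ \dfrac{x - p}{q - p}, & p \le x \le q \\ 1, & q < x \end{cases}$$

For $x < a$ both terms are 0; on $[a,b]$ the first term ramps from 0 to 1; on $[b,c]$ the second term ramps up and the difference falls back from 1 to 0; for $x > c$ the two terms cancel. Every segment is a straight line, which is what makes the hardware realization easy.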
[0004]

[Means for Solving the Problems] To achieve the above object, the NN of [Claim 1] of the present invention is a neural network in which cells outputting input signals are connected, via predetermined coupling coefficients, to a plurality of intermediate cells, and these intermediate cells are arranged hierarchically and connected to a plurality of final output cells; each intermediate cell is provided with the function of outputting the limited result of the addition of the signals coupled to its input. In the learning method of [Claim 2], when the coupling coefficients are updated using the error backpropagation method, no error is propagated in the limit regions of the intermediate cells. In the learning method of [Claim 3], when the coupling coefficients are updated using the error backpropagation method, the error is propagated at a fixed ratio in each of the high-limit, low-limit, and linear regions of the intermediate cells.
[0005]

[Embodiment] An embodiment will now be described with reference to the drawings. FIG. 1 is a configuration diagram of one embodiment of an NN according to the present invention. In FIG. 1, a is a cell that outputs a piecewise linear function, O1 to Omx are the outputs of the cells bj (j = 1 to mx) coupled to the input of cell a, and W1 to Wmx are the coupling coefficients. Cell a outputs, according to the value of the total input sum I, the value given by the following equation.

[Equation 1]

The input/output function of this cell is shown in FIG. 3. As shown in FIG. 3, the three intervals of the cell input I, namely I < x1, x1 ≤ I ≤ x2, and x2 < I, are called the low-limit, linear, and high-limit regions, respectively (this is for y1 ≤ y2; when y1 > y2 the low limit and high limit are interchanged).
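The image for [Equation 1] is not reproduced in this text, but from the description of FIG. 3 the limit cell's input/output relation has the following piecewise-linear form (a reconstruction under the stated assumption y1 ≤ y2):

$$O = \begin{cases} y_1, & I < x_1 \ \text{(low-limit region)} \\ y_1 + \dfrac{y_2 - y_1}{x_2 - x_1}(I - x_1), & x_1 \le I \le x_2 \ \text{(linear region)} \\ y_2, & x_2 < I \ \text{(high-limit region)} \end{cases} \qquad I = \sum_{j=1}^{m_x} W_j O_j$$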
[0006] Next, the learning method when this cell is used will be described with reference to FIG. 2. Here, Wi(k-1),j(k) denotes the coupling coefficient between the i-th cell of layer k-1 and the j-th cell of layer k, Ij(k) denotes the total input to the j-th cell of layer k, and Oj(k) denotes the output of the j-th cell of layer k. The following equation is used to update the coupling coefficients Wij.

[Equation 2]

A cell whose total input x and output y are related by a differentiable input/output function y = f(x) will be called an ordinary cell (concrete examples are the logistic function and the identity function of reference (1)). The difference between error backpropagation for ordinary cells and for the cell according to the present invention is explained below.

[Equation 3]

dj(k) is determined by the following cases.

[0007] (i) When layer k is the output layer:

[Equation 4]

[0008] (ii) When layer k is an intermediate layer:

[Equation 5]

dm(k+1) in the above equation is obtained in the same manner as dj(k), proceeding in order from the output side.

[Equation 6]

Therefore, learning is also possible in an NN containing cells according to the present invention.
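The update equations ([Equation 2] through [Equation 6]) exist only as images, but the gating behavior claimed in Claims 2 and 3 can be sketched in code. The following is a minimal, hypothetical Python illustration, not the patent's actual formulas: the limit cell's local derivative is zero in the limit regions (Claim 2) or scaled by fixed per-region ratios (Claim 3). Names such as `limit_cell_forward` and the numeric ratios are assumptions.

```python
import numpy as np

def limit_cell_forward(inputs, weights, x1, x2, y1, y2):
    """Forward pass of a 'limit' cell: sum the weighted inputs, then
    apply the piecewise-linear limit function of FIG. 3.
    Returns the input sum I, the output O, and the active region."""
    I = float(np.dot(weights, inputs))            # total input sum I
    slope = (y2 - y1) / (x2 - x1)
    if I < x1:
        return I, y1, "low"                       # low-limit region
    if I > x2:
        return I, y2, "high"                      # high-limit region
    return I, y1 + slope * (I - x1), "linear"     # linear region

def limit_cell_local_grad(region, slope, ratios=None):
    """Local derivative dO/dI used when backpropagating the error.

    ratios=None  -> Claim 2: no error is propagated in the limit
                    regions (derivative 0 there, slope otherwise).
    ratios=dict  -> Claim 3: error propagated at a fixed per-region
                    ratio, e.g. {"low": 0.1, "linear": 1.0, "high": 0.1}
                    (these numeric values are purely illustrative).
    """
    if ratios is None:
        return slope if region == "linear" else 0.0
    return slope * ratios[region]

# Illustrative single-cell weight update with the usual delta rule.
rng = np.random.default_rng(0)
inputs  = rng.uniform(-1.0, 1.0, size=4)   # outputs O1..O4 of previous layer
weights = rng.uniform(-1.0, 1.0, size=4)   # coupling coefficients W1..W4
x1, x2, y1, y2 = -1.0, 1.0, 0.0, 1.0
eta = 0.1                                  # learning rate (assumed symbol)

I, O, region = limit_cell_forward(inputs, weights, x1, x2, y1, y2)
upstream = O - 0.5                         # e.g. dE/dO for a squared error
d = upstream * limit_cell_local_grad(region, (y2 - y1) / (x2 - x1))
weights -= eta * d * inputs                # no change if I hit a limit region
```

Under Claim 2 the update vanishes whenever the cell saturates, exactly as a clipped derivative would; Claim 3's fixed ratios let a small amount of error leak through the flat regions so that saturated cells can still recover.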
[0009]

[Effects of the Invention] As described above, according to the present invention, the cells forming an NN are given the function of outputting the limited result of the addition of their input signals, which is effective for mapping a neural network implemented in software onto linear (straight-line) hardware.
[FIG. 1] A configuration diagram of one embodiment of an NN according to the present invention.
[FIG. 2] A diagram illustrating the learning method used with the network of FIG. 1.
[FIG. 3] A diagram showing the input/output function.
a, b1 to bmx: cells; O1 to Omx: outputs of the cells bj; W1 to Wmx: coupling coefficients
Claims (3)

1. A neural network for fuzzy inference comprising a cell that outputs an input signal, a plurality of intermediate cells connected to the signal of said cell via predetermined coupling coefficients, and a plurality of final output cells to which these intermediate cells, arranged in a hierarchical structure, are connected, wherein each intermediate cell has a function of outputting the limited result of the addition of the signals coupled to its input.

2. A learning method for the neural network according to claim 1, wherein, when the coupling coefficients are updated using the error backpropagation method, no error is propagated in the limit regions of the intermediate cells.

3. A learning method for the neural network according to claim 1, wherein, when the coupling coefficients are updated using the error backpropagation method, the error is propagated at a fixed ratio in each of the high-limit, low-limit, and linear regions of the intermediate cells.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP35686092A JP3359074B2 (en) | 1992-12-22 | 1992-12-22 | Learning method of neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP35686092A JP3359074B2 (en) | 1992-12-22 | 1992-12-22 | Learning method of neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
JPH06195488A (en) | 1994-07-15 |
JP3359074B2 JP3359074B2 (en) | 2002-12-24 |
Family
ID=18451128
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP35686092A Expired - Fee Related JP3359074B2 (en) | 1992-12-22 | 1992-12-22 | Learning method of neural network |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP3359074B2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005004032A1 (en) * | 2003-07-02 | 2005-01-13 | Advanced Logic Projects Inc. | Function device |
JP2011242827A (en) * | 2010-05-14 | 2011-12-01 | Iwate Univ | Random number generation system and program |
Also Published As
Publication number | Publication date |
---|---|
JP3359074B2 (en) | 2002-12-24 |
Similar Documents
Publication | Title |
---|---|
Liu et al. | Ensemble learning via negative correlation |
Ritter et al. | An introduction to morphological neural networks |
Sakar et al. | Growing and pruning neural tree networks |
Frean | The upstart algorithm: A method for constructing and training feedforward neural networks |
Wang | Discrete-time convergence theory and updating rules for neural networks with energy functions |
Abbass | Pareto neuro-ensembles |
Yarotsky | Quantified advantage of discontinuous weight selection in approximations with deep neural networks |
JPH06195488A (en) | Neural network for fuzzy inference |
JPH05101028A (en) | Integral decision method for plural feature quantity |
JP3343625B2 (en) | Neural networks for fuzzy inference |
Clarkson | Applications of neural networks in telecommunications |
JP3343626B2 (en) | Neural networks for fuzzy inference |
Kumar et al. | Convergence of artificial intelligence, emotional intelligence, neural network and evolutionary computing |
Cheng | Derivation of the backpropagation algorithm based on derivative amplification coefficients |
Georgiou et al. | Evolutionary Adaptive Schemes of Probabilistic Neural Networks |
CN116050503B | Generalized neural network forward training method |
Oohori et al. | A new backpropagation learning algorithm for layered neural networks with nondifferentiable units |
TWI730452B | Stereo artificial neural network system |
Ito et al. | Bayesian learning of neural networks adapted to changes of prior probabilities |
JPH0652338A (en) | Neural network for generating membership function |
JP3296609B2 (en) | Neural network that outputs membership function |
Pranesh et al. | The impact of social media on polarization in the society |
Abdallah | The encoded sequence representation in multilayer networks |
Born et al. | Designing neural networks by adaptively building blocks in cascades |
JPH03265077A (en) | Feedback neural cell model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| LAPS | Cancellation because of no payment of annual fees | |