JPH0318967A - Learning system for neural net - Google Patents

Learning system for neural net

Info

Publication number
JPH0318967A
JPH0318967A JP1153252A JP15325289A JPH0778787B2
Authority
JP
Japan
Prior art keywords
learning
data
recognized
degree
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP1153252A
Other languages
Japanese (ja)
Other versions
JPH0778787B2 (en)
Inventor
Kazuki Jo
和貴 城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
A T R SHICHIYOUKAKU KIKO KENKYUSHO KK
Original Assignee
A T R SHICHIYOUKAKU KIKO KENKYUSHO KK
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by A T R SHICHIYOUKAKU KIKO KENKYUSHO KK filed Critical A T R SHICHIYOUKAKU KIKO KENKYUSHO KK
Priority to JP1153252A priority Critical patent/JPH0778787B2/en
Publication of JPH0318967A publication Critical patent/JPH0318967A/en
Publication of JPH0778787B2 publication Critical patent/JPH0778787B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Landscapes

  • Image Analysis (AREA)
  • Feedback Control In General (AREA)

Abstract

PURPOSE: To recognize each piece of data uniformly in a back-propagation learning system by making the network learn difficult-to-recognize data more often than easy-to-recognize data. CONSTITUTION: A neural network trained by the back-propagation learning rule comprises an input layer 2, an intermediate layer 3, and an output layer 4. Before the connections 1 are changed, the recognition result for each piece of learning data and the value of the unit that fired most strongly in the output for that data are checked, and the product of the two is defined as the degree of learning. A function M is set according to whether each piece of learning data is correctly recognized or misrecognized and to its degree of learning at that point, and the weights of the connections 1 are changed based on the value obtained from a prescribed arithmetic formula. Difficult-to-recognize data are thus learned more often, each piece of data is recognized uniformly, and the convergence of learning is promoted.

Description

[Detailed Description of the Invention]

[Field of Industrial Application]

This invention relates to a learning method for neural networks, and in particular to a learning method that is effective for recognizing the complex data required in pattern recognition.

[Prior Art and Problems to Be Solved by the Invention]

Conventionally, when a multilayer perceptron neural network is trained using the back-propagation learning rule, there has been the problem that the more complex the data to be learned, as in pattern recognition, the more difficult the learning becomes.

It is therefore the main object of this invention to provide a learning method for neural networks in which, in a back-propagation learning scheme, data that are difficult to recognize are learned more often than data that are easy to recognize, so that each piece of data can be recognized uniformly.

[Means for Solving the Problems]

This invention is a learning method for a multilayer perceptron neural network trained using the back-propagation learning rule. Before the connections are changed, the recognition result of each piece of learning data and the value of the unit that fired most strongly in the output for that data are examined, and the product of the two is taken as the degree of learning of that data. A function M is constructed according to whether each piece of learning data is correctly recognized or misrecognized and to its degree of learning, and the connection weights are changed according to the value computed by a prescribed arithmetic formula.

[Operation]

The learning method of this invention computes the degree of learning of each piece of data before the connection weights are changed, and changes the weights according to the value given by a prescribed learning formula. Data that are difficult to recognize are thereby learned more often, each piece of data comes to be recognized uniformly, and the convergence of learning is promoted.

[Embodiments of the Invention]

Fig. 1 shows a multilayer perceptron neural network to which this invention is applied. Referring to Fig. 1, the neural network comprises an input layer 2, an intermediate layer 3, and an output layer 4, connected to one another by connections 1. The data to be learned are fed to the input layer 2, and learning is performed by changing the weights of the connections 1 according to the back-propagation learning rule.

Fig. 2 is a flowchart of the learning method of this invention, and Fig. 3 shows the convergence of learning when a neural network is trained on complex data by this invention.

Referring to Fig. 2, in this invention the data to be learned are fed to the input layer 2 and, before the weights of the connections 1 are changed, the learning data are recognized in step SP1 (steps are abbreviated SP in the figure). In step SP2 the recognition result for each piece of learning data and the value of the unit that fired most strongly in its output are examined, and their product is computed as the degree of learning of that data. In step SP3 it is then determined whether the learning data X has been recognized correctly.
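As a concrete reading of steps SP1 and SP2, the sketch below computes the degree of learning for one training item. It is an illustration only, not the patented implementation: the function and variable names are hypothetical, and since the patent grades both correctly and incorrectly recognized items by their degree of learning, the degree here is taken as the activation of the most strongly firing output unit, with correctness reported separately.

    import numpy as np

    def degree_of_learning(outputs, target_index):
        # Degree of learning for one training item (steps SP1-SP2).
        # outputs:      activations of the output units for this item
        # target_index: index of the unit that should fire for this item
        winner = int(np.argmax(outputs))    # unit that fired most strongly
        correct = (winner == target_index)  # recognition result (SP1)
        degree = float(outputs[winner])     # value of the winning unit
        return correct, degree              # degree of learning (SP2)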

Here, for each piece of learning data X, a function M is created with

M(X) ∈ {0, 1, k, 2k, 4k, 6k}  (k a positive constant),

where, if X is correctly recognized, M(X) = 0 when the degree of learning of X is high, M(X) = 1 when it is medium, and M(X) = k when it is low; and, if X is misrecognized, M(X) = 2k when the degree of learning of X is low, M(X) = 4k when it is medium, and M(X) = 6k when it is high. Once the function M has been created, learning is performed; in this invention the conventional back-propagation learning rule is improved and used as follows.
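Written out as code, the function M is a small lookup on the recognition result and the graded degree of learning. The cut points lo and hi that separate "low", "medium", and "high" degrees are not specified in the patent, so the values below are placeholders; the step labels in the comments refer to the flowchart of Fig. 2 described later.

    def learning_multiplier(correct, degree, k, lo=0.4, hi=0.8):
        # Function M for one training item X; k is a positive constant.
        # lo and hi are placeholder thresholds splitting the degree of
        # learning into "low", "medium", and "high" (not given in the text).
        if correct:
            if degree >= hi:
                return 0.0    # well learned: no further learning (SP5)
            if degree >= lo:
                return 1.0    # learn normally (SP7)
            return k          # learn k times over (SP9)
        if degree < lo:
            return 2 * k      # misrecognized, low degree: 2k-fold (SP11)
        if degree < hi:
            return 4 * k      # medium degree: 4k-fold (SP13)
        return 6 * k          # confidently wrong: 6k-fold (SP15)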

ΔW(n+1) = αΔW(n) − εΣ_j M(j) dE/dW_j

(where dE/dW_j is the amount of change in the connection weights for learning data j). The meaning of this formula is as follows.
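Under the usual reading of the symbols, with α the momentum coefficient, ε the learning rate, and ΔW(n) the previous weight change, one epoch of the improved rule might be implemented as in the sketch below. The per-item gradients dE/dW_j are assumed to come from an ordinary back-propagation backward pass; the names are hypothetical.

    def weight_update(delta_w_prev, per_item_grads, multipliers, alpha, epsilon):
        # One step of the improved rule:
        #   dW(n+1) = alpha * dW(n) - epsilon * sum_j M(j) * dE/dW_j
        # delta_w_prev:   previous weight change dW(n) (momentum term)
        # per_item_grads: dE/dW_j, one gradient array per training item j
        # multipliers:    M(j), one value per training item j
        weighted = sum(m * g for m, g in zip(multipliers, per_item_grads))
        return alpha * delta_w_prev - epsilon * weighted

Setting every M(j) to the same value reduces this to the conventional momentum rule, consistent with the remark below that uniformly recognized data recover the standard formula.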

That is, in the conventional learning formula ΔW(n+1) = αΔW(n) − ε dE/dW, learning a piece of data j L times more often than the other data, as this invention intends, would require increasing the number of copies of data j in the training set to L. Such a dynamic change in the number of training items in mid-learning is difficult, however, and the computation time increases considerably. If instead the dE/dW_j portion of dE/dW is multiplied by L, the additional learning of data j is still achieved and the computation time is almost unchanged from conventional learning; hence, by changing dE/dW to Σ_j M(j) dE/dW_j, data j can be learned M(j) times over. If the learning data are being recognized uniformly, the values of M(j) are all equal and the formula becomes identical to the conventional learning formula.
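Because the epoch gradient is a sum over items, scaling the term for item j by L is exactly equivalent to replicating item j L times in the training set, as the passage above argues. A minimal numerical check with made-up gradient vectors (not part of the patent):

    import numpy as np

    g0 = np.array([0.20, -0.10])     # dE/dW for item 0
    g1 = np.array([0.05, 0.30])      # dE/dW for item 1
    L = 3                            # learn item 1 three times over

    replicated = g0 + g1 + g1 + g1   # training set with item 1 repeated
    scaled = g0 + L * g1             # scaling its gradient term instead
    assert np.allclose(replicated, scaled)

Note that the equivalence is exact for gradients summed over an epoch; with per-item (online) updates, replication and scaling would generally differ.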

The above operation is now described with reference to the flowchart of Fig. 2. When step SP3 determines that the learning data X has been recognized correctly, step SP4 determines whether the degree of learning of X is high; if it is, X is not learned further in step SP5. If it is not high, step SP6 determines whether it is medium; if it is, X is learned normally in step SP7. If it is not medium, step SP8 determines whether it is low, and if the degree of learning of X is low, X is learned k times over in step SP9.

When step SP3 determines instead that X has been misrecognized, step SP10 determines whether the degree of learning of X is low; if it is, X is learned 2k times over in step SP11. If it is not low, step SP12 determines whether it is medium; if it is, X is learned 4k times over in step SP13. If it is not medium, step SP14 determines whether it is high, and if it is, X is learned 6k times over in step SP15.

With the learning described above, whereas the conventional method converges only gradually from the mid-learning state 6 toward the target state 5 shown in Fig. 3, the learning method of this invention performs much learning on the portion 7 that requires it and keeps the portion 8 that requires little learning to a minimum, so that convergence to state 5 is faster.

[Effects of the Invention]

As described above, according to this invention, complex learning data can be learned quickly by applying an improved version of the back-propagation learning formula to a multilayer perceptron neural network.

[Brief Explanation of the Drawings]

Fig. 1 shows the configuration of a three-layer network as an example of a multilayer perceptron neural network. Fig. 2 is a flowchart of the learning method of this invention. Fig. 3 shows the convergence of learning when a neural network is trained on complex data.

In the figures, 1 is a connection, 2 an input layer, 3 an intermediate layer, 4 an output layer, 5 the state to be learned, 6 a state in mid-learning, 7 a portion that requires much learning, and 8 a portion that requires little learning.

Patent applicant: 株式会社エイ・ティ・アール (ATR)

Claims (1)

[Claims]

1. A learning method for a neural network in which learning is performed using a back-propagation learning rule in a multilayer perceptron neural network, characterized in that, before the connections are changed, the recognition result of each piece of learning data and the value of the unit that fired most strongly in the output for that learning data are examined, and their product is taken as the degree of learning of each piece of data; that for each piece of learning data X a function M is created with

M(X) ∈ {0, 1, k, 2k, 4k, 6k}  (k a positive constant),

where

M(X) = 0 if X is correctly recognized and its degree of learning is high,
M(X) = 1 if it is medium,
M(X) = k if it is low,
M(X) = 2k if X is misrecognized and its degree of learning is low,
M(X) = 4k if it is medium,
M(X) = 6k if it is high;

and that the connection weights are changed according to the value computed by

ΔW(n+1) = αΔW(n) − εΣ_j M(j) dE/dW_j

(where dE/dW_j is the amount of change in the connection weights for learning data j).
JP1153252A 1989-06-15 1989-06-15 Learning method in neural network Expired - Fee Related JPH0778787B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP1153252A JPH0778787B2 (en) 1989-06-15 1989-06-15 Learning method in neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP1153252A JPH0778787B2 (en) 1989-06-15 1989-06-15 Learning method in neural network

Publications (2)

Publication Number Publication Date
JPH0318967A true JPH0318967A (en) 1991-01-28
JPH0778787B2 JPH0778787B2 (en) 1995-08-23

Family

ID=15558395

Family Applications (1)

Application Number Title Priority Date Filing Date
JP1153252A Expired - Fee Related JPH0778787B2 (en) 1989-06-15 1989-06-15 Learning method in neural network

Country Status (1)

Country Link
JP (1) JPH0778787B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009150497A (en) * 2007-12-21 2009-07-09 Nippon Pop Rivets & Fasteners Ltd Welding stud

Also Published As

Publication number Publication date
JPH0778787B2 (en) 1995-08-23

Similar Documents

Publication Publication Date Title
US5606646A (en) Recurrent neural network-based fuzzy logic system
Obaidat et al. A multilayer neural network system for computer access security
CN106372720B (en) Method and system for realizing deep pulse neural network
EP0327817B1 (en) Associative pattern conversion system and adaptation method thereof
US20070043452A1 (en) Artificial neural network
Yam et al. A new method in determining initial weights of feedforward neural networks for training enhancement
US5592589A (en) Tree-like perceptron and a method for parallel distributed training of such perceptrons
JPH08227408A (en) Neural network
JPH03288285A (en) Learning method for data processor
JPH0318967A (en) Learning system for neural net
US5239619A (en) Learning method for a data processing system having a multi-layer neural network
JPH04237388A (en) Neuro processor
Bozoki et al. Neural networks and orbit control in accelerators
Armitage Neural networks in measurement and control
JPH02100757A (en) Parallel neural network learning system
CA2898216C (en) Methods and systems for implementing deep spiking neural networks
JPH0318966A (en) Learning system for neural net
JPH05128285A (en) Neuro-processor
JPH04291662A (en) Operation element constituted of hierarchical network
JPH09138786A (en) Learning device for neural network
JPH02309447A (en) Method for learning mutual connection type neural network
KR910018921A (en) Learning method of data processing device
JPH04186402A (en) Learning system in fuzzy inference
JPH0325562A (en) Information processor using neural net
JPH03246747A (en) Learning processing system for network comprising data processor

Legal Events

Date Code Title Description
LAPS Cancellation because of no payment of annual fees