JPH0318964A - Learning system for neural net - Google Patents

Learning system for neural net

Info

Publication number
JPH0318964A
JPH0318964A JP1153249A JP15324989A
Authority
JP
Japan
Prior art keywords
learning
connection
layer
weight
zero
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP1153249A
Other languages
Japanese (ja)
Other versions
JPH0769894B2 (en)
Inventor
Kazuki Jo
和貴 城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
A T R SHICHIYOUKAKU KIKO KENKYUSHO KK
Original Assignee
A T R SHICHIYOUKAKU KIKO KENKYUSHO KK
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by A T R SHICHIYOUKAKU KIKO KENKYUSHO KK filed Critical A T R SHICHIYOUKAKU KIKO KENKYUSHO KK
Priority to JP1153249A priority Critical patent/JPH0769894B2/en
Publication of JPH0318964A publication Critical patent/JPH0318964A/en
Publication of JPH0769894B2 publication Critical patent/JPH0769894B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

PURPOSE: To escape a learning-equilibrium state quickly and to promote fast convergence of learning by forcibly changing the connection weights when a learning-equilibrium state occurs. CONSTITUTION: An input layer 2, an intermediate layer 3, and an output layer 4 are connected to one another via connections 1. The data to be learned is input to layer 2, and learning is carried out by changing the weights of the connections 1 according to the back-propagation learning rule. As a result, the weights of the connections 1 leading to a specific unit of layer 4 may become extremely small. In such a learning-equilibrium state, where the training data assigned to that output unit becomes difficult to learn, the weights of the parts of the connections 1 connected to layer 4 are forcibly cleared to zero, and the back-propagation learning rule is then immediately resumed. Although the recognition rate of the neural net temporarily deteriorates, the state before the zero-clear is recovered within several tens of learning iterations, and the learning-equilibrium state is escaped immediately.

Description

[Detailed Description of the Invention]

[Field of Industrial Application] This invention relates to a learning method for neural networks, and in particular to a learning method effective for recognizing the complex data required in pattern recognition.

[Prior Art and Problems to Be Solved by the Invention] Conventionally, when a multilayer perceptron type neural network is trained with the back-propagation learning rule, learning becomes more difficult as the data to be learned, such as pattern-recognition data, becomes more complex. Escaping from a situation in which learning settles into an equilibrium while specific training data remains unlearned (a learning-equilibrium state) has therefore been a problem.

Therefore, the main object of this invention is to provide a learning method for neural networks that, when a neural network being trained with the back-propagation learning rule falls into a learning-equilibrium state, can quickly escape that equilibrium state and promote fast convergence of learning.

[Means for Solving the Problems] In the learning method for a neural network according to this invention, when a multilayer perceptron type neural network is being trained with the back-propagation learning rule and specific training data becomes difficult to learn because of a learning-equilibrium state, the weights of all connections connected to the output layer are cleared to zero, after which learning is continued.
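The claimed procedure lends itself to a short sketch. The Python below is a minimal illustration under our own assumptions: the patent specifies neither a plateau-detection criterion nor a data layout, so `is_plateaued`, its `window` and `tol` parameters, and the list-of-lists weight representation are all hypothetical.

```python
def zero_clear_output_weights(w_out):
    """Forcibly set every hidden-to-output connection weight to zero,
    as the claim describes; input-to-hidden weights are left untouched."""
    return [[0.0 for _ in row] for row in w_out]

def is_plateaued(errors, window=50, tol=1e-4):
    """Heuristic plateau test (our assumption; the patent gives no concrete
    criterion): the training error has barely moved over `window` epochs."""
    return len(errors) >= window and abs(errors[-1] - errors[-window]) < tol
```

For example, `zero_clear_output_weights([[0.5, 0.5]])` returns `[[0.0, 0.0]]` while leaving the caller's input-to-hidden weights alone.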

[Operation] When training under a conventional learning method falls into a learning-equilibrium state, the neural network learning method according to this invention forcibly changes the connection weights, thereby escaping the learning-equilibrium state quickly and promoting fast convergence of learning.

[Embodiments of the Invention]

Fig. 1 shows a multilayer perceptron type neural network to which this invention is applied. Referring to Fig. 1, the neural network comprises an input layer 2, an intermediate layer 3, and an output layer 4, connected to one another by connections 1. The data to be learned is input to the input layer 2, and learning is performed by changing the weights of the connections 1 according to the back-propagation learning rule.
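The three-layer network just described can be sketched as a forward pass. This is a minimal illustration, not code from the patent: the sigmoid activation and the weight layout (`weights[j][i]` connecting input `i` to unit `j`) are our assumptions.

```python
import math

def sigmoid(x):
    """Standard logistic activation, common in back-propagation networks."""
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    """One fully connected layer: weights[j][i] links input i to unit j."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x, w_in_hid, b_hid, w_hid_out, b_out):
    """Input layer 2 -> intermediate layer 3 -> output layer 4 (Fig. 1)."""
    hidden = layer_forward(x, w_in_hid, b_hid)
    return layer_forward(hidden, w_hid_out, b_out)
```

With all weights and biases zero, every unit outputs sigmoid(0) = 0.5, which is the neutral state the zero-clear operation later exploits.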

Fig. 2 shows the network after training with the back-propagation learning rule has fallen into a learning-equilibrium state in which specific training data cannot be recognized. Fig. 3 shows the state, according to one embodiment of this invention, in which the connections leading to the output layer of the equilibrated neural network have been cleared to zero. Fig. 4 shows the network after it has escaped the learning-equilibrium state.

Suppose that, as the weights of the connections 1 are changed by the back-propagation learning rule, the weights of the connections leading to a specific unit of the output layer 4 become very small, as shown in Fig. 2, so that the training data this output unit is responsible for becomes difficult to learn. The example shown in Fig. 2 is a learning-equilibrium state in which the weights of all connections leading to the specific unit 5 of the output layer 4 are too small, leaving unit 5 effectively unable to fire.
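A collapsed output unit of the kind shown in Fig. 2 can be detected mechanically. The sketch below is our illustration only: the patent gives no numeric threshold, so `eps` is an assumed value, and bias terms are ignored for simplicity.

```python
def dead_output_units(w_hid_out, eps=1e-3):
    """Return indices of output units whose incoming connection weights
    are all near zero, so the unit can barely fire regardless of its
    input (the Fig. 2 situation). The threshold eps is an assumption."""
    return [j for j, row in enumerate(w_hid_out)
            if all(abs(w) < eps for w in row)]
```

In practice such a check would run periodically during training; a nonempty result signals the learning-equilibrium state the invention targets.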

Here, the thickness of a connection line 6 represents the weight of that connection; a thicker connection indicates a stronger relationship with the unit it connects to.

In this state, the weights of the parts of the connections 6 that lead to the output layer 4 are forcibly set to zero (zero-clear). Fig. 3 shows the result: the weights of all connections 7 leading to the output layer 4 are zero. At this point, the connections 1 carry nonzero weights only between the input layer 2 and the intermediate layer 3, and thus act to extract the basic features of the input data and distribute them to the units of the intermediate layer 3.

Immediately after the zero-clear described above, back-propagation learning is resumed. The recognition rate of the neural network temporarily drops, but within several tens of learning iterations it recovers to its level before the zero-clear, and the network then escapes the learning-equilibrium state.
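The overall control flow of the embodiment — train, detect the plateau, zero-clear once, and immediately continue training — can be sketched as follows. This illustrates the flow only: `step`, `plateaued`, and `zero_clear` are caller-supplied stand-ins for a real back-propagation implementation, which the patent does not spell out.

```python
def train_with_zero_clear(step, net, epochs, plateaued, zero_clear):
    """Run `epochs` training passes; on the first detected plateau,
    zero-clear the output-layer weights and keep training immediately."""
    errors, cleared = [], False
    for _ in range(epochs):
        net, err = step(net)          # one back-propagation pass
        errors.append(err)
        if not cleared and plateaued(errors):
            net = zero_clear(net)     # forcibly perturb the weights
            cleared = True            # clear once per plateau
    return net, errors, cleared
```

Because the zero-clear happens inside the loop, training resumes on the very next epoch, matching the "immediately continue" step of the embodiment.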

Then, as with the connections 8 shown in Fig. 4, appropriate connection weights are assigned to all units of the output layer 4.

With the conventional method, once the learning-equilibrium state shown in Fig. 2 was reached, hundreds or thousands of learning iterations had to be repeated to escape it; with this invention, the learning-equilibrium state can be escaped within several tens of learning iterations.

[Effects of the Invention] As described above, according to this invention, when a multilayer perceptron type neural network trained with the back-propagation learning rule falls into a learning-equilibrium state and specific training data can no longer be learned, the learning-equilibrium state can be escaped by the simple operation of clearing the weights of the connections leading to the output layer to zero, enabling fast convergence in the training of the neural network.

[Brief Explanation of the Drawings]

Fig. 1 shows a three-layer neural network as an example of a multilayer perceptron type neural network. Fig. 2 shows the neural network of Fig. 1 after training with the back-propagation learning rule has fallen into a learning-equilibrium state in which specific training data cannot be recognized. Fig. 3 shows the state, according to one embodiment of this invention, in which the connections leading to the output layer of the equilibrated neural network have been cleared to zero. Fig. 4 shows the network after it has escaped the learning-equilibrium state. In the figures, 1 is a connection, 2 is an input layer, 3 is an intermediate layer, 4 is an output layer, 5 is the output layer of a neural network in the learning-equilibrium state, 6 denotes the connection weights of a neural network in the learning-equilibrium state, 7 denotes the connection weights immediately after this invention is applied, and 8 denotes the connection weights of a neural network that has escaped the learning-equilibrium state.

Claims (1)

[Claims] 1. A learning method for a neural network, characterized in that, in a learning method for a multilayer perceptron type neural network in which learning is performed using the back-propagation learning rule, when specific training data becomes difficult to learn because of a learning-equilibrium state, the weights of all connections connected to the output layer are cleared to zero, after which learning is continued.
JP1153249A 1989-06-15 1989-06-15 Learning method in neural network Expired - Fee Related JPH0769894B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP1153249A JPH0769894B2 (en) 1989-06-15 1989-06-15 Learning method in neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP1153249A JPH0769894B2 (en) 1989-06-15 1989-06-15 Learning method in neural network

Publications (2)

Publication Number Publication Date
JPH0318964A true JPH0318964A (en) 1991-01-28
JPH0769894B2 JPH0769894B2 (en) 1995-07-31

Family

ID=15558328

Family Applications (1)

Application Number Title Priority Date Filing Date
JP1153249A Expired - Fee Related JPH0769894B2 (en) 1989-06-15 1989-06-15 Learning method in neural network

Country Status (1)

Country Link
JP (1) JPH0769894B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5313559A (en) * 1991-02-15 1994-05-17 Hitachi, Ltd. Method of and system for controlling learning in neural network

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01271888A (en) * 1988-04-22 1989-10-30 Nec Corp Learning method for pattern recognition

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01271888A (en) * 1988-04-22 1989-10-30 Nec Corp Learning method for pattern recognition

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5313559A (en) * 1991-02-15 1994-05-17 Hitachi, Ltd. Method of and system for controlling learning in neural network

Also Published As

Publication number Publication date
JPH0769894B2 (en) 1995-07-31

Similar Documents

Publication Publication Date Title
US9489622B2 (en) Event-driven universal neural network circuit
Mukherjee et al. Application of artificial neural networks in structural design expert systems
KR102333730B1 (en) Apparatus And Method For Generating Learning Model
Anderson et al. Reinforcement learning with modular neural networks for control
Ying et al. Artificial neural network prediction for seismic response of bridge structure
JPH0318964A (en) Learning system for neural net
Aguilar et al. Recognition algorithm using evolutionary learning on the random neural networks
JPH0281160A (en) Signal processor
Wang et al. A new approach for byzantine agreement
Wan et al. Introducing cost-sensitive neural networks
JP3262340B2 (en) Information processing device
Oubbati et al. Meta-learning for adaptive identification of non-linear dynamical systems
JPH0149985B2 (en)
JPH05204885A (en) Device and method for accelerating learning of neural network
JPH04501327A (en) pattern transfer neural network
JPH0394364A (en) Neural network
JPH04215170A (en) Information processor
JPH04186402A (en) Learning system in fuzzy inference
JPH0318967A (en) Learning system for neural net
Noda A Model of Recurrent Networks that Learn the Finite Automaton from Given Input-Output Sequences
De Wilde et al. Backpropagation
JPH08166934A (en) Function generator using neural network
KR100241359B1 (en) Adaptive learning rate and limited error signal
Ishibuchi et al. Learning of neural networks from linguistic knowledge and numerical data
JPH0682354B2 (en) Learning method in neural network

Legal Events

Date Code Title Description
LAPS Cancellation because of no payment of annual fees