US5764860A - Learning method for multi-level neural network - Google Patents

Learning method for multi-level neural network

Info

Publication number
US5764860A
Authority
US
United States
Prior art keywords
signal
output unit
neural network
level
binary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US08/923,333
Inventor
Youtaro Yatsuzuka
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
KDDI Corp
Original Assignee
Kokusai Denshin Denwa KK
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP07716895A external-priority patent/JP3367264B2/en
Priority claimed from JP22701495A external-priority patent/JP3367295B2/en
Application filed by Kokusai Denshin Denwa KK filed Critical Kokusai Denshin Denwa KK
Priority to US08/923,333 priority Critical patent/US5764860A/en
Application granted granted Critical
Publication of US5764860A publication Critical patent/US5764860A/en
Assigned to KDD CORPORATION reassignment KDD CORPORATION CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: KOKUSAI DENSHIN DENWA CO., LTD.
Assigned to DDI CORPORATION reassignment DDI CORPORATION MERGER (SEE DOCUMENT FOR DETAILS). Assignors: KDD CORPORATION
Assigned to KDDI CORPORATION reassignment KDDI CORPORATION CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: DDI CORPORATION
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Definitions

  • the present invention relates to learning methods for multi-level neural networks which are applied to large scale logic circuits, pattern recognizers, associative memories, code converters and image processors, providing a desired output signal stably and quickly by a simple training process with controls of signals in polarity and amplitude for updating weighting factors and a detection of tenacious states trapped in local minima.
  • One type of neural network is the multi-layered neural network, in which a back-propagation algorithm has been widely used as a supervised learning method.
  • the difference signal is derived by subtracting an output unit signal of an output layer replying to a training input signal fed in an input layer from a prepared teacher signal T (teacher signal element: T1, T2 . . . , T M ) through a subtractor, and the weighting factors between adjacent layers are updated by using the output unit signal and the difference signal to minimize the power of the difference signal.
  • the learning Process using the whole training input signal is repeatedly conducted for updating weighting factors to achieve convergence.
  • a minimum power of the difference signal can provide complete convergence of the neural network in binary space, resulting in coincidence between the multi-level output unit signal for the training input signal and the multi-level teacher signal.
  • a new teacher signal T' having values of 0.1 and 0.9 has been utilized to improve the convergence speed instead of a teacher signal T having values of 0 and 1, as described in "Parallel Distributed Processing" by D. E. Rumelhart, MIT Press, and the learning process starts to learn after setting initial conditions under the control of the mode controller 9.
  • the difference signal is obtained by subtracting the output unit signal of a multi-layered neural network 1 for the training input signal fed through a terminal 2 from the teacher signal T' through a subtractor 4, and is fed into a weighting factor controller 5 to update the weighting factors by a back-propagation algorithm and set them again in the multi-layered neural network 1.
  • A binary output unit signal 23 is obtained from the output unit signal through a binary threshold means 6, and the binary teacher signal is also obtained from the teacher signal T' through a binary threshold means 7.
  • By detecting the coincidence between the teacher signal T and the binary output unit signal 23 through a coincidence detector 8, it is judged whether the multi-layered neural network 1 has achieved convergence or not. These procedures are repeated in the training process until convergence is achieved.
  • the teacher signal T' having values of 0.1 and 0.9 can reduce the necessary number of training cycles for achieving convergence in comparison with the teacher signal T having 0 and 1 in a conventional learning method. This is because the updating speed of the weighting factors becomes slower due to small gradients for input values very close to 0 and 1 in a sigmoidal transfer function.
  • a speed up in training can therefore be achieved by a larger gradient obtained by setting the teacher signal T' to 0.1 and 0.9.
  • However, this method still cannot provide a sufficient improvement in convergence, because the multi-layered neural network is frequently captured in a very tenacious state trapped in local minima.
  • If the training process is terminated while the neural network 1 is trapped in a state having local minima and the weighting factors are set in the neural network 1 in an execution process, it can provide neither a completely correct binary output unit signal for the training input signal nor a large number of correct binary output signals for a test input signal, resulting in a low generalization ability.
  • the multi-level neural networks using the present invention can resolve these problems by achieving a stable convergence 10 to 100 times faster than that of the conventional learning method and also achieve a very high generalization ability.
  • the multi-level neural network for a new supervised learning process comprises an updating of weighting factors by using an error signal which has either an opposite polarity to that of a difference signal obtained by subtracting the output unit signal from the corresponding teacher signal and an amplitude smaller in proportion to that of the difference, when the absolute value of the difference signal is equal to or smaller than a given threshold for a correct multi-level output unit signal, or has the same polarity as that of the difference signal and an amplitude equal to or smaller than that of the difference signal, when the absolute value is larger than the given threshold for a correct multi-level output unit signal, and also by using an error signal which has the same polarity as that of the difference signal and an amplitude equal to or smaller than that of the difference signal for an erroneous multi-level output unit signal.
  • a multi-level neural network with another learning method comprises at least a detection of tenacious states trapped in local minima by using at least a minimum absolute value of the difference signals among erroneous binary output unit signals, and an updating of weighting factors by using the error signal adjusted in polarity and amplitude.
  • a multi-level neural network with yet another learning method comprises at least an updating of weighting factors by using directly the difference signal as the error signal without adjusting the polarity and the amplitude after the neural network for the learning process has converged once.
  • a tenacious state trapped in local minima can be easily detected when a minimum value of the difference signals among wrong binary output unit signals exceeds a given threshold. Accordingly, by flexibly adjusting the error signal in amplitude, and further, adjusting the detection threshold, quicker convergence can be achieved within a very small number of training cycles. Once the convergence is achieved, an error signal which is the same as the difference signal may be used for updating the weighting factors. When the minimum margin to provide the correct binary output unit signal exceeds a given threshold, the training process can be terminated. This procedure gives a very high generalization ability without over-learning and also with an extremely small dependency on the number of hidden units.
  • the updating of weighting factors by using the error signal adjusted in polarity and amplitude, the detection of the tenacious state trapped in local minima and the detection of the minimum margin of the correct binary output unit signals can reduce the necessary numbers of hidden units and layers, providing significantly quick convergence without dependency on the initial conditions of weighting factors and a very high generalization ability.
  • the invented learning method can easily provide the desired multi-level output unit signal under conditions with very quick and reliable convergence in multi-level space and without dependency on the initial conditions of weighting factors due to the evasion of tenacious states trapped in local minima and easy release from them.
  • termination of the learning process by judging with the minimum margin of the correct binary output unit signal can provide a high generalization ability for a large scale multi-level neural network, and can also provide a flexible design freedom in size.
  • Real time logic systems with a learning capability or multi-level logic systems having a large number of inputs can be easily implemented by using these multi-level neural networks.
  • Pattern recognizers, image processors and data converters can also be designed flexibly in cases where a desired output signal cannot be easily obtained by using neural networks with conventional learning methods due to slow and unstable convergence.
  • FIG. 1 is a functional diagram of a multi-level neural network with a conventional learning method
  • FIG. 2 is a functional diagram of the first embodiment of a multi-level neural network for a learning process, according to this invention
  • FIG. 3 is a functional diagram of a second embodiment of a parallel binary neural network for a learning process, according to this invention.
  • FIG. 4 is a functional diagram of the second embodiment of the parallel binary neural network for an execution process, according to this invention.
  • FIG. 5 is a functional diagram of a third embodiment of the multi-level neural network for a learning process, according to this invention.
  • Embodiments of the multi-level neural network according to the present invention are illustrated for structures having only a binary teacher signal in the detailed descriptions given hereafter.
  • a binary neural network for the learning process is provided in FIG. 2, which comprises a multi-layered neural network 1 fed a training input signal through a terminal 2 and providing an output unit signal from an output unit, an error signal generator 10 for weighting factor updating (WFUESG) which generates an error signal by using a binary teacher signal T and an error discrimination signal fed from an error pattern detector 11, binary threshold means 6 which outputs a binary output unit signal 23 converted from an output unit signal, a weighting factor controller 5 which updates the weighting factors in the multi-layered neural network 1 by using the error signals, the error pattern detector 11 which outputs the error discrimination signals indicating whether or not an error in the binary output unit signal 23 on each output unit exists by comparing the binary unit output signal 23 with the corresponding binary teacher signal on each output unit, and a mode controller 12 which sets initial conditions both in the multi-layered neural network 1 and the weighting factor controller 5, and controls the start and termination of the learning process.
  • the multi-layered neural network 1 learns with the training input signal fed through the terminal 2 and the teacher signal fed through the terminal 3.
  • the error signal is generated according to the error discrimination signal from the error pattern detector 11 and the binary teacher signal, and then is fed to the weighting factor controller 5.
  • an error signal with an amplitude equal to or smaller than that of the difference signal derived from subtracting the output unit signal from the corresponding teacher signal and the same polarity as that of the difference is generated.
  • an error signal with an amplitude smaller in proportion to the distance between the output unit signal and the binary teacher signal and the opposite polarity to that of the difference signal is generated, when the absolute value of the difference signal is equal to or smaller than a given threshold, and further an error signal with an amplitude equal to or smaller than that of the difference signal and the same polarity as that of the difference is generated, when the absolute value of the difference signal is larger than the given threshold.
  • m is the m-th location of the output unit (1 ≦ m ≦ M)
  • E m is the error signal on the m-th output unit
  • T m is the binary teacher signal on the m-th output unit (0 or 1)
  • Y m is the output unit signal on the m-th output unit
  • dm is a threshold on the m-th output unit (≧ 0)
  • D m1 is a constant defined for the m-th output unit (D m1 ≧ dm ≧ 0)
  • D m2 is a constant defined for the m-th output unit (D m2 ≧ 0)
  • D m3 is also a constant defined for the m-th output unit (≧ 0).
  • Equations (1), (2) and (3) provide the error signal on the m-th output unit from the binary teacher signal T m and the output unit signal Y m for the respective cases.
  • the learning procedures are repeated to update the weighting factors by using error signals in the weighting factor controller 5 in order to minimize the power of the difference signal by the back-propagation algorithm, for example.
  • the error signal has an opposite polarity to the difference signal and an amplitude smaller in proportion to the distance between the binary teacher signal and the output unit signal, as shown in Eq.(1). Outside of this range, the error signal has an amplitude reduced by D m2 from that of the difference signal, as shown in Eq.(2).
  • On the m-th output unit where an error exists in the binary output unit signal, the error signal has an amplitude reduced by D m3 from that of the difference signal, as shown in Eq.(3), where D m3 has a different value from D m1 and D m2 and can be 0.
  • the method for obtaining the error signal is quite different from the conventional method.
  • the weighting factors are updated in the opposite direction from that in the difference signal only when the output unit signal becomes very close to the binary teacher signal, and in other cases are updated in the same direction as that of the difference signal according to the error signal with an amplitude equal to or smaller than that of the difference signal.
  • This learning method can provide quick convergence with an optimum state within a small number of training cycles due to the evasion of tenacious states trapped in local minima and the easy release from them.
  • a convergence speed 10 times to 100 times faster than that of the conventional methods is reliably obtained, and the numbers of hidden units or hidden layers can be also drastically reduced to converge the binary neural network.
  • the training process can still be continued with D m1 and D m2 having smaller values than those before convergence, and finally with values of zero to achieve a high generalization ability, maintaining complete convergence in binary space.
  • the invented learning method can be also widely applied to neural networks handling continuous signals by preparing ranges of the difference signal in which the output unit signal is considered to converge to the teacher signal.
  • a parallel binary neural network 13 for the learning process is provided in which a main multi-layered neural network 1 and a sub multi-layered neural network 16 connected in parallel are used.
  • a binary error involved in a binary output unit signal 29 on an output unit of the main neural network 1 is compensated with a binary output unit signal 30 on an output unit of the sub neural network 16 by using a binary modulo add processing.
  • the parallel binary neural network 13 in FIG. 3 operates in a training mode in which weighting factors are updated with a training input signal, and in an execution mode in which the weighting factors learned in the training mode are set and the binary output unit signal is output for an input signal.
  • the parallel binary neural network 13 in the learning process with the learning method of the present invention is illustrated in FIG. 3. It comprises the main multi-layered neural network 1, which is trained by using the training input signal fed through a terminal 2 and a main binary teacher signal T, the sub multi-layered neural network 16 which is sequentially trained with the training input signal with a compensatory binary teacher signal Tc derived from a compensatory binary teacher signal generator 17, an error signal generator 10 (WFUESG) in which an error signal is generated by using the output unit signal from the main neural network 1, the corresponding main binary teacher signal T and an error discrimination signal from an error pattern detector 14, an error signal generator 18 (WFUESG) in which an error signal is generated by using the output unit signal of the sub neural network 16, the compensatory binary teacher signal Tc and an error discrimination signal from a coincidence detector 21, and weighting factor controllers 5, 19 in which the weighting factors in the main and sub neural networks 1, 16 are updated by, for example, a back-propagation algorithm using the error signals from the WFUESGs 10, 18.
  • the error signals are obtained by Eqs. (1), (2) and (3) as in the first embodiment, and then the weighting factors are updated in the weighting factor controllers 5 and 19.
  • the weighting factors are updated in the opposite direction from the polarity of the difference signal. Tenacious states trapped in local minima can thus almost always be evaded, achieving quick convergence within a small number of training cycles; under certain conditions, however, such a state may rarely persist.
  • the amplitude of the error signal having an opposite polarity to that of the difference signal is instantaneously enlarged to release the state in the weighting factor controller 5, 19.
  • the learning process for the main neural network 1 can be terminated through the weighting factor controller 5 by a method in which the number of binary errors between the binary output unit signal 29 and the main binary teacher signal is compared with a threshold in the error pattern detector 14, a method in which a minimum distance between a decision level of the binary threshold means 6 and the output unit signals among the erroneous binary output signals is compared with a given threshold for detection of the tenacious state trapped in local minima, or a method in which the number of the training cycles is compared with a given threshold.
  • the compensatory binary teacher signal Tc is generated by using XOR modulo adder and is memorized in the compensatory binary teacher signal generator 17.
  • the controller 15 After finishing the procedures for the main neural network 1, the controller 15 then starts learning the training input signal by the sub neural network 16.
  • the error signal is generated by using the output unit signal of the sub neural network 16, the compensatory binary teacher signal Tc and the error discrimination signal from the coincidence detector 21, according to Eqs. (1), (2) and (3), and is fed to the weighting factor controller 19 to update the weighting factors for minimizing the power of the difference signal.
  • the learning process is continued until the full coincidence between the compensatory binary teacher signals and the binary output unit signals 30 is detected in the coincidence detector 21.
  • it is not necessary for the main binary neural network 1 to converge, if the sub binary neural network 16 completely learns the errors in the binary output unit signal of the main binary neural network 1 as the compensatory binary teacher signal Tc.
  • the rate of correct binary unit output signals of the main neural network 1 can easily attain a value higher than 95% within an extremely small number of training cycles, and therefore the compensatory binary teacher signal Tc having a small number of clusters also makes the sub neural network 16 converge easily and reliably within a few training cycles.
  • the parallel binary neural network 13 can drastically reduce the number of training cycles so as to achieve complete convergence in binary space without dependency on the initial conditions of the weighting factors, and also provide a very high generalization ability for the input signal in comparison to the conventional neural networks.
  • the parallel binary neural network 13 in the execution process is illustrated in FIG. 4. It comprises the main neural network 1, the sub neural network 16 in parallel, binary threshold means 6, 20 to output binary output unit signal for the output unit signal, the binary modulo adder 22 to add in modulo the binary unit output signals 29, 30 derived from the binary threshold means 6 and 20, and a mode controller 15 to control the main and sub neural networks 1, 16 for setting the weighting factors obtained in the learning process and performing the execution process.
  • the binary modulo adder 22 provides a binary output signal 0 of the parallel binary neural network 13 through a terminal 23.
  • the main binary neural network 1 does not necessarily provide a desired binary output unit signal 29 completely free of binary errors.
  • the sub binary neural network 16 can provide the same binary output unit signals 30 as the binary errors, and these binary errors are corrected completely through the binary modulo adder 22. Therefore, the parallel binary neural network 13 provides the desired binary output signal 0 through the terminal 23 for the training input signal, resulting in complete convergence in binary space for the parallel binary neural network 13.
  • the compensatory binary teacher signal having a very small number of clusters can in principle provide a high generalization ability superior to that of the main binary neural network 1 and the avoidance of the unnecessary over-learning by the error pattern detector 14 can also expand the generalization ability for the input signal.
  • a large scale binary neural network can be realized which equivalently provides very quick convergence without dependency on the initial conditions of weighting factors and also a high generalization ability.
  • the non-necessity of complete convergence in binary space of the main binary neural network 1 results in a huge reduction of the number of hidden units and layers and also a reduction of processing accuracy and further amount of calculations in the main and sub neural networks due to the error compensation technique using the binary modulo adder process.
  • A parallel multi-level neural network can be constructed in the same way by using multi-level threshold means instead of the binary threshold means 6, 20, a multi-level modulo adder instead of the binary modulo adder 22, and a compensatory multi-level teacher signal generator having a multi-level modulo subtractor instead of the compensatory binary teacher signal generator 17 (a small sketch of this multi-level modulo arithmetic is given after this list).
  • A binary neural network for the learning process according to the third embodiment of the present invention is illustrated in FIG. 5.
  • the network comprises a multi-layered neural network 1 in which a training input signal is fed through a terminal 2, an error signal generator 10 (WFUESG) which generates an error signal by using an output unit signal, a binary teacher signal T and an error discrimination signal fed from an error pattern detector 24, binary threshold means 6, a weighting factor controller 5, the error pattern detector 24 which outputs error discrimination signals indicating the existence of errors in binary output unit signals 23 by comparing them to the corresponding binary teacher signals T, an erroneous output unit minimum error detector 25 which detects a minimum absolute value of the differences between the output unit signals and the decision level of the binary threshold means 6 (a minimum error) among the erroneous binary output unit signals, a correct output unit minimum margin detector 26 which detects a minimum absolute value of the differences between the output unit signals and the decision level of the binary threshold means 6 (a minimum margin) among the correct binary output unit signals, a learning state detector 27 which judges the learning state from the detected minimum error and minimum margin, and a mode controller 12 which controls the learning process.
  • the multi-layered neural network 1 learns with the training input signal fed through a terminal 2 and the teacher signal fed through a terminal 3.
  • the error signal is generated by using the output unit signal, the teacher signal and the error discrimination signal from the error pattern detector 24, and is fed to the weighting factor controller 5.
  • On the output unit providing an erroneous binary output unit signal, an error signal which has the same polarity as a difference signal derived by subtracting the output unit signal from the corresponding binary teacher signal and an amplitude reduced by D m3 from that of the difference signal is generated.
  • On the output unit providing a correct binary output unit signal, an error signal having the opposite polarity and an amplitude smaller in proportion to the distance from the binary teacher signal and smaller than Dm1 is generated, when the absolute value of the difference signal is equal to or smaller than a given threshold dm, and an error signal having the same polarity as that of the difference signal and an amplitude reduced by Dm2 from that of the difference signal is generated, when the absolute value of the difference signal is larger than the given threshold dm.
  • When the minimum error detected by the erroneous output unit minimum error detector 25 exceeds a given threshold, the learning state detector 27 outputs a local minima capture signal through a terminal 28.
  • the learning process can be performed with reduced values of dm, D m1 , D m2 and D m3 .
  • the learning process in the multi-layered neural network 1 can be terminated according to control by the mode controller 12.
  • the multi-layered neural network having a large number of hidden units can provide the highest generalization ability without over-learning in a small number of training cycles, maintaining complete convergence in binary space.
  • the present invention related to the learning method can also be widely applied to neural networks handling continuous signals by preparing ranges in which the output unit signal is considered to correctly converge to the teacher signal. Though detailed descriptions were given only for the multi-layered neural network, this invention can be applied to other neural networks with teacher signals.
  • D m1 and D m2 are set to 0 for the learning process, and terminating the learning process when the minimum margin of the correct binary output unit signals exceeds the given threshold gives an extremely high generalization ability for test input signals without over-learning, over a wide range of numbers of hidden units. Accordingly, a binary neural network with smaller hardware complexity and fewer calculations, owing to the reduced numbers of hidden units and layers, can realize reliable convergence 10 to 100 times faster than that of the conventional learning method and an extremely high generalization performance.
  • In a multi-level neural network or a parallel multi-level neural network using this learning method, quick and reliable convergence without dependence on the initial conditions of the weighting factors is achieved with weighting factors having a small number of bits and with smaller numbers of hidden units and layers, in comparison to conventional learning methods.
  • a multi-level neural network according to the present invention related to the learning method can be easily and flexibly designed to realize large scale multi-level logic circuits which are difficult for conventional methods, can also be widely applied to neural networks for artificial intelligence systems, information retrieval systems, pattern recognition, data conversion, data compression and multi-level image processing in which complete and quick convergence and a very high generalization ability are necessary, and is furthermore applicable to communication systems.
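As referenced in the bullet on the multi-level extension above, the following is a minimal sketch of the multi-level modulo arithmetic; the function names, the NumPy form and the choice of L = 4 levels are illustrative assumptions, not taken from the patent.

```python
import numpy as np

L = 4  # number of signal levels; L = 2 reduces to the binary (XOR) case

def compensatory_teacher_multilevel(T_main, out_main):
    """Compensatory multi-level teacher signal via a modulo-L subtractor."""
    return np.mod(T_main - out_main, L)

def combine_multilevel(out_main, out_sub):
    """Execution-mode correction via a modulo-L adder."""
    return np.mod(out_main + out_sub, L)

# If the sub network reproduces the compensatory teacher signal exactly,
# combine_multilevel(out_main, compensatory_teacher_multilevel(T, out_main))
# returns T for every output unit.
```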

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

A learning method supervised by a binary teacher signal for a binary neural network comprises at least an error signal generator 10 for weighting factor updating, which generates an error signal for weighting factor updating having an opposite polarity to that of a difference signal between an output unit signal of the binary neural network and the binary teacher signal on an output unit whereat a binary output unit signal coincides with the binary teacher signal, and an amplitude which decreases as the distance from the binary teacher signal increases, when an absolute value of the difference signal is smaller than a threshold, generates an error signal which has the same polarity as that of the difference signal and an amplitude smaller than that of the difference signal, when the absolute value of the difference signal is larger than the threshold, or generates an error signal which has an amplitude equal to or smaller than that of the difference signal on an output unit providing a wrong binary output unit signal which is different from the binary teacher signal. Updating the weighting factors by the error signal which is optimally generated according to discrimination between the correct binary output unit signal and the wrong one can provide a binary neural network which converges very quickly and reliably to obtain a desired binary output and also realizes a high generalization ability.

Description

This application is a continuation of application Ser. No. 08/618,419 filed Mar. 8, 1996, now abandoned.
BACKGROUND OF THE INVENTION
The present invention relates to learning methods for multi-level neural networks which are applied to large scale logic circuits, pattern recognizers, associative memories, code converters and image processors, providing a desired output signal stably and quickly by a simple training process with controls of signals in polarity and amplitude for updating weighting factors and a detection of tenacious states trapped in local minima.
One type of neural network is the multi-layered neural network, in which a back-propagation algorithm has been widely used as a supervised learning method. In a learning process, the difference signal is derived by subtracting an output unit signal of an output layer, produced in response to a training input signal fed into an input layer, from a prepared teacher signal T (teacher signal elements: T1, T2 . . . , TM) through a subtractor, and the weighting factors between adjacent layers are updated by using the output unit signal and the difference signal to minimize the power of the difference signal. The learning process using the whole training input signal is repeatedly conducted, updating the weighting factors, to achieve convergence. In the learning process, a minimum power of the difference signal can provide complete convergence of the neural network in binary space, resulting in coincidence between the multi-level output unit signal for the training input signal and the multi-level teacher signal.
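For readers who want this conventional update in concrete form, a minimal single-output-layer sketch is given below; the function name, the NumPy form and the learning rate are illustrative assumptions, not the patent's notation.

```python
import numpy as np

def conventional_output_update(T, Y, H, W_out, eta=0.1):
    """One back-propagation step for the output layer of a sigmoidal network (sketch).

    T     : teacher signal, shape (M,)
    Y     : output unit signals (sigmoid outputs), shape (M,)
    H     : hidden unit signals feeding the output layer, shape (K,)
    W_out : hidden-to-output weighting factors, shape (M, K)
    """
    diff = T - Y                              # difference signal from the subtractor
    power = np.sum(diff ** 2)                 # the quantity the training minimizes
    delta = diff * Y * (1.0 - Y)              # output-layer delta for sigmoidal units
    W_out = W_out + eta * np.outer(delta, H)  # update of the weighting factors
    return W_out, power
```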
However, once a state minimizing the power of the difference signal is captured into local minima, global minima cannot be obtained even if the number of training cycles is increased, or a significantly large number of training cycles is necessary to obtain the global minima. A dependency on the initial conditions of the weighting factors is also one of the obstacles to achieving quick convergence.
On the other hand, in a multi-layered neural network with a conventional learning method, as shown in FIG. 1, a new teacher signal T' having values of 0.1 and 0.9 has been utilized to improve the convergence speed instead of a teacher signal T having values of 0 and 1, as described in "Parallel Distributed Processing" by D. E. Rumelhart, MIT Press, and the learning process starts after setting initial conditions under the control of the mode controller 9.
The difference signal is obtained by subtracting the output unit signal of a multi-layered neural network 1 for the training input signal fed through a terminal 2 from the teacher signal T' through a subtractor 4, and is fed into a weighting factor controller 5 to update the weighting factors by a back-propagation algorithm and set them again in the multi-layered neural network 1.
A binary output unit signal 23 is obtained from the output unit signal through a binary threshold means 6, and the binary teacher signal is also obtained from the teacher signal T' through a binary threshold means 7. By detecting the coincidence between the teacher signal T and the binary output unit signal 23 through a coincidence detector 8, it is judged whether the multi-layered neural network 1 has achieved convergence or not. These procedures are repeated in the training process until convergence is achieved.
It has already been noted that the teacher signal T' having values of 0.1 and 0.9 can reduce the necessary number of training cycles for achieving convergence in comparison with the teacher signal T having 0 and 1 in a conventional learning method. This is because the updating speed of the weighting factors becomes slower due to small gradients for input values very close to 0 and 1 in a sigmoidal transfer function.
A speed-up in training can therefore be achieved by the larger gradient obtained by setting the teacher signal T' to 0.1 and 0.9. However, this method still cannot provide a sufficient improvement in convergence, because the multi-layered neural network is frequently captured in a very tenacious state trapped in local minima.
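The gradient argument behind the 0.1/0.9 teacher signal can be seen in a few lines; this tiny illustration is not taken from the patent.

```python
import numpy as np

# The sigmoidal factor y * (1 - y) in the output-layer delta nearly vanishes
# when the output approaches 0 or 1, so targets of exactly 0 and 1 slow the
# weight updates; targets of 0.1 and 0.9 keep the gradient larger.
y = np.array([0.01, 0.10, 0.50, 0.90, 0.99])
print(y * (1.0 - y))   # approximately [0.0099, 0.09, 0.25, 0.09, 0.0099]
```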
Conventional learning methods have the defect that it is very difficult to quickly achieve complete convergence in binary space, due to the easy capture of a tenacious state trapped in local minima and the difficulty of slipping away from it, as aforementioned. A design method for achieving reliable convergence has particularly not been established yet for either a three-layered or a multi-layered neural network having a large number of input nodes. Only heuristic approaches are conducted, by adjusting initial conditions of weighting factors and/or changing the number of hidden units. Easy methods for detecting tenacious states trapped in local minima have also not been found.
If the training process is terminated while the neural network 1 is trapped in a state having local minima and the weighting factors are set in the neural network 1 in an execution process, it can provide neither a completely correct binary output unit signal for the training input signal nor a large number of correct binary output signals for a test input signal, resulting in a low generalization ability.
As aforementioned, conventional learning methods for multi-layered neural network with a multi-level teacher signal have disadvantages that either a large number of training cycles for updating the weighting factors is required to achieve a convergent state which provides a desired multi-level output unit signal for the training input signal or a desired multi-level output unit signal cannot be obtained even by continuing the training process due to trapping into tenacious states with local minima, and the convergence speed also severely depends on the initial conditions of weighting factors.
Particularly, an easy design method of multi-level neural network with a large number of input units, a small number of output units and a teacher pattern having a distributed representation, which has a difficulty of complete convergence in binary space, has not been established. Though the use of a large number of hidden units can make the neural network converge, the generalization ability degrades due to a surplus number of hidden units and an over-learning. These approaches inevitably require a huge amount of computations and a very large hardware complexity.
These shortcomings make it very difficult to realize multi-level neural networks which can provide complete convergence in multi-level space within a short time and provide a high generalization ability.
SUMMARY OF THE INVENTION
It is an object, therefore, of the present invention to overcome the disadvantages and limitations of a prior learning method for multi-level neural networks by providing a new and improved learning method.
It is also an object of the present invention to provide a learning method for a multi-level neural network which detects the capture of tenacious states trapped in local minima, and has rapid and reliable convergence within a very small number of training cycles, and also has a high generalization ability.
The multi-level neural networks using the present invention can resolve these problems by achieving a stable convergence 10 to 100 times faster than that of the conventional learning method and also achieve a very high generalization ability.
The multi-level neural network for a new supervised learning process comprises an updating of weighting factors by using an error signal which has either an opposite polarity to that of a difference signal obtained by subtracting the output unit signal from the corresponding teacher signal and an amplitude smaller in proportion to that of the difference, when the absolute value of the difference signal is equal to or smaller than a given threshold for a correct multi-level output unit signal, or has the same polarity as that of the difference signal and an amplitude equal to or smaller than that of the difference signal, when the absolute value is larger than the given threshold for a correct multi-level output unit signal, and also by using an error signal which has the same polarity as that of the difference signal and an amplitude equal to or smaller than that of the difference signal for an erroneous multi-level output unit signal.
A multi-level neural network with another learning method comprises at least a detection of tenacious states trapped in local minima by using at least a minimum absolute value of the difference signals among erroneous binary output unit signals, and an updating of weighting factors by using the error signal adjusted in polarity and amplitude.
A multi-level neural network with yet another learning method comprises at least an updating of weighting factors by using directly the difference signal as the error signal without adjusting the polarity and the amplitude after the neural network for the learning process has converged once.
By using these methods, a tenacious state trapped in local minima can be evaded and also can be easily released even if the state is tightly captured, resulting in quick convergence with a state having global minima within a small number of training cycles.
A tenacious state trapped in local minima can be easily detected when a minimum value of the difference signals among wrong binary output unit signals exceeds a given threshold. Accordingly, by flexibly adjusting the error signal in amplitude, and further, adjusting the detection threshold, quicker convergence can be achieved within a very small number of training cycles. Once the convergence is achieved, an error signal which is the same as the difference signal may be used for updating the weighting factors. When the minimum margin to provide the correct binary output unit signal exceeds a given threshold, the training process can be terminated. This procedure gives a very high generalization ability without over-learning and also with an extremely small dependency on the number of hidden units.
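A minimal sketch of these two checks might look as follows; the function name, the 0/1 signal range, the 0.5 decision level and the threshold values are assumptions for illustration, not values from the patent.

```python
import numpy as np

def learning_state(T, Y, binary_out, decision_level=0.5,
                   capture_threshold=0.25, margin_threshold=0.3):
    """Detect a tenacious state and check the termination condition (sketch)."""
    wrong = (binary_out != T)
    # Capture detection: the smallest |T - Y| among the wrong output units
    # exceeds a threshold, i.e. no wrong unit is anywhere near flipping.
    trapped = bool(wrong.any()) and np.abs(T - Y)[wrong].min() > capture_threshold
    # Termination check: every unit is correct and the smallest margin between
    # an output unit signal and the decision level exceeds a threshold.
    margin = np.abs(Y - decision_level)
    may_terminate = (not wrong.any()) and margin.min() > margin_threshold
    return trapped, may_terminate
```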
The updating of weighting factors by using the error signal adjusted in polarity and amplitude, the detection of the tenacious state trapped in local minima and the detection of the minimum margin of the correct binary output unit signals can reduce the necessary numbers of hidden units and layers, providing significantly quick convergence without dependency on the initial conditions of weighting factors and a very high generalization ability.
As aforementioned, the invented learning method can easily provide the desired multi-level output unit signal under conditions with very quick and reliable convergence in multi-level space and without dependency on the initial conditions of weighting factors due to the evasion of tenacious states trapped in local minima and easy release from them. After achieving complete convergence in multi-level space once, termination of the learning process by judging with the minimum margin of the correct binary output unit signal can provide a high generalization ability for a large scale multi-level neural network, and can also provide a flexible design freedom in size. Real time logic systems with a learning capability or multi-level logic systems having a large number of inputs can be easily implemented by using these multi-level neural networks. Pattern recognizers, image processors and data converters can also be designed flexibly in cases where a desired output signal cannot be easily obtained by using neural networks with conventional learning methods due to slow and unstable convergence.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other objects, features, and attendant advantages of the present invention will be appreciated as the same become better understood by means of the following descriptions and accompanying drawings wherein:
FIG. 1 is a functional diagram of a multi-level neural network with a conventional learning method,
FIG. 2 is a functional diagram of the first embodiment of a multi-level neural network for a learning process, according to this invention,
FIG. 3 is a functional diagram of a second embodiment of a parallel binary neural network for a learning process, according to this invention,
FIG. 4 is a functional diagram of the second embodiment of the parallel binary neural network for an execution process, according to this invention,
FIG. 5 is a functional diagram of a third embodiment of the multi-level neural network for a learning process, according to this invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Embodiments of the multi-level neural network according to the present invention are illustrated for structures having only a binary teacher signal in the detailed descriptions given hereafter.
Descriptions of the multi-level neural network are also separately given for the learning and execution processes, respectively.
(Embodiment 1)
In a first embodiment of the present invention related to the learning method, a binary neural network for the learning process is provided in FIG. 2, which comprises a multi-layered neural network 1 fed a training input signal through a terminal 2 and providing an output unit signal from an output unit, an error signal generator 10 for weighting factor updating (WFUESG) which generates an error signal by using a binary teacher signal T and an error discrimination signal fed from an error pattern detector 11, binary threshold means 6 which outputs a binary output unit signal 23 converted from an output unit signal, a weighting factor controller 5 which updates the weighting factors in the multi-layered neural network 1 by using the error signals, the error pattern detector 11 which outputs the error discrimination signals indicating whether or not an error in the binary output unit signal 23 on each output unit exists by comparing the binary unit output signal 23 with the corresponding binary teacher signal on each output unit, and a mode controller 12 which sets initial conditions both in the multi-layered neural network 1 and the weighting factor controller 5, and controls the start and termination of the learning process.
Only procedures in the learning process for the binary neural network are described hereafter. The multi-layered neural network 1 learns with the training input signal fed through the terminal 2 and the teacher signal fed through the terminal 3. In the error signal generator 10 (WFUESG), the error signal is generated according to the error discrimination signal from the error pattern detector 11 and the binary teacher signal, and then is fed to the weighting factor controller 5.
On the output unit providing an erroneous binary output unit signal 23, an error signal with an amplitude equal to or smaller than that of the difference signal derived from subtracting the output unit signal from the corresponding teacher signal and the same polarity as that of the difference is generated. On the other hand, on the output unit providing a correct binary output unit signal 23, an error signal with an amplitude smaller in proportion to the distance between the output unit signal and the binary teacher signal and the opposite polarity to that of the difference signal is generated, when the absolute value of the difference signal is equal to or smaller than a given threshold, and further an error signal with an amplitude equal to or smaller than that of the difference signal and the same polarity as that of the difference is generated, when the absolute value of the difference signal is larger than the given threshold.
Error signals are given by the following equations.
If the binary output unit signal on the m-th output unit is correct due to coincidence with the binary teacher signal, and
when |Tm - Ym| ≦ dm,
then
Em = Tm - Ym - Dm1 * sgn(Tm - Ym),    (1)
and
when |Tm - Ym| > dm,
then
Em = Tm - Ym - Dm2 * sgn(Tm - Ym),    (2)
If the binary output unit signal on the m-th output unit is wrong due to difference from the binary teacher signal, then
Em = Tm - Ym - Dm3 * sgn(Tm - Ym),    (3)
where |z| is the absolute value of z,
sgn(x) = 1 for x ≧ 0,
       = -1 for x < 0.    (4)
m is the m-th location of the output unit (1≦m≦M), Em is the error signal on the m-th output unit, Tm is the binary teacher signal on the mth output unit (0 or 1), Ym is the output unit signal on the m-th output unit, dm is a threshold on the m-th output unit (≧0), Dm1 is a constant defined for the m-th output unit (Dm1 ≧dm≧0), Dm2 is a constant defined for the m-th output unit (Dm2 ≧0), and Dm3 is also a constant defined for the m-th output unit (≧0). The difference signal on the m-th output unit is given by Tm -Ym. Equations (1), (2) and (3) provide an error signal on the m-th output unit with the binary teacher signal Tm and the output unit signal Ym, respectively.
In neural networks handling continuous signals, the same equations can be applied when dm is defined as the convergence region.
The learning procedures are repeated to update the weighting factors by using error signals in the weighting factor controller 5 in order to minimize the power of the difference signal by the back-propagation algorithm, for example. If Ym is very close to Tm and the absolute value of the difference signal (Tm -Ym) exists in the range of dm on the m-th output unit where no error exists in the binary output unit signal due to coincidence of the binary output unit signal and the binary teacher signal, the error signal has an opposite polarity to the difference signal and an amplitude smaller in proportion to the distance between the binary teacher signal and the output unit signal, as shown in Eq.(1). Outside of this range, the error signal has an amplitude reduced by Dm2 from that of the difference signal, as shown in Eq.(2).
On the m-th output unit where an error exists in the binary output unit signal, the error signal has an amplitude reduced by Dm3 from that of the difference signal, as shown in Eq.(3), where Dm3 has a different value from Dm1 and Dm2 and can be 0. As described above, the method for obtaining the error signal is quite different from the conventional method. The weighting factors are updated in the opposite direction from that of the difference signal only when the output unit signal becomes very close to the binary teacher signal, and in other cases are updated in the same direction as that of the difference signal according to the error signal with an amplitude equal to or smaller than that of the difference signal.
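For concreteness, a minimal NumPy sketch of Eqs. (1)-(3) follows; the function name, the vectorized form and the 0/1 signal range are illustrative assumptions rather than the patent's reference implementation.

```python
import numpy as np

def error_signal(T, Y, binary_out, dm, D1, D2, D3):
    """Per-output-unit error signal for weighting factor updating, Eqs. (1)-(3).

    T          : binary teacher signal (0 or 1) per output unit
    Y          : output unit signal per output unit
    binary_out : thresholded output unit signal (0 or 1) per output unit
    dm         : threshold on |T - Y| for correct output units (dm >= 0)
    D1, D2, D3 : amplitude constants with D1 >= dm >= 0, D2 >= 0, D3 >= 0
    """
    diff = T - Y                                   # difference signal Tm - Ym
    sgn = np.where(diff >= 0, 1.0, -1.0)
    correct = (binary_out == T)

    e1 = diff - D1 * sgn   # Eq. (1): correct unit, |Tm - Ym| <= dm -> opposite polarity
    e2 = diff - D2 * sgn   # Eq. (2): correct unit, |Tm - Ym| > dm  -> reduced amplitude
    e3 = diff - D3 * sgn   # Eq. (3): wrong unit                    -> reduced amplitude

    return np.where(correct, np.where(np.abs(diff) <= dm, e1, e2), e3)
```

In a back-propagation update such as the one sketched in the background section, this error signal would simply take the place of the difference signal Tm - Ym in the output-layer delta.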
This learning method can provide quick convergence with an optimum state within a small number of training cycles due to the evasion of tenacious states trapped in local minima and the easy release from them. A convergence speed 10 times to 100 times faster than that of the conventional methods is reliably obtained, and the numbers of hidden units or hidden layers can be also drastically reduced to converge the binary neural network.
If the network is captured in a tenacious state trapped in local minima, providing no improvement in error performance, a slight enlargement of the amplitude of the error signal, obtained by temporarily setting Dm1 and Dm3 to larger values, can effectively release and evade the tenacious state.
After detecting the achievement of convergence providing full coincidence between the binary teacher signals and the binary output unit signals at the error pattern detector 11, the training process can still be continued with Dm1 and Dm2 having smaller values than those before convergence, and finally with values of zero to achieve a high generalization ability, maintaining complete convergence in binary space.
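As a hypothetical usage of the sketch above (with T, Y and binary_out already computed for the current training pattern), the constants might be scheduled over the phases described here; every numeric value below is a placeholder, not a value taken from the patent.

```python
# Normal training cycle: mild amplitude reduction, small reversal region.
E = error_signal(T, Y, binary_out, dm=0.1, D1=0.2, D2=0.05, D3=0.0)

# Release from a tenacious state: temporarily enlarge D1 and D3.
E = error_signal(T, Y, binary_out, dm=0.1, D1=0.5, D2=0.05, D3=0.3)

# After convergence: shrink dm, D1 and D2 toward zero, so E equals Tm - Ym.
E = error_signal(T, Y, binary_out, dm=0.0, D1=0.0, D2=0.0, D3=0.0)
```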
The invented learning method can be also widely applied to neural networks handling continuous signals by preparing ranges of the difference signal in which the output unit signal is considered to converge to the teacher signal.
(Embodiment 2)
In a second embodiment of the present invention related to the learning method, a parallel binary neural network 13 for the learning process is provided in which a main multi-layered neural network 1 and a sub multi-layered neural network 16 connected in parallel are used.
In an execution process, a binary error involved in a binary output unit signal 29 on an output unit of the main neural network 1 is compensated with a binary output unit signal 30 on an output unit of the sub neural network 16 by using a binary modulo add processing. The parallel binary neural network 13 in FIG. 3 operates in a training mode in which weighting factors are updated with a training input signal, and in an execution mode in which the weighting factors learned in the training mode are set and the binary output unit signal is output for an input signal.
Only parallel binary multi-layered neural network 13 is described in detail here. The parallel binary neural network 13 in the learning process with the learning method of the present invention is illustrated in FIG. 3. It comprises the main multi-layered neural network 1, which is trained by using the training input signal fed through a terminal 2 and a main binary teacher signal T, the sub multi-layered neural network 16 which is sequentially trained with the training input signal with a compensatory binary teacher signal Tc derived from a compensatory binary teacher signal generator 17, an error signal generator 10 (WFUESG) in which an error signal is generated by using the output unit signal from the main neural network 1, the corresponding main binary teacher signal T and an error discrimination signal from an error pattern detector 14, an error signal generator 18 (WFUESG) in which an error signal is generated by using the output unit signal of the sub neural network 16, the compensatory binary teacher signal Tc and an error discrimination signal from a coincidence detector 21, weighting factor controllers 5, 19 in which the weighting factors in the main and sub neural networks 1, 16 are updated by, for example, a back-propagation algorithm using the error signals from the WFUESGs 10, 18 to minimize the powers of the difference signals obtained by subtracting the output unit signals from the main binary teacher signal T and the compensatory binary teacher signal Tc, respectively, these factors then being reset in the main and sub neural networks, respectively, binary threshold means 6, 20 in which the output unit signal is converted to the binary output unit signal 29, 30, the error pattern detector 14 in which the error discrimination signal is output by detecting the binary errors between the main binary teacher signal T and the binary output unit signal, the compensatory binary teacher signal generator 17 in which the compensatory binary teacher signal Tc is generated by adding in binary modulo the main binary teacher signal and the binary output unit signal 29, the coincidence detector 21 in which coincidence is detected between the binary output unit signal 30 of the sub neural network 16 and the compensatory binary teacher signal Tc, and the mode controller 15 in which initial conditions and start and termination controls for the learning process are set first for the main neural network 1, and then after generating the compensatory binary teacher signal Tc, initial conditions and start and termination controls for the learning process are also set for the sub neural network 16, and finally the training mode is switched to the execution mode after finishing both the learning processes.
In the WFUESGs 10, 18, the error signals are obtained by Eqs. (1), (2) and (3) as in the first embodiment, and then the weighting factors are updated in the weighting factor controllers 5 and 19. As in the first embodiment, when the output unit signal is very close to the binary teacher signal, the weighting factors are updated in the opposite direction from the polarity of the difference signal. Tenacious states trapped in local minima can thus almost always be evaded, achieving quick convergence within a small number of training cycles; under certain conditions, however, such a state may rarely persist. If the number of binary errors detected in the error pattern detector 14 does not decrease as the number of training cycles increases, due to capture in a rigidly stable state trapped in local minima, the amplitude of the error signal having an opposite polarity to that of the difference signal is instantaneously enlarged in the weighting factor controllers 5, 19 to release the state.
The learning process for the main neural network 1 can be terminated through the weighting factor controller 5 by a method in which the number of binary errors between the binary output unit signal 29 and the main binary teacher signal is compared with a threshold in the error pattern detector 14, a method in which a minimum distance between a decision level of the binary threshold means 6 and the output unit signals among the erroneous binary output signals is compared with a given threshold for detection of the tenacious state trapped in local minima, or a method in which the number of the training cycles is compared with a given threshold.
If all the binary output unit signals 29 of the main neural network 1 coincide completely with the main binary teacher signals, providing convergence in binary space, it is not necessary for the sub neural network 16 to be trained, and its weighting factors may be set to zero.
On the other hand, if the binary output unit signals 29 still contain binary errors in comparison with the main binary teacher signals when the learning process is terminated, the compensatory binary teacher signal Tc is generated by using an XOR (modulo-2) adder and is memorized in the compensatory binary teacher signal generator 17.
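As a minimal sketch of this step (the function name is hypothetical), the compensatory teacher signal is simply the element-wise XOR of the main binary teacher signal and the binary output unit signal 29:

```python
import numpy as np

def compensatory_teacher(main_teacher, main_binary_output):
    """Modulo-2 (XOR) addition of the main binary teacher signal T and the binary
    output unit signal 29; a one marks an output unit whose error the sub network
    must learn to reproduce."""
    return np.bitwise_xor(main_teacher.astype(int), main_binary_output.astype(int))
```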
After finishing the procedures for the main neural network 1, the mode controller 15 then starts the learning process of the sub neural network 16 with the training input signal.
In the error signal generator 18 (WFUESG), the error signal is generated by using the output unit signal of the sub neural network 16, the compensatory binary teacher signal Tc and the error discrimination signal from the coincidence detector 21, according to Eqs. (1), (2) and (3), and is fed to the weighting factor controller 19 to update the weighting factors for minimizing the power of the difference signal.
The learning process is continued until the full coincidence between the compensatory binary teacher signals and the binary output unit signals 30 is detected in the coincidence detector 21.
In the parallel binary neural network 13 related to the learning method according to the present invention, as aforementioned, it is not necessary for the main binary neural network 1 to converge, provided that the sub binary neural network 16 completely learns the errors in the binary output unit signal 29 of the main binary neural network 1 as the compensatory binary teacher signal Tc.
The rate of correct binary output unit signals of the main neural network 1 can easily attain a value higher than 95% within an extremely small number of training cycles, and therefore the compensatory binary teacher signal Tc, having a small number of clusters, also makes the sub neural network 16 converge easily and reliably within a small number of training cycles.
The parallel binary neural network 13 can drastically reduce the number of training cycles so as to achieve complete convergence in binary space without dependency on the initial conditions of the weighting factors, and also provide a very high generalization ability for the input signal in comparison to the conventional neural networks.
The parallel binary neural network 13 in the execution process is illustrated in FIG. 4. It comprises the main neural network 1 and the sub neural network 16 in parallel, binary threshold means 6, 20 which convert the output unit signals into the binary output unit signals 29, 30, the binary modulo adder 22 which adds modulo 2 the binary output unit signals 29, 30 derived from the binary threshold means 6 and 20, and a mode controller 15 which controls the main and sub neural networks 1, 16 by setting the weighting factors obtained in the learning process and performing the execution process.
The binary modulo adder 22 provides a binary output signal 0 of the parallel binary neural network 13 through a terminal 23.
The main binary neural network 1 does not necessarily provide a desired binary output unit signal 29 that is completely free of binary errors. The sub binary neural network 16, however, provides binary output unit signals 30 identical to that binary error pattern, and these binary errors are corrected completely through the binary modulo adder 22. Therefore, the parallel binary neural network 13 provides the desired binary output signal 0 through the terminal 23 for the training input signal, resulting in complete convergence in binary space for the parallel binary neural network 13.
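The following short example (illustrative values only) shows why the execution-time correction works: since Tc = T ⊕ Om, adding the two binary outputs modulo 2 cancels the errors of the main network and recovers the teacher signal.

```python
import numpy as np

T  = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # main binary teacher signal (example values)
Om = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # binary output 29 of the main network (two errors)
Tc = np.bitwise_xor(T, Om)                 # compensatory teacher learned by the sub network
Os = Tc                                    # binary output 30 of the sub network after full convergence
O  = np.bitwise_xor(Om, Os)                # binary modulo adder 22
assert np.array_equal(O, T)                # the binary errors of the main network are cancelled
```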
The compensatory binary teacher signal, having a very small number of clusters, can in principle provide a generalization ability superior to that of the main binary neural network 1, and the avoidance of unnecessary over-learning through the error pattern detector 14 also expands the generalization ability for the input signal.
For these reasons, a large scale binary neural network can be realized which equivalently provides very quick convergence without dependency on the initial conditions of the weighting factors and also a high generalization ability. Because complete convergence in binary space is not required for the main binary neural network 1, the error compensation technique using the binary modulo adder greatly reduces the numbers of hidden units and layers, the required processing accuracy, and the amount of calculation in the main and sub neural networks.
For the expansion to a neural network handling more than two levels, it is only necessary to use a multi-level teacher signal having more than two levels, multi-level threshold means instead of the binary threshold means 6, 20, a multi-level modulo adder instead of the binary modulo adder 22, and a compensatory multi-level teacher signal generator having a multi-level modulo subtractor instead of the compensatory binary teacher signal generator 17.
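A minimal sketch of the multi-level case, assuming M = 4 levels and hypothetical function names: the modulo-M subtractor and adder play the roles of the XOR-based generator 17 and the binary modulo adder 22, respectively.

```python
import numpy as np

M = 4  # number of signal levels; the value is an assumption for illustration

def compensatory_multilevel_teacher(teacher_levels, main_output_levels, m=M):
    # multi-level modulo subtractor replacing the XOR-based generator 17
    return (teacher_levels - main_output_levels) % m

def multilevel_execution_output(main_output_levels, sub_output_levels, m=M):
    # multi-level modulo adder replacing the binary modulo adder 22
    return (main_output_levels + sub_output_levels) % m

T  = np.array([3, 0, 2, 1])                    # multi-level teacher signal (example values)
Om = np.array([3, 1, 0, 1])                    # multi-level output of the main network
Tc = compensatory_multilevel_teacher(T, Om)    # compensatory multi-level teacher signal
assert np.array_equal(multilevel_execution_output(Om, Tc), T)
```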
(Embodiment 3)
A binary neural network for the learning process according to the third embodiment of the present invention is illustrated in FIG. 5. The network comprises: a multi-layered neural network 1, to which a training input signal is fed through a terminal 2; an error signal generator 10 (WFUESG), which generates an error signal by using an output unit signal, a binary teacher signal T and an error discrimination signal fed from an error pattern detector 24; binary threshold means 6; a weighting factor controller 5; the error pattern detector 24, which outputs error discrimination signals indicating the existence of errors in the binary output unit signals 23 by comparing them with the corresponding binary teacher signals T; an erroneous output unit minimum error detector 25, which detects the minimum absolute value of the differences between the output unit signals and the decision level of the binary threshold means 6 (a minimum error) among the erroneous binary output unit signals; a correct output unit minimum margin detector 26, which detects the minimum absolute value of the differences between the output unit signals and the decision level of the binary threshold means 6 (a minimum margin) among the correct binary output unit signals; a learning state detector 27, in which the capture of a tenacious state trapped in local minima is detected by using the minimum error, and a converged state is also detected by monitoring both the minimum error and the minimum margin of the binary output unit signals; and a mode controller 12, which controls the setting of initial conditions of the multi-layered neural network 1, the weighting factor controller 5, the error pattern detector 24, the erroneous output unit minimum error detector 25, the correct output unit minimum margin detector 26 and the learning state detector 27, and also controls the start and termination of the learning process.
Only the procedures in the learning process for the binary neural network are described hereafter. The multi-layered neural network 1 learns with the training input signal fed through a terminal 2 and the teacher signal fed through a terminal 3. In the error signal generator 10 (WFUESG), the error signal is generated by using the output unit signal, the teacher signal and the error discrimination signal from the error pattern detector 24, and is fed to the weighting factor controller 5. On an output unit providing an erroneous binary output unit signal, an error signal is generated which has the same polarity as the difference signal obtained by subtracting the output unit signal from the corresponding binary teacher signal and an amplitude reduced by Dm3 from that of the difference signal. On the output units providing a correct binary output unit signal, when the absolute value of the difference signal is equal to or smaller than a given threshold dm, an error signal is generated which has the opposite polarity and an amplitude that decreases with the distance from the binary teacher signal and is smaller than Dm1; when the absolute value of the difference signal is larger than the given threshold dm, an error signal is generated which has the same polarity as the difference signal and an amplitude reduced by Dm2 from that of the difference signal.
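The following sketch expresses these rules for a single output unit. The function name and the linear amplitude form bounded by Dm1 are illustrative assumptions; only the polarity and amplitude behavior follows the description above.

```python
import numpy as np

def wfuesg_error(unit_out, teacher, is_correct, dm, Dm1, Dm2, Dm3):
    """Error signal for one output unit (sketch; the linear form is an assumption)."""
    d = teacher - unit_out                            # difference signal
    if not is_correct:
        # erroneous binary output unit: same polarity, amplitude reduced by Dm3
        return np.sign(d) * max(abs(d) - Dm3, 0.0)
    if dm > 0 and abs(d) <= dm:
        # correct unit close to the teacher: opposite polarity, amplitude
        # decreasing with the distance from the teacher and smaller than Dm1
        return -np.sign(d) * Dm1 * (1.0 - abs(d) / dm)
    # correct unit far from the teacher: same polarity, amplitude reduced by Dm2
    return np.sign(d) * max(abs(d) - Dm2, 0.0)
```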
When the capture of a tenacious state trapped in local minima is detected because the minimum error for the erroneous binary output unit signals exceeds a given threshold in the erroneous output unit minimum error detector 25, the learning state detector 27 outputs a local minima capture signal through a terminal 28. By enlarging Dm1 and Dm3, and further Dm2 and dm, in the WFUESG according to the local minima capture signal, the tenacious state trapped in the local minima can be easily released.
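A possible form of this release step is sketched below; the function name and the scale factor are illustrative assumptions.

```python
def released_parameters(min_error_erroneous, capture_threshold, dm, Dm1, Dm2, Dm3, scale=2.0):
    """Enlarge Dm1 and Dm3, and further Dm2 and dm, when the minimum error of the
    erroneous binary output unit signals exceeds the threshold, i.e. a tenacious
    state trapped in local minima has been captured (scale factor is an assumption)."""
    if min_error_erroneous > capture_threshold:
        return scale * dm, scale * Dm1, scale * Dm2, scale * Dm3
    return dm, Dm1, Dm2, Dm3
```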
When both the minimum error for the erroneous binary output unit signals and the minimum margin for the correct binary output unit signals exceed thresholds, respectively, then fall below them once and exceed them again, a transition between different tenacious states trapped in local minima is recognized in the learning state detector 27.
When the minimum error of the erroneous binary output unit signals approaches zero as the number of learning cycles increases, indicating evasion of the tenacious state trapped in local minima, the learning process can be performed with reduced values of dm, Dm1, Dm2 and Dm3.
After convergence providing full coincidence between the binary output unit signals 23 and the corresponding binary teacher signals has been achieved once, the learning process may be continued with further reduced values of dm, Dm1, Dm2 and Dm3, and finally with Dm1, Dm2 and Dm3 = 0, which means the direct use of the difference signal as the error signal. The learning process can also be continued while simultaneously adjusting a momentum and/or a learning factor.
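A simple reduction schedule consistent with the above might look as follows; the decay factor and function name are illustrative assumptions rather than values taken from the description.

```python
def reduced_parameters(converged_in_binary_space, dm, Dm1, Dm2, Dm3, decay=0.5):
    """Continue learning with reduced dm, Dm1, Dm2, Dm3 once the tenacious state is
    evaded, and with Dm1 = Dm2 = Dm3 = 0 (difference signal used directly as the
    error signal) after the first complete convergence in binary space."""
    if converged_in_binary_space:
        return decay * dm, 0.0, 0.0, 0.0
    return decay * dm, decay * Dm1, decay * Dm2, decay * Dm3
```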
When the minimum margin of the correct binary output unit signals exceeds a given threshold in the learning state detector 27, after achieving complete convergence in binary space, the learning process in the multi-layered neural network 1 can be terminated according to control by the mode controller 12. By this procedure, the multi-layered neural network having a large number of hidden units can provide the highest generalization ability without over-learning in a small number of training cycles, maintaining complete convergence in binary space.
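This termination condition can be sketched as below; the function and parameter names are hypothetical.

```python
import numpy as np

def terminate_after_convergence(binary_out, teacher, unit_out, decision_level, margin_threshold):
    """Terminate learning once every binary output unit signal coincides with its
    teacher signal and the minimum margin of the output unit signals from the
    decision level exceeds the given threshold."""
    if not np.array_equal(binary_out, teacher):
        return False
    min_margin = np.min(np.abs(unit_out - decision_level))
    return min_margin > margin_threshold
```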
The present invention related to the learning method can also be widely applied to neural networks handling continuous signals by preparing ranges in which the output unit signal is considered to correctly converge to the teacher signal. Though detailed descriptions were given only for the multi-layered neural network, this invention can be applied to other neural networks with teacher signals.
As aforementioned, capture of a tenacious state trapped in local minima hardly ever occurs with the invented learning method. Even if such a state is detected in the learning state detector 27 by using the minimum error of the erroneous binary output unit signals or the minimum margin of the correct binary output unit signals, adjusting the polarity and amplitude of the error signal for the correct binary output unit signals, or the amplitude of the error signal for the erroneous binary output unit signals, can easily evade the tenacious state having local minima and also release it, if captured.
After complete convergence in binary space has been achieved once, Dm1 and Dm2 = 0 are set for the learning process, and terminating the learning process when the minimum margin of the correct binary output unit signals exceeds the given threshold gives an extremely high generalization ability for test input signals without over-learning over a wide range of numbers of hidden units. Accordingly, the binary neural network, with smaller hardware complexity and fewer calculations owing to the reduced necessary numbers of hidden units and layers, can realize reliable convergence 10 to 100 times faster than with the conventional learning method together with an extremely high generalization performance.
The advantages of the present invention will now be summarized.
In a multi-level neural network or a parallel multi-level neural network according to the present invention related to a learning method, quick and reliable convergence without dependence on the initial conditions of the weighting factors, which can be represented with a small number of bits, is achieved with smaller numbers of hidden units and layers than in conventional learning methods.
A multi-level neural network according to the present invention related to the learning method can be easily and flexibly designed to realize large scale multi-level logic circuits which are difficult to achieve with conventional methods; it can also be widely applied to neural networks for artificial intelligence systems, information retrieval systems, pattern recognition, data conversion, data compression and multi-level image processing, in which complete and quick convergence and a very high generalization ability are necessary, and is furthermore applicable to communication systems.
From the foregoing it is apparent that a new and improved learning method for neural networks has been found. It should be understood of course that the embodiments disclosed are merely illustrative and are not intended to limit the scope of the invention. Reference should be made to the appended claims, therefore, for indicating the scope of the invention.

Claims (9)

What is claimed is:
1. A neural network having a learning method supervised by a teacher signal, said neural network comprising:
a neural network having input means for inputting at least one input signal, output means for outputting at least one output unit signal for controlling a device, each output unit signal being obtained from the input signals at least through weighting factors;
means for generating a first error signal for updating said weighting factors of said neural network, wherein said first error signal has an opposite polarity to that of a difference signal between an output unit signal of said neural network and said teacher signal, and an amplitude which decreases in accordance with a distance from said teacher signal, when an absolute value of said difference signal is smaller than a first threshold,
means for generating a second error signal for updating said weighting factors, wherein said second error signal has the same polarity as that of said difference signal and an amplitude smaller than that of said difference signal, when said absolute value of said difference signal is in a range between said first threshold and a second threshold,
means for generating a third error signal for updating said weighting factors, wherein said third error signal has an amplitude equal to or smaller than that of said difference signal, when said absolute value of said difference signal is larger than said second threshold, and
means for updating said weighting factors by using said first, second, and third error signals.
2. A multi-level neural network having a learning method supervised by a multi-level teacher signal, said multi-level neural network comprising:
a multi-level neural network having input means for inputting at least one input signal, output means for outputting at least one output unit signal for controlling a device, each output unit signal being obtained from the input signals at least through weighting factors;
means for generating a first error signal for updating said weighting factors of said multi-level neural network, wherein said first error signal has an opposite polarity to that of a difference signal between an output unit signal of said multi-level neural network and said multi-level teacher signal on an output unit whereat a multi-level output unit signal derived from said output unit signal through multi-level threshold means coincides with said multi-level teacher signal, and an amplitude which decreases according to a distance from said multi-level teacher signal, when an absolute value of said difference signal is smaller than a threshold,
means for generating a second error signal for updating said weighting factors, wherein said second error signal has the same polarity as that of said difference signal on said output unit and an amplitude smaller than that of said difference signal, when said absolute value of said difference signal is larger than said threshold,
means for generating a third error signal for updating said weighting factors, wherein said third error signal has an amplitude equal to or smaller than that of said difference signal on an output unit whereat said multi-level output unit signal is different from said multi-level teacher signal, and
means for updating said weighting factors by using said first, second, and third error signals.
3. A multi-level neural network according to claim 2, wherein:
said weighting factors are updated by using said error signal whereof the amplitude of said error signal is controlled by detection of states trapped in local minima.
4. A multi-level neural network according to claim 3, wherein:
said amplitude of said error signal having said opposite polarity is temporarily increased, when the number of errors in said multi-level output unit signals caused by the difference from said multi-level teacher signal cannot be made to decrease in spite of increase of the number of training cycles.
5. A multi-level neural network according to claim 3, wherein:
said amplitude of said error signal is controlled with detection of states trapped in local minima by comparison of the minimum absolute value of said difference signals among erroneous multi-level output unit signals with a threshold.
6. A multi-level neural network according to claim 5, wherein:
said weighting factors are updated by said error signal whereof the amplitude of said error signal is controlled with detection of states trapped in local minima by comparison of the minimum difference between said output unit signals and a decision level of said multi-level threshold means among correct multi-level output unit signals with a threshold.
7. A multi-level neural network according to one of claims 2 to 6, wherein said neural network further comprises a main neural network and a sub neural network coupled in parallel with said main neural network, wherein:
said main neural network is trained first by using said multi-level teacher signal,
said sub neural network is trained next by using a compensatory multi-level teacher signal consisting of errors between a multi-level output unit signal of said main neural network derived through said multi-level threshold means and said multi-level teacher signal,
updating operation of said weighting factors for said main neural network is terminated, when the minimum difference between said output unit signals and said decision level of said multi-level threshold means among erroneous multi-level output unit signals of said main neural network, exceeds a predetermined threshold.
8. A neural network according to one of claims 1 to 6, wherein:
after said neural network has converged once by said training input signal such that all multi-level output unit signals coincide with said multi-level teacher signals, updating said weighting factors by using said difference signal as said error signal is continued until conditions of termination of learning are satisfied.
9. A multi-level neural network according to one of claims 2 to 6, wherein:
after said multi-level neural network has converged once for said training input signal such that all multi-level output unit signals coincide with said multi-level teacher signals, updating of weighting factors is continued until the minimum difference between said output unit signals and said multi-level threshold of said multi-level threshold means among correct multi-level output unit signals exceeds a threshold.
US08/923,333 1995-03-09 1997-09-04 Learning method for multi-level neural network Expired - Fee Related US5764860A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/923,333 US5764860A (en) 1995-03-09 1997-09-04 Learning method for multi-level neural network

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP7-077168 1995-03-09
JP07716895A JP3367264B2 (en) 1995-03-09 1995-03-09 Neural network learning method
JP22701495A JP3367295B2 (en) 1995-08-14 1995-08-14 Multi-valued neural network learning method
JP7-227014 1995-08-14
US61841996A 1996-03-08 1996-03-08
US08/923,333 US5764860A (en) 1995-03-09 1997-09-04 Learning method for multi-level neural network

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US61841996A Continuation 1995-03-09 1996-03-08

Publications (1)

Publication Number Publication Date
US5764860A true US5764860A (en) 1998-06-09

Family

ID=27302354

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/923,333 Expired - Fee Related US5764860A (en) 1995-03-09 1997-09-04 Learning method for multi-level neural network

Country Status (1)

Country Link
US (1) US5764860A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030061184A1 (en) * 2001-09-27 2003-03-27 Csem Centre Suisse D'electronique Et De Microtechnique S.A. Method and a system for calculating the values of the neurons of a neural network
US20100067762A1 (en) * 2008-09-01 2010-03-18 Benjamin Glocker Method for combining images and magnetic resonance scanner
US8626684B2 (en) 2011-12-14 2014-01-07 International Business Machines Corporation Multi-modal neural network for universal, online learning
US20140078348A1 (en) * 2012-09-20 2014-03-20 Gyrus ACMI. Inc. (d.b.a. as Olympus Surgical Technologies America) Fixed Pattern Noise Reduction
US8738554B2 (en) 2011-09-16 2014-05-27 International Business Machines Corporation Event-driven universal neural network circuit
US8799199B2 (en) 2011-12-14 2014-08-05 International Business Machines Corporation Universal, online learning in multi-modal perception-action semilattices
US20140222727A1 (en) * 2013-02-05 2014-08-07 Cisco Technology, Inc. Enhancing the reliability of learning machines in computer networks
US8874498B2 (en) 2011-09-16 2014-10-28 International Business Machines Corporation Unsupervised, supervised, and reinforced learning via spiking computation
US9875440B1 (en) 2010-10-26 2018-01-23 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks
US10510000B1 (en) 2010-10-26 2019-12-17 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks
US11221990B2 (en) 2015-04-03 2022-01-11 The Mitre Corporation Ultra-high compression of images based on deep learning
US12124954B1 (en) 2022-11-28 2024-10-22 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02178759A (en) * 1988-12-28 1990-07-11 Sharp Corp Neural circuit network
US5095443A (en) * 1988-10-07 1992-03-10 Ricoh Company, Ltd. Plural neural network system having a successive approximation learning method
US5479538A (en) * 1991-11-26 1995-12-26 Nippon Steel Corporation Error diffusing method in image reproducing process and image processing apparatus using such a method
US5481293A (en) * 1993-03-25 1996-01-02 Kabushiki Kaisha Toshiba Image processing device for correcting an error in a multilevel image signal and for printing the image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5095443A (en) * 1988-10-07 1992-03-10 Ricoh Company, Ltd. Plural neural network system having a successive approximation learning method
JPH02178759A (en) * 1988-12-28 1990-07-11 Sharp Corp Neural circuit network
US5479538A (en) * 1991-11-26 1995-12-26 Nippon Steel Corporation Error diffusing method in image reproducing process and image processing apparatus using such a method
US5481293A (en) * 1993-03-25 1996-01-02 Kabushiki Kaisha Toshiba Image processing device for correcting an error in a multilevel image signal and for printing the image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Mori et al. "A Recurrent Neural Network for Short Term Load Forecasting," ANNPS '93. Proc. of the 2nd Intern. Forum on Appl. of Neural Network to Power Systems, p. 395-400, Jan. 31, 1993.

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030061184A1 (en) * 2001-09-27 2003-03-27 Csem Centre Suisse D'electronique Et De Microtechnique S.A. Method and a system for calculating the values of the neurons of a neural network
US7143072B2 (en) * 2001-09-27 2006-11-28 CSEM Centre Suisse d′Electronique et de Microtechnique SA Method and a system for calculating the values of the neurons of a neural network
US20100067762A1 (en) * 2008-09-01 2010-03-18 Benjamin Glocker Method for combining images and magnetic resonance scanner
US8712186B2 (en) * 2008-09-01 2014-04-29 Siemens Aktiengesellschaft Method for combining images and magnetic resonance scanner
US11514305B1 (en) 2010-10-26 2022-11-29 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks
US9875440B1 (en) 2010-10-26 2018-01-23 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks
US10510000B1 (en) 2010-10-26 2019-12-17 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks
US11164080B2 (en) 2011-09-16 2021-11-02 International Business Machines Corporation Unsupervised, supervised and reinforced learning via spiking computation
US10019669B2 (en) 2011-09-16 2018-07-10 International Business Machines Corporation Unsupervised, supervised and reinforced learning via spiking computation
US8874498B2 (en) 2011-09-16 2014-10-28 International Business Machines Corporation Unsupervised, supervised, and reinforced learning via spiking computation
US9245223B2 (en) 2011-09-16 2016-01-26 International Business Machines Corporation Unsupervised, supervised and reinforced learning via spiking computation
US9292788B2 (en) 2011-09-16 2016-03-22 International Business Machines Corporation Event-driven universal neural network circuit
US9390372B2 (en) 2011-09-16 2016-07-12 International Business Machines Corporation Unsupervised, supervised, and reinforced learning via spiking computation
US9489622B2 (en) 2011-09-16 2016-11-08 International Business Machines Corporation Event-driven universal neural network circuit
US10891544B2 (en) 2011-09-16 2021-01-12 International Business Machines Corporation Event-driven universal neural network circuit
US8738554B2 (en) 2011-09-16 2014-05-27 International Business Machines Corporation Event-driven universal neural network circuit
US10445642B2 (en) 2011-09-16 2019-10-15 International Business Machines Corporation Unsupervised, supervised and reinforced learning via spiking computation
US11481621B2 (en) 2011-09-16 2022-10-25 International Business Machines Corporation Unsupervised, supervised and reinforced learning via spiking computation
US8799199B2 (en) 2011-12-14 2014-08-05 International Business Machines Corporation Universal, online learning in multi-modal perception-action semilattices
US10282661B2 (en) 2011-12-14 2019-05-07 International Business Machines Corporation Multi-modal neural network for universal, online learning
US9697461B2 (en) 2011-12-14 2017-07-04 International Business Machines Corporation Universal, online learning in multi-modal perception-action semilattices
US9639802B2 (en) 2011-12-14 2017-05-02 International Business Machines Corporation Multi-modal neural network for universal, online learning
US11087212B2 (en) 2011-12-14 2021-08-10 International Business Machines Corporation Multi-modal neural network for universal, online learning
US8626684B2 (en) 2011-12-14 2014-01-07 International Business Machines Corporation Multi-modal neural network for universal, online learning
US9854138B2 (en) * 2012-09-20 2017-12-26 Gyrus Acmi, Inc. Fixed pattern noise reduction
US20140078348A1 (en) * 2012-09-20 2014-03-20 Gyrus ACMI. Inc. (d.b.a. as Olympus Surgical Technologies America) Fixed Pattern Noise Reduction
US20140222727A1 (en) * 2013-02-05 2014-08-07 Cisco Technology, Inc. Enhancing the reliability of learning machines in computer networks
US11221990B2 (en) 2015-04-03 2022-01-11 The Mitre Corporation Ultra-high compression of images based on deep learning
US12124954B1 (en) 2022-11-28 2024-10-22 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks

Similar Documents

Publication Publication Date Title
Murata A statistical study of on-line learning
Purushothaman et al. Quantum neural networks (QNNs): inherently fuzzy feedforward neural networks
US5764860A (en) Learning method for multi-level neural network
Charalambous Conjugate gradient algorithm for efficient training of artificial neural networks
Zhang et al. Convergence of gradient method with momentum for two-layer feedforward neural networks
US5398302A (en) Method and apparatus for adaptive learning in neural networks
US5768476A (en) Parallel multi-value neural networks
EP0384689A2 (en) A learning system for a data processing apparatus
Ng et al. Fast convergent generalized back-propagation algorithm with constant learning rate
US5870728A (en) Learning procedure for multi-level neural network
JP3367295B2 (en) Multi-valued neural network learning method
JP3368774B2 (en) neural network
JP3277648B2 (en) Parallel neural network
JP3367264B2 (en) Neural network learning method
JP3757722B2 (en) Multi-layer neural network unit optimization method and apparatus
JPH1049509A (en) Neural network learning system
JPH0991264A (en) Method and device for optimizing neural network structure
Goryn et al. Conjugate gradient learning algorithms for multilayer perceptrons
Oh et al. Adaptive fuzzy morphological filtering of impulse noise in images
JP4586420B2 (en) Multilayer neural network learning apparatus and software
International Neural Network Society (INNS), the IEEE Neural Network Council Cooperating Societies et al. Convergence of the vectors in Kohonen’s learning vector quantization
KR20190125694A (en) Learning and inference apparatus and method
Mangasarian et al. Serial and Parallel Backpropagation Convergence Via Nonmonotone Perturbed Minimization
JP4258268B2 (en) Multi-layer neural network learning method
Amin et al. Dynamically pruning output weights in an expanding multilayer perceptron neural network

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: KDD CORPORATION, JAPAN

Free format text: CHANGE OF NAME;ASSIGNOR:KOKUSAI DENSHIN DENWA CO., LTD.;REEL/FRAME:013835/0725

Effective date: 19981201

AS Assignment

Owner name: DDI CORPORATION, JAPAN

Free format text: MERGER;ASSIGNOR:KDD CORPORATION;REEL/FRAME:013957/0664

Effective date: 20001001

AS Assignment

Owner name: KDDI CORPORATION, JAPAN

Free format text: CHANGE OF NAME;ASSIGNOR:DDI CORPORATION;REEL/FRAME:014083/0804

Effective date: 20010401

FPAY Fee payment

Year of fee payment: 8

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20100609