WO1991014226A1 - Neuro-fuzzy fusion data processing system - Google Patents

Neuro-fuzzy fusion data processing system

Info

Publication number
WO1991014226A1
WO1991014226A1 PCT/JP1991/000334
Authority
WO
WIPO (PCT)
Prior art keywords
rule
consequent
unit
function
antecedent
Prior art date
Application number
PCT/JP1991/000334
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
Nobuo Watanabe
Akira Kawamura
Ryusuke Masuoka
Yuri Owada
Kazuo Asakawa
Shigenori Matsuoka
Hiroyuki Okada
Original Assignee
Fujitsu Limited
Fujifacom Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2060263A external-priority patent/JP2763371B2/ja
Priority claimed from JP2060256A external-priority patent/JP2763366B2/ja
Priority claimed from JP2060261A external-priority patent/JP2763369B2/ja
Priority claimed from JP2060258A external-priority patent/JP2544821B2/ja
Priority claimed from JP2060260A external-priority patent/JP2763368B2/ja
Priority claimed from JP2060257A external-priority patent/JP2744321B2/ja
Priority claimed from JP2060262A external-priority patent/JP2763370B2/ja
Priority claimed from JP2060259A external-priority patent/JP2763367B2/ja
Priority claimed from JP2066851A external-priority patent/JP2501932B2/ja
Priority claimed from JP2066852A external-priority patent/JP2761569B2/ja
Priority claimed from JP2197919A external-priority patent/JPH0484356A/ja
Priority to AU74509/91A priority Critical patent/AU653146B2/en
Application filed by Fujitsu Limited, Fujifacom Corporation filed Critical Fujitsu Limited
Priority to KR1019910701593A priority patent/KR950012380B1/ko
Priority to EP91905520A priority patent/EP0471857B1/en
Priority to US07/773,576 priority patent/US5875284A/en
Priority to CA002057078A priority patent/CA2057078C/en
Publication of WO1991014226A1 publication Critical patent/WO1991014226A1/ja

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/02Computing arrangements based on specific mathematical models using fuzzy logic
    • G06N7/04Physical realisation
    • G06N7/046Implementation by means of a neural network
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/043Architecture, e.g. interconnection topology based on fuzzy logic, fuzzy membership or fuzzy inference, e.g. adaptive neuro-fuzzy inference systems [ANFIS]

Definitions

  • The present invention relates to a data processing system having a hierarchical network configuration that executes adaptive data processing in an easy-to-understand execution format, and more particularly to a neuro-fuzzy fusion data processing system having a hierarchical network configuration in which fuzzy logic and neural networks are fused so that a high-precision data processing function can be constructed in a short time. Background technology
  • Recently, a new adaptive data processing device following the parallel distributed processing method with a hierarchical network configuration has been proposed, mainly in fields such as filtering.
  • In such a device, no explicit program is created; instead, when input signals (input patterns) prepared for learning are presented, the weight values of the internal connections of the hierarchical network structure are determined according to a predetermined learning algorithm so that the output signals (output patterns) from the network come to match the teacher signals (teacher patterns), and a flexible data processing function is thereby realized.
  • While a data processing device of this hierarchical network configuration has the advantage that the weight values of the internal connections can be determined mechanically once learning signals are obtained, it also has the property that the data processing contents realized by those weight values are difficult to understand. Therefore, to broaden the use of hierarchical-network data processing devices, it is necessary to provide means that overcome this difficulty in understanding the data processing contents, and, to enhance their practicality, it is also necessary to provide means that allow the desired data processing function to be constructed in a short period of time and with high accuracy.
  • a hierarchical network is composed of a kind of node called a basic unit and an internal connection having a weight corresponding to an internal state value.
  • Fig. 1 shows the basic configuration of the basic unit 1.
  • This basic unit 1 is a multiple-input, single-output system comprising a multiplication processing unit 2 that multiplies each of a plurality of inputs by the weight value of its internal connection, an accumulation processing unit 3 that adds all the multiplication results, and a function conversion processing unit 4 that applies a nonlinear conversion such as threshold processing to that sum and outputs a single final output.
  • The accumulation processing unit 3 of the i-th basic unit 1 of a layer executes equation (1) below, and the function conversion processing unit 4 performs, for example, the sigmoid threshold processing of equation (2): x_pi = Σ_h (y_ph · W_ih) − θ_i (1), y_pi = 1 / (1 + exp(−x_pi)) (2), where h ranges over the units of the preceding layer, p indexes the input pattern, W_ih is the weight of the internal connection from unit h, and θ_i is the threshold value of unit i.
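  • As a concrete illustration, the following sketch implements equations (1) and (2) for a single basic unit; the function and variable names are ours, not the patent's.

```python
import math

def basic_unit(inputs, weights, theta):
    """One basic unit 1: weighted accumulation (eq. 1) followed by
    the sigmoid threshold function (eq. 2)."""
    # multiplication processing unit 2 and accumulation processing unit 3
    x = sum(y * w for y, w in zip(inputs, weights)) - theta
    # function conversion processing unit 4
    return 1.0 / (1.0 + math.exp(-x))

print(basic_unit([0.2, 0.7], [1.5, -0.8], 0.1))  # single final output
```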
  • In the hierarchical network configuration data processing device, a large number of basic units 1 of this configuration are connected hierarchically, as shown in Fig. 2, with input units 1', which distribute and output the input signal values as they are, serving as the input layer.
  • The hierarchical network so constructed converts the input signals into the corresponding output signals in parallel, thereby exhibiting a data processing function.
  • In practice, the threshold value θ_i can be folded into the weight values W_ih by treating −θ_i as the weight of a connection from a unit whose output is always 1, so that the network handles the threshold value θ_i simply as another weight value.
  • ΔW_ji(t) = ε α_pj y_pi + ζ ΔW_ji(t−1)
  • By this equation the update amount ΔW_ji(t) of the weight value between the i-th layer and the j-th layer is calculated.
  • Here, t represents the number of learning iterations.
  • ΔW_ih(t) = ε β_pi y_ph + ζ ΔW_ih(t−1)
  • By this equation the update amount ΔW_ih(t) of the weight value between the h-th layer and the i-th layer is calculated.
  • The weight values for the next update cycle are then calculated from these update amounts:
  • W_ih(t) = W_ih(t−1) + ΔW_ih(t), and likewise W_ji(t) = W_ji(t−1) + ΔW_ji(t)
  • The update amount ΔW_hg(t) of the weight value between the g-th layer and the h-th layer is likewise determined using values propagated back from the subsequent stage together with the network output data, for example via
  • γ_ph = Σ_i β_pi W_ih(t−1)
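  • A minimal sketch of one such update step for a single weight matrix, under the reading of the equations above; the error terms delta are assumed to have been computed by back-propagating the output error, and eta and zeta stand for the learning constant and the momentum coefficient.

```python
import numpy as np

def update_weights(W, dW_prev, delta, y, eta=0.25, zeta=0.9):
    """dW(t) = eta * delta * y^T + zeta * dW(t-1);  W(t) = W(t-1) + dW(t).
    delta: error terms of the downstream layer, y: outputs of the upstream layer."""
    dW = eta * np.outer(delta, y) + zeta * dW_prev
    return W + dW, dW

W = np.zeros((2, 3)); dW0 = np.zeros_like(W)
W, dW0 = update_weights(W, dW0, np.array([0.1, -0.2]), np.array([1.0, 0.5, 0.0]))
print(W)
```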
  • By assigning the weight values learned in this way to the internal connections of the hierarchical network, the hierarchical network configuration data processing device outputs the desired teacher signal from the output layer when an input signal for learning is presented to the input layer, and outputs an appropriate output signal from the hierarchical network structure even when an unexpected input signal is input; in other words, it executes adaptive data processing.
  • Thus the hierarchical network configuration data processing device can realize a data conversion function with the desired input/output characteristics and execute adaptive data processing, and it has the great advantage that, whenever new learning signals are obtained, weight values of internal connections with higher precision can be learned mechanically. However, it has the problem that the contents of the data conversion executed inside the hierarchical network structure cannot be understood, and that the output signal cannot be predicted for inputs other than the learning signals. Consequently, when a control target is controlled by the hierarchical network configuration data processing device, the operator feels psychological anxiety even during normal operation, and it is difficult to respond when an abnormal situation arises. Furthermore, since learning signals are indispensable for constructing a hierarchical network configuration data processing device, a desired data processing function cannot be realized when a sufficient number of learning signals cannot be obtained.
  • The present invention has been made against this background. Its purpose is to provide, by fusing a hierarchical network configuration data processing device with a fuzzy model, a hierarchical network configuration data processing device whose data processing is easy to understand while its accuracy can be improved, and to provide a data processing system that makes it possible to construct high-precision data processing functions in a short time using such a device.
  • FIG. 3 is a diagram showing the principle of the present invention.
  • Reference numeral 10 denotes a fuzzy model that describes the processing model of the data processing target in fuzzy inference format: antecedent membership functions convert linguistically ambiguous expressions of the input signals into numerical values, consequent membership functions numerically express linguistically ambiguous expressions of the output signals, and rules in if-then format describe the connection relations between these membership functions.
  • While such a fuzzy model 10 has the advantage that a rough model is relatively easy to establish, determining exact membership function values and accurate rule descriptions is difficult.
  • Reference numeral 11 denotes an adaptive data processing device which executes desired data processing in accordance with a completely connected hierarchical network structure as shown in Fig. 2; in the present invention this is called a pure neuro.
  • The rule part pre-wired neuro 12 and the rule part fully connected neuro 13 are the hierarchical network configuration data processing systems characteristic of the present invention.
  • Fig. 4 is a block diagram of the basic configuration common to the rule part pre-wired neuro 12 and the rule part fully connected neuro 13. The network is a hierarchical neural network divided, from the input side, into parts according to the role each part performs.
  • an input unit 15 receives one or more input signals indicating, for example, a control state quantity of a control target
  • The antecedent membership function realization unit 16 outputs grade values representing the fitness of one or more antecedent membership functions for the one or more input signals distributed via the input unit 15.
  • The rule part 17 is generally composed of a hierarchical neural network of two or more layers and, using the grade values of the antecedent membership functions output from the antecedent membership function realization part 16, outputs the enlargement or reduction rates of one or more consequent membership functions corresponding to one or more output signals as fuzzy-rule grade values.
  • The consequent membership function realization / non-fuzzification part 18 enlarges or reduces the consequent membership functions using the enlargement or reduction rates output from the rule part 17, then performs the non-fuzzification calculation and outputs the output signal.
  • In the present invention, non-fuzzification generally refers to the center-of-gravity calculation performed in the final step of fuzzy inference.
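  • Schematically, the four stages of Fig. 4 compose as in the following sketch; the membership functions, the two rules, and all numeric values are illustrative assumptions, not the contents of the figure.

```python
import math

def sigmoid(x, w, th):
    return 1.0 / (1.0 + math.exp(-(w * x + th)))

def infer(x):
    # antecedent membership function realization part 16: grade values
    small, large = sigmoid(x, -8.0, 4.0), sigmoid(x, 8.0, -4.0)
    # rule part 17: enlargement/reduction rates of the consequent MFs
    rate_low, rate_high = small, large   # e.g. "if x is small then z is low"
    # consequent MF realization part 18a: scale sampled consequent MFs
    zs      = [0.0, 0.25, 0.5, 0.75, 1.0]
    mf_low  = [1.0, 0.5, 0.0, 0.0, 0.0]
    mf_high = [0.0, 0.0, 0.0, 0.5, 1.0]
    g = [rate_low * a + rate_high * b for a, b in zip(mf_low, mf_high)]
    # non-fuzzification part 18b: center-of-gravity calculation
    return sum(z * gi for z, gi in zip(zs, g)) / sum(g)

print(infer(0.3))
```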
  • Fig. 5 is a block diagram of the configuration of a typical example of the rule part pre-wired neuro.
  • This figure largely corresponds to Fig. 4; the difference is that the consequent membership function realization / non-fuzzification part 18 of Fig. 4 is here divided into a consequent membership function realization part 18a and a center-of-gravity calculation realization part 18b.
  • the rule part pre-wired neuro is composed of a hierarchical neural network.
  • The antecedent membership function realization part 16, the rule part 17, the consequent membership function realization part 18a, and the centroid calculation realization part 18b each take the output units of the preceding stage as their inputs.
  • The linear function units 22a to 22d in the antecedent membership function realization section 16 output the grade values of the antecedent membership functions.
  • For example, unit 21a outputs a grade value representing the fitness of the membership function for the proposition "input X is small".
  • The sigmoid function units 23a to 23e in the rule part 17 are connected to the output units 22a to 22e of the antecedent membership function realization part 16 according to the description of the rules of the fuzzy model, and are in turn connected to the output units 24a, 24b, 24c of the rule part.
  • The output units 24a, 24b, 24c of the rule part 17 output the enlargement or reduction rates of the consequent membership functions.
  • For example, unit 24b outputs the enlargement or reduction rate of the consequent membership function for "output Z is medium". Using the outputs of units 24a, 24b, and 24c, the linear units 25a to 25n in the consequent membership function realization part 18a output the enlarged or reduced consequent membership functions, and the center of gravity is determined from these.
  • The two linear units 26a and 26b in the centroid determining element output device 26 output the two centroid determining elements za and zb needed to calculate the center of gravity, and from these the centroid calculator 27 obtains the output Z of the system as the center-of-gravity value.
  • In the antecedent membership function realization section 16, the layers are not completely connected but are connected in correspondence with the antecedent membership functions, so if the rules of the fuzzy model are clear, the rule part pre-wired neuro can be used; as mentioned earlier, the fuzzy model 10 can then be converted into the rule part pre-wired neuro 12.
  • If the rules are not entirely clear, the fuzzy model 10 can instead be converted into the rule part fully connected neuro 13, in which, for example, only the antecedent and consequent membership function realization parts are pre-wired.
  • In the rule part fully connected neuro 13, the output units 22a to 22e of the antecedent membership function realization part 16, the units 23a to 23e of the rule part, and the output units 24a to 24c of the rule part are completely connected.
  • That is, in Fig. 5, complete connections are made between the output units 22a to 22e of the antecedent membership function realization part 16 and the units 23a to 23e of the rule part 17, and between the units 23a to 23e and the output units 24a, 24b, 24c of the rule part; this is why the resulting hierarchical network configuration data processing system is called a fully connected neuro.
  • Furthermore, by training an adaptive data processing device, i.e. the pure neuro 11 (a hierarchical neural network in which adjacent layers are completely connected), on the input/output data of the fuzzy model 10, the fuzzy model 10 can be transformed into the pure neuro 11.
  • Conversely, the adaptive data processing device, i.e. the pure neuro, can be converted into the rule part pre-wired neuro 12 or the rule part fully connected neuro 13 by network structure conversion.
  • Fig. 1 shows the basic configuration of the basic unit.
  • Fig. 2 is a basic configuration diagram of the hierarchical network
  • Fig. 3 is a basic configuration diagram of the present invention
  • Fig. 4 is a block diagram showing the basic configuration of the rule part pre-wired neuro and the rule part fully connected neuro,
  • Fig. 5 is a block diagram showing the configuration of a typical example of the rule part pre-wired neuro,
  • FIG. 6 is a diagram showing an embodiment of a fuzzy rule
  • FIG. 7 is a diagram showing an embodiment of a rule part pre-wired neuro corresponding to the fuzzy rule of FIG. 6,
  • Fig. 8 is an illustration of the membership function.
  • Figures 9 (a) and (b) are explanatory diagrams (part 1) of the function for calculating the grade values of the antecedent membership functions,
  • Figures 10 (a) and (b) are explanatory diagrams (part 2) of the function for calculating the grade values of the antecedent membership functions,
  • Figures 11 (a) and (b) are explanatory diagrams of the function for outputting the grade values of the consequent membership function,
  • Fig. 12 is an explanatory diagram of the setting method of neuron weights and threshold values for realizing logical operations
  • FIG. 13 is a diagram showing an embodiment of a rule part using the setting method of FIG. 12,
  • Figures 14 (a) and (b) are diagrams showing the logical product operation of x1 and x2,
  • Figures 15 (a) to (h) are diagrams showing two-input fuzzy logic operations,
  • Fig. 16 is a diagram showing the membership function to be approximated,
  • FIG. 17 is a flowchart of a processing example for obtaining an approximation that clarifies the upper limit of sensitivity.
  • FIG. 19 is a flowchart of the processing for obtaining an approximation that minimizes the integration of the absolute value of the error.
  • FIG. 20 is a flowchart of a processing embodiment for obtaining an approximation that minimizes the square integral of the error
  • FIG. 21 is a flowchart of a processing embodiment for obtaining an approximation that minimizes the maximum error
  • Fig. 22 is a diagram showing the second membership function to be approximated,
  • FIG. 23 is a diagram showing a three-layer network for approximating the membership function of FIG. 22.
  • Fig. 24 is a flowchart of an embodiment of the weight and threshold value determination processing for obtaining an approximation that clarifies the upper limit of sensitivity,
  • Fig. 25 is a flow chart of the embodiment of the weight and threshold value determination processing for obtaining the approximation that minimizes the maximum error
  • Fig. 26 is a diagram showing an example of a special fuzzy rule
  • Fig. 27 is a diagram showing an embodiment of the hierarchical network following the rule part, corresponding to the fuzzy rule of Fig. 26,
  • Fig. 28 is a block diagram showing the basic configuration of the centroid determining element output device,
  • Fig. 29 is a flowchart of the method of calculating the center of gravity
  • Fig. 30 is a block diagram of the configuration of the embodiment of the output device of the center of gravity determining element
  • Figures 31 (a) and (b) are diagrams showing a first embodiment of the centroid determining element output,
  • FIGS. 32 (a) and (b) are diagrams showing a second embodiment of the output of the center of gravity determining element
  • FIGS. 33 (a) and (b) are block diagrams showing the basic configuration of the teacher signal determination device.
  • FIG. 34 is a block diagram showing the configuration of the first embodiment of the teacher signal determination device.
  • FIG. 35 is a block diagram showing the configuration of a second embodiment of the teacher signal determination device.
  • FIGS. 36 (a) and (b) are diagrams showing an embodiment of a calculation method used in the teacher signal calculation device.
  • Figures 37 (a) and (b) are diagrams showing an embodiment of the teacher signal output,
  • Fig. 38 shows an embodiment of a hierarchical neural network as a center of gravity calculation realizing unit.
  • Fig. 39 is a block diagram showing the overall configuration of the center-of-gravity detector that houses the fuzzy inference unit.
  • Fig. 40 is a block diagram showing the detailed configuration of the neural network control unit and the center of gravity learning device.
  • Fig. 41 is a flowchart of the processing embodiment of the center of gravity learning device
  • Fig. 42 is a diagram showing an output example of the center of gravity output device.
  • Fig. 43 is an explanatory diagram of a dividing neural network as a center of gravity calculation realizing unit.
  • FIG. 44 is a diagram showing an embodiment of the division network,
  • Fig. 45 is a diagram showing the concept of network structure conversion and fuzzy model extraction,
  • Figure 46 shows the configuration of the pure neuro
  • Fig. 47 is a diagram showing the configuration of the rule part fully connected neuro composed only of units,
  • Fig. 48 is a diagram showing the configuration of the rule part pre-wired neuro composed only of units,
  • Fig. 49 shows the rule part pre-wired neuro before the fuzzy rule extraction processing.
  • FIG. 50 is a diagram showing a state in which the rule part of the rule part pre-wired neuro is made into a logic element.
  • Fig. 51 shows a state in which the unit of the rule section is left for the desired number of fuzzy rules.
  • Fig. 52 is a diagram showing the state of the rule part fully connected neuro before the rule extraction processing,
  • Fig. 53 is a diagram showing the grouping of the units of the antecedent membership function output part of the rule part fully connected neuro,
  • Fig. 54 is a diagram showing the simplified connections between the rule part and the antecedent membership function output units of the rule part fully connected neuro,
  • Fig. 55 is a diagram showing the simplified connections between the rule part and the consequent membership function output units of the rule part fully connected neuro,
  • Fig. 56 is a diagram showing the rule part units of the rule part fully connected neuro converted into logic elements,
  • Fig. 57 is a diagram showing a state in which the rule part units of the rule part fully connected neuro are left as many as the number of fuzzy rules,
  • Fig. 58 is a diagram showing an embodiment of the pure neuro, and
  • Fig. 59 shows Fig. 58.
  • FIG. 60 is a diagram showing the connection weights after learning of the pure neuro of FIG. 58.
  • Fig. 61 is a diagram showing the conversion procedure (1) for the pure neuro of Fig. 58,
  • Fig. 62 shows the conversion process (2) for the pure neuro of Fig. 58
  • Fig. 63 shows the conversion procedure (3) for the pure neuro of Fig. 58
  • Fig. 64 shows the conversion procedure (4) for the pure neuro of Fig. 58
  • Fig. 65 shows the conversion procedure (5) for the pure neuro of Fig. 58
  • FIG. 66 is a diagram showing the rule part pre-wired neuro converted from the pure neuro of FIG. 58,
  • FIG. 67 is a diagram showing the membership functions corresponding to the units of the fifth layer in FIG. 66,
  • FIG. 68 is a diagram showing the connection weights of the rule part pre-wired neuro of FIG. 66,
  • FIG. 70 is a diagram showing the configuration of the rule part pre-wired neuro after the conversion according to FIG. 69,
  • Fig. 71 is a diagram showing the connection weights of the rule part pre-wired neuro of Fig. 70,
  • Fig. 73 is a diagram showing the connection weights of the rule part fully connected neuro of Fig. 72,
  • Fig. 74 is a flowchart (part 1) of the embodiment of the network structure conversion processing.
  • FIG. 75 is a flowchart (part 2) of the embodiment of the network structure conversion processing.
  • FIG. 76 is a flowchart (part 3) of the embodiment of the network structure conversion processing
  • FIG. 77 is a flowchart (part 4) of the embodiment of the network structure conversion processing
  • Fig. 78 is a flowchart (part 5) of the embodiment of the network structure conversion processing
  • Fig. 79 is a block diagram showing the overall configuration of the neuro-fuzzy fusion data processing system.
  • Fig. 80 is a diagram showing an example of the initialization of the consequent membership function realization part,
  • Fig. 81 is a diagram showing an example of initialization of the center-of-gravity calculation unit.
  • Fig. 82 is a block diagram showing the detailed configuration of the learning processing device,
  • Figure 83 is a block diagram showing the detailed configuration of the fuzzy rule extraction unit.
  • Fig. 84 shows a flowchart of the fuzzy model extraction processing embodiment.
  • Figures 85 (a) to (f) are diagrams showing examples of logical operations performed by units,
  • FIGS. 86 (a) to (e) show examples of management data of the logical operation management unit.
  • Fig. 87 shows an example of management data (part 1) of the logical operation input / output characteristic information management unit.
  • FIG. 88 shows an example of management data (part 2) of the logical operation input / output characteristic information management unit.
  • Fig. 89 is an explanatory diagram of the conversion of the rule part units into logic elements in the rule part pre-wired neuro,
  • Fig. 90 is the configuration diagram of the first learning method
  • Fig. 91 is a block diagram of the second learning method
  • Fig. 92 is a block diagram of the third learning method
  • Figure 93 is a block diagram of the fourth learning method
  • Figure 94 is a block diagram of the fifth learning method
  • Figure 95 is a block diagram of the sixth learning method
  • Figure 96 is a block diagram of the seventh learning method
  • Figure 97 is a block diagram of the eighth learning method
  • Figure 98 is a block diagram of the ninth learning method
  • Fig. 99 is a block diagram of the tenth learning method
  • FIG. 100 is a learning operation flowchart of the first embodiment of the learning method,
  • FIG. 101 is a learning operation flowchart of the second embodiment of the learning method,
  • FIG. 102 is a learning operation flowchart of the third embodiment of the learning method,
  • FIG. 103 is a learning operation flowchart of the fourth embodiment of the learning method,
  • FIG. 104 is a learning operation flowchart of the fifth embodiment of the learning method,
  • FIG. 105 is a learning operation flowchart of the sixth embodiment of the learning method,
  • FIG. 106 is a learning operation flowchart of the seventh embodiment of the learning method,
  • FIG. 107 is a learning operation flowchart of the eighth embodiment of the learning method,
  • FIG. 108 is a learning operation flowchart of the ninth embodiment of the learning method,
  • FIG. 109 is a learning operation flowchart of the tenth embodiment of the learning method,
  • Fig. 110 is a block diagram of an embodiment of a rule part pre-wired neuro having an antecedent part, a rule part, and a consequent part,
  • Fig. 111 is an explanatory diagram of the neuron groups and connection groups of the rule part pre-wired neuro,
  • Figures 112 (a) to (k) are illustrations of the phase-group and group/connection correspondence tables,
  • Fig. 113 is a flowchart of an embodiment of the learning processing for the rule part pre-wired neuro,
  • FIG. 114 is a block diagram of an embodiment of a device for learning processing
  • FIG. 115 is a block diagram of an embodiment of a learning adjuster
  • FIG. 116 is a flowchart of an embodiment of a weight learning process
  • Fig. 117 is an explanatory diagram of each parameter at the time of the weight learning of Fig. 116,
  • Fig. 118 is an explanatory diagram of the input signals of the data processing function to be constructed, assumed in the simulation,
  • Figures 119 (a) to (d) are explanatory diagrams of the membership functions describing the fuzzy model generated by the simulation,
  • Fig. 120 illustrates the input / output signals of the generated fuzzy model
  • Fig. 121 is an explanatory diagram of the hierarchical network part constructed by the fuzzy model
  • FIG. 122 is an explanatory diagram of input / output signals of the data processing function of the hierarchical network section of FIG.
  • Figs. 123 (a) and (b) are explanatory diagrams of another example of the hierarchical network part constructed by the fuzzy model.
  • Fig. 124 is an explanatory diagram (part 1) of the learning signal used in the learning process of the hierarchical network part in Fig. 123
  • Fig. 125 is an explanatory diagram (part 2) of the learning signals used in the learning processing of the hierarchical network part of Fig. 123,
  • Fig. 126 is an explanatory diagram of the input/output signals of the data processing function of the hierarchical network part of Fig. 123,
  • Fig. 127 is an explanatory diagram of the membership functions tuned by learning,
  • Fig. 128 is an explanatory diagram of the learning signal used for the learning process of the adaptive data processing device
  • Fig. 129 is an explanatory diagram of an adaptive data processing device constructed by learning
  • Fig. 130 is an illustration of the input / output signals of the data processing function of the adaptive data processor of Fig. 129
  • Fig. 131 is an explanatory diagram of the hierarchical network used in the simulation of fuzzy rule generation,
  • Figures 132 (a) to (d) are explanatory diagrams of the membership functions used in the simulation of fuzzy rule generation,
  • Fig. 133 is an explanatory diagram of the learning control data used in the simulation of fuzzy rule generation,
  • Fig. 134 is an illustration of the fuzzy control rules used in the tuning simulation.
  • Fig. 135 is an explanatory diagram of the control state quantities and control manipulated variables used for tuning simulation.
  • Fig. 136 is an explanatory diagram (part 1) of the parameter values for realizing the membership functions used in the tuning simulation and their learning data,
  • Fig. 137 is an explanatory diagram (part 2) of the parameter values for realizing the membership functions used in the tuning simulation and their learning data,
  • Figures 138 (a) and (b) are explanatory diagrams (part 1) of the membership functions of the control manipulated variables used in the tuning simulation,
  • Fig. 139 is an explanatory diagram (part 2) of the membership functions of the control manipulated variables used in the tuning simulation,
  • Fig. 140 is an illustration of the hierarchical network used in the tuning simulation,
  • Fig. 141 is an explanatory diagram of the learning control data used for the tuning simulation.
  • Figs. 142 (a)-(c) are explanatory diagrams of membership functions tuned by learning weight values.
  • Fig. 143 is an explanatory diagram of the weight-value learning data obtained by the tuning simulation,
  • Fig. 144 is a diagram showing an embodiment of the hardware configuration of the basic unit,
  • FIG. 145 is a diagram showing an embodiment of the hardware configuration of the hierarchical network section. BEST MODE FOR CARRYING OUT THE INVENTION
  • Fig. 6 shows an embodiment of the fuzzy rule.
  • The figure shows fuzzy rules between inputs X and Y, indicating for example control state quantities, and an output Z, indicating for example a control manipulated variable.
  • There are five rules, Rule 1 through Rule 5.
  • FIG. 7 is an embodiment of the rule part pre-wired neuro corresponding to the fuzzy rule of FIG. 6.
  • Between the antecedent membership function realization part and the rule part, and within the rule part itself, the connections are made according to the fuzzy rules of Fig. 6.
  • Fig. 8 shows an example of the membership function.
  • The function for calculating the grade value of an antecedent membership function is realized, concretely, by the output value y of a basic unit 1, y = 1 / (1 + exp(−(ωx + θ))).
  • Depending on the signs and magnitudes of the weight ω and the threshold θ, this output takes shapes such as those shown in Figs. 9 (a) and (b),
  • so that membership functions with the function shape of "low temperature" or "high temperature" in Fig. 8 can be realized.
  • A membership function with the shape of "temperature is medium" in Fig. 8 is realized by feeding the input to two basic units 1 with appropriately chosen weights ω1, ω2 and thresholds θ1, θ2, and passing their outputs to a subtracter 1a, itself composed of a basic unit 1 without the function conversion (threshold) processing unit 4, which calculates the difference between the output values of the two basic units 1.
  • In this way the allocation processing of the antecedent membership functions is realized.
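  • As a minimal sketch of this construction (the weight and threshold values below are illustrative assumptions, not those of Fig. 8): one sigmoid unit realizes a monotone membership function such as "low" or "high", and the subtracter 1a realizes "medium" as the difference of two sigmoid units.

```python
import math

def unit(x, w, th):
    """Basic unit 1 with the sigmoid characteristic."""
    return 1.0 / (1.0 + math.exp(-(w * x + th)))

def mu_low(x):      # decreasing, like "temperature is low"
    return unit(x, -10.0, 3.0)

def mu_high(x):     # increasing, like "temperature is high"
    return unit(x, 10.0, -7.0)

def mu_medium(x):   # subtracter 1a: difference of two basic units
    return unit(x, 10.0, -2.0) - unit(x, 10.0, -8.0)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, round(mu_low(x), 3), round(mu_medium(x), 3), round(mu_high(x), 3))
```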
  • The function for outputting the grade values of a consequent membership function is realized as follows: the consequent membership function is divided finely, as shown in Fig. 11 (a), and the grade value yi of each section is specified; then, as shown in Fig. 11 (b), grade value output units 1b, each consisting of a basic unit 1 without the function conversion (threshold) processing unit 4, are prepared, one for each of the n grade values, and yi is set as the weight value on the input of grade value output unit 1b.
  • For outputs of the same kind, for example the grade values of consequent membership functions relating to the same control manipulated variable, such as the opening of valve A, a configuration is adopted in which they are input to the same grade value output units 1b.
  • These grade value output units 1b thus output the function sum of the grade values of the reduced membership functions for the allocated control manipulated variable.
  • The function shape of the consequent membership function can be changed by changing the weight values on the inputs of these grade value output units 1b.
  • FIG. 12 shows an embodiment of a method of setting the weights and threshold value when such a logical operation is realized by a single neuron element; the figure shows the method of determining the weights Wi and the threshold θ.
  • FIG. 13 shows an embodiment of a rule unit using the method of determining weights and threshold values shown in FIG.
  • Eight fuzzy rules are shown, in which X(SS) denotes the proposition "input X is small", X(LA) denotes the proposition "input X is large", and ¬X denotes the negation of X; that is, with true as 1 and false as 0, ¬X = 1 − X.
  • FIG. 15 shows examples of implementing the product operation and the sum operation as two-input fuzzy logic operations, with the weights and thresholds determined by the method of FIG. 12.
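  • For illustration, a single sigmoid neuron with large weights and a threshold set between them approximates the logical product and the logical sum; the particular values 12, 18, and 6 below are assumptions of this sketch, not the settings of Fig. 12.

```python
import math

def neuron(x1, x2, w1, w2, th):
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 - th)))

def fuzzy_and(x1, x2):   # near 1 only when both inputs are near 1
    return neuron(x1, x2, 12.0, 12.0, 18.0)

def fuzzy_or(x1, x2):    # near 1 when either input is near 1
    return neuron(x1, x2, 12.0, 12.0, 6.0)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(fuzzy_and(a, b), 3), round(fuzzy_or(a, b), 3))
```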
  • The neurons that realize the antecedent membership functions, the consequent membership functions, and the fuzzy logic operations of the rule part described above can be trained by the back-propagation method described above so as to minimize the sum of the squared errors.
  • However, the criterion for approximating a membership function by a neuron is not limited to minimizing the sum of squared errors.
  • Various other conditions can be considered, such as matching the maximum change of the output with respect to a change of the input to the slope of the membership function, i.e. clarifying the upper limit of the sensitivity, minimizing the integral of the absolute value of the error, and minimizing the maximum error.
  • The back-propagation method alone cannot meet such demands.
  • Let f(x) be the sigmoid characteristic of the neuron, f(x) = 1 / (1 + exp(−(wx + θ))).
  • FIG. 17 is an embodiment of a processing flowchart for obtaining an approximation that clarifies the upper limit of sensitivity.
  • In step (S) 30, the value of the constant F appearing in the definition is determined; it takes the value F1 or F2 according to whether exp is used as the characteristic of the neuron.
  • The attribute values a and b of the membership function are set in S31.
  • Let f(x) be the membership function shown in Fig. 16.
  • Figure 19 is a flowchart of the process for finding the approximation that minimizes the integral of the absolute value of the error; the processing is essentially the same as that of Fig. 17, differing only in the constants used.
  • the shape of the membership function is the same as in Fig. 16.
  • Figure 20 is a flowchart of the process for finding the approximation that minimizes the square integral of the error.
  • Figure 21 is a flowchart of the process for finding the approximation that minimizes the maximum error. Comparing this figure with Figures 17, 19, and 20, the only difference is that the weights and threshold values are multiplied by 1.412 in S47 and S48.
  • FIG. 23 is an explanatory diagram of a three-layer neural network for realizing such approximation.
  • the first layer is composed of one neuron, and this neuron outputs the input value as it is.
  • the second layer consists of two neurons. These neurons have non-linear characteristics, and their weights and thresholds are determined by the flowchart described later.
  • the third layer consists of a single neuron and outputs the sum of the outputs of the two neurons in the second layer minus one.
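  • A sketch of this three-layer construction for a trapezoid-like membership function; the slope w and the break points a and b below are illustrative assumptions, whereas the patent determines the weights and thresholds by the flowcharts described next.

```python
import math

def s(x):
    return 1.0 / (1.0 + math.exp(-x))

def mf_approx(x, a=0.3, b=0.7, w=30.0):
    """Layer 1 passes x through; layer 2 holds two nonlinear neurons;
    layer 3 outputs the sum of their outputs minus one."""
    n1 = s(w * (x - a))    # rises near x = a
    n2 = s(-w * (x - b))   # falls near x = b
    return n1 + n2 - 1.0

for x in (0.0, 0.3, 0.5, 0.7, 1.0):
    print(x, round(mf_approx(x), 3))
```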
  • Next, the method of determining the weights and threshold values of the second-layer neurons will be described.
  • Let f(x) = 1 / (1 + exp(−(wx + θ))) be the sigmoid characteristic of the neurons; the weight and threshold value of each of the two second-layer neurons are determined as follows.
  • FIG. 24 is a flowchart for determining the weight and threshold of the second layer for obtaining an approximation that clarifies the upper limit of sensitivity.
  • The weights of the two second-layer units are determined at S51 and S52, their threshold values at S53 and S54, and the process ends.
  • The points of maximum error are x0 (0 < x0 < s) and 1 − x0.
  • Let f(x) be the rising portion of the membership function shown in Fig. 22.
  • Figure 25 is a flowchart of the process of determining the weights and threshold values of the two second-layer units for finding an approximation that minimizes the maximum error. Comparing this figure with Fig. 24 for clarifying the upper limit of sensitivity, the difference is that the weights and threshold values are multiplied by 1.412 when they are calculated in S57 to S60. If high precision is not required, the number of significant figures of 1.412 can be reduced. As described with reference to Fig. 11, the consequent membership functions can be realized, as in Fig. 5, by the linear units of the consequent membership function realization part 18a.
  • FIG. 27 is an embodiment of the hierarchical network following the rule section, corresponding to the fuzzy rule of FIG. 26.
  • The weight of the connection between unit 61b and unit 62e, which outputs the grade value of the consequent membership function at abscissa 0.8, is set to 1, and the weight of the connection between unit 61c and unit 62d, which outputs the grade value of the consequent membership function at abscissa 0.6, is set to 1.
  • FIG. 27 describes the case where only one input is given to each of the second-layer neurons 62a to 62f, but naturally there are cases where two or more inputs are given. In that case, if the units 62a to 62f are linear units, they output the algebraic sum of the grade values of the multiple rules; if the units 62a to 62f instead output the maximum value of their inputs, the logical sum of the grade values of the multiple rules is output.
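  • The two combination modes just described can be sketched as follows; the grade values are illustrative.

```python
def combine_linear(grades):
    """Linear unit: algebraic sum of the grade values of several rules."""
    return sum(grades)

def combine_max(grades):
    """Max unit: logical sum of the grade values of several rules."""
    return max(grades)

print(combine_linear([0.3, 0.5]), combine_max([0.3, 0.5]))  # 0.8 0.5
```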
  • Fig. 28 is a block diagram of the basic configuration of the centroid determining element output device. The input units, omitted in Fig. 5, are shown as units 65a to 65n in Fig. 28, and the output-layer units 66a and 66b correspond to the units 26a and 26b of Fig. 5.
  • The input-layer units 65a to 65n of Fig. 28 are connected one-to-one with the linear units 25a to 25n of the consequent membership function realization unit of Fig. 5, and receive the enlarged or reduced grade values of the consequent membership function at each abscissa subdividing that membership function.
  • Fig. 29 is a flowchart of the method of calculating the center of gravity in the centroid calculation realization unit 18b of Fig. 5.
  • First, for each coordinate value, the sum of the products of the input and the first weight is determined as the first centroid determining element, and the sum of the products of the input and the second weight as the second centroid determining element.
  • The final center-of-gravity calculation is then performed by the centroid calculator 27 of Fig. 5: the difference between the product of the maximum coordinate value and the first centroid determining element and the product of the minimum coordinate value and the second centroid determining element is obtained, and the center-of-gravity value is calculated by dividing this difference by the difference between the first and second centroid determining elements.
  • FIG. 30 is an explanatory diagram of a method of determining the weights of the connections between the input layer and the output layer of FIG. 28.
  • The units 65a to 65n of the input layer are omitted for simplicity.
  • The units 66a and 66b of the output layer are linear units that only take the sum of their inputs, and the two centroid determining elements y(1) and y(2) are obtained as their outputs.
  • Writing the minimum coordinate value as x(1), the maximum as x(n), and an arbitrary coordinate value as x(i), the weight of the connection between the (not shown) input-layer unit for x(i) and the linear units 66a and 66b is given by equations (15) and (16) as a function of x(1), x(n), and x(i).
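  • Equations (15) and (16) themselves are not reproduced in this text; the sketch below uses one consistent reconstruction, w1(i) = c·(x(i) − x(1)) and w2(i) = c·(x(i) − x(n)), under which the procedure of Fig. 29 returns exactly the conventional center of gravity.

```python
def centroid_via_determinants(xs, gs, c=1.0):
    """xs: coordinate values x(1)..x(n); gs: input (grade) values.
    Two linear output units accumulate the centroid determining elements."""
    x_min, x_max = xs[0], xs[-1]
    w1 = [c * (x - x_min) for x in xs]        # assumed form of eq. (15)
    w2 = [c * (x - x_max) for x in xs]        # assumed form of eq. (16)
    y1 = sum(g * w for g, w in zip(gs, w1))   # first centroid determining element
    y2 = sum(g * w for g, w in zip(gs, w2))   # second centroid determining element
    # final step performed by the centroid calculator 27
    return (x_max * y1 - x_min * y2) / (y1 - y2)

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
gs = [0.0, 0.2, 0.8, 0.4, 0.0]
exact = sum(x * g for x, g in zip(xs, gs)) / sum(gs)
print(centroid_via_determinants(xs, gs), exact)  # both 0.5357...
```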
  • FIG. 31 shows a first embodiment of the output of the centroid determining elements.
  • Fig. 31 (a) shows each coordinate value and the weight of each connection calculated from the coordinate value,
  • and Fig. 31 (b) shows an example of the input values and the corresponding output values of the centroid determining elements.
  • In this example the resulting center-of-gravity value is 1.
  • FIG. 32 shows a second embodiment of the output of the centroid determining elements.
  • The figure shows an example in which the constant c in equation (15) is set to 1: (a) shows each coordinate value and connection weight, and (b) shows the input values and the output values of the centroid determining elements.
  • The center-of-gravity value is again 1, as in Fig. 31.
  • As described above, the centroid determining element output device is realized by a neural network that outputs the two centroid determining elements required for calculating the center of gravity in the final step of inference of the neuro-fuzzy fusion system.
  • To train this network, a teacher signal determination device that outputs appropriate teacher signals to the centroid determining element output device 75 is used; Fig. 33 (a) shows its basic configuration in block-diagram form.
  • The centroid determining element output device 75 outputs, as described above, the two centroid determining elements required to calculate the center of gravity from a plurality of coordinate values on the number line and the input values for each of those coordinate values.
  • the end point coordinate and inclination storing means 76 stores two end point coordinates which are the maximum and minimum coordinate values among the plurality of coordinate values, and the inclination of a straight line passing through the center of gravity.
  • The teacher signal calculating means 77 uses the true center-of-gravity value, input at the time of learning the neural network corresponding to the centroid determining element output device 75, together with the values stored in the end point coordinate and inclination storage means 76,
  • to obtain the teacher signals for the two centroid determining elements used in calculating the center of gravity, and outputs them to the centroid determining element output device 75.
  • The teacher signals are determined by the equation of the straight line that has the inclination stored in the end point coordinate and inclination storage means 76 and passes through the input true center of gravity.
  • FIG. 33 (b) is a basic configuration block diagram for a second embodiment described later.
  • the end point coordinate storage means 78 stores two end point coordinates that are the maximum and minimum coordinate values among the plurality of coordinate values.
  • The teacher signal calculating means 79 uses the true center-of-gravity value input at the time of learning the neural network corresponding to the centroid determining element output device 75 and the two centroid determining elements output by the centroid determining element output device 75
  • to obtain and output the teacher signals for the two centroid determining elements.
  • The teacher signals have the same slope as the straight line determined by the output values of the two centroid determining elements, and are determined by the equation of the straight line with that slope passing through the true center of gravity.
  • In Fig. 33 (a), the magnitudes and signs of the vectors at the two end point coordinates are obtained from the equation of the straight line that passes through the true center of gravity, input when learning the neural network constituting the centroid determining element output device 75, and that has the inclination stored in the end point coordinate and inclination storage means 76; these values are given to the centroid determining element output device 75 as the teacher signals.
  • In Fig. 33 (b), the magnitudes and signs of the vectors at the two end point coordinates are obtained from the equation of the straight line that passes through the true center of gravity and has the same slope as the straight line connecting the tips of the two vectors corresponding to the two centroid determining elements output from the centroid determining element output device 75; these values are given to the centroid determining element output device as the teacher signals.
  • FIGS. 34 and 35 are configuration block diagrams of an embodiment of the teacher signal determination device.
  • FIG. 34 corresponds to FIG. 33 (a)
  • FIG. 35 corresponds to FIG. 33 (b).
  • The centroid determining element output device 26 of Fig. 5 is used here. In Fig. 34,
  • the teacher signal determination device 80a is composed of an end point coordinate / inclination storage unit 81, which stores the end point coordinates and the value of the inclination of the straight line passing through the center of gravity, and a teacher signal calculation unit 82, which outputs the teacher signals using the true center-of-gravity value input at the time of learning the neural network constituting the centroid determining element output device 26 and the values stored in the end point coordinate / inclination storage unit 81.
  • In Fig. 35, the teacher signal determination device 80b includes an end point coordinate storage unit 83 that stores the end point coordinate values, together with a corresponding teacher signal calculation unit.
  • Fig. 36 shows an embodiment of the calculation method used in the teacher signal calculation device:
  • Fig. 36 (a) shows the calculation method used in the first embodiment,
  • and Fig. 36 (b) shows the calculation method used in the second embodiment.
  • In Fig. 36 (a), let a be the inclination of the straight line passing through the center of gravity C, let x1 be the minimum coordinate value, at which the first centroid determining element is output by the centroid determining element output device, and let x2 be the maximum coordinate value, at which the second centroid determining element is output.
  • The values of the teacher signals are then given by the equation of this straight line, i.e. t1 = a (x1 − C) and t2 = a (x2 − C).
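  • A minimal sketch of this teacher-signal calculation under the reading above, checked with the Fig. 37 values (end point coordinates −5 and 10, slope 0.2, true center of gravity 5); the function name is ours.

```python
def teacher_signals(c_true, x1, x2, a):
    """Values at the end point coordinates x1 (min) and x2 (max) of the
    line with slope a passing through the true center of gravity C."""
    return a * (x1 - c_true), a * (x2 - c_true)

t1, t2 = teacher_signals(5.0, -5.0, 10.0, 0.2)   # Fig. 37 example values
print(t1, t2)                                    # -2.0  1.0
# the centroid formula applied to the teacher signals recovers C:
print((10.0 * t1 - (-5.0) * t2) / (t1 - t2))     # 5.0
```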
  • Fig. 37 shows an example of the teacher signal output.
  • The end point coordinates −5 and 10 and the straight-line slope 0.2 are stored in advance in the end point coordinate / slope storage device 81 of Fig. 34, and the true center-of-gravity coordinate 5 is input at the time of learning.
  • Next, the case will be described where the center-of-gravity operation, as the final step of fuzzy inference, is obtained as the final output of the hierarchical neural network itself.
  • In this case the entire centroid calculation realization unit 18b may be configured as a neural network.
  • The division performed by the centroid calculator 27 in Fig. 5 is then also carried out inside this network, the entire centroid calculation realization unit becomes a hierarchical neural network, and the center-of-gravity value is output from a single output unit.
  • The hierarchical network is trained, for example by the back-propagation method, so that it calculates the center-of-gravity value correctly.
  • FIG. 38 shows an embodiment of a hierarchical neural network as a center of gravity calculation realizing unit.
  • The centroid calculation realization unit 88 is composed of an input layer 91, an intermediate layer 92 consisting of one or more layers, and an output layer 93.
  • The input layer consists of the units 91a to 91n, which receive values from the linear units 25a to 25n of the consequent membership function realization unit of Fig. 5, and the output layer 93 consists of a single unit 93a.
  • The input normalization devices 89a to 89n, placed in front of the centroid calculation realization unit 88 in correspondence with the input units 91a to 91n, map the outputs of the consequent membership function realization units 25a to 25n into a range where the sensitivity is high, i.e., for example, a range where the slope of the sigmoid function is large (if the characteristic of the unit is the sigmoid function f, the range of x such that |f′(x)| > a for some a > 0).
  • The output restoration device 90 provided after the centroid calculation realization unit 88 maps the value output from the output unit 93a back to the range of coordinate values by an appropriate function. Note that these input normalization devices and the output restoration device are not always necessary.
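  • A sketch of such an input normalization, assuming the sigmoid characteristic f(x) = 1 / (1 + exp(−x)); the bound x0 comes from solving f′(x) = f(x)(1 − f(x)) > a, and all names and the sample range are ours.

```python
import math

def high_sensitivity_bound(a):
    """Largest x0 such that f'(x) > a holds for |x| < x0 (sigmoid f)."""
    f_hi = (1.0 + math.sqrt(1.0 - 4.0 * a)) / 2.0
    return math.log(f_hi / (1.0 - f_hi))

def normalize(v, lo, hi, a=0.1):
    """Map v in [lo, hi] linearly into the high-sensitivity range."""
    x0 = high_sensitivity_bound(a)
    return -x0 + 2.0 * x0 * (v - lo) / (hi - lo)

print(round(high_sensitivity_bound(0.1), 3))          # about 2.063
print(normalize(0.0, 0.0, 100.0), normalize(100.0, 0.0, 100.0))
```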
  • Fig. 39 is a block diagram of the overall configuration of the center-of-gravity output device that houses the fuzzy inference unit.
  • The system includes a fuzzy inference unit 94, a neural network control unit 95, and a center-of-gravity learning device 96 in addition to the configuration shown in Fig. 38.
  • The fuzzy inference unit 94 represents the hierarchical neural network up to the consequent membership function realization unit 18a of Fig. 5, and the outputs of its linear units 25a to 25n are input to the center-of-gravity calculating neural network 88 via the input normalization devices 89a to 89n, respectively.
  • The neural network control unit 95 controls the weights of the connections in the center-of-gravity calculating neural network 88 and the setting and changing of internal states such as the threshold values of the units.
  • The center-of-gravity learning device 96 is used to generate the teacher information for training the center-of-gravity calculating neural network 88.
  • FIG. 40 shows an embodiment of a center-of-gravity output device showing the configurations of the neural network control section 95 and the center-of-gravity learning device 96 in further detail.
  • The center-of-gravity learning device 96 comprises a control unit 100 that performs the various controls of the device, a constant storage unit 102 that stores the externally input coordinate values, input/output ranges, and number of teacher data, a random number generation unit 103 that generates random numbers, and a teacher data generation unit 101 that generates the teacher data used for learning from the constants stored in the constant storage unit 102 and the random numbers generated by the random number generation unit 103.
  • The neural network control unit 95 includes a learning data storage unit 98 that stores the teacher data generated by the teacher data generation unit 101 as learning data,
  • an internal state storage unit 99 that stores the internal state data of the neural network 88, such as the weights of the connections between units, the thresholds used by each unit for threshold processing, the learning constants, and the momentum,
  • and a learning control unit 97. When an instruction is given from the control unit 100 of the center-of-gravity learning device 96, the learning data stored in the learning data storage unit 98
  • are input to the neural network 88, and the center-of-gravity output of the neural network 88 is calculated.
  • The learning control unit 97 controls the updating of the internal state stored in the internal state storage unit 99 by comparing this output value with the teacher data indicating the true center of gravity.
  • As shown in the flowchart of Fig. 41, in step S104 the control unit 100 of the center-of-gravity learning device 96 reads the coordinate values, the input/output ranges, and the number of teacher data from outside and stores them in the constant storage unit 102.
  • step S105 the control unit 100 instructs the teacher data generation unit 101 to generate teacher data.
  • The teacher data generation unit 101 reads the constants from the constant storage unit 102 in step S106, causes the random number generation unit 103 to generate random numbers in step S107, and then, in step S108, generates the teacher data from the read constants and the random numbers.
  • That is, the output part of each teacher datum is calculated from the input data generated on the basis of the random numbers.
  • step S109 the teacher data generation unit 101 transfers the teacher data to the learning data storage unit 98 of the neural network control unit 95 for storage.
  • In step S110, the control unit 100 of the learning device 96 issues a learning instruction command to the learning control unit 97 of the neural network control unit 95, and learning is carried out.
  • Fig. 42 compares the center-of-gravity values output by the center-of-gravity output device using the neural network obtained in this embodiment with the true center-of-gravity values.
  • The maximum error is 0.12249 and the average error is 0.036018.
  • In this embodiment the functions of the input normalization devices and the output restoration device are linear functions.
  • However, the present invention is not limited to this; they may be nonlinear functions.
  • As the final step of fuzzy inference, the centroid calculation realization unit 18b determines the output of the system as the center-of-gravity value by the equation Z = Σi (zi · g(zi)) / Σi g(zi), where zi are the abscissa values of the consequent membership functions and g(zi) the corresponding grade values.
  • Fig. 43 is an explanatory diagram of the division network as the center of gravity calculation realizing unit that calculates the center of gravity based on this concept.
  • Here the output units 24a and 24b of the rule part 17 in Fig. 5 are not required for such fuzzy rules; that is, as shown in Fig. 26, all fuzzy rules are covered directly.
  • The units 112a to 112k correspond to the sigmoid function units 23a to 23e of the rule part 17 in Fig. 5.
  • The output of each of these units corresponds to the grade value of its rule. Therefore, the weights of the connections between the units 112a to 112k and the first input unit A of the division network 111 are the abscissa values z1, ..., zi, ..., zk of the consequent membership functions, and the weights of the connections with the other input unit B are all 1.
  • In the division network 111, the center-of-gravity value is obtained by dividing the output (A) of unit 113a by the output (B) of unit 113b.
  • The division network 111 learns the division by, for example, the back-propagation method.
  • FIG. 44 shows an embodiment of the division network. This figure corresponds directly to the fuzzy rules in FIG. 26: the weights of the connections between the units 112a to 112c and the unit 113a are the abscissa values of the consequent membership functions, namely 0.2, 0.8, and 0.6, and the weights of the connections with the unit 113b are all 1.
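  • As a check on the division-network idea, the sketch below computes the centroid for the Fig. 44 weights; the grade values g are made up for illustration only.

```python
# Division-network centroid for the Fig. 44 example:
# weights to unit 113a are the consequent abscissas, weights to unit 113b are all 1.
z = [0.2, 0.8, 0.6]          # abscissas of the consequent membership functions
g = [0.5, 0.9, 0.3]          # example rule grade values (illustrative only)

A = sum(zi * gi for zi, gi in zip(z, g))   # output of unit 113a (weighted sum)
B = sum(g)                                  # output of unit 113b (plain sum)
centroid = A / B                            # division performed by network 111
print(centroid)                             # -> 0.5882...
```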
  • Figure 45 is a conceptual diagram of network structure conversion and fuzzy model extraction.
  • FIG. 46 shows the configuration of a pure neuro, that is, the adaptive data processing apparatus 11 described above.
  • When the neuro-fuzzy fusion system is composed only of units, like this pure neuro, the calculation of defuzzification as the final step of fuzzy inference is also performed using only the neural network.
  • In Fig. 46, there are an input layer (the first layer) of input units and an output layer (the n-th layer) that outputs the control operation amount as the processing result; the intermediate layers (the second to (n−1)-th layers) can be multilayered. A pure neuro is one in which all of these layers are completely connected to each other.
  • Fig. 47 shows an example in which the rule part fully-connected neuro 6 is composed only of units.
  • The rule part fully-connected neuro consists of the input layer (first layer) of the input part, the second layer used to realize the antecedent membership functions, the third layer that outputs the grade value of each antecedent membership function, and the rule part from the fourth layer to the (n−2)-th layer, where each unit in the (n−2)-th layer represents the scaling factor of each consequent membership function.
  • The consequent part is composed of the (n−1)-th layer used for realizing the membership functions and the output layer (n-th layer) of the output unit that outputs the result of defuzzification. It is a characteristic of the rule part fully-connected neuro that the part from the fourth layer to the (n−2)-th layer of the rule part is completely connected.
  • FIG. 48 shows an example in which the rule part pre-wired neuro 7 is composed only of units.
  • The hierarchical structure from the input layer (first layer) to the output layer (n-th layer) is the same as in the example of Fig. 47, but it differs from the rule part fully-connected neuro of Fig. 47 in that the connections and weights from the fourth layer to the (n−3)-th layer of the rule part are set not as a full connection but so that a structure corresponding to the fuzzy rules is created. This is the characteristic of the rule part pre-wired neuro.
  • Units 23a to 23e correspond to the (n−2)-th layer, and units 24a and 24b and the (n−1)-th layer correspond to the consequent membership function realization part 18a; the n-th layer (output layer), however, performs some form of defuzzification whose calculation is not the center-of-gravity calculation.
  • In the pure neuro of Fig. 46, by deleting the connections with little influence among the connections between the input layer and the second layer and between the second and third layers, the membership functions for the inputs can be clarified.
  • For each unit of the (n−1)-th layer and the output layer (n-th layer), the weights of the connections are arranged in ascending order, and the weights of the connections from the (n−1)-th-layer units connected to each connection and from the (n−2)-th-layer units connected to those units are reordered accordingly.
  • Then, by deleting the connections with little influence, the membership functions for the output can be extracted.
  • the fuzzy rules can be extracted by removing the connections having a low degree of influence from the third layer to the (n ⁇ 2) th layer.
  • The weights of the connections between the units of the (n−1)-th layer and the unit of the output layer (the n-th layer) are arranged in ascending order, and the weights of the connections from the (n−1)-th-layer units connected to each connection and from the (n−2)-th-layer units connected to those units are reordered accordingly.
  • In this way, a pure neuro can be converted into a rule part pre-wired neuro.
  • Similarly, a rule part fully-connected neuro can also be converted into a rule part pre-wired neuro.
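  • A minimal sketch of the "delete connections with little influence" step, assuming influence is ranked by absolute weight (the text leaves the exact criterion open); the function name and keep ratio are illustrative.

```python
def prune_small_weights(weights, keep_ratio=0.2):
    """Delete low-influence connections, keeping the largest |w|.

    weights    -- dict mapping (src_unit, dst_unit) -> weight
    keep_ratio -- fraction of connections to keep (illustrative threshold)
    """
    ranked = sorted(weights.items(), key=lambda kv: abs(kv[1]), reverse=True)
    n_keep = max(1, int(len(ranked) * keep_ratio))
    return dict(ranked[:n_keep])   # surviving connections define the structure
```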
  • Even when the internal rules of the system to be learned are entirely unknown, by training a rule part pre-wired neuro or a rule part fully-connected neuro, the internal rules can be extracted in the form of a fuzzy model.
  • The extraction of the fuzzy model is performed by deleting the connections and units that have little influence, so as not to change the information processing capability of the entire network as much as possible.
  • The process of analyzing the rule part pre-wired neuro consists of a logic element conversion process that converts the units of the rule part into the functions of logic elements, and a process of deleting rule units.
  • The processes (1) and (2) are preprocessing performed to unify the signs of the connection weights to positive, so that the logic does not become incomprehensible owing to negative signs.
  • The input/output characteristics of each unit are examined and the amplitudes (dynamic ranges) of the units are aligned; this is performed so that the magnitude of the contribution of the connections between the rule part, the consequent part, and the membership function part can be determined from the weight values alone.
  • The processes (7) and (8) are performed to match the input/output characteristics of the units with those of the logic elements.
  • Step (a) is performed for each weight of the connections between the rule part and the antecedent membership function realization part (including those that denote the θ value).
  • The weights of the connections between the rule part and the consequent part are converted (the converted weights can be interpreted as the importance of the fuzzy rules).
  • The antecedent membership function realization part 16 in Fig. 5 consists of the antecedent membership function realization part (second layer) and the antecedent membership function output part (third layer), and the rule part 17 consists of the rule part (fourth layer) and the consequent membership function part (fifth layer).
  • the center-of-gravity determining element output device 26 corresponds to the median value section for the center of gravity
  • the center-of-gravity calculating device 27 corresponds to the center-of-gravity calculating section.
  • Fig. 49 shows the state of the rule part pre-wired neuro before conversion into logic elements.
  • the state shown in Fig. 50 can be obtained.
  • The function of each unit in the rule part is identified as a logic element such as logical sum, average, X only, constant truth, algebraic sum, or algebraic product.
  • The process of analyzing the rule part fully-connected neuro consists of: processing to simplify the connections between the antecedent membership function realization part and the rule part, and between the rule part and the consequent membership function part; structural conversion of the rule part fully-connected neuro into a rule part pre-wired neuro; and processing to extract the fuzzy rules from the resulting rule part pre-wired neuro. Next, the processing procedures 2-1 to 2-3 are shown.
  • Steps (1) and (2) are performed for each unit in the rule section.
  • For the connections between the parts, the connection with the greatest absolute weight value is determined, and only that connection is left.
  • Figs. 52 to 57 show specific examples.
  • Fig. 52 shows an example of the rule part fully-connected neuro before the fuzzy rule extraction processing.
  • Figure 53 shows the process of grouping the units of the antecedent membership function part by the input variables x and y for each unit in the rule part.
  • Fig. 54 shows the state, following Fig. 53, in which the weights of the connections from each group in the antecedent membership function part are checked for each unit in the rule part and the connection with the maximum weight is determined.
  • FIG. 56 shows a state in which each unit in the rule part in the state of FIG. 55 has been converted into a logic element. Fig. 57 shows a state in which only the desired number of fuzzy rules, here three, are selected from the logic units of the rule part according to the magnitude of the connection weights to the consequent membership function part in Fig. 56.
  • The target system outputs a controlled variable for temperature and humidity; it indicates the degree of control performed at a certain temperature and a certain humidity.
  • the formula is as follows.
  • Control amount = 0.81T + 0.01H(0.99T − 14.3) + 46.3, where T represents temperature (°C) and H represents humidity (%).
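  • The formula line above is garbled in this text and is reconstructed here on the assumption that the target system is the standard temperature-humidity (discomfort) index, which the recoverable coefficients match; the sketch below simply evaluates it.

```python
def control_amount(T, H):
    """Target-system control amount (reconstructed discomfort-index form).

    T -- temperature in deg C, H -- relative humidity in %.
    """
    return 0.81 * T + 0.01 * H * (0.99 * T - 14.3) + 46.3

print(control_amount(25.0, 60.0))   # -> 72.82 at 25 deg C, 60 %
```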
  • Fig. 58 shows a hierarchical neural network trained on the data of this system.
  • This network consists of seven layers from the input layer to the output layer; the inputs are temperature and humidity, and the controlled variable is the output. There are 54 learning patterns given from this target system, and a normalized version of this pattern set, shown in Figure 59, was used.
  • the first layer: 2 units
  • the second layer: 5 units
  • the third layer: 3 units
  • the fourth layer: 4 units
  • the fifth layer: 3 units
  • the sixth layer: 5 units
  • the seventh layer: 1 unit
  • all the layers were fully connected.
  • the weights of the connections are shown in Fig. 60.
  • By the above procedure, the rule part pre-wired neuro of Fig. 66 is obtained.
  • In the first layer, two variables, temperature and humidity, are input.
  • Each unit in the second and third layers corresponds to the antecedent membership functions, and the fourth layer corresponds to the rules.
  • Each unit in the fifth and sixth layers corresponds to the consequent membership functions.
  • In the seventh layer, the defuzzification calculation is performed and the output value is obtained.
  • By examining the weights of the connections from the first layer through the second layer to each unit of the third layer (3 units in this example), which outputs the grade values of the antecedent membership functions, the antecedent membership functions can be read off.
  • Each unit in the fourth layer (4 units in this example) corresponds to a fuzzy rule; the fuzzy rules can be read by analyzing the weights of the connections in the layers before and after this layer.
  • The weights of the connections between the fifth and sixth layers are normalized with respect to the weights of the connections between the sixth and seventh layers and graphed.
  • The horizontal axis is the weight of the connection between the sixth and seventh layers; that is, the output value of a unit of the sixth layer corresponds to a point on the output-value coordinate axis of the consequent membership functions.
  • The vertical axis indicates the weight of the connection from a unit of the fifth layer to the sixth layer; that is, it corresponds to the grade value, at the point specified above on the output-value axis, of the consequent membership function corresponding to each unit in the fifth layer.
  • The weights of the connections from the units of the sixth layer to the unit of the seventh layer are d_1, d_2, d_3, d_4, and d_5, respectively.
  • Membership function 1 is the consequent membership function associated with the first unit in the fifth layer, and the weights of the connections from the first unit to all the units in the sixth layer are a_1, a_2, a_3, a_4, and a_5.
  • Membership function 2 is the consequent membership function corresponding to the second unit, and the weights of the connections from the second unit to all the units in the sixth layer are b_1, b_2, b_3, b_4, and b_5.
  • Membership function 3 is the consequent membership function corresponding to the third unit, and the weights of the connections from the third unit to all the units in the sixth layer are c_1, c_2, c_3, c_4, and c_5.
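  • Under this reading, a consequent membership function can be reconstructed from the learned weights as (abscissa, grade) sample pairs; the numeric values below are illustrative only.

```python
# Reconstructing consequent membership function 1 from the learned weights:
# d_i give the abscissa points (6th-to-7th layer weights) and a_i the grades
# (5th-to-6th layer weights after normalization). Values are illustrative.
d = [0.0, 0.25, 0.5, 0.75, 1.0]
a = [0.0, 0.4, 1.0, 0.4, 0.0]
membership_1 = sorted(zip(d, a))        # (output value, grade) sample points
```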
  • The connection weights of the rule part pre-wired neuro after conversion are shown in Fig. 68. In the figure, the parts indicated by ***** indicate the weights of deleted connections.
  • Each unit in the third layer can be classified, with respect to the input variables, into those outputting a grade for temperature and those outputting a grade for humidity.
  • For the unit noted here, only one connection to the humidity grade is left and the others are removed.
  • In the figure, the dotted lines indicate the weights of the deleted connections.
  • FIG. 70 shows the rule part pre-wired neuro that has been converted in this way.
  • Fig. 71 shows the connection weights. In the figure, the parts indicated by ***** indicate the weights of deleted connections.
  • The rule part fully-connected neuro shown in Fig. 72 is obtained.
  • The second and third layers correspond to the antecedent membership functions, the fourth layer corresponds to the rules, and the fifth and sixth layers correspond to the consequent membership functions.
  • The seventh layer performs the defuzzification calculation and outputs the output value.
  • In Fig. 72, by examining the weights of the connections from the first layer through the second layer to each unit of the third layer (3 units in this example), which outputs the grade values of the antecedent membership functions, the following can be read. Specifically, it can be seen that the first unit in the third layer outputs the grade value of the antecedent membership function for the input variable temperature (T), and, following the connections in the same way, that the second and third units output grade values of the antecedent membership functions for the input variable humidity (H).
  • The weights of the connections between the fifth and sixth layers are normalized with respect to the weights of the connections between the sixth and seventh layers and graphed.
  • The horizontal axis is the weight of the connection between the sixth and seventh layers; that is, the output value of a unit of the sixth layer corresponds to a point on the output-value coordinate axis of the consequent membership functions.
  • The vertical axis indicates the weight of the connection from a unit of the fifth layer to the sixth layer; that is, it corresponds to the grade value, at the point specified above on the output-value coordinates, of the consequent membership function for each unit in the fifth layer.
  • The connection weights are shown in Fig. 73. In the figure, the parts indicated by ***** indicate the weights of deleted connections.
  • FIG. 74 is a flowchart equivalent to the processing of the conversion procedure (1) shown in FIG. 61, concerning the connections from the first layer to the second layer, that is, between the input part and the antecedent membership function part in FIG. 52.
  • One unit of the second layer is taken out at S120, and it is determined at S121 whether or not the processing for the last unit of the second layer has been completed. If the processing for the last unit has not been completed, the weights of the connections from the first layer to that unit are extracted at S122.
  • Fig. 75 is a flowchart of the processing of the conversion procedure (2) described in Fig. 62.
  • It concerns the connections between the second and third layers, that is, in Fig. 52, between the antecedent membership function part and the antecedent membership function output part. The processing is similar to Fig. 74, except that, as shown in Fig. 75, the process of deleting the connections to a unit of the third layer ends with two connections remaining at S129.
  • Fig. 76 shows the conversion procedure between the third and fourth layers, that is, the processing flow of the conversion procedure (3) described in Fig. 63. In Fig. 52, this corresponds to the process of deleting the connections between the antecedent membership function part and the rule part. The processing is substantially the same as the conversion procedure between the second and third layers shown in Fig. 75.
  • Fig. 77 is a flowchart of the process of deleting the connections between the fourth and fifth layers, that is, between the rule part and the consequent membership function part. One unit of the fourth layer is taken out, and if the processing for the last unit has not been completed at S136, the weights of the connections from that unit to the units of the fifth layer are extracted at S137; only the one connection with the largest weight is left by S138 and S139, and the processing is continued until the processing for the last unit is completed at S136.
  • FIG. 78 is a processing flowchart of the conversion procedure (5) described in FIG. 65, that is, the rearrangement of connection weights as a conversion procedure from the fifth layer to the seventh layer.
  • In Fig. 52, this corresponds to the part from the consequent membership function part to the defuzzified output.
  • The unit of the seventh layer, that is, the output unit, is taken out at S140, and the weights of the connections from the sixth layer to that unit are extracted at S141.
  • At S142, the weights of the connections are rearranged in ascending order, and the units of the sixth layer are reordered according to the rearranged connections at S143.
  • At step S144, the weights of the connections from the fifth layer to the sixth layer are rearranged in accordance with this reordering, and the processing ends.
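  • A minimal sketch of this rearrangement, assuming a simple ascending sort keyed on the sixth-to-seventh-layer weights; the function and variable names are illustrative, not the patent's.

```python
def rearrange_by_output_weight(w65, w54):
    """Conversion procedure (5): sort layer-6 units by their weight to layer 7.

    w65 -- list of weights from each 6th-layer unit to the output unit
    w54 -- w54[j][i]: weight from 5th-layer unit i to 6th-layer unit j
    Returns both weight sets reordered so that w65 is ascending.
    """
    order = sorted(range(len(w65)), key=lambda j: w65[j])
    return [w65[j] for j in order], [w54[j] for j in order]
```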
  • FIG. 79 is a block diagram of the overall configuration of the system.
  • reference numeral 150 denotes a hierarchical network storage unit for storing the entity of a hierarchical neural network
  • Reference numeral 151 denotes an initialization processing device that initializes the antecedent membership function realization part, the rule part, the consequent membership function realization part, and the center-of-gravity calculation realization part of the hierarchical neural network; 152 denotes a learning processing device; 153 denotes a fuzzy model extraction processing device; and 154 denotes a network structure conversion unit that converts a pure neuro into a rule part fully-connected neuro or a rule part pre-wired neuro, and a rule part fully-connected neuro into a rule part pre-wired neuro.
  • 155 is a fuzzy model extraction unit that has the function of extracting a fuzzy model from the rule part fully-connected neuro or the rule part pre-wired neuro.
  • 156 is a membership function extraction unit that extracts membership functions, and 157 is a fuzzy rule extraction unit that extracts fuzzy rules.
  • In the hierarchical network storage unit 150, a rule part pre-wired neuro, a rule part fully-connected neuro, or a pure neuro is stored, together with the networks in the course of structure conversion.
  • The initialization processing device 151 sets, for example, a rule part pre-wired neuro as the hierarchical network in the initialization processing.
  • Fig. 80 shows an example of setting the weights of the connections between the units 24a and 24b of the rule part in Fig. 5 and the units 25a, 25b, … of the consequent membership function realization part. For the connection corresponding to each abscissa value, the grade value of the consequent membership function at that coordinate is set as the weight.
  • The weights of the connections in the center-of-gravity calculation realization unit are initialized by the method described above; Fig. 81 shows an example of this.
  • The weight of the connection corresponding to each abscissa value is set according to equation (04) above.
  • The fuzzy logic operations in the rule part, for example the weights and thresholds of the sigmoid function units 23a to 23e of the rule part in FIG. 5, are set by the method described in Fig. 15.
  • FIG. 82 is a detailed explanatory diagram of the learning processing device 152.
  • In the hierarchical network section 159, the units 1h are the input units constituting the input layer, the units 1i are the processing units of the intermediate layers (provided in multiple stages), and the units 1j are the processing units of the output layer.
  • Reference numeral 160 denotes a weight value management unit that manages the weight value of the inter-layer connection of the hierarchical network unit 159.
  • Reference numeral 161 denotes a weight value changing unit that updates the weight values by the back-propagation method in accordance with the error amount of the learning result.
  • A learning signal storage unit 164 stores learning signals, each composed of a pair of an input signal pattern and an output signal pattern determined by the input/output relationship of the target system.
  • d_pj included in the learning signal represents the teacher signal to the j-th unit for the p-th input pattern.
  • 162 is a learning signal presenting unit; in accordance with a learning instruction, it extracts the learning signal from the learning signal storage unit 164, supplies the input signal to the input of the hierarchical network section 159, and outputs the teacher signal d_pj to the weight value changing unit 161 and the learning convergence judging section 163 described later.
  • 163 is a learning convergence judging section; it receives the output signal y_pj from the hierarchical network section 159 and the teacher signal d_pj from the learning signal presenting unit 162, determines whether or not the error of the data processing function of the network section 159 has entered an allowable range, and notifies the learning signal presenting unit 162 of the determination result.
  • FIG. 83 shows the detailed configuration of the fuzzy rule extraction unit 157 in the fuzzy model extraction processing device 153 of FIG. 79.
  • 159 is the hierarchical network unit.
  • 157 is the fuzzy rule extraction unit, 160 is the weight value management unit, 165 is a network structure management unit, 166 is a network structure reading unit, 167 is a weight value reading unit, 168 is a logical operation data reading unit, 169 is a simulating unit, 170 is a logical operation analysis unit, 171 is a logical operation data management unit, 172 is a logical operation management unit, and 173 is a logical operation input/output characteristic information management unit.
  • The fuzzy rule extraction unit 157 analyzes the state of the hierarchical network section 159 according to the information of the weight value management unit 160 and the network structure management unit 165, and identifies the type of logical operation performed by each unit in the rule part so as to convert it into a logic element.
  • The logical operation data management unit 171 manages the predetermined logical operation data required for identifying the logical operation type.
  • FIG. 84 is a flowchart showing the operation procedure of the fuzzy model extraction processing device 153.
  • When two values x and y (0 ≤ x, y ≤ 1) are input to a unit 1, as shown in Fig. 12 above, the unit 1 produces an output value.
  • The input/output characteristics of such a unit 1 are determined by the weight values W_x and W_y and the threshold value θ.
  • The logical operation management unit 172 of the logical operation data management unit 171 manages the specific logical operation executed by a unit 1 in association with the original signs of the weight values and the θ value, for the case where, for example, the signs of the weight values and the θ value of the two-input unit 1 are aligned to positive to simplify the processing.
  • Figure 86 illustrates the management data of the logical operation management unit 172.
  • Figure 86(a) shows the management data used when the signs of the weight values and the θ value of unit 1 are aligned to positive; in this case the sum operation corresponds to unit 1.
  • Figure 86(b) shows the management data used when the signs of the weight values and the θ value of unit 1 are aligned; in this case the product operation corresponds to unit 1.
  • Figures 86(c) and 86(d) show the management data used in the respective remaining cases.
  • Figure 86(e) shows the case classified by the signs of the weight values and θ value of unit 1, where "x AND y" corresponds to the product operation of Fig. 85(b).
  • The logical operation input/output characteristic information management unit 173 of the logical operation data management unit 171 manages the input/output data of the various logical operations to be referred to for matching, after the signs of the weight values and θ value of the unit 1 have been normalized to positive, for example for two inputs and one output.
  • Next, the functions executed by the network structure management unit 165, the network structure reading unit 166, the weight value reading unit 167, the logical operation data reading unit 168, the simulating unit 169, and the logical operation analysis unit 170, which constitute the fuzzy rule extraction unit 157, will be described.
  • The network structure management unit 165 manages all information related to the hierarchical network configuration, such as what kind of network structure the hierarchical network section 159 has and what kind of function operation capability is assigned to each unit 1.
  • The network structure reading unit 166 reads the information held in the network structure management unit 165 and notifies the logical operation analysis unit 170 of it.
  • The weight value reading unit 167 refers to the management information of the network structure management unit 165, reads the learned weight values and θ values of the units 1 in the rule part from the weight value management unit 160, and notifies the logical operation analysis unit 170 of them.
  • The logical operation data reading unit 168 reads the management data of the logical operation management unit 172 and the logical operation input/output characteristic information management unit 173 and notifies the logical operation analysis unit 170 of it.
  • The simulating unit 169 receives from the logical operation analysis unit 170 the learned weight values and threshold values whose signs have been aligned for a unit 1 of the rule part, simulates the function operation of that unit 1, and calculates the output values when the input values of the input range managed by the logical operation input/output characteristic information management unit 173 are given, thereby collecting the input/output data characteristic information of the unit 1.
  • When the logical operation analysis unit 170 receives the management data of the logical operation data management unit 171 from the logical operation data reading unit 168, it analyzes the logical contents executed by each unit 1 in the rule part according to the input/output data characteristic information given from the simulating unit 169, whereby fuzzy rules are extracted from the hierarchical network section 159.
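  • The following sketch illustrates the identification idea for a two-input rule-part unit: simulate the sigmoid unit, align its dynamic range, and match it against reference logic elements. Checking only the binary input corners and three reference operations is a simplification of the full input-range comparison described above; all names are illustrative.

```python
import math

def unit_output(x, y, wx, wy, theta):
    """2-input sigmoid unit of the rule part (weights Wx, Wy, threshold theta)."""
    return 1.0 / (1.0 + math.exp(-(wx * x + wy * y - theta)))

# Reference logic elements, evaluated at the binary input corners.
REFERENCE = {
    "logical product": lambda x, y: min(x, y),
    "logical sum":     lambda x, y: max(x, y),
    "average":         lambda x, y: (x + y) / 2,
}

def identify_logic(wx, wy, theta, tol=0.1):
    corners = [(0, 0), (0, 1), (1, 0), (1, 1)]
    outs = [unit_output(x, y, wx, wy, theta) for x, y in corners]
    lo, hi = min(outs), max(outs)
    norm = [(o - lo) / (hi - lo) for o in outs]   # align the dynamic range
    for name, op in REFERENCE.items():
        if all(abs(n - op(x, y)) <= tol for n, (x, y) in zip(norm, corners)):
            return name
    return "unidentified"

print(identify_logic(8.0, 8.0, 12.0))   # -> logical product (AND-like unit)
print(identify_logic(8.0, 8.0,  4.0))   # -> logical sum (OR-like unit)
```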
  • In the fuzzy model extraction, the network structure conversion unit 154 in FIG. 79 first executes, as shown in step S175, the process of deleting the connections (internal connections) between the antecedent membership function output part and the rule part, for example as shown in FIG. 49. Various methods can be used for this connection deletion processing.
  • Next, the logical operation analysis unit 170 of FIG. 83 executes the processing of converting the units of the rule part into logic elements, as shown in step S177.
  • For each unit A of the rule part, the signs of the weights of the connections with the units of the antecedent membership function output part (including the θ value) are recorded, and in some cases inverted, so that they are aligned to positive.
  • Then, the simulating unit 169 is started to obtain the input/output data characteristic information of the unit A, and the output value is normalized according to the following formula using the maximum value max and the minimum value min of the obtained output values.
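  • The formula itself does not survive in this text; the min-max normalization consistent with this description is:

    y' = (y − min) / (max − min)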
  • The network structure conversion unit 154 executes the process of deleting units of the rule part as necessary in the following step S178. In this way, a logically expressed hierarchical network section 159 can be obtained.
  • Finally, the fuzzy rule extraction unit 157 outputs the obtained logical operation description of the data processing of the hierarchical network section 159 as fuzzy rules, and ends the processing.
  • The above has described the configuration of the rule part pre-wired neuro and the rule part fully-connected neuro, which are features of the present invention, the conversion from a pure neuro to a rule part pre-wired neuro or a rule part fully-connected neuro, and the extraction of a fuzzy model from the rule part fully-connected neuro or the rule part pre-wired neuro.
  • Next, the learning of the rule part pre-wired neuro will be described.
  • The basic configuration of the rule part pre-wired neuro is shown in Fig. 4.
  • The input unit 15 only distributes input signals, so the learning targets are the antecedent membership function realization part 16, the rule part 17, and the consequent membership function realization / defuzzification part 18.
  • [First step] The weight values of the rule part 17 are initially set based on previously held knowledge or using random numbers, and the weight values of the antecedent membership function realization part 16 and the consequent membership function realization / defuzzification part 18 are initialized based on previously held knowledge. [Second step] The weights of the rule part 17 are learned using the learning data.
  • [First step] The weight values of the antecedent membership function realization part 16 are initially set based on previously held knowledge or using random numbers, and the weight values of the rule part 17 and the consequent membership function realization / defuzzification part 18 are initialized based on previously held knowledge. [Second step] The weights of the antecedent membership function realization part 16 are learned using the learning data. [Third step] The weights of the consequent membership function realization / defuzzification part 18 are learned using the learning data.
  • [First step] The weight values of the rule part 17 are initially set based on previously held knowledge or using random numbers, and the weight values of the antecedent membership function realization part 16 and the consequent membership function realization / defuzzification part 18 are initialized based on previously held knowledge. [Second step] The weights of the rule part 17 are learned using the learning data.
  • [First step] The weight values of the antecedent membership function realization part 16 are initially set based on previously held knowledge or using random numbers, and the weight values of the rule part 17 and the consequent membership function realization / defuzzification part 18 are initialized based on previously held knowledge. [Second step] The weights of the antecedent membership function realization part 16 are learned using the learning data. [Third step] The weights of the rule part 17 are learned using the learning data.
  • Another embodiment is configured as follows.
  • [First step] The weight values of the antecedent membership function realization part 16 and the rule part 17 are initially set based on previously held knowledge or using random numbers, and the weight values of the consequent membership function realization / defuzzification part 18 are initialized based on previously held knowledge. [Second step] The weights of the antecedent membership function realization part 16 and the rule part 17 are learned simultaneously using the learning data.
  • Yet another embodiment is configured as follows.
  • In each case, the initial weight setting in the first step for the part whose weights are learned first is performed based on previously held knowledge where such knowledge is available, and using random numbers otherwise. Furthermore, as described later, at the start of the learning of each process after the initial weight setting of the first step, for the antecedent membership function realization part 16, the rule part 17, and the consequent membership function realization / defuzzification part 18, the learning flag provided for each connection of the part to be learned is turned on and the learning flag provided for each connection of the parts not to be learned is turned off, so that only the weight values of the connections whose learning flags are on are optimized by the learning process.
  • Such a learning method for the rule part pre-wired neuro is performed as follows.
  • For the antecedent membership function realization part, the rule part, and the consequent membership function realization / defuzzification part of the rule part pre-wired neuro, the weight values are initialized based on previously held knowledge, or initial weight values are set using random numbers.
  • When the learning process is started, an input pattern for learning is presented to the rule part pre-wired neuro, and the initial weight values are corrected so that the output pattern from the rule part pre-wired neuro substantially matches the desired output pattern for that input pattern; the learning of the weights is performed, for example, by the back-propagation method.
  • The weights of the entire rule part pre-wired neuro are learned at the final stage, and the weight learnings (1) to (4) below are performed in the preceding stages before the final-stage weight learning.
  • The initial setting of the weights based on previously held knowledge makes learning easy and improves the learning efficiency.
  • FIGS. 100 to 109 are learning operation flowcharts showing the processing procedures of the first to tenth embodiments of the method of learning the rule part pre-wired neuro.
  • The rule part pre-wired neuro to be subjected to the learning methods of FIGS. 100 to 109 has, for example, a hierarchical network structure for realizing fuzzy control rules, consisting of the antecedent membership function realization part 16, the rule part 17, and the consequent membership function realization / defuzzification part 18 shown in FIG. 4.
  • The rule part pre-wired neuro means a network structure in which the rule part 17 is internally connected to the antecedent membership function realization part 16 in the former stage and/or the consequent membership function realization / defuzzification part 18 in the latter stage according to the control rules, rather than connecting all units; it contrasts with the rule part fully-connected neuro, which internally connects all of the units between them. In the learning processes of the first to tenth embodiments shown in FIGS. 100 to 109, processing including the following six steps is performed.
  • Cab and Cbc are the connection group numbers of the connections whose weights are to be initially set, as described later.
  • the weight Cfg of the center of gravity calculation realizing section 18b is initialized so as to realize the center of gravity calculation.
  • A phase/group correspondence table indicating the neuron groups for each phase in which the learning process is performed and a group/connection correspondence table indicating the connection groups belonging to each neuron group are set.
  • The initial setting of the weight values in the first to third steps is as follows for each embodiment. First, for the part whose weights are learned first when the learning scheduler is started in the sixth step, that is, the part whose weights are learned in phase 1, the weight values are initialized based on previously held knowledge or initialized randomly using random numbers. On the other hand, for the parts whose weights are learned in the second and subsequent phases, only the initial setting of the weight values based on previously held knowledge is performed.
  • The initial setting of the weight values for the center-of-gravity calculation realization unit 18b in the fourth step is the same for all the embodiments, while in the setting of the phase/group correspondence table in the fifth step, a phase/group correspondence table unique to each embodiment is set according to the learning phases of the groups in the sixth step.
  • the group / connection correspondence table is set in common for all the embodiments.
  • The weight learning of the entire neuro at the final stage of the weight learning, that is, the last phase, is common to all the embodiments: the weights of the whole of the antecedent membership function realization part 16, the rule part 17, and the consequent membership function realization / defuzzification part 18 are learned. Weight learning specific to each embodiment is performed in the preceding stages.
  • The rule part 17 is composed of the antecedent rule part 248 and the consequent rule part 249.
  • The consequent rule part 249 receives the grade values LHS-1, …, of the rules.
  • The consequent membership function realization / defuzzification part 18 (which includes the center-of-gravity calculation realization part) receives as input the enlargement or reduction rates of the membership functions y(SA) to y(LA) of the output variable, and outputs the value of the output variable Y.
  • Since the rule part 17, which is the combination of the antecedent rule part 248 and the consequent rule part 249, has grade values of membership functions as both input and output, it can be cascaded, and multi-stage inference is possible.
  • The neurons connecting the modules 16, 248, 249, and 18 can be shared.
  • For example, the output neurons of the antecedent membership function realization part 16 and the input neurons of the antecedent rule part 248 may in practice be the same.
  • Each unit is described here as one neuron, but if the function of the unit can be specified, it may instead be realized by a gate circuit, an arithmetic unit, or the like that realizes the unit function without using a neuron.
  • Fig. 111 shows the neuron groups and connection groups for implementing the learning method for the rule part pre-wired neuro similar to the rule part fully-connected neuro of Fig. 5.
  • In Fig. 111 there are seven neuron groups Ga to Gg, of which the neuron group Ga of the input part and the neuron group Gg of the center-of-gravity calculation realization part 18b merely distribute or add and synthesize signals, so these two are excluded from the weight learning by the back-propagation method.
  • The connection groups indicating the input connections of the neuron groups are Cab to Cfg.
  • The weight values of the input connections located before the neuron groups to be learned are learned by the back-propagation method.
  • In step S185 of Fig. 100, following the completion of the initial setting, a learning plan for causing the rule part pre-wired neuro to perform weight learning, that is, a learning schedule, is set. As shown in Fig. 112(a), setting this learning schedule involves two steps: setting the phase/group correspondence table and setting the group/connection correspondence table shown in the figure.
  • The phase/group correspondence table specifies the neuron groups to be learned in each phase as the learning phase progresses.
  • In the first embodiment, the antecedent membership function realization part 16 and then the entire neuro are learned in order after the initial weight setting, so the two neuron groups Gb and Gc in Fig. 111 belonging to the antecedent membership function realization part 16 are set as the learning target group of phase 1, and in phase 2 the five neuron groups Gb, Gc, Gd, Ge, and Gf in Fig. 111 are set because the weights of the entire neuro are learned.
  • The group/connection correspondence table shows the correspondence between the neuron groups Ga to Gg and the input connection groups Cab to Cfg in the rule part pre-wired neuro, as shown in Fig. 112(k).
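  • A minimal sketch of the two tables for the first embodiment; the group-to-connection mapping beyond Cab and Cbc (Ccd, Cde, Cef) is an assumption extrapolated from the naming pattern, not taken from the patent.

```python
# Phase/group and group/connection correspondence tables for the first
# embodiment (group names follow Figs. 111-112).
phase_group = {
    1: ["Gb", "Gc"],                        # antecedent MF realization part
    2: ["Gb", "Gc", "Gd", "Ge", "Gf"],      # entire neuro
}
group_connection = {
    "Gb": "Cab", "Gc": "Cbc", "Gd": "Ccd", "Ge": "Cde", "Gf": "Cef",
}

def connections_for_phase(phase):
    """Connection groups whose learning flags are turned on in this phase."""
    return [group_connection[g] for g in phase_group[phase]]

print(connections_for_phase(1))   # -> ['Cab', 'Cbc']
```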
  • When the setting of the learning schedule is completed, the process proceeds to step S186 of FIG. 100 to start the learning scheduler.
  • The training data is given to the neuro, and weight learning is performed by the back-propagation method.
  • Fig. 113 shows a flowchart of the learning process of the rule part pre-wired neuro. This processing flow can be realized, for example, by the apparatus configuration shown in FIG. 114.
  • a learning processing unit 152 and a learning unit 261 are provided.
  • The learning processing device 152 is provided with the learning signal presenting unit 162 shown in FIG. 82, a learning signal storage unit 164 that stores learning signals each consisting of a pair of a control input and the desired control output for it, a phase/group correspondence table 258, a group/connection correspondence table 259, and a learning convergence determination unit 163.
  • The learning unit 261 is provided with a learning instruction reading unit 262, a learning flag setting unit 263, and a weight changing unit 161 that changes the weight values by the back-propagation method.
  • A feature of the apparatus configuration in FIG. 114 is that a learning adjuster 260 is provided on each connection connecting the units in the rule part pre-wired neuro 150.
  • The learning adjuster 260 has the configuration shown in FIG. 115 (the weight changing unit 161 is shown together with the learning adjuster 260).
  • The learning adjuster 260 includes a flag storage unit 265, a weight change information reading unit 266, and a weight change amount adjustment unit 267.
  • The weight changing unit 161 is provided with a weight calculation executing unit 268 that executes the weight calculation by the back-propagation method, a weight storage unit 269, and a weight change amount storage unit 270.
  • A learning adjuster 260 is provided for each connection of the rule part pre-wired neuro 150, and through these adjusters the learning scheduler 162a of the learning processing device 152 controls which connections of the rule part pre-wired neuro 150 are subjected to weight learning.
  • The flags set in the learning adjusters 260 are also used to specify, for a general hierarchical network that does not have a network structure capable of realizing fuzzy rules, for example a pure neuro, the hierarchical network structure consisting of the antecedent membership function realization part, the rule part, and the consequent membership function realization / defuzzification part, as a precondition for executing fuzzy rules.
  • The adjustment of the weight values when learning is actually performed by the back-propagation method on a hierarchical network conforming to the fuzzy rules is performed as follows.
  • The weight change information reading unit 266 of the learning adjuster 260 monitors whether a weight change amount has been written into the weight change amount storage unit 270 of the weight changing unit 161. When the weight change information reading unit 266 detects that a weight has been changed, it tells the weight change amount adjustment unit 267. The weight change amount adjustment unit 267 checks the learning flag in the flag storage unit 265 and does nothing if the learning flag is on; that is, the writing into the weight change amount storage unit 270 is left valid. On the other hand, if the learning flag is off, the weight change amount adjustment unit 267 sets the weight change amount storage unit 270 of the weight changing unit 161 to zero, invalidating the weight change.
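  • A sketch of this gating behavior, assuming the adjuster simply zeroes the stored change amount when its flag is off; the class and field names are illustrative, not the patent's.

```python
class LearningAdjuster:
    """Per-connection gate on back-propagation updates (cf. units 265-267)."""

    def __init__(self, flag_on=False):
        self.flag_on = flag_on            # the learning flag (flag storage 265)

    def adjust(self, weight_change):
        # Flag on: the written change amount stays valid.
        # Flag off: the change amount is forced to zero (update invalidated).
        return weight_change if self.flag_on else 0.0

# Phase 1: only connections of groups Cab and Cbc have their flags turned on.
adj = {"Cab": LearningAdjuster(True), "Cbc": LearningAdjuster(True),
       "Ccd": LearningAdjuster(False)}
print(adj["Cab"].adjust(0.05), adj["Ccd"].adjust(0.05))   # -> 0.05 0.0
```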
  • the hardware of this learning scheduler is disclosed in Japanese Patent Application No. 63-227825, “Learning method for network configuration data processing device”.
  • The connection group numbers Cab and Cbc corresponding to the neuron group numbers Gb and Gc are read out by referring to the group/connection correspondence table 259. Then, the process proceeds to S254, in which the connection group numbers Cab and Cbc are output, and the learning flags of the learning adjusters 260 provided for the connections belonging to the connection group numbers Cab and Cbc are turned on.
  • Specifically, the connection group numbers Cab and Cbc output from the learning scheduler 162a of the learning processing device 152 are read by the learning instruction reading unit 262 of the learning unit 261.
  • The learning flag setting unit 263, receiving the reading result from the learning instruction reading unit 262, commands the flag storage units 265 of the learning adjusters 260 provided for the connections belonging to the connection group numbers Cab and Cbc to set the learning flags.
  • Subsequently, the learning scheduler 162a issues a learning execution command to start the learning processing.
  • The control input X prepared in the learning signal storage unit 164 and the teacher signal d, which is the desirable control output for the control input X, are read out; the control input is given to the rule part pre-wired neuro 150, and the teacher signal is provided to the weight changing unit 161 and the learning convergence determination unit 163.
  • The control output Y from the rule part pre-wired neuro 150 receiving the learning input is taken into the weight changing unit 161 and the learning convergence judging unit 163; the weight changing unit 161 changes the weight values according to the back-propagation method, and the learning convergence determination unit 163 determines the end of the learning of phase 1 when the error of the control output Y with respect to the teacher signal d becomes less than the specified value.
  • When phase 1 ends, the phase counter i is incremented at S256, and the end of the learning phases is checked at S257.
  • Since the processing is completed at the end of phase 2, the process returns to step S252 to start the learning process of the next phase, phase 2.
  • Figure 116 is a processing flowchart of the weight learning performed by the back-propagation method on the rule part pre-wired neuro. The parameters of each part in this learning processing flow are determined as shown in Fig. 117.
  • Fig. 117 schematically shows the hierarchical structure of the rule part pre-wired neuro of Fig. 111; it has a six-layer structure from the first layer to the sixth layer, as shown in the figure.
  • The first and third layers are composed of sigmoid function units, and the rest are composed of linear function units; the input layer is excluded.
  • The weights to be learned by the back-propagation method are those from the first layer to the fifth layer; the sixth layer, the center-of-gravity calculation realization unit 18b at the last stage, is excluded from the weight learning targets.
  • The fuzzy inference value output by the rule part pre-wired neuro realizing the fuzzy rules, that is, the control output value, is denoted y6, and the learning process in a single phase proceeds from the sixth layer to the fifth layer, the fourth layer, the third layer, the second layer, and the first layer, in this order.
  • W_{i,i−1} and ΔW_{i,i−1} denote matrices of size (number of neurons in the i-th layer) × (number of neurons in the (i−1)-th layer).
  • δ_i and y_i are vectors whose size equals the number of neurons in the i-th layer.
  • In the first embodiment, the weights of the first and second layers belonging to the antecedent membership function realization part 16 are learned in the first phase, and in the next second phase the weights of the entire neuro, that is, all the weights of the first to fifth layers, are learned.
  • In phase 1, the flow proceeds to S272, where it is determined whether or not the unit is a sigmoid unit. Since the sixth layer is a linear function unit, the flow proceeds to S284, where it is determined whether or not it is the last layer. Since the sixth layer is the last layer, the process proceeds to S285, where the difference value δ6 is obtained from the teacher signal d and the control output y obtained at that time, according to the same equation as the corresponding equation above; here δ corresponds to the error.
  • Next, the process proceeds to S276, where it is determined whether or not the unit is a target of weight learning.
  • Since the fifth layer is also a linear function unit, the flow likewise proceeds from S272 to S284; but because it is not the last layer, it proceeds to S286, where the difference value δ5 is calculated according to the same equation as equation (13).
  • The fourth layer is also a linear function unit and, in phase 1, is excluded from the learning targets, so the same processing as for the fifth layer is repeated.
  • Since the third layer is a sigmoid function unit, the flow proceeds from S272 to S273, and since it is not the last stage, the sigmoid-unit-specific difference value δ3 is obtained at S275 according to the same equation as equation (7).
  • Since the second layer is a linear function unit, the flow proceeds from S272 via S284 to S286, where the difference value δ2 is obtained. Because the second layer belongs to the antecedent membership function realization part 16, which is learned in phase 1, the flow proceeds from S276 to S277, where the weight value update amount ΔW21 is calculated according to the same equations as equations (6) and (8), and the weight value W21 is updated at S279 according to the same equation as equation (8a). Thereafter, the same process is repeated up to the last unit of the second layer, and the process proceeds to the first layer.
  • Since the first layer is a sigmoid function unit and not the last layer, the flow proceeds from S272 via S273 to S275, where the difference value δ1 is calculated; the flow then proceeds to S277, where the weight value update amount ΔW10 is obtained, and the weight value W10 is updated at S279. Thereafter, the same process is repeated up to the last unit of the first layer; when the processing of the last unit of the first layer is completed, the process proceeds to S282, where the i-counter matches the final layer of phase 1, so the end of the learning process of phase 1 is determined and the series of processing ends.
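  • A minimal NumPy sketch of one backward sweep of this single-phase flow, with sigmoid layers taking the extra y(1−y) factor and learning flags selecting which layers update; the layer bookkeeping and names are illustrative, not the patent's notation.

```python
import numpy as np

def backward_phase(layers, d, lr=0.1):
    """One backward sweep from the sixth layer down to the first.

    layers -- list for layers 1..6; each dict holds W (weights into the layer),
              y (its output vector), x (the network input, layer 1 only),
              kind ('sigmoid' or 'linear'), and learn (the learning flag).
    d      -- teacher signal vector for the control output y6.
    W of layer i has shape (units in layer i) x (units in layer i-1).
    """
    delta = None
    for i in reversed(range(len(layers))):
        L = layers[i]
        if i == len(layers) - 1:
            delta = d - L["y"]                       # last layer: error term
        else:
            delta = layers[i + 1]["W"].T @ delta     # propagate through layer i+1
            if L["kind"] == "sigmoid":
                delta = delta * L["y"] * (1.0 - L["y"])   # sigmoid-only factor
        if L["learn"]:                               # flagged layers update only
            y_prev = layers[i - 1]["y"] if i > 0 else L["x"]
            L["W"] += lr * np.outer(delta, y_prev)   # Delta W_{i,i-1}
    return layers
```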
  • In phase 2, the processing of the sixth layer only calculates the difference value δ6 and the weight value W65 is not changed, but in the processing of the fifth layer the calculation of the weight value W54 is actually performed and W54 is updated.
  • Thereafter, the learning process is executed for the weights of the remaining fourth, third, second, and first layers in this order, and when the processing of the last unit of the first layer is determined to be completed, the series of learning processes ends.
  • In the second embodiment, for the rule part 17, initial setting of the weight values based on previously held knowledge or random initial setting using random numbers, or initial setting using both, is performed.
  • For the antecedent membership function realization part 16 and the consequent membership function realization / defuzzification part 18, whose weights are learned in the second and subsequent phases, only the initial setting of the weight values using previously held knowledge is performed, as shown in S188 and S190.
  • In the setting of the phase/group correspondence table in step S192, as shown in Fig. 112(b), the neuron group numbers Gd and Ge belonging to the rule part 17 are set in phase 1, and in phase 2 the five group numbers Gb to Gf for the entire neuro are set.
  • By learning the rule part 17 first and then the entire neuro, the weights of the entire rule part pre-wired neuro can be readjusted in accordance with the result of the weight learning of the rule part.
  • In the third embodiment, as shown in step S199 of FIG. 102, the weights are learned in the following order: the rule part 17, the antecedent membership function realization part 16, and then the entire neuro.
  • For the rule part 17, which learns its weights first in accordance with this order, initial setting of the weight values based on previously held knowledge or random initial setting using random numbers, or initial setting using both, is performed, as shown in step S195.
  • For the other parts, only the initial setting of the weight values based on previously held knowledge is performed, as shown in steps S194 and S196.
  • In the setting of the phase/group correspondence table in step S198, the neuron group numbers Gd and Ge belonging to the rule part 17 are set in phase 1, the neuron group numbers Gb and Gc belonging to the antecedent membership function realization part 16 are set in phase 2, and the five group numbers Gb to Gf for the entire neuro are set in phase 3.
  • In this way, after the weights of the rule part 17 and the antecedent membership function realization part 16 are learned in order, it is possible to readjust the weights of the entire rule part pre-wired neuro in accordance with their weight learning results.
  • In the fourth embodiment, the weights are learned in the following order.
  • For the antecedent membership function realization part 16, which learns its weights first, initial setting of the weight values based on previously held knowledge or random initial setting using random numbers, or initial setting using both, is performed, as shown in step S201.
  • For the other parts, only the initial setting of the weight values based on previously held knowledge is performed, as shown in steps S202 and S203.
  • The phase/group correspondence table in step S205 is set as shown in Fig. 112(d): in phase 1 the neuron group numbers Gb and Gc belonging to the antecedent membership function realization part 16 are set, in phase 2 the neuron group numbers Gd and Ge belonging to the rule part 17 are set, and in phase 3 the neuron group numbers Gb to Gf for the entire neuro are set.
  • In this way, after the weights of the antecedent membership function realization part 16 and the rule part 17 are learned in order, it is possible to readjust the weights of the entire rule part pre-wired neuro in accordance with their weight learning results.
  • In the fifth embodiment, the weights are learned in the following order.
  • Since the weights of the antecedent membership function realization part 16 and the rule part 17 are adjusted at the same time, for both of these parts initial setting of the weight values based on previously held knowledge or random initial setting using random numbers, or initial setting using both, is performed, as shown in steps S208 and S209; for the consequent membership function realization / defuzzification part 18, which undergoes the second round of weight learning, only the initial setting of the weight values based on previously held knowledge is performed, as shown in S210.
  • In the setting of the phase/group correspondence table in step S212, the four neuron group numbers Gb, Gc, Gd, and Ge belonging to the antecedent membership function realization part 16 and the rule part 17 are set in phase 1, and the five neuron group numbers Gb to Gf for the entire neuro are set in phase 2.
  • In this way, the weights of the antecedent membership function realization part 16 and the rule part 17 are learned simultaneously, and then the entire rule part pre-wired neuro is learned, so that the weights of the entire rule part pre-wired neuro can be readjusted in accordance with the results of the simultaneous weight learning.
  • In the sixth embodiment, the weights are learned in the following order.
  • For the antecedent membership function realization part 16, which learns its weights first, initial setting of the weight values based on previously held knowledge or random initial setting using random numbers, or initial setting using both, is performed.
  • For the rule part 17 and the consequent membership function realization / defuzzification part 18, whose weights are learned in the second and subsequent phases, only the initial setting of the weight values based on previously held knowledge is performed, as shown in steps S216 and S217.
  • the phase 'group correspondence table in step S219 is set as shown in Fig. 11 (f).
  • b, Gc are set, the membership of the consequent part is realized in phase 2, and the neuron group number Gf belonging to the non-fuzzification section 18 is set.
In the next variant, the weights are learned in the following order. For the rule part 17, which learns its weights first, an initial setting of the weight values based on the introduction of knowledge held in advance, a random initial setting using random numbers, or an initial setting using both is performed, as shown in step S222; for the antecedent membership function realizing part 16 and the consequent membership function realizing/defuzzification part 18, whose weights are learned second and subsequently, only the initial setting of the weight values based on the introduction of knowledge held in advance is performed, as shown in steps S221 and S223. The phase-group correspondence table in step S225 is set, as shown in Fig. 112, so that phase 1 sets the neuron group numbers Gd and Ge belonging to the rule part 17, phase 2 sets the neuron group number Gf belonging to the consequent membership function realizing/defuzzification section 18, and phase 3 sets the five neuron group numbers Gb to Gf corresponding to the entire neuro. In this way, the weights of the rule part 17 and the consequent membership function realizing/defuzzification part 18 are learned sequentially, and then the entire rule-part pre-wired neuro is learned, so that the weights of the entire rule-part pre-wired neuro can be readjusted in accordance with the weight learning results of the rule part 17 and the consequent membership function realizing/defuzzification part 18.
In the next variant, the rule part 17, which learns its weights first, is given, as shown in step S229, an initial setting of the weight values based on the introduction of knowledge held in advance, a random initial setting using random numbers, or an initial setting using both; for the antecedent membership function realizing part 16 and the consequent membership function realizing/defuzzification part 18, as shown in steps S228 and S230, only the initial setting of the weight values based on the introduction of knowledge held in advance is performed. The phase-group correspondence table in step S232 is set so that phase 1 sets the neuron group numbers Gd and Ge belonging to the rule section 17, phase 2 sets the neuron group numbers Gb and Gc belonging to the antecedent membership function realizing section 16, phase 3 sets the neuron group number Gf belonging to the consequent membership function realizing/defuzzification unit 18, and the last phase 4 sets the five neuron group numbers Gb to Gf corresponding to the entire neuro. Weight learning is thus performed in the order of the rule part 17, the antecedent membership function realizing part 16, and the consequent membership function realizing/defuzzification part 18, after which the weights of the entire rule-part pre-wired neuro are learned, so that the weights of the entire rule-part pre-wired neuro can be readjusted in accordance with the weight learning results of the antecedent membership function realizing part 16, the rule part 17, and the consequent membership function realizing/defuzzification part 18.
In the next variant, the weights are learned in the following order. For the antecedent membership function realizing part 16 and the rule part 17, an initial setting of the weight values based on the introduction of knowledge held in advance, a random initial setting using random numbers, or an initial setting using both is performed; for the parts whose weights are learned second and subsequently, as shown in steps S236 and S237, only the initial setting of the weight values based on the introduction of knowledge held in advance is performed. The phase-group correspondence table in step S239 is set as shown in Fig. 112(i): phase 1 sets the neuron group numbers Gb and Gc belonging to the antecedent membership function realizing part 16, phase 2 sets the neuron group numbers Gd and Ge belonging to the rule part 17, phase 3 sets the neuron group number Gf belonging to the consequent membership function realizing/defuzzification part 18, and the last phase sets the neuron group numbers corresponding to the entire neuro. The weights of the antecedent membership function realizing part 16, the rule part 17, and the consequent part are thus learned in sequence before the entire neuro is learned.
In the final variant, the weights are learned in the following order. For the antecedent membership function realizing part 16 and the rule part 17, whose weights are adjusted simultaneously and first, an initial setting of the weight values based on the introduction of knowledge held in advance, a random initial setting using random numbers, or an initial setting using both is performed; for the consequent membership function realizing/defuzzification section 18, whose weight learning comes second, only the initial setting of the weight values based on the introduction of knowledge held in advance is performed, as shown in step S243. The phase-group correspondence table in step S245 is set so that phase 1 sets the four neuron group numbers Gb, Gc, Gd and Ge belonging to the antecedent membership function realizing unit 16 and the rule unit 17, phase 2 sets the neuron group number Gf belonging to the defuzzification part 18, and the last phase 3 sets the five neuron group numbers Gb to Gf corresponding to the entire neuro. In this way, the weights of the antecedent membership function realizing part 16 and the rule part 17 are learned simultaneously, then the weights of the consequent membership function realizing/defuzzification section 18 are learned, and finally the weights of the entire rule-part pre-wired neuro are learned, so that the weights of the entire rule-part pre-wired neuro can be readjusted in accordance with the weight learning results of the antecedent membership function realizing part 16, the rule part 17, and the consequent membership function realizing/defuzzification part 18.
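Across all of these variants the mechanism is the same: each phase unfreezes a subset of connection groups, and back-propagation updates only those groups. As a rough software illustration only (the Group container, the placeholder gradient, and the schedule shown are assumptions, not the patent's implementation):

```python
import numpy as np

class Group:
    """Hypothetical container: one connection-weight group with a freeze flag."""
    def __init__(self, weights):
        self.weights = weights
        self.trainable = False

groups = {g: Group(np.random.randn(4, 4)) for g in ("Gb", "Gc", "Gd", "Ge", "Gf")}

def backprop_gradient(group):
    # Placeholder for the real back-propagation gradient of this group.
    return np.zeros_like(group.weights)

def run_phase(phase_groups, epochs, lr=0.1):
    # Freeze every group, then unfreeze those named for the current phase.
    for g in groups.values():
        g.trainable = False
    for name in phase_groups:
        groups[name].trainable = True
    for _ in range(epochs):
        for name in phase_groups:
            groups[name].weights -= lr * backprop_gradient(groups[name])

# Schedule mirroring one variant: antecedent part, rule part, then whole neuro.
for phase in (["Gb", "Gc"], ["Gd", "Ge"], ["Gb", "Gc", "Gd", "Ge", "Gf"]):
    run_phase(phase, epochs=100)
```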
Next, a simulation will be described. This simulation was performed by constructing, in the hierarchical network section 159, a data processing function having the input/output signal relationship shown in Fig. 118, where the horizontal axis x represents the input signal and the vertical axis y represents the output signal. For the input, the membership functions "small 1" shown in Fig. 119(a) and "big 1" shown in Fig. 119(b) are defined; for the output, the membership functions "small 2" shown in Fig. 119(c) and "big 2" shown in Fig. 119(d) are defined; and two rules governing the relationship between these membership functions are established. Fig. 120 shows the input/output signal relationship of the fuzzy model 10 generated in this way. Here, the output signal y, which is the fuzzy inference value, is determined according to the same form as equation (21) above. As can be seen from the figure, the generated fuzzy model 10 approximates, if only roughly, the input/output signal relationship of the data processing function shown in Fig. 118.
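Equation (21) itself is not reproduced in this part of the text; purely for illustration, a two-rule inference value of the common grade-weighted-average kind can be sketched as follows. The membership parameters are those quoted below for Fig. 121; the pairing of antecedents with consequent representative points is an assumption:

```python
from math import exp

def sigmoid(u):
    return 1.0 / (1.0 + exp(-u))

# Antecedent grades; the weight/threshold pairs are those quoted for Fig. 121.
def grade_big1(x):   return sigmoid(12.0 * x - 5.4)
def grade_small1(x): return sigmoid(-12.0 * x + 6.6)

def fuzzy_inference(x, rep_small2=0.0, rep_big2=1.0):
    """Grade-weighted average of assumed consequent representative points."""
    g_small1, g_big1 = grade_small1(x), grade_big1(x)
    # Assumed rule pairing: "small 1" -> "big 2", "big 1" -> "small 2".
    return (g_small1 * rep_big2 + g_big1 * rep_small2) / (g_small1 + g_big1)
```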
Fig. 121 shows an example of the hierarchical network section 159 constructed from the fuzzy model 10 of Fig. 120; this hierarchical network section 159 corresponds to the rule-part pre-wired neuro 12. Here, the weight value "12" and the threshold value "−5.4" are assigned to the basic unit 1 of "2", giving it a configuration that calculates the grade value of the membership function "big 1" of Fig. 119(b), and the weight value "−12" and the threshold value "6.6" are assigned to the basic unit 1 of "3", giving it a configuration that calculates the grade value of the membership function "small 1" of Fig. 119(a). By assigning the weight value "5" and the threshold value "−2.5" to the basic units 1 of "4" and "5" provided in correspondence with the two rules, the relationship between the sum of the input values and the output becomes almost linear. The units of "6" and "7", provided in correspondence with the center-of-gravity calculation, are linear elements that operate so as to output their input values as they are; "1" is set as the weight value of the internal connections between the basic unit 1 of "5" and the unit of "6" and between the basic unit 1 of "4" and the unit of "7", and the weight value of the internal connection between the units of "6" and "7" is also set to "1". Fig. 122 shows the input/output signal relationship of the hierarchical network section 159 of Fig. 121 constructed in this way. As can be seen from the figure, the hierarchical network section 159 of Fig. 121, that is, the rule-part pre-wired neuro, can demonstrate, without performing any learning, a data processing function that gives an input/output signal relationship closely approximating that of Fig. 118; this data processing function becomes still more accurate if the weight values and threshold values are subsequently learned.
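As an aside on the quoted parameters: with the weight value "5" and the threshold value "−2.5", the basic unit's sigmoid is operated in its shallow central region, which is why its response to the sum of the input grades is almost linear. A small, purely illustrative check:

```python
from math import exp

def basic_unit(u, w=5.0, theta=-2.5):
    # Quoted parameters of units "4" and "5": output = sigmoid(5*u - 2.5).
    return 1.0 / (1.0 + exp(-(w * u + theta)))

# Over grade sums u in [0, 1] the response stays near the affine approximation
# 0.5 + 1.25*(u - 0.5) taken at the sigmoid's midpoint (slope w/4 = 1.25),
# deviating only mildly toward the ends of the range.
for u in (0.0, 0.25, 0.5, 0.75, 1.0):
    approx = 0.5 + 1.25 * (u - 0.5)
    print(f"u={u:.2f}  unit={basic_unit(u):.3f}  linear~{approx:.3f}")
```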
Fig. 123(a) shows another example of the hierarchical network section 159 constructed from the fuzzy model 10 of Fig. 120; this hierarchical network section 159 corresponds to the rule-part fully-connected neuro 13. Here, as before, the weight value "12" and the threshold value "−5.4" are assigned to the basic unit 1 of "2", giving it a configuration that calculates the grade value of the membership function "big 1" of Fig. 119(b), and the weight value "−12" and the threshold value "6.6" are assigned to the basic unit 1 of "3", giving it a configuration that calculates the grade value of the membership function "small 1" of Fig. 119(a). The basic unit 1 of "4", provided in correspondence with the membership function "small 2" of Fig. 119(c), and the basic unit 1 of "5", provided in correspondence with the membership function "big 2" of Fig. 119(d), are each internally connected to the basic units 1 of "2" and "3". Since the threshold values of the basic units 1 of "4" and "5" and the weight values relating to their inputs are to be obtained by learning, random values are set as their initial values, and the weight values of the internal connections between the center-of-gravity determination module indicated by "6" and the units of "4" and "5" are set to "1". The learning of the threshold values of the basic units 1 of "4" and "5" and of the weight values relating to their inputs is executed using the input/output signal relationship created from the generated fuzzy model 10 as the learning signal.
Figs. 124 and 125 show the learning signal created from the generated fuzzy model 10. Here, the learning teacher signal is obtained as the grade values for "small 2" and "big 2".
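One way to picture how such a teacher signal could be produced from the fuzzy model: sweep the input range and record, for each sample, the grade values the model assigns to "small 2" and "big 2". The input range and the rule-to-consequent pairing below are assumptions; the actual signals are the ones listed in Figs. 124 and 125:

```python
from math import exp

def sigmoid(u):
    return 1.0 / (1.0 + exp(-u))

def make_learning_signal(n=21):
    samples = []
    for i in range(n):
        x = i / (n - 1)                      # assumed input range [0, 1]
        t_big2 = sigmoid(-12.0 * x + 6.6)    # grade of "small 1" driving "big 2"
        t_small2 = sigmoid(12.0 * x - 5.4)   # grade of "big 1" driving "small 2"
        samples.append((x, t_small2, t_big2))
    return samples
```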
Fig. 123(b) illustrates the learned threshold values and input weight values of the basic units 1 of "4" and "5" of the hierarchical network section 159 of Fig. 123(a), learned using this learning signal, together with the threshold values and input weight values of the basic units 1 of "2" and "3" updated by the learning. Fig. 126 shows the input/output signal relationship of the hierarchical network section 159 of Fig. 123(b), that is, the rule-part fully-connected neuro, constructed by this learning. As can be seen from the figure, the hierarchical network section 159 of Fig. 123(b) can demonstrate a data processing function that gives an input/output signal relationship closely approximating that of Fig. 118. Moreover, the new threshold values and input weight values of the basic units 1 of "2" and "3" yield more appropriate membership functions for "small 1" and "big 1"; Fig. 127 shows the membership function of "small 1" after this learning.
Fig. 128 shows the learning signal used for learning the weight values and threshold values of the internal connections of the adaptive data processing apparatus 11. Although this learning signal was created based on the input/output signal relationship of Fig. 118, it can also be created from the generated fuzzy model 10 described above. Fig. 129 shows the learned weight values and threshold values of the basic units 1 of the adaptive data processing apparatus 11 learned using this learning signal, and Fig. 130 illustrates the input/output signal relationship of the adaptive data processing apparatus 11 of Fig. 129 constructed by this learning. In Fig. 129, the value described in association with each basic unit 1 is a threshold value, and the value described in association with each internal connection is a weight value. As can be seen from the figures, the adaptive data processing apparatus 11 of Fig. 129 can demonstrate a data processing function that gives an input/output signal relationship quite similar to that of Fig. 118.
Next, the simulation of generating fuzzy control rules is described. As shown in Fig. 131, this simulation has units 21a and 21b that output the grade values of the antecedent membership functions. Fig. 13(a) shows the antecedent membership function assigned to the processing unit 21a, Fig. 13(b) shows the antecedent membership function assigned to the processing unit 21b, Fig. 13(c) shows the membership function associated with the output of the processing unit 24a, and Fig. 13(d) shows the membership function associated with the output of the processing unit 24b. A fuzzy arithmetic function that calculates and outputs the average value of its input values is assigned to the processing units 23a and 23b, and a fuzzy arithmetic function that outputs the added value of its input values is assigned to the processing units 24a and 24b. As the control state quantities describing the antecedent parts of these fuzzy control rules, there are the seven control state quantities "TU1, ALK, TEMP, TUSE, TUUP, FLOC, STAT" shown in Fig. 135, and as the control operation quantity describing the consequent parts, there is assumed to be one control operation quantity, "DDOS", shown in Fig. 135.
Here, the processing units 21 (21a, ...) are the same as the basic unit 1 described above (except for a difference in the definition of the sign of the threshold value). Fig. 138 shows the definitions of the function shapes of these membership functions.
Fig. 140 shows the fuzzy control rules of Fig. 134 mapped onto the rule-part pre-wired neuro 12; in the figure, the input units 20 (20a, ...) are omitted. The processing units 21 (21a, ...) are provided as 15 units according to the number of membership functions of the control state quantities, the processing units 23 (23a, ...) are provided as 10 units according to the number of fuzzy control rules, and the processing units 24 (24a, ...) are provided as 4 units according to the number of membership functions of the control operation quantity. The processing units 23 are assigned a fuzzy arithmetic function that calculates the average value of their input values, and the processing units 24 are assigned a fuzzy arithmetic function that calculates the added value of their input values.
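For illustration, the forward pass of this 15-10-4 arrangement can be sketched as follows. The index wiring comes from the rules of Fig. 134 and is not reproduced here, so the usage at the bottom fills in hypothetical wiring:

```python
import numpy as np

def rule_network_forward(grades, rule_inputs, consequent_rules):
    """grades: 15 antecedent grade values (processing units 21).
    rule_inputs: per rule, indices of its antecedent grades (10 rules).
    consequent_rules: per consequent unit, indices of its rules (4 units)."""
    # Processing units 23: fuzzy operation = average of the antecedent grades.
    strengths = np.array([np.mean([grades[i] for i in idx]) for idx in rule_inputs])
    # Processing units 24: fuzzy operation = sum of the incoming rule strengths.
    outputs = np.array([np.sum(strengths[list(idx)]) for idx in consequent_rules])
    return strengths, outputs

# Tiny usage with made-up wiring: 15 grades, 10 rules, 4 consequent units.
g = np.random.rand(15)
rules = [(i, (i + 1) % 15) for i in range(10)]       # hypothetical wiring
cons = [tuple(range(j, 10, 4)) for j in range(4)]    # hypothetical wiring
print(rule_network_forward(g, rules, cons))
```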
The weight values of the internal connections between the input units 20 and the processing units 21 learned by the simulation are shown in the "after" columns of Figs. 136 and 137. That is, by the learning process using the control data of the control target, the membership functions defined by the values described in the "before" columns of Figs. 136 and 137 are tuned into the membership functions specified by the values described in the "after" columns.
Fig. 142 shows examples of changes in the function shapes of the tuned membership functions; the broken lines show the function shapes before tuning, and the solid lines show the function shapes after tuning. Fig. 142(a) shows the function change before and after tuning of the membership function "IS (not small) of TEMP (water temperature)", Fig. 142(b) shows the function change before and after tuning of the membership function "SA (small) of FLOC (floc formation state)", and Fig. 142(c) shows the function change before and after tuning of the membership function "MM (normal) of TUUP (turbidity increase)".
In this way, the fuzzy control rules are mapped onto the hierarchical network section 159, the weights of the hierarchical network are learned using the control data group obtained from the control target as the learning signal, and the function shapes of the membership functions described in the fuzzy control rules can then be tuned using the weight values of the learned hierarchical network. This makes it possible to execute the tuning of the membership functions of the fuzzy control rules mechanically and objectively.
In the tuning of the membership functions, the learning flags in the learning adjusters 260 described in Fig. 114 are applied to the units corresponding to the antecedent membership functions; in the tuning of the rule weights, tuning is performed by turning on the learning flags of the learning adjusters for the connections from the processing units 23 to the processing units 24 of the consequent rules. A tuning simulation of the rule weights was performed with the learning constant and momentum set in the same manner as described above.
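The gating effect of the learning flags can be pictured as follows; the Connection container and update routine are illustrative assumptions, not the disclosed learning-adjuster circuit:

```python
# Sketch of learning-flag gating: a connection's weight is updated only while
# its flag is on, so turning the flags on for only the unit-23-to-unit-24
# connections tunes the rule weights and leaves everything else frozen.
class Connection:
    def __init__(self, weight, learn=False):
        self.weight = weight
        self.learn = learn        # learning flag of the attached adjuster

def apply_update(connections, grads, lr=0.01):
    for conn, grad in zip(connections, grads):
        if conn.learn:            # frozen connections keep their weights
            conn.weight -= lr * grad

conns = [Connection(1.0), Connection(0.5, learn=True)]
apply_update(conns, grads=[0.2, 0.2])
print([c.weight for c in conns])  # only the flagged connection moved
```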
Fig. 144 shows the learned weight values at the update cycle count of 10,000, where the error value was smallest. Here, "1.667528" in the No. 2 fuzzy control rule is the learned weight value of the internal connection between the processing unit 21 that handles "TU1 MM" in the antecedent part of the No. 2 fuzzy control rule shown in Fig. 134 and the processing unit 23 that handles that rule's antecedent part. The learned weight values of the internal connections between the processing units 21 and the processing units 23 are listed according to the order in which they appear in the antecedent part; for example, "0.640026" in the No. 2 fuzzy control rule is the learned weight value of the internal connection between the processing unit 23 that handles the fuzzy operation of the No. 2 fuzzy control rule and the processing unit 24 that handles its consequent part. By learning the rule weights in this way, the fuzzy control rules can be made faithful to the control target.
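Read back after learning, these connection weights behave as rule strengths. A trivial, purely illustrative sketch of ranking rules by the magnitude of their learned consequent-side weights (only rule No. 2's value is taken from Fig. 144; the other entries are hypothetical):

```python
# Hypothetical table: rule number -> learned weight of its 23->24 connection.
rule_weights = {2: 0.640026, 5: 1.1, 7: 0.05}  # rule 2's value is from Fig. 144

# Rules whose learned weight stays near zero contribute little to the control
# quantity; large weights mark the rules that dominate the tuned controller.
for rule, w in sorted(rule_weights.items(), key=lambda kv: -abs(kv[1])):
    print(f"rule No.{rule}: weight {w:+.6f}")
```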
The hierarchical network unit 159 may be realized by software means or by hardware means. When it is realized by hardware means, a configuration such as that disclosed by the applicant of the present invention in Japanese Patent Application No. 63-216865, "Network Configuration Data Processing Apparatus" (filed on August 31, 1988), can be used. In this configuration, the basic unit 1 is composed of: a multiplying D/A converter 302 that multiplies the output from the preceding layer, input through the input switch unit 301, by the weight value held in the weight holding unit 308; an analog adder 303a that adds the output value of the multiplying D/A converter 302 to the previously accumulated value to obtain a new accumulated value; a sample-and-hold circuit 303b that holds the addition result of the analog adder 303a; a nonlinear function generating circuit 304 that nonlinearly converts the data held in the sample-and-hold circuit 303b when the accumulation processing is completed; an output holding unit 305 that holds the final output; and a control circuit 309 that controls each of these processing units.
The hierarchical network section 159 is realized by electrically connecting the basic units 1 adopting this configuration to a single common analog bus 310, as shown in the figure. In the figure, 311 is a weight output circuit that gives weight values to the weight holding sections 308 of the basic units 1, 312 is an initial signal output circuit corresponding to the input units, 313 is a synchronization control signal line for data transfer, and 314 is a main control circuit that sends out the synchronization control signals.
In this configuration, the main control circuit 314 selects the basic units 1 of the preceding layer in time series and, in synchronization with this selection processing, causes the final output held in the output holding unit 305 of each selected basic unit 1 to be sent out via the analog bus 310, in a time-division transmission format, to the multiplying D/A converters 302 of the basic units 1 in the succeeding layer. Each multiplying D/A converter 302 of the basic units 1 in the succeeding layer sequentially selects the corresponding weight value and multiplies the input value by that weight value, and the accumulation processing section 303, composed of the analog adder 303a and the sample-and-hold circuit 303b, sequentially accumulates the multiplied values. Subsequently, when all of the accumulation processing for the basic units 1 of the preceding layer is completed, the main control circuit 314 activates the nonlinear function generating circuits 304 of the basic units 1 in the succeeding layer to calculate the final outputs, and the output holding units 305 hold the final outputs of the conversion processing. The main control circuit 314 then sets the succeeding layer as a new preceding layer and repeats the same processing for the next succeeding layer, whereby the output pattern corresponding to the input pattern is output.
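Behaviorally, this time-division transfer amounts to one multiply-accumulate per bus time slot followed by one nonlinear conversion per unit. The following Python sketch models that behavior only; it is not a description of the analog circuit itself:

```python
from math import exp

def transfer_layer(prev_outputs, weights):
    """Behavioral sketch of one layer transfer over the shared analog bus.
    weights[j][i]: weight of succeeding-layer unit j for preceding unit i."""
    acc = [0.0] * len(weights)               # accumulators (303a + 303b)
    for i, out in enumerate(prev_outputs):   # one bus time slot per unit
        for j, row in enumerate(weights):
            acc[j] += row[i] * out           # multiplying D/A converter 302
    # Nonlinear function generating circuit 304 + output holding unit 305.
    return [1.0 / (1.0 + exp(-a)) for a in acc]

# Usage: a 3-unit preceding layer feeding a 2-unit succeeding layer.
print(transfer_layer([0.2, 0.7, 0.1], [[0.5, -1.0, 0.3], [1.2, 0.4, -0.6]]))
```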
Although the illustrated embodiment has been described above, the present invention is not limited to this configuration.
As described above, according to the present invention, a neuro-fuzzy fusion data processing system can be constructed that can be interpreted in the framework of an easy-to-understand fuzzy model while utilizing the precision and learning capability of neural networks. The present invention is therefore applicable not only to the field of control systems providing fuzzy control, as a matter of course, but also to data processing systems in any field.
PCT/JP1991/000334 1990-03-12 1991-03-12 Neuro-fuzzy fusion data processing system WO1991014226A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US07/773,576 US5875284A (en) 1990-03-12 1991-03-12 Neuro-fuzzy-integrated data processing system
EP91905520A EP0471857B1 (en) 1990-03-12 1991-03-12 Neuro-fuzzy integrated data processing system; network structure conversion system ; fuzzy model extracting system
KR1019910701593A KR950012380B1 (ko) 1990-03-12 1991-03-12 뉴로-퍼지 융합 데이터처리 시스템
CA002057078A CA2057078C (en) 1990-03-12 1991-03-12 Neuro-fuzzy fusion data processing system
AU74509/91A AU653146B2 (en) 1990-03-12 1991-03-12 Integrated fuzzy logic neural network processor and controller

Applications Claiming Priority (22)

Application Number Priority Date Filing Date Title
JP2060257A JP2744321B2 (ja) 1990-03-12 1990-03-12 適応型データ処理装置の解析処理方法
JP2/60257 1990-03-12
JP2060260A JP2763368B2 (ja) 1990-03-12 1990-03-12 ファジィ制御におけるメンバーシップ関数のチューニング方法
JP2060261A JP2763369B2 (ja) 1990-03-12 1990-03-12 ファジィ制御ルールのチューニング方法
JP2/60261 1990-03-12
JP2/60260 1990-03-12
JP2060258A JP2544821B2 (ja) 1990-03-12 1990-03-12 階層ネットワ―ク構成デ―タ処理装置
JP2060256A JP2763366B2 (ja) 1990-03-12 1990-03-12 階層ネットワーク構成データ処理装置及びデータ処理システム
JP2060263A JP2763371B2 (ja) 1990-03-12 1990-03-12 階層ネットワーク構成ファジィ制御器
JP2/60262 1990-03-12
JP2/60259 1990-03-12
JP2/60256 1990-03-12
JP2/60263 1990-03-12
JP2/60258 1990-03-12
JP2060259A JP2763367B2 (ja) 1990-03-12 1990-03-12 ファジィ制御ルールの生成方法
JP2060262A JP2763370B2 (ja) 1990-03-12 1990-03-12 階層ネットワーク構成ファジィ制御器
JP2066852A JP2761569B2 (ja) 1990-03-19 1990-03-19 重心決定要素出力装置の教師信号決定装置
JP2066851A JP2501932B2 (ja) 1990-03-19 1990-03-19 ニュ―ラルネットワ―クによる重心決定要素出力装置
JP2/66851 1990-03-19
JP2/66852 1990-03-19
JP2/197919 1990-07-27
JP2197919A JPH0484356A (ja) 1990-07-27 1990-07-27 ニューラルネットワークによる重心出力装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US09/253,705 Division US6456989B1 (en) 1990-03-12 1999-02-22 Neuro-fuzzy-integrated data processing system

Publications (1)

Publication Number Publication Date
WO1991014226A1 true WO1991014226A1 (en) 1991-09-19

Family

ID=27582038

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP1991/000334 WO1991014226A1 (en) 1990-03-12 1991-03-12 Neuro-fuzzy fusion data processing system

Country Status (5)

Country Link
US (2) US5875284A
EP (1) EP0471857B1
KR KR950012380B1
CA CA2057078C
WO WO1991014226A1

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2057078C (en) * 1990-03-12 2000-04-11 Nobuo Watanabe Neuro-fuzzy fusion data processing system
JP2605505B2 (ja) * 1991-07-04 1997-04-30 株式会社日立製作所 プロセス運転支援ルール獲得システム、プロセス運転支援システム、プロセス運転制御システム及びプロセス運転支援ルール獲得方法
WO1993020530A1 (de) * 1992-04-06 1993-10-14 Siemens Aktiengesellschaft Strukturierung von neuronalen netzen durch regelbasiertes wissen
DE69319424T2 (de) * 1992-04-09 1999-04-15 Omron Tateisi Electronics Co Neuronales netzwerk-/ unscharfe logikkonvertierungsgerät.
EP0740256A3 (en) * 1994-05-03 1996-11-06 Yamatake-Honeywell Co. Ltd. Building management set value decision support apparatus, set value learning apparatus, set value determining apparatus, and neural network operation apparatus
JP3129932B2 (ja) * 1995-05-16 2001-01-31 シャープ株式会社 ファジィ・ニューラルネットワーク装置およびその学習方法
IL129498A0 (en) * 1996-11-04 2000-02-29 Dimensional Pharm Inc System method and computer program product for identifying chemical compounds having desired properties
US6289329B1 (en) * 1997-11-26 2001-09-11 Ishwar K. Sethi System for converting neural network to rule-based expert system using multiple-valued logic representation of neurons in feedforward network
NZ503882A (en) * 2000-04-10 2002-11-26 Univ Otago Artificial intelligence system comprising a neural network with an adaptive component arranged to aggregate rule nodes
US6269306B1 (en) * 2000-06-13 2001-07-31 Ford Global Tech. System and method for estimating sensor errors
JP2003308427A (ja) * 2002-02-15 2003-10-31 Fujitsu Ltd モデル構築プログラム、モデル構築方法およびモデル構築装置
EP1395080A1 (en) * 2002-08-30 2004-03-03 STMicroelectronics S.r.l. Device and method for filtering electrical signals, in particular acoustic signals
MY141127A (en) * 2002-11-18 2010-03-15 Univ Putra Malaysia Artificial intelligence device and corresponding methods for selecting machinability data
US20080255684A1 (en) * 2002-11-18 2008-10-16 Universiti Putra Malaysia Artificial intelligence device and corresponding methods for selecting machinability data
US8374974B2 (en) * 2003-01-06 2013-02-12 Halliburton Energy Services, Inc. Neural network training data selection using memory reduced cluster analysis for field model development
US20050102303A1 (en) * 2003-11-12 2005-05-12 International Business Machines Corporation Computer-implemented method, system and program product for mapping a user data schema to a mining model schema
US7349919B2 (en) * 2003-11-21 2008-03-25 International Business Machines Corporation Computerized method, system and program product for generating a data mining model
US20050114277A1 (en) * 2003-11-21 2005-05-26 International Business Machines Corporation Method, system and program product for evaluating a data mining algorithm
US7523106B2 (en) * 2003-11-24 2009-04-21 International Business Machines Coporation Computerized data mining system, method and program product
US7292245B2 (en) * 2004-01-20 2007-11-06 Sensitron, Inc. Method and apparatus for time series graph display
US8161049B2 (en) * 2004-08-11 2012-04-17 Allan Williams System and method for patent evaluation using artificial intelligence
US8145640B2 (en) * 2004-08-11 2012-03-27 Allan Williams System and method for patent evaluation and visualization of the results thereof
US8145639B2 (en) * 2004-08-11 2012-03-27 Allan Williams System and methods for patent evaluation
US20060036453A1 (en) * 2004-08-11 2006-02-16 Allan Williams Bias compensated method and system for patent evaluation
US7840460B2 (en) * 2004-08-11 2010-11-23 Allan Williams System and method for patent portfolio evaluation
US20070112695A1 (en) * 2004-12-30 2007-05-17 Yan Wang Hierarchical fuzzy neural network classification
US7613665B2 (en) 2005-06-24 2009-11-03 Halliburton Energy Services, Inc. Ensembles of neural networks with different input sets
US7587373B2 (en) * 2005-06-24 2009-09-08 Halliburton Energy Services, Inc. Neural network based well log synthesis with reduced usage of radioisotopic sources
US8065244B2 (en) * 2007-03-14 2011-11-22 Halliburton Energy Services, Inc. Neural-network based surrogate model construction methods and applications thereof
US8112372B2 (en) * 2007-11-20 2012-02-07 Christopher D. Fiorello Prediction by single neurons and networks
US9514388B2 (en) * 2008-08-12 2016-12-06 Halliburton Energy Services, Inc. Systems and methods employing cooperative optimization-based dimensionality reduction
US8352495B2 (en) 2009-12-15 2013-01-08 Chalklabs, Llc Distributed platform for network analysis
US9015093B1 (en) 2010-10-26 2015-04-21 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks
US8775341B1 (en) 2010-10-26 2014-07-08 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks
CA2975251C (en) 2015-01-28 2021-01-26 Google Inc. Batch normalization layers

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01183763A (ja) * 1988-01-18 1989-07-21 Fujitsu Ltd ネットワーク構成データ処理装置学習処理方式
JPH02231670A (ja) * 1989-03-03 1990-09-13 Sharp Corp ニューラル・ネットワークの学習装置

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0682396B2 (ja) * 1985-10-22 1994-10-19 オムロン株式会社 メンバーシップ関数合成装置およびファジィ・システム
JPH0797284B2 (ja) * 1986-09-03 1995-10-18 株式会社日立製作所 ファジー推論によるディジタル制御方法
US5168549A (en) * 1988-05-20 1992-12-01 Matsushita Electric Industrial Co., Ltd. Inference rule determining method and inference device
US5255344A (en) * 1988-05-20 1993-10-19 Matsushita Electric Industrial Co., Ltd. Inference rule determining method and inference device
JP2682060B2 (ja) * 1988-09-29 1997-11-26 オムロン株式会社 コントローラ、及びコントローラの出力範囲の決定方法
JPH0293904A (ja) * 1988-09-30 1990-04-04 Omron Tateisi Electron Co ファジィ制御装置およびファジィ制御方法
US5343553A (en) * 1988-11-04 1994-08-30 Olympus Optical Co., Ltd. Digital fuzzy inference system using logic circuits
JPH02189635A (ja) * 1989-01-18 1990-07-25 Yamaha Corp ファジィ推論装置
JPH02208787A (ja) * 1989-02-09 1990-08-20 Yasuo Nagazumi フアジイ演算回路および該回路を用いたファジイ計算機
US5303385A (en) * 1989-03-17 1994-04-12 Hitachi, Ltd. Control system having optimality decision means
JPH02260001A (ja) * 1989-03-31 1990-10-22 Matsushita Electric Ind Co Ltd ファジィ同定器
US5191638A (en) * 1989-03-31 1993-03-02 Matsushita Electric Industrial Co., Ltd. Fuzzy-boolean multi-stage inference apparatus
JPH02292602A (ja) * 1989-05-02 1990-12-04 Nkk Corp 人工神経回路網型ファジィ制御装置
JPH0690668B2 (ja) * 1989-10-20 1994-11-14 三菱電機株式会社 ファジイ演算装置
JP2561162B2 (ja) * 1990-01-29 1996-12-04 三菱電機株式会社 演算処理用半導体装置
CA2057078C (en) * 1990-03-12 2000-04-11 Nobuo Watanabe Neuro-fuzzy fusion data processing system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01183763A (ja) * 1988-01-18 1989-07-21 Fujitsu Ltd ネットワーク構成データ処理装置学習処理方式
JPH02231670A (ja) * 1989-03-03 1990-09-13 Sharp Corp ニューラル・ネットワークの学習装置

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"The 37th (latter half of 1988) national convention's lecture treatises (II)" (Received on 12 Dec., 1988 by Industrial Property Library), Information Processing Society of Japan, p. 1386-1387. *
"The 40th (former half of 1990) national conventional's lecture treatises (I)" (Received on 12 June, 1990 by Industrial Property Liberary), Information Processing Society of Japan, p. 148-149. *
See also references of EP0471857A4 *

Also Published As

Publication number Publication date
US6456989B1 (en) 2002-09-24
CA2057078A1 (en) 1991-09-13
US5875284A (en) 1999-02-23
KR950012380B1 (ko) 1995-10-17
EP0471857B1 (en) 2000-02-02
KR920702784A (ko) 1992-10-06
CA2057078C (en) 2000-04-11
EP0471857A4 (en) 1993-04-28
EP0471857A1 (en) 1992-02-26

Similar Documents

Publication Publication Date Title
WO1991014226A1 (en) Neuro-fuzzy fusion data processing system
Jang et al. Neuro-fuzzy modeling and control
Ponnapalli et al. A formal selection and pruning algorithm for feedforward artificial neural network optimization
Cardeira et al. Neural networks for multiprocessor real-time scheduling
Bystrov et al. Practice. Neuro-Fuzzy Logic Systems Matlab Toolbox Gui
JP2763369B2 (ja) ファジィ制御ルールのチューニング方法
JPH03268077A (ja) ニューラルネットワークによる重心決定要素出力装置
JP2763370B2 (ja) 階層ネットワーク構成ファジィ制御器
JP2559883B2 (ja) ファジィ制御器
JP2559881B2 (ja) ファジィ制御器
JP2744321B2 (ja) 適応型データ処理装置の解析処理方法
JPH05128082A (ja) 階層ネツトワーク構成データ処理装置とその学習処理方法
JP2763371B2 (ja) 階層ネットワーク構成ファジィ制御器
JP2761569B2 (ja) 重心決定要素出力装置の教師信号決定装置
JP2763367B2 (ja) ファジィ制御ルールの生成方法
JP3417973B2 (ja) ファジイ構造型ニューロコンピュータと財務コンサルト情報収集方法
JP2763368B2 (ja) ファジィ制御におけるメンバーシップ関数のチューニング方法
JPH03269777A (ja) ニューラルネットワークにおける結合の学習調整方式
JP3137669B2 (ja) 階層ネットワーク構成演算素子
JPH03271806A (ja) ファジィ制御器
JP3137996B2 (ja) メンバシップ関数を用いたニューラルネットワーク及びその学習方式
Blume An efficient mapping of Fuzzy ART onto a neural architecture
JPH03260802A (ja) 階層ネットワーク構成データ処理装置及びデータ処理システム
JPH03271807A (ja) ファジィ制御器
JPH05334309A (ja) 債券格付け決定装置及び財務コンサルティング方法

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA KR US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU NL SE

WWE Wipo information: entry into national phase

Ref document number: 2057078

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 1991905520

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1991905520

Country of ref document: EP

WWG Wipo information: grant in national office

Ref document number: 1991905520

Country of ref document: EP