CA2907267A1 - Artificial neural network training: bounded bias technique - Google Patents
Artificial neural network training: bounded bias technique
- Publication number
- CA2907267A1 (application CA2907267A)
- Authority
- CA
- Canada
- Prior art keywords
- layer
- neural network
- artificial neural
- perceptrons
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Digital Transmission Methods That Use Modulated Carrier Waves (AREA)
Abstract
Artificial Neural Network training amends the weights and biases of Perceptrons by utilizing a training algorithm, e.g. Back Propagation, and a Training Set, culminating in a trained Artificial Neural Network.
Pertaining to training of an Artificial Neural Network utilizing the Back Propagation algorithm, the Patent Applicant devises a technique to increase the number of Perceptrons in Hidden Layers during training.
Description
Artificial Neural Network Training: Bounded Bias Technique
An Artificial Neural Network comprises an Input Layer, Hidden Layers, and an Output Layer. A Layer comprises Perceptrons. A Perceptron is a computing unit in an Artificial Neural Network comprising an Input, a Transfer Function, and an Output.
In the prevailing context:
= Artificial Neural Network means a Feedforward Neural Network (Multi-Layer Perceptron).
= Layer refers to the Input Layer, a Hidden Layer, or the Output Layer.
= Layers are indexed from 0. The Input Layer is Layer 0; each subsequent Layer's index is the prior Layer's index + 1.
= Pertinent to an Artificial Neural Network comprising m Layers, n0_n1_n2_..._n(m-1) indicates n0 Perceptrons in the Input Layer (Layer 0), n1 Perceptrons in the first Hidden Layer (Layer 1), n2 Perceptrons in Hidden Layer 2 (Layer 2), n(m-2) Perceptrons in the last Hidden Layer (Layer m-2), and n(m-1) Perceptrons in the Output Layer (Layer m-1).
= Training Iteration refers to utilizing the training set and the Back Propagation algorithm to train the Artificial Neural Network, yielding a trained Artificial Neural Network. The training set comprises a number of inputs and respective outputs which are utilized to train the Artificial Neural Network. During training iteration i, the Back Propagation algorithm utilizes the entries in the training set to amend the weights and biases of the trained Artificial Neural Network of training iteration i - 1.
= Absolute bias refers to the absolute value of a bias.
Pertinent to an n0_n1_n2_..._n(m-1) Artificial Neural Network:
= Input Layer (Layer 0).
Input of Perceptron i = x_i, i ∈ {0, 1, 2, ..., n0 - 1}
Transfer Function: f(x_i) = x_i
Output of Perceptron i = x_i, i ∈ {0, 1, 2, ..., n0 - 1}
= Hidden Layer j (Layer j), j ∈ {1, 2, ..., m - 2}.
Input of Perceptron i = Σ_{k=0}^{n_{j-1} - 1} x_k · w_{k,i} + b_i, i ∈ {0, 1, 2, ..., n_j - 1}
Transfer Function: f(·)
Output of Perceptron i = f( Σ_{k=0}^{n_{j-1} - 1} x_k · w_{k,i} + b_i )
= Output Layer (Layer m - 1).
Input of Perceptron i = Σ_{k=0}^{n_{m-2} - 1} x_k · w_{k,i} + b_i, i ∈ {0, 1, 2, ..., n_{m-1} - 1}
Transfer Function: f(·)
Output of Perceptron i = f( Σ_{k=0}^{n_{m-2} - 1} x_k · w_{k,i} + b_i )
The Bounded Bias Technique is a method to increase the number of Perceptrons in Hidden Layers during training of an Artificial Neural Network with the Back Propagation algorithm. Metrics for an Artificial Neural Network of m Layers, n0_n1_n2_..._n(m-1):
= Metric 1. Arithmetic Mean of Absolute Biases.
Arithmetic Mean of Absolute Biases = ( Σ_{j=1}^{m-1} Σ_{i=0}^{n_j - 1} |Bias_{j,i}| ) / ( Σ_{j=1}^{m-1} n_j )
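The layer equations above describe an ordinary feedforward pass. A minimal sketch in NumPy, assuming one weight matrix and bias vector per Layer and a tanh transfer function (the source does not fix a particular transfer function):

```python
import numpy as np

def forward(x, weights, biases, transfer=np.tanh):
    """Forward pass of an n0_n1_..._n(m-1) feedforward network.

    weights[j] has shape (n_j, n_{j+1}); biases[j] has shape (n_{j+1},).
    The Input Layer (Layer 0) uses the identity transfer function,
    so its output is simply the input vector x.
    """
    out = np.asarray(x, dtype=float)      # Layer 0: f(x_i) = x_i
    for W, b in zip(weights, biases):
        # Input of Perceptron i: sum_k x_k * w_{k,i} + b_i, then f(.)
        out = transfer(out @ W + b)
    return out

# A 2_3_1 network: 2 inputs, one hidden layer of 3 perceptrons, 1 output.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((2, 3)), rng.standard_normal((3, 1))]
biases = [np.zeros(3), np.zeros(1)]
y = forward([0.5, -0.5], weights, biases)
```

With tanh as the transfer function, the single output lies in (-1, 1).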
j = 1 indicates the first Hidden Layer.
j = m - 1 signifies the Output Layer.
n_j indicates the number of Perceptrons in Layer j.
|Bias_{j,i}| is the absolute bias of Perceptron i in Layer j.
= Metric 2. Layer Arithmetic Mean of Absolute Biases.
Layer j Arithmetic Mean of Absolute Biases = ( Σ_{i=0}^{n_j - 1} |Bias_{j,i}| ) / n_j
n_j indicates the number of Perceptrons in Layer j.
|Bias_{j,i}| is the absolute bias of Perceptron i in Layer j.
= Metric 3. Metric3 ∈ ℝ, utilised to comprise the confinement interval [0, Metric3].
= Metric 4. Metric4 ∈ ℕ. Metric4 + the number of Perceptrons in the last Hidden Layer indicates the number of Perceptrons in a nascent Hidden Layer.
= Metric 5. Metric5 ∈ ℕ⁺ signifies the number of Perceptrons for augmentation to the Hidden Layers.
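Metric 1 and Metric 2 are straightforward to compute. A sketch, assuming biases are held as one NumPy array per Layer (hidden layers first, output layer last); the helper names are illustrative, not from the source:

```python
import numpy as np

def metric1(biases):
    """Arithmetic Mean of Absolute Biases over Layers j = 1..m-1."""
    total = sum(np.abs(b).sum() for b in biases)  # sum of |Bias_{j,i}|
    count = sum(b.size for b in biases)           # sum of n_j
    return total / count

def metric2(bias_layer_j):
    """Layer Arithmetic Mean of Absolute Biases for a single Layer j."""
    return float(np.abs(bias_layer_j).mean())

# One hidden layer of 3 perceptrons plus 1 output perceptron:
biases = [np.array([0.5, -1.5, 1.0]), np.array([2.0])]
m1 = metric1(biases)      # (0.5 + 1.5 + 1.0 + 2.0) / 4 = 1.25
m2 = metric2(biases[0])   # (0.5 + 1.5 + 1.0) / 3 = 1.0
```

In the technique, m1 is compared against Metric 3 first; only if it exceeds the confinement interval are the per-layer means (m2) examined.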
During training of the Artificial Neural Network, at the culmination of a Training Iteration, Metric 1 is calculated. In the occurrence of Metric1 > Metric3, Metric 2 is calculated for each Hidden Layer:
= In the occurrence of ∃ Layer j, j ∈ {1, ..., m - 2} | Metric2 of Layer j > Metric3, a number of Perceptrons is adjoined to every Layer j, j ∈ {1, ..., m - 2}. The number of nascent Perceptrons is indicated by Metric5. Biases of nascent Perceptrons are initialized to 0. Biases of Output Layer Perceptrons are reset to 0.
= In the occurrence of ∄ Layer j, j ∈ {1, ..., m - 2} | Metric2 of Layer j > Metric3, a nascent Layer comprising a number of Perceptrons is inserted prior to the Output Layer. The number of Perceptrons in the nascent Layer is Metric4 + the number of Perceptrons of the last Hidden Layer. Biases of the nascent Hidden Layer's Perceptrons are initialized to 0. Biases of Output Layer Perceptrons are reset to 0.
During training of the Artificial Neural Network, at the culmination of a Training Iteration:

Metric1 is calculated;
Metric1 > Metric3 ⟹
    Metric2 for the Hidden Layers is calculated;
    ∃ Layer j, j ∈ {1, ..., m - 2} | Metric2 of Layer j > Metric3 ⟹
        FOR h FROM 1 TO m - 2
            FOR l FROM 1 TO Metric5
                a new Perceptron with bias = 0 is added to Layer h;
    ∄ Layer j, j ∈ {1, ..., m - 2} | Metric2 of Layer j > Metric3 ⟹
        nascentLayer is initialized;
        FOR l FROM 1 TO Metric4 + PerceptronsNumberOfLastHiddenLayer
            a new Perceptron with bias = 0 is added to nascentLayer;
        nascentLayer is inserted prior to the Output Layer;
    ∀ perceptron in Output Layer
        bias of perceptron := 0;

In the occurrence of saturated Transfer Functions in the Output Layer resulting from increasing Perceptrons in Hidden Layers, Metric4 or Metric5 is reduced.
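The procedure above can be sketched in NumPy. This is a minimal illustration, not the applicant's implementation: the source does not specify how weights of nascent connections are initialized, so random initialization is assumed here, and the function name and layout (weights[j] between Layer j and Layer j+1, biases[j] for Layer j+1) are illustrative:

```python
import numpy as np

def bounded_bias_step(weights, biases, metric3, metric4, metric5, rng):
    """One post-iteration check of the Bounded Bias Technique (sketch).

    weights[j]: (n_j, n_{j+1}) matrix between Layer j and Layer j+1.
    biases[j]: bias vector of Layer j+1 (hidden layers first, output last).
    """
    m1 = sum(np.abs(b).sum() for b in biases) / sum(b.size for b in biases)
    if m1 <= metric3:
        return weights, biases            # Metric1 within [0, Metric3]

    hidden = biases[:-1]
    if any(np.abs(b).mean() > metric3 for b in hidden):
        # Case 1: adjoin Metric5 perceptrons to every hidden layer,
        # with zero biases and (assumed) random new weights.
        for j in range(len(hidden)):
            weights[j] = np.hstack([weights[j],
                rng.standard_normal((weights[j].shape[0], metric5))])
            biases[j] = np.concatenate([biases[j], np.zeros(metric5)])
            weights[j + 1] = np.vstack([weights[j + 1],
                rng.standard_normal((metric5, weights[j + 1].shape[1]))])
    else:
        # Case 2: insert a nascent hidden layer, of size
        # Metric4 + (size of last hidden layer), before the output layer.
        n_last = biases[-2].size if len(biases) > 1 else weights[0].shape[0]
        n_new = metric4 + n_last
        n_out = biases[-1].size
        weights.insert(-1, rng.standard_normal((n_last, n_new)))
        weights[-1] = rng.standard_normal((n_new, n_out))
        biases.insert(-1, np.zeros(n_new))
    biases[-1] = np.zeros_like(biases[-1])  # reset output-layer biases to 0
    return weights, biases

# A 2_3_1 network whose large hidden biases trigger Case 1 growth:
rng = np.random.default_rng(0)
weights = [rng.standard_normal((2, 3)), rng.standard_normal((3, 1))]
biases = [np.array([5.0, 5.0, 5.0]), np.array([5.0])]
weights, biases = bounded_bias_step(weights, biases,
                                    metric3=1.0, metric4=1, metric5=2, rng=rng)
```

After the call the hidden layer has grown from 3 to 5 perceptrons and the output-layer bias has been reset to 0, matching Case 1 of the procedure.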
Claims (2)
1. A method to calculate Metric 1 and Metric 2 is requested.
2. A method to utilize Metric 1, Metric 2, Metric 3, Metric 4, and Metric 5 to increase the number of Perceptrons in Hidden Layers of an Artificial Neural Network during training with the Back Propagation algorithm is requested.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA2907267A CA2907267A1 (en) | 2015-10-05 | 2015-10-05 | Artificial neural network training: bounded bias technique |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA2907267A CA2907267A1 (en) | 2015-10-05 | 2015-10-05 | Artificial neural network training: bounded bias technique |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2907267A1 true CA2907267A1 (en) | 2017-04-05 |
Family
ID=58468388
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA2907267A Abandoned CA2907267A1 (en) | 2015-10-05 | 2015-10-05 | Artificial neural network training: bounded bias technique |
Country Status (1)
Country | Link |
---|---|
CA (1) | CA2907267A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109356652A (en) * | 2018-10-12 | 2019-02-19 | 深圳市翌日科技有限公司 | Adaptive fire grading forewarning method and system in a mine |
CN109356652B (en) * | 2018-10-12 | 2020-06-09 | 深圳市翌日科技有限公司 | Underground self-adaptive fire classification early warning method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FZDE | Dead |
Effective date: 20180604 |