CN110766144B - Scalar coupling constant prediction system between atoms based on multi-layer decomposition fuzzy neural network


Info

Publication number: CN110766144B
Authority: CN (China)
Legal status: Active
Application number: CN201911090719.8A
Other languages: Chinese (zh)
Other versions: CN110766144A
Inventors: 赵亮, 谢志峰, 张坤鹏, 金军委, 付园坤
Assignee: Henan University of Technology
Application filed by Henan University of Technology; priority to CN201911090719.8A; published as CN110766144A; granted as CN110766144B.


Classifications

    • G06N3/043: Architecture based on fuzzy logic, fuzzy membership or fuzzy inference, e.g. adaptive neuro-fuzzy inference systems [ANFIS]
    • G06F17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06F18/23213: Non-hierarchical clustering using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods

Abstract

The invention relates to the field of artificial intelligence in computer science and discloses a multi-layer decomposition fuzzy neural network design method. The overall design process comprises the structural design and the parameter optimization of the model. For the structural design, a kernel-based fuzzy clustering algorithm is adopted to determine the number of component fuzzy neural networks in each hidden layer, and the number of layers of the fuzzy neural network is determined from the maximum threshold on the number of hidden-layer networks and the model accuracy. For the parameter optimization, the least squares method is adopted to optimize the connection weights among the input layer, the hidden layers, and the output layer. The proposed model is validated on the problem of predicting scalar coupling constants between atoms in molecules as a specific embodiment.

Description

Scalar coupling constant prediction system between atoms based on multi-layer decomposition fuzzy neural network
Technical Field
The invention belongs to the technical field of artificial intelligence in computer science, and particularly relates to an optimal design method of a multi-layer decomposition fuzzy neural network.
Background
In order to effectively describe the uncertainty phenomena commonly existing in nature, Zadeh proposed the fuzzy system paradigm based on fuzzy sets. On this basis, fuzzy systems have been successful in the fields of system modeling and control, but the design of a fuzzy system lacks self-adaptability and self-organization, is difficult to adapt to the time-varying characteristics of complex systems, and performs poorly in practical applications such as traffic flow prediction and industrial robot control. Neural networks have strong self-learning capability, and combining a neural network with a fuzzy system into a fuzzy neural network gives the model both interpretability and learning ability. However, the approximation performance of the conventional fuzzy neural network is still limited, and in order to further improve its classification or regression ability, decomposition of the fuzzy neural network (Hsueh Y. C., Su S. F., Chen M. C. Decomposed Fuzzy Systems and Their Application in Direct Adaptive Fuzzy Control [J]. IEEE Transactions on Cybernetics, 2014, 44(10): 1772-1783) (Su S. F., Chen M. C., Hsueh Y. C. A Novel Fuzzy Modeling Structure - Decomposed Fuzzy System [J]. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2017, 47(8): 2311-2317) has begun to gain attention. However, its function-fitting capability remains limited, because it is still a shallow fuzzy neural network. To overcome this defect, the invention provides a multi-layer decomposition fuzzy neural network with a deep learning structure, designs its multi-layer neuron structure and the method for determining its free parameters, and applies it to the problem of estimating scalar coupling constants between two atoms in a molecule.
Disclosure of Invention
Aiming at the problem that the function fitting capacity of the existing decomposition fuzzy neural network is limited, the invention provides a multilayer decomposition fuzzy neural network optimization design method.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
The multi-layer decomposition fuzzy neural network comprises an input layer, hidden layers, and an output layer, where each hidden layer realizes a nonlinear mapping represented by a decomposition fuzzy neural network composed of seven layers of neurons. The optimization design method is characterized by comprising the following steps:
Step 1: set the termination threshold λ of the multi-layer decomposition fuzzy neural network model and input the training set {X_k, G_k}, where k = 1, 2, …, N_train and N_train is the number of training samples; X_k = {x_1, x_2, …, x_k} is the input data of the model and G_k = {g_1, g_2, …, g_k} is the desired output of the model. Set the initial number of hidden layers to one, with hidden-layer input Y_1^I = (x_1, x_2, …, x_k); given the radii σ_i of the fuzzy numbers, fuzzify Y_1^I, where i = 1, …, K_l and K_l is the input dimension of the hidden layer;
Step 2: for a given cluster number h, initialize the membership weight exponent m, the cluster centers v_r^(t), and the membership matrix U^(t), where r = 1, 2, …, h and t is the iteration index; determine the number m_l of component fuzzy neural networks in the hidden layer and the antecedent membership function centers of the component fuzzy neural networks;
Step 3: initialize the fuzzy rule consequent parameters; from the number m_l of component fuzzy neural networks in the hidden layer, the antecedent membership function centers, and the consequent parameters, calculate the actual output of each component fuzzy neural network in the hidden layer; from these outputs obtain the connection weights among the input layer, hidden layer, and output layer; use the obtained weights to compute the actual output of the model; and calculate the root mean square error RMSE between the actual output and the desired output of the model;
Step 4: judge whether RMSE < λ holds; if so, end model training. If not, set the cluster number h = h + 1 (h ≤ H_max, where H_max is the maximum number of component fuzzy neural networks in the hidden layer), go to step 2, and loop steps 2 through 4 until RMSE < λ holds. If h = H_max and the condition is still not satisfied, execute step 5;
Step 5: add a hidden layer, whose input Y_{L+1}^I consists of the input-layer output together with the outputs of all existing hidden layers, where L is the current number of hidden layers and m_1 is the number of component fuzzy neural networks in the 1st hidden layer; given the radii σ_i of the fuzzy numbers, fuzzify Y_{L+1}^I, go to step 2, and loop steps 2 through 5 until the condition RMSE < λ holds.
Further, step 1, after fuzzifying Y_1^I, further comprises:
obtaining the fuzzy sets F_l^i, in which the fuzzified value of the i-th component x_i of Y_1^I is the Gaussian fuzzy number x̃_i defined on its universe of discourse.
Further, the step 2 includes:
Step 2.1: given the cluster number h, initialize the membership weight exponent m, the cluster centers v_r^(0), and the membership matrix U^(0); set the algorithm termination threshold ε and the maximum number of iterations t_max;
Step 2.2: compute the kernel matrix K(x̃_i, v_r) with a Gaussian kernel function, where the x̃_i are the components of the fuzzified hidden-layer input;
Step 2.3: update the cluster centers v_r^(t+1) and calculate the membership matrix U^(t+1);
Step 2.4: compare the membership matrices U^(t) and U^(t+1); if ||U^(t+1) − U^(t)|| < ε holds or the maximum number of iterations t_max is reached, the algorithm terminates; otherwise, loop steps 2.2 through 2.4.
Further, the step 3 includes:
Step 3.1: from the antecedent membership function centers of the component fuzzy neural networks, determine the j-th rule antecedent fuzzy set A_i^{(s_l)j} of the s_l-th component fuzzy neural network;
Step 3.2: calculate the matching degree between the obtained fuzzy set F_l^i and the j-th rule antecedent fuzzy set of the s_l-th component fuzzy neural network:
p_i^{(s_l)j} = t(x̃_i, A_i^{(s_l)j}),
where t denotes a triangular norm;
Step 3.3: calculate the activation force f^{(s_l)j} by applying a triangular norm t across the matching degrees of a rule, and normalize the activation forces of the rules in the component fuzzy neural network to obtain the normalized activation forces f̄^{(s_l)j};
Step 3.4: initialize the fuzzy rule consequent parameters w^{(s_l)j}, where w^{(s_l)j} denotes the initial value of the j-th rule consequent parameter of the s_l-th component fuzzy neural network of the l-th hidden layer; multiply the normalized activation forces by the corresponding consequents to obtain the output of each rule in the component fuzzy neural network, f̄^{(s_l)j} w^{(s_l)j};
Step 3.5: sum the outputs of the rules in each component fuzzy neural network to obtain the output vector of the hidden layer Y_l^O = (y^{(l)1}, …, y^{(l)m_l}); the model output satisfies
g = Σ_i w_i^{XY} x_i + Σ_{l,j} w_{(l)j}^{LY} y^{(l)j},
where y^{(l)j} is the output of the j-th component fuzzy neural network in the l-th hidden layer, w_i^{XY} is the connection weight between the input layer and the output layer, w_{(l)j}^{LY} is the connection weight between the hidden layer and the output layer, and g is the desired output value;
the above formula is expressed in matrix form as
Φ W_LY = G,
where Φ is the combination of the input-layer output and the hidden-layer outputs, and W_LY is the connection weight matrix among the input layer, hidden layers, and output layer;
Step 3.6: obtain W_LY = Φ⁺ G by the least squares method, where "⁺" denotes the Moore-Penrose generalized inverse of the matrix; using the obtained connection weights, determine the actual output of the model D_train = Φ W_LY, where D_train = {d_i}, i = 1, 2, …, N_train;
Step 3.7: compute the root mean square error between the actual and desired outputs of the model:
RMSE = sqrt( (1/N_train) Σ_{i=1}^{N_train} (d_i − g_i)² ),
where d_i is the actual output of the model and g_i is the desired output.
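Steps 3.6 and 3.7 amount to a linear least-squares solve followed by an error measurement. A sketch with synthetic data (the shapes and the random `Phi` are illustrative; only the pseudoinverse solve and the RMSE formula come from the text):

```python
import numpy as np

rng = np.random.default_rng(42)

# Phi: combination of input-layer and hidden-layer outputs, one row per
# training sample; G: desired outputs.  Sizes here are made up.
N_train, n_features = 200, 8
Phi = rng.normal(size=(N_train, n_features))
W_true = rng.normal(size=(n_features, 1))
G = Phi @ W_true

# Step 3.6: W_LY = pinv(Phi) @ G -- least squares via the Moore-Penrose inverse
W_LY = np.linalg.pinv(Phi) @ G
D_train = Phi @ W_LY                  # actual output of the model

# Step 3.7: root mean square error between actual and desired outputs
rmse = np.sqrt(np.mean((D_train - G) ** 2))
print(rmse)   # ~0 here, since G lies exactly in the column space of Phi
```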
Further, step 5, after fuzzifying the new hidden-layer input Y_{L+1}^I, further comprises:
obtaining the fuzzy sets F_{L+1}^i, in which the fuzzified value of the i-th component of Y_{L+1}^I is a Gaussian fuzzy number defined on its universe of discourse.
Compared with the prior art, the invention has the following beneficial effects:
the number of component fuzzy neural networks in each hidden layer is determined by the kernel-based fuzzy clustering algorithm, the number of layers of the decomposition fuzzy neural network is determined from the maximum threshold on the number of hidden-layer networks and the model accuracy, and the connection weights among the input layer, hidden layers, and output layer are then optimized by the least squares method. This addresses the limited modeling accuracy of existing decomposition fuzzy neural networks and training efficiency that cannot meet real-time requirements; it shortens learning time and reduces software and hardware requirements while guaranteeing the modeling performance of the network, and has important practical value in the technical field of machine learning within artificial intelligence.
Drawings
FIG. 1 is a schematic diagram of a multi-layer decomposition fuzzy neural network;
FIG. 2 is a schematic diagram of a first hidden layer;
FIG. 3 is a graph of rule antecedent fuzzy values;
FIG. 4 is a basic flow chart of a method for optimizing design of a multi-layer decomposition fuzzy neural network according to an embodiment of the invention;
FIG. 5 is a flow chart of component fuzzy neural network number estimation;
FIG. 6 is a schematic diagram of the magnetic interactions between atoms in a molecule.
Detailed Description
The invention is further illustrated by the following description of specific embodiments in conjunction with the accompanying drawings:
the structure frame of the multi-layer decomposition fuzzy neural network is as shown in fig. 1, the input layer transmits only data, so the output x= (X) of the input layer 1 ,…,x n ). The first hidden layer is used for extracting the characteristics of the output of the input layer, namely Y 1 O =F 1 (X) whereinl 1 =1,…,m 1 . The input of the second hidden layer is the output of the input layer and the output of the first hidden layer, i.e +.>The dimension is n+m 1 Its output Y 2 O =F 2 (Y 2 I ) Wherein F 2 ,Y 2 O Similar to F 1 ,Y 1 O ,Y 2 O Is m 2 . Similarly, the input and output vectors of the third to L-th hidden layers can be obtained +.>Where L represents the number of hidden layers. The input of the output layer is the output of the input layer and the output of all hidden layers in between, i.e. +.>Its output is y=f (Y I )。
For the above-mentioned multi-layer decomposition fuzzy neural network we can prove that it is a universal approximator (approximating any continuous function with any accuracy). Wherein each hidden layer implements a nonlinear mapping represented by a decomposed fuzzy neural network of seven layers of neurons, input Output ofThe basic structure is shown in figure 2.
First layer: the input layer. The basic function of the input layer is to conduct the input signal into the first hidden layer without performing mathematical computation; the basic formula is
o_i^(1) = x_i,
where x_i and o_i^(1) denote the input and output of the i-th neuron in the first layer, x_i is a component of the input vector Y_l^I, and i = 1, …, K_l.
Second layer: the fuzzification layer. In this layer the input components are fuzzified into fuzzy sets x̃_i using Gaussian fuzzifiers (Gauss fuzzifiers). The membership function of the Gaussian fuzzifier is expressed as
μ_{x̃_i}(u) = exp(−((u − x_i)/σ_i)²),
where u ranges over the universe of discourse of x̃_i and σ_i is the radius of the Gaussian membership function.
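As a small illustration of the second-layer fuzzifier, the following sketch evaluates a Gaussian membership function over a grid. The exponent form exp(−((u − x_i)/σ_i)²) is an assumption reconstructed from the text, since the original formula image is lost:

```python
import numpy as np

def gauss_membership(u, x_i, sigma_i):
    """Membership function of the Gaussian fuzzy number obtained by
    fuzzifying the crisp input x_i with radius sigma_i (second layer).
    Form assumed: exp(-((u - x_i) / sigma_i) ** 2)."""
    return np.exp(-((u - x_i) / sigma_i) ** 2)

# Fuzzify one crisp input and evaluate its membership over a grid
x_i, sigma_i = 0.3, 0.5
u = np.linspace(-2.0, 2.0, 401)
mu = gauss_membership(u, x_i, sigma_i)
print(mu.max(), u[mu.argmax()])   # peak membership 1 occurs at u = x_i
```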
In the decomposed fuzzy neural network, compared with the traditional fuzzy neural network, each antecedent in each fuzzy rule is decomposed into three fuzzy values by means of a fuzzy complement operator, and a complete rule base of a component fuzzy neural network (CFNN) is then formed from these fuzzy values. The fuzzy rules of the s_l-th component fuzzy neural network are as follows:
Rule 1: IF x̃_1 is A_1^{(s_l)1} … and x̃_i is A_i^{(s_l)1} … and x̃_{K_l} is A_{K_l}^{(s_l)1}, THEN y^{(s_l)} = w^{(s_l)1};
……
Rule j: IF x̃_1 is A_1^{(s_l)j} … and x̃_i is A_i^{(s_l)j} … and x̃_{K_l} is A_{K_l}^{(s_l)j}, THEN y^{(s_l)} = w^{(s_l)j};
……
Rule k: IF x̃_1 is A_1^{(s_l)k} … and x̃_i is A_i^{(s_l)k} … and x̃_{K_l} is A_{K_l}^{(s_l)k}, THEN y^{(s_l)} = w^{(s_l)k};
where k denotes the number of fuzzy rules of the s_l-th component fuzzy neural network, A_i^{(s_l)j} is an antecedent fuzzy set, and m_l is the number of component fuzzy neural networks of the l-th hidden layer. The complement c(A) of an antecedent fuzzy set A can be obtained with a fuzzy complement operator, where μ_A is the membership function of A and μ_{c(A)} is the membership function of its complement; the complement operator c may be the basic fuzzy complement, the Sugeno fuzzy complement, or the Yager fuzzy complement.
On the universe of discourse, define the fuzzy set A with a membership function centered at c_i; the fuzzy set and its fuzzy complement can then be expressed through their membership functions, where c_i is the antecedent membership function center of the component fuzzy neural network.
The membership functions represented by the three formulas are shown in fig. 3.
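The three complement operators named above can be sketched as follows (the Sugeno parameter `lam` and the Yager parameter `w` are illustrative defaults):

```python
def basic_complement(mu):
    """Basic fuzzy complement: c(mu) = 1 - mu."""
    return 1.0 - mu

def sugeno_complement(mu, lam=0.5):
    """Sugeno fuzzy complement: c(mu) = (1 - mu) / (1 + lam * mu), lam > -1."""
    return (1.0 - mu) / (1.0 + lam * mu)

def yager_complement(mu, w=2.0):
    """Yager fuzzy complement: c(mu) = (1 - mu**w) ** (1/w), w > 0."""
    return (1.0 - mu ** w) ** (1.0 / w)

# All three operators satisfy the boundary conditions c(0) = 1 and c(1) = 0
for c in (basic_complement, sugeno_complement, yager_complement):
    print(c.__name__, c(0.0), c(1.0))
```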
Third layer: the matching layer. In this layer the fuzzified input x̃_i is matched against the corresponding rule antecedent fuzzy set of the component fuzzy neural network; the basic formula is
p_i^{(s_l)j} = T(x̃_i, A_i^{(s_l)j}),
where T denotes a triangular norm (also known as a t-norm), which has a variety of implementation operators, e.g. the minimum t-norm T_min(a, b) = min{a, b}, the product t-norm T_prod(a, b) = a·b, and the Lukasiewicz t-norm T_Luk(a, b) = max{0, a + b − 1}; specifically, the minimum t-norm operation can be employed.
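The three t-norm operators listed above, in a minimal sketch:

```python
def t_min(a, b):
    """Minimum t-norm: min{a, b}."""
    return min(a, b)

def t_prod(a, b):
    """Product t-norm: a * b."""
    return a * b

def t_luk(a, b):
    """Lukasiewicz t-norm: max{0, a + b - 1}."""
    return max(0.0, a + b - 1.0)

# Compare the three operators on the same pair of membership degrees
a, b = 0.7, 0.4
print(t_min(a, b), t_prod(a, b), t_luk(a, b))
```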
Fourth layer: the activation layer. In this layer a triangular norm is applied across the rule antecedent matching degrees computed in the previous layer to obtain the rule activation force; the basic formula is
f^{(s_l)j} = t(p_1^{(s_l)j}, …, p_{K_l}^{(s_l)j}),
where t denotes a triangular norm; the product t-norm operation may be selected as its implementation.
Fifth layer: the normalization layer. In this layer the activation forces obtained from the previous layer are normalized, as shown by the basic formula
f̄^{(s_l)j} = f^{(s_l)j} / Σ_{q=1}^{k} f^{(s_l)q},
where k denotes the number of fuzzy rules in the s_l-th component fuzzy neural network.
Sixth layer: the product layer. The normalized activation forces obtained from the previous layer are multiplied by the fuzzy consequents: f̄^{(s_l)j} w^{(s_l)j}.
Seventh layer: a summation layer for summing the output of each rule obtained from the previous layer
Wherein the method comprises the steps ofRepresents the s < th l The individual components blur the output of the neural network.
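Layers three through seven can be sketched as a single forward pass of one component fuzzy neural network. This is a simplified illustration, not the patent's exact computation: the matching degree is approximated by evaluating the Gaussian antecedent membership directly at the crisp input (singleton fuzzification), and `centers`, `sigma`, and `w` are made-up values:

```python
import numpy as np

def cfnn_forward(x, centers, sigma, w):
    """Simplified forward pass of one component fuzzy neural network,
    covering layers 3-7.  Assumption: matching degrees come from evaluating
    Gaussian antecedent memberships at the crisp inputs.

    x       : (K,)   crisp input vector
    centers : (k, K) antecedent membership centers, one row per rule
    sigma   : scalar antecedent radius
    w       : (k,)   rule consequent parameters
    """
    # Layer 3 (matching): membership of each input in each rule antecedent
    p = np.exp(-((x[None, :] - centers) / sigma) ** 2)   # (k, K)
    # Layer 4 (activation): product t-norm across a rule's K antecedents
    f = p.prod(axis=1)                                   # (k,)
    # Layer 5 (normalization)
    f_bar = f / f.sum()
    # Layers 6-7 (product with consequents, then summation)
    return float(f_bar @ w)

x = np.array([0.2, 0.8])
centers = np.array([[0.0, 1.0], [1.0, 0.0]])   # two rules
y = cfnn_forward(x, centers, sigma=0.5, w=np.array([1.0, -1.0]))
print(y)   # close to rule 1's consequent, since x lies near its antecedent
```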
In the above multi-layer decomposition fuzzy neural network, the structural design of the hidden layer and the parameter optimization of the network directly determine the performance of the network. Aiming at the problem, the invention provides a multi-layer decomposition fuzzy neural network optimization design method for comprehensively optimizing the structure and parameters of the network.
As shown in FIG. 4, the method for optimizing the design of the multi-layer decomposition fuzzy neural network includes:
Step S101: set the termination threshold λ of the multi-layer decomposition fuzzy neural network model and input the training set {X_k, G_k}, where k = 1, 2, …, N_train and N_train is the number of training samples; X_k = {x_1, x_2, …, x_k} is the input data of the model and G_k = {g_1, g_2, …, g_k} is the desired output of the model. Set the initial number of hidden layers to one, with hidden-layer input Y_1^I = (x_1, x_2, …, x_k); the radii σ_i of the fuzzy numbers are taken as random numbers uniformly distributed in (0, 1]. Fuzzify Y_1^I, specifically with Gaussian fuzzy numbers, to obtain the fuzzy sets F_l^i, in which the fuzzified value of the i-th component of Y_1^I is the Gaussian fuzzy number x̃_i defined on its universe of discourse, i = 1, …, K_l, where K_l is the input dimension of the hidden layer.
Step S102: for a given cluster number h, initialize the membership weight exponent m, the cluster centers v_r^(t), and the membership matrix U^(t), where r = 1, 2, …, h and t is the iteration index; determine the number m_l of component fuzzy neural networks in the hidden layer and the antecedent membership function centers of the component fuzzy neural networks.
Specifically, the step S102 includes:
Step S102.1: given the cluster number h, initialize the membership weight exponent m, the cluster centers v_r^(0), and the membership matrix U^(0); set the algorithm termination threshold ε and the maximum number of iterations t_max;
Step S102.2: compute the kernel matrix K(x̃_i, v_r) with a Gaussian kernel function, where i = 1, …, K_l and K_l is the input dimension of the hidden layer;
Step S102.3: update the cluster centers v_r^(t+1) and calculate the membership matrix U^(t+1);
Step S102.4: compare the membership matrices U^(t) and U^(t+1); if ||U^(t+1) − U^(t)|| < ε holds or the maximum number of iterations t_max is reached, the algorithm terminates; otherwise, loop steps S102.2 through S102.4.
Step S103: step S102 yields the number m_l of component fuzzy neural networks in the hidden layer and the antecedent membership function centers and radii of the component fuzzy neural networks. Initialize the fuzzy rule consequent parameters; specifically, they are taken as random numbers uniformly distributed in [−1, 1]. From m_l, the antecedent membership function centers, and the consequent parameters, calculate the actual output of each component fuzzy neural network in the hidden layer; from these outputs obtain the connection weights among the input layer, hidden layer, and output layer; use the obtained weights to compute the actual output of the model; and calculate the root mean square error (RMSE) between the actual and desired outputs of the model.
Specifically, the step S103 includes:
Step S103.1: from the antecedent membership function centers of the component fuzzy neural networks, determine the j-th rule antecedent fuzzy set A_i^{(s_l)j} of the s_l-th component fuzzy neural network;
Step S103.2: calculate the matching degree between the obtained fuzzy set F_l^i and the j-th rule antecedent fuzzy set of the s_l-th component fuzzy neural network:
p_i^{(s_l)j} = t(x̃_i, A_i^{(s_l)j}),
where t denotes a triangular norm; specifically, the minimum t-norm operation;
Step S103.3: calculate the activation force f^{(s_l)j} by applying a triangular norm t across the matching degrees of a rule (specifically, the product t-norm operation may be selected), and normalize the activation forces of the rules in the component fuzzy neural network to obtain the normalized activation forces f̄^{(s_l)j};
Step S103.4: with the initialized fuzzy rule consequent parameters w^{(s_l)j}, where w^{(s_l)j} denotes the initial value of the j-th rule consequent parameter of the s_l-th component fuzzy neural network of the l-th hidden layer, multiply the normalized activation forces by the corresponding consequents to obtain the output of each rule in the component fuzzy neural network, f̄^{(s_l)j} w^{(s_l)j};
Step S103.5: sum the outputs of the rules in each component fuzzy neural network to obtain the output vector of the hidden layer Y_l^O = (y^{(l)1}, …, y^{(l)m_l}); the model output satisfies
g = Σ_i w_i^{XY} x_i + Σ_{l,j} w_{(l)j}^{LY} y^{(l)j},
where y^{(l)j} is the output of the j-th component fuzzy neural network in the l-th hidden layer, w_i^{XY} is the connection weight between the input layer and the output layer, w_{(l)j}^{LY} is the connection weight between the hidden layer and the output layer, and g is the desired output value;
the above formula is expressed in matrix form as
Φ W_LY = G,
where Φ is the combination of the input-layer output and the hidden-layer outputs, and W_LY is the connection weight matrix among the input layer, hidden layers, and output layer;
Step S103.6: obtain W_LY = Φ⁺ G by the least squares method, where "⁺" denotes the Moore-Penrose generalized inverse of the matrix; using the obtained connection weights, determine the actual output of the model D_train = Φ W_LY, where D_train = {d_i}, i = 1, 2, …, N_train;
Step S103.7: compute the root mean square error between the actual and desired outputs of the model:
RMSE = sqrt( (1/N_train) Σ_{i=1}^{N_train} (d_i − g_i)² ),
where d_i is the actual output of the model and g_i is the desired output.
Step S104: judging RMSE<Whether lambda is true or not, if so, ending the model training; if not, clustering number h=h+1, H is less than or equal to H max ,H max Representing the maximum number of component fuzzy neural networks in the hidden layer, go to step S102, loop through step S102 to step S104 until RMSE<Until the lambda condition is established; if clustering number h=h max When the condition is not satisfied, step S105 is executed.
Step S105: adding an hidden layer and newly adding hidden layer inputWherein L represents the number of hidden layers, m 1 The first component fuzzy neural network in the 1 st hidden layer is represented, the radius of the given fuzzy number +.>For->Fuzzification is performed to obtain a fuzzy set +.>Fuzzy aggregation->The fuzzy value after medium fuzzification isWherein (1)>Representation->Is (a) the domain of (a) is (are)>Representation->I=1, …, K l ,K l Representing the input dimensions of the hidden layer, go to step S102, loop through steps S102 to S105 until the condition RMSE<Lambda is established.
To further illustrate the innovativeness and feasibility of the present invention, the implementation steps are described in detail on the problem of predicting scalar coupling constants between atoms in a molecule from information about the atom pairs.
As shown in fig. 6, there is a magnetic interaction (i.e., a scalar coupling constant) between each pair of atoms in the molecule, and the strength of this magnetic interaction depends on intervening electrons and chemical bonds that constitute the three-dimensional structure of the molecule. Although the quantum mechanical method can accurately calculate the scalar coupling constant between atoms according to the 3D molecular structure, the quantum mechanical method is very time-consuming and labor-consuming to calculate, and may require several days or weeks for each molecule, so that the calculation method has a great limitation in practical application.
With the development of deep learning, predicting these inter-atomic interactions with a deep neural network method can greatly reduce the computation time and cost. It also enables chemists to gain knowledge about molecular structure more rapidly, and to understand more conveniently how the 3D chemical structure of a molecule relates to its properties and behaviour.
The scalar coupling constant prediction process between atoms based on the multi-layer decomposition fuzzy neural network is as follows.
1 Preprocessing of data
The data originate from a collaborative project called CHemistry And Mathematics in Phase Space (CHAMPS). First, the Euclidean distance between the atom_index_0 and atom_index_1 atoms of each pair in each molecule is calculated as
dist = sqrt((x_0 − x_1)² + (y_0 − y_1)² + (z_0 − z_1)²).
Then, the dipole moment corresponding to each molecule in the training set and the test set is merged in. Five atomic radii are used: C [0.77 (10^-1 nm)], H [0.38 (10^-1 nm)], N [0.75 (10^-1 nm)], O [0.74 (10^-1 nm)], F [0.71 (10^-1 nm)], together with the electronegativities of the five elements: C [2.55], H [2.20], N [3.04], O [3.44], F [3.98]. Electronegativity is a scale of the ability of an atom of an element to attract electrons in a compound: the greater the electronegativity of an element, the stronger the ability of its atoms to attract electrons in the compound; electronegativity is a relative value and therefore has no unit. Then, the average value of the bond lengths corresponding to the atoms contained in each molecule in the training set and the test set is calculated,
where bn is the number of covalent bonds between an atom atom and atoms atom', and dist(atom, atom'_i) denotes the Euclidean distance in space between atom and atom'_i.
Finally, the Euclidean distance dist between the two atoms of each pair in the training set and the test set, and the distances dist_x, dist_y, and dist_z along each dimension, are calculated in order.
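The pairwise-distance features described above can be sketched as follows; the coordinate values are made up for illustration, and the column layout follows the atom_index_0/atom_index_1 convention of the CHAMPS data:

```python
import numpy as np

# Each row holds the 3-D coordinates of one atom pair: x0, y0, z0, x1, y1, z1.
pairs = np.array([
    [0.00, 0.00, 0.00, 1.09, 0.00, 0.00],   # e.g. a C-H-like separation
    [0.00, 0.00, 0.00, 0.00, 1.40, 0.00],
])

p0, p1 = pairs[:, :3], pairs[:, 3:]
dist_xyz = p1 - p0                            # dist_x, dist_y, dist_z features
dist = np.sqrt((dist_xyz ** 2).sum(axis=1))   # Euclidean distance feature
print(dist)
```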
2 Model training
The first step, a model termination threshold lambda is set, and a training set { X }, is input k ,G k },(k=1,2,…,N train ),N train Representing the number of samples of the training set. Initially set to only one hidden layer, its input Y 1 I =(x 1 ,x 2 ,…,x n ). Fuzzification is performed on the model by using Gaussian fuzzification number, and the given fuzzification number is givenRadius of (2)(taken as (0, 1)]Uniformly distributed random numbers) to obtain a blurred value +.>
And secondly, determining the number of the component fuzzy neural networks in the hidden layer by adopting a fuzzy clustering algorithm based on a kernel. Firstly, a cluster number h is given, a membership weight index m and a cluster center are initializedWherein r=1, 2, …, h, initializing the membership matrix +.>Setting an algorithm termination threshold epsilon and a maximum iteration number t max . Then, a Gaussian kernel matrix is calculated>Wherein->Update cluster center->Calculating membership matrix U (t+1) . Finally, the membership degree matrix U is compared (t) And U (t+1) If U (t+1) -U (t) ||<Epsilon holds true or reaches the maximum number of iterations t max The algorithm terminates, otherwise, the above steps are looped. />
Thirdly, obtaining the number m of the component fuzzy neural networks in the hidden layer through the last step l Membership function center of front part of component fuzzy neural networkAnd radius>Random giving of fuzzy rule back-piece parameters +.>(taken as [ -1, 1)]A uniformly distributed random number). The fuzzy set obtained is->And the s l Jth rule front piece fuzzy set of component fuzzy neural networkAnd (3) calculating the matching degree:
wherein the triangular norm t is a minimum t-norm operation. Calculating activation forceWherein the triangular norm t is the Product t-norm operation; then, carrying out normalization calculation on the activation force of each rule in the component fuzzy neural network to obtain normalized activation force +.>The normalized activation force and the fuzzy back part of the component fuzzy neural network are subjected to product calculation to obtain the output of each rule in the component fuzzy neural network>Finally, the output vector of the hidden layer is obtained by summing the output of each rule in the fuzzy neural network of each component in the hidden layer>Wherein->There is->(y (l)j Representing the output of the jth component fuzzy neural network in the ith hidden layer, +.>Representing the connection weight between the input layer and the output layer,/->Representing the connection weight between the hidden layer and the output layer, g representing the desired output value), written in matrix form, Φw LY =g, where the matrix of hidden layer outputs and input layer outputs is expressed as:
The connection weight matrix W_LY between these outputs and the output layer is obtained by the least-squares method as W_LY = Φ⁺G. Using the obtained connection weights, the actual output of the model D_train = ΦW_LY is computed, and the root mean square error between the actual output and the expected output of the model is calculated:
Specifically, the root mean square error between the actual output of the model and the expected output is 85.2%.
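The least-squares solution W_LY = Φ⁺G and the RMSE computation described above can be sketched with NumPy's Moore-Penrose pseudoinverse. Φ and G here are small synthetic stand-ins for the combined layer outputs and the desired outputs, not the patent's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 50, 6                     # N samples, p combined output columns
Phi = rng.normal(size=(N, p))    # stand-in for hidden-layer / input-layer outputs
W_true = rng.normal(size=(p, 1))
G = Phi @ W_true                 # desired outputs (noise-free for the sketch)

W_LY = np.linalg.pinv(Phi) @ G   # least-squares solution W_LY = pinv(Phi) @ G
D_train = Phi @ W_LY             # actual model output
rmse = float(np.sqrt(np.mean((D_train - G) ** 2)))
```

With noise-free targets and a full-column-rank Φ, the pseudoinverse recovers the generating weights exactly; with noisy targets it returns the minimum-norm least-squares fit.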
In the fourth step, whether the condition RMSE < λ holds is judged; if so, model training is finished. If not, the cluster number is set to h = h + 1 (h ≤ N_train) and the procedure jumps back to the second step, looping until the condition RMSE < λ holds. If the condition still does not hold when the cluster number reaches h = N_train, a hidden layer is added. The input of the newly added hidden layer is fuzzified with a Gaussian fuzzification number whose radius is given (taken as a uniformly distributed random number on (0, 1]), yielding the fuzzified values. The procedure then jumps back to the second step to obtain the number m_(L+1) of component fuzzy neural networks in the newly added hidden layer and the antecedent membership-function centers and radii of its component fuzzy neural networks. The above steps are looped until the condition RMSE < λ holds, at which point the training phase of the multi-layer decomposition fuzzy neural network model is finished.
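The grow-until-converged control flow of this fourth step can be sketched with a toy stand-in for a hidden layer: a radial-basis layer fitted by least squares, playing the role of the component fuzzy networks. The stopping conditions (RMSE < λ, and h reaching N_train) mirror the text; the model, data, and parameter values are illustrative assumptions.

```python
import numpy as np

def fit_layer(X, G, h, sigma=1.0):
    """Toy stand-in for one hidden layer: h Gaussian units whose linear
    read-out is solved by least squares, as in the third step."""
    centers = X[np.linspace(0, len(X) - 1, h).astype(int)]
    Phi = np.exp(-((X[:, None] - centers[None]) ** 2).sum(-1)
                 / (2 * sigma ** 2))
    W = np.linalg.pinv(Phi) @ G
    rmse = float(np.sqrt(np.mean((Phi @ W - G) ** 2)))
    return W, rmse

# grow the cluster number h until RMSE < lambda_ (or h reaches N_train),
# mirroring the fourth step's control flow
X = np.linspace(-3, 3, 40)[:, None]
G = np.sin(X)
lambda_, h = 1e-2, 1
while True:
    W, rmse = fit_layer(X, G, h)
    if rmse < lambda_ or h == len(X):   # converged, or h = N_train reached
        break
    h += 1                              # the second step would re-cluster here
```

In the patent's full procedure, exhausting h without reaching the threshold would trigger adding another hidden layer rather than stopping.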
3 Model test
For the test dataset {X_k, G_k} (k = 1, 2, ..., N_test), where N_test denotes the number of samples in the test dataset: when data are input into each hidden layer of the multi-layer decomposition fuzzy neural network, they are fuzzified with Gaussian fuzzification numbers, the radius of the fuzzification number being given (taken as a uniformly distributed random number on (0, 1]), yielding the fuzzified values (i = 1, 2, ..., K_l, where K_l denotes the input dimension of the l-th hidden layer). The matching degree between the resulting fuzzy set and the rule antecedent fuzzy sets of each component fuzzy neural network is calculated:
The activation force is then computed, and the activation force of each rule in the component fuzzy neural network is normalized to obtain the normalized activation force. The product of the normalized activation force and the fuzzy consequent parameters of the component fuzzy neural network gives the output of each rule in the component fuzzy neural network. Finally, the hidden-layer output Y = (Y_1, ..., Y_l, ..., Y_rL) is obtained by summing the outputs of all rules in each component fuzzy neural network, where r_L denotes the number of hidden layers of the trained model. The vector combining the hidden-layer outputs and the input-layer outputs is defined and expressed in matrix form as Φ.
Through the model training stage, the weight matrix W_LY among the input layer, the hidden layers, and the output layer is obtained. The actual test output of the model is computed as D_test = ΦW_LY, and the prediction accuracy of the model is determined by the following formula:
where n_test denotes the number of test-set samples whose outputs are predicted accurately after passing through the model and N_test denotes the number of test samples. This case predicts the scalar coupling constants between atoms in a molecule; a prediction is counted as accurate if the sample output value in the test set meets the requirement |Y(i) - G(i)| < ζ (where ζ is a small positive number, preferably 10^-3, G(i) denotes the expected value of the i-th test sample, and Y(i) denotes the model output value of the i-th test sample).
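The accuracy criterion above can be sketched directly. The tolerance check |Y(i) - G(i)| < ζ and the ratio n_test / N_test follow the text; the sample coupling-constant values are hypothetical.

```python
import numpy as np

def prediction_accuracy(expected, predicted, zeta=1e-3):
    """accuracy = n_test / N_test, counting a sample as accurate when
    |expected - predicted| < zeta (zeta = 1e-3 as suggested in the text)."""
    expected = np.asarray(expected, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    n_accurate = int(np.count_nonzero(np.abs(expected - predicted) < zeta))
    return n_accurate / expected.size

# hypothetical scalar coupling constants in Hz; three of the four
# predictions fall within the 1e-3 tolerance
expected = [84.8100, 2.1500, -11.2500, 4.9000]
predicted = [84.8105, 2.1492, -11.3000, 4.9000]
acc = prediction_accuracy(expected, predicted)
```

The third pair misses the tolerance by a wide margin, so the accuracy evaluates to 3/4.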
The foregoing is merely illustrative of the preferred embodiments of this invention, and it will be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles of this invention, and it is intended to cover such modifications and changes as fall within the true scope of the invention.

Claims (5)

1. A system for predicting scalar coupling constants between atoms based on a multi-layer decomposition fuzzy neural network, the multi-layer decomposition fuzzy neural network comprising an input layer, a hidden layer, and an output layer, wherein the nonlinear mapping implemented by one hidden layer is represented by a decomposition fuzzy neural network composed of seven layers of neurons, the system being configured to perform the steps of:
data preprocessing, including: firstly, calculating the Euclidean distance between the atom_index_0 and atom_index_1 atom pairs in each molecule; merging the dipole moment corresponding to each molecule in the training set and the test set; then calculating the average value of the bond lengths corresponding to the atoms contained in each molecule in the training set and the test set; finally, calculating in sequence the Euclidean distance dist between the two atoms contained in each molecule in the training set and the test set and the distances dist_x, dist_y, and dist_z corresponding to each dimension;
designing a multi-layer decomposition fuzzy neural network, comprising:
step 1: setting the termination threshold λ of the multi-layer decomposition fuzzy neural network model and inputting the training set {X_k, G_k}, where k = 1, 2, ..., N_train, N_train denotes the number of samples in the training set, X_k = {x_1, x_2, ..., x_k} is the input data of the model, and G_k = {g_1, g_2, ..., g_k} is the desired output of the model; setting the initial hidden layer as one layer with input Y_1^I = (x_1, x_2, ..., x_k); given the radius of the fuzzification number, fuzzifying Y_1^I, where i = 1, ..., K_l and K_l denotes the input dimension of the hidden layer;
step 2: according to the clustering number h, initializing the membership weight exponent m, the cluster centers, and the membership matrix U^(0), where r = 1, 2, ..., h indexes the clusters and t is the iteration count, and determining the number m_l of component fuzzy neural networks in the hidden layer and the antecedent membership-function centers of the component fuzzy neural networks;
Step 3: initializing the fuzzy-rule consequent parameters; according to the number m_l of component fuzzy neural networks in the hidden layer, the antecedent membership-function centers of the component fuzzy neural networks, and the fuzzy-rule consequent parameters, calculating the actual output of each component fuzzy neural network in the hidden layer of the model; obtaining the connection weights between the input layer and the output layer through the actual outputs of the component fuzzy neural networks in the hidden layer; obtaining the actual output of the model using the obtained connection weights; and calculating the root mean square error (RMSE) between the actual output of the model and the expected output;
step 4: judging whether RMSE < λ holds; if so, ending the model training; if not, setting the clustering number h = h + 1, h ≤ H_max, where H_max denotes the maximum number of component fuzzy neural networks in the hidden layer, turning to step 2, and looping steps 2 through 4 until the condition RMSE < λ holds; if the clustering number h = H_max and the condition still does not hold, executing step 5;
step 5: adding a hidden layer, the newly added hidden-layer input being formed from the outputs of the component fuzzy neural networks, where L denotes the number of hidden layers and m_l denotes the number of component fuzzy neural networks in the l-th hidden layer; given the radius of the fuzzification number, fuzzifying the newly added hidden-layer input; turning to step 2, and looping steps 2 through 5 until the condition RMSE < λ holds;
inputting the preprocessed data into a designed multi-layer decomposition fuzzy neural network, and estimating scalar coupling constants among atoms based on the multi-layer decomposition fuzzy neural network.
2. The system for predicting scalar coupling constants between atoms based on a multi-layer decomposition fuzzy neural network according to claim 1, wherein in said step 1, after fuzzifying Y_1^I, the method further comprises:
obtaining a fuzzy set F_l^i, wherein the fuzzified value in the fuzzy set F_l^i is defined over the domain of the corresponding component of Y_1^I.
3. The system for predicting scalar coupling constants between atoms based on a multi-layer decomposition fuzzy neural network according to claim 1, wherein said step 2 comprises:
step 2.1: given a clustering number h, initializing the membership weight exponent m and the cluster centers, initializing the membership matrix, and setting the algorithm termination threshold ε and the maximum iteration count t_max;
Step 2.2: computing the Gaussian kernel matrix between the components of Y_l^I and the cluster centers;
step 2.3: updating the cluster centers and calculating the membership matrix U^(t+1);
Step 2.4: comparing the membership matrices U^(t) and U^(t+1); if ||U^(t+1) - U^(t)|| < ε holds or the maximum iteration count t_max is reached, terminating the algorithm; otherwise, looping steps 2.2 through 2.4.
4. The system for predicting scalar coupling constants between atoms based on a multi-layer decomposition fuzzy neural network according to claim 2, wherein said step 3 comprises:
step 3.1: determining the j-th rule antecedent fuzzy set of the s_l-th component fuzzy neural network through the antecedent membership-function centers of the component fuzzy neural networks;
Step 3.2: calculating the matching degree between the obtained fuzzy set F_l^i and the j-th rule antecedent fuzzy set of the s_l-th component fuzzy neural network:
wherein t represents a triangular norm;
step 3.3: calculating the activation force, wherein t represents a triangular norm, and normalizing the activation force of each rule in the component fuzzy neural network to obtain the normalized activation force;
Step 3.4: initializing the fuzzy-rule consequent parameters, and multiplying the normalized activation force by the fuzzy-rule consequent of the component fuzzy neural network to obtain the output of each rule in the component fuzzy neural network, wherein the initialized parameter is the initial value of the j-th rule consequent parameter of the s_l-th component fuzzy neural network in the l-th hidden layer;
step 3.5: obtaining the output vector of the hidden layer by summing the outputs of the rules in the component fuzzy neural network,
wherein y_(l)j denotes the output of the j-th component fuzzy neural network in the l-th hidden layer, the remaining weights denote the connection weights between the input layer and the output layer and between the hidden layer and the output layer, and g denotes the desired output value;
the above formula is represented in matrix form:
ΦW_LY = G
wherein Φ is the combination of input layer output and hidden layer output:
W LY the connection weight matrix among the input layer, the hidden layer and the output layer is as follows:
step 3.6: obtaining W according to a least square method LY =Φ + G, itThe "+" in the matrix represents the Moore-Penrose generalized inverse of the matrix, and the actual output D of the model is obtained by using the obtained connection weight train =ΦW LY Wherein D is train ={d i },i=1,2,...,N train
Step 3.7: computing a root mean square error RMSE between the actual output and the desired output of the model:
wherein d_i is the actual output of the model and g_i is the desired output of the model.
5. The system for predicting scalar coupling constants between atoms based on a multi-layer decomposition fuzzy neural network according to claim 1, wherein in said step 5, after fuzzifying the newly added hidden-layer input, the method further comprises:
obtaining a fuzzy set wherein the fuzzified value is defined over the domain of the corresponding component of the newly added hidden-layer input.
CN201911090719.8A 2019-11-09 2019-11-09 Scalar coupling constant prediction system between atoms based on multi-layer decomposition fuzzy neural network Active CN110766144B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911090719.8A CN110766144B (en) 2019-11-09 2019-11-09 Scalar coupling constant prediction system between atoms based on multi-layer decomposition fuzzy neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911090719.8A CN110766144B (en) 2019-11-09 2019-11-09 Scalar coupling constant prediction system between atoms based on multi-layer decomposition fuzzy neural network

Publications (2)

Publication Number Publication Date
CN110766144A CN110766144A (en) 2020-02-07
CN110766144B true CN110766144B (en) 2023-10-13

Family

ID=69337086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911090719.8A Active CN110766144B (en) 2019-11-09 2019-11-09 Scalar coupling constant prediction system between atoms based on multi-layer decomposition fuzzy neural network

Country Status (1)

Country Link
CN (1) CN110766144B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113051821B (en) * 2021-03-24 2023-03-10 临沂大学 Concrete compressive strength prediction method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5416888A (en) * 1990-10-22 1995-05-16 Kabushiki Kaisha Toshiba Neural network for fuzzy reasoning
US5579439A (en) * 1993-03-24 1996-11-26 National Semiconductor Corporation Fuzzy logic design generator using a neural network to generate fuzzy logic rules and membership functions for use in intelligent systems
CN107977539A (en) * 2017-12-29 2018-05-01 华能国际电力股份有限公司玉环电厂 Improvement neutral net boiler combustion system modeling method based on object combustion mechanism
CN109272037A (en) * 2018-09-17 2019-01-25 江南大学 A kind of self-organizing TS pattern paste network modeling method applied to infra red flame identification
CN109598337A (en) * 2018-12-05 2019-04-09 河南工业大学 Decompose Fuzzy neural network optimization method and device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Correlation fuzzy neural network based on the LM algorithm and its application; Liao Qin; Yao Li; Science Technology and Engineering (Issue 11); full text *
Compensatory fuzzy neural network modeling method based on a comprehensive algorithm; Liu Jun; Cui Hong; Pang Zhonghua; Li Guili; Journal of Qingdao University of Science and Technology (Natural Science Edition) (Issue 01); full text *
Research on adaptive fuzzy neural networks based on clustering algorithms; Song Qingkun; Hao Min; Control Engineering (Issue S3); full text *

Also Published As

Publication number Publication date
CN110766144A (en) 2020-02-07

Similar Documents

Publication Publication Date Title
Schulman et al. Gradient estimation using stochastic computation graphs
Basirat et al. The quest for the golden activation function
Song et al. An extension to fuzzy cognitive maps for classification and prediction
Luo et al. Using spotted hyena optimizer for training feedforward neural networks
Montazer et al. Radial basis function neural networks: A review
Zhan et al. Learning-aided evolution for optimization
Li et al. One-shot neural architecture search for fault diagnosis using vibration signals
Han et al. Accelerated gradient algorithm for RBF neural network
Chen et al. A hybrid fuzzy inference prediction strategy for dynamic multi-objective optimization
Lu et al. A new hybrid algorithm for bankruptcy prediction using switching particle swarm optimization and support vector machines
CN111507365A (en) Confidence rule automatic generation method based on fuzzy clustering
Koeppe et al. Explainable artificial intelligence for mechanics: physics-explaining neural networks for constitutive models
CN113935489A (en) Variational quantum model TFQ-VQA based on quantum neural network and two-stage optimization method thereof
Zhang et al. A new approach to neural network via double hierarchy linguistic information: Application in robot selection
CN110766144B (en) Scalar coupling constant prediction system between atoms based on multi-layer decomposition fuzzy neural network
Kim et al. Knowledge extraction and representation using quantum mechanics and intelligent models
CN113052373A (en) Monthly runoff change trend prediction method based on improved ELM model
Alsadi et al. Intelligent estimation: A review of theory, applications, and recent advances
Wu et al. A forecasting model based support vector machine and particle swarm optimization
Parnianifard et al. New adaptive surrogate-based approach combined swarm optimizer assisted less tuning cost of dynamic production-inventory control system
Hua et al. Advances on intelligent algorithms for scientific computing: an overview
Chen et al. Dynamic multi-objective evolutionary algorithm with center point prediction strategy using ensemble Kalman filter
Hong et al. A new decision-making GMDH neural network: effective for limited and Fuzzy Data
Shan et al. Evolutionary extreme learning machine optimized by quantum-behaved particle swarm optimization
Wang et al. Stochastic adaptive CL-BFGS algorithms for fully complex-valued dendritic neuron model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant