CN107679566A - A Bayesian network parameter learning method fusing expert prior knowledge - Google Patents

A Bayesian network parameter learning method fusing expert prior knowledge Download PDF

Info

Publication number
CN107679566A
CN107679566A CN201710865219.1A CN201710865219A CN 107679566 A CN 201710865219 A
Authority
CN
China
Prior art keywords
beta
distribution
expression formula
normal distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710865219.1A
Other languages
Chinese (zh)
Inventor
柴慧敏
雷江南
赵昀瑶
方敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201710865219.1A priority Critical patent/CN107679566A/en
Publication of CN107679566A publication Critical patent/CN107679566A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/29 Graphical models, e.g. Bayesian networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Complex Calculations (AREA)

Abstract

The present invention proposes a Bayesian network parameter learning method fusing expert prior knowledge, aiming to improve the precision of parameter learning results under small-sample conditions. The implementation steps are: obtain the normal distribution representing the possible values of the Bayesian network parameters; obtain the value range of the standard deviation of the normal distribution; obtain the objective function to be solved for approximating the normal distribution with a beta distribution; simplify the expression of the objective function and judge whether it has a minimum; if so, compute the values of the location parameter and shape parameter of the beta distribution; otherwise, make fine adjustments to the coefficients in the objective function expression; obtain the parameter learning model fusing expert prior knowledge; compute the probability distribution estimate of each variable under the fused expert prior knowledge. The present invention fuses expert prior knowledge into the Bayesian estimation method and can be used for data analysis requiring higher precision under small-sample conditions.

Description

A Bayesian network parameter learning method fusing expert prior knowledge
Technical field
The invention belongs to the field of computer technology and relates to a Bayesian network parameter learning method fusing expert prior knowledge, usable for data analysis requiring higher precision under small-sample conditions in practical applications.
Background art
A Bayesian network is a graphical representation of the uncertain relationships between variables. It consists of two parts, a structural model and a set of conditional probability distributions: the network structure is a directed acyclic graph (DAG, Directed Acyclic Graph), in which nodes represent random variables and directed edges represent the dependencies between variables. The degree of dependence between two variables is described by the probability distribution attached to each node. The conditional probability distribution set, or conditional probability table, is the collection of local probability distributions associated with each node. Bayesian networks were originally proposed as a tool for handling uncertainty in expert systems. In recent years they have increasingly been used for data analysis, to reveal and characterize the regularities contained in data. Bayesian network learning refers to the process of obtaining a Bayesian network through data analysis; it includes two cases, parameter learning and structure learning. Parameter learning refers to the problem of determining the network parameters when the network structure is known.
In recent years, the widely used Bayesian network parameter learning algorithms have mainly been maximum likelihood estimation (MLE), Bayesian estimation, and the expectation-maximization (EM) method. The EM algorithm applies to incomplete sample data, while MLE and Bayesian estimation apply to complete sample data. MLE regards the parameter θ of the Bayesian network as an independent variable and takes the likelihood function of θ as the optimization objective; parameter learning with MLE is a search process. When the sample size is sufficient, MLE can solve the parameter learning problem of a Bayesian network with known structure well, but when the sample size is small, the parameter learning precision of MLE is low. In some practical applications, however, acquiring large amounts of sample data is extremely difficult or very expensive, for example case data in medical diagnosis systems, case data in financial operational risk management systems, air combat data in air combat situation assessment systems, and engine failure data in aero-engine fault diagnosis systems. In such situations the obtainable sample data are often few; at the same time, owing to the limitations of current conditions, for example with some disaster data or campaign-process data, correct decisions need to be made with as few samples as possible.
As a rule, building a Bayesian network involves consulting experts in the relevant field to obtain prior knowledge of that field. A domain expert can easily and reliably determine the network structure of a Bayesian network, but has difficulty providing the specific parameters. Although experts find it hard to give accurate network parameters, they can relatively easily give constraint information between dependent nodes in the network. This constraint information can be expressed as the prior knowledge we need, and the Bayesian estimation method can fuse prior knowledge into the parameter learning process: the parameter θ of the Bayesian network is regarded as a random variable, the prior knowledge about θ is expressed as a prior probability p(θ), and what is computed is the posterior probability p(θ | D) of θ after the data D have been observed. In fact, the constraint information provided by experts is more robust than specific parameter information, but under small-sample conditions, because the accuracy of the prior information is limited, the traditional Bayesian estimation method yields parameter learning results of low precision. Therefore, improving the precision of Bayesian network parameter learning on small-sample data sets has always received wide attention.
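For a single binary parameter this prior-plus-data fusion has a closed form: a beta prior combined with Bernoulli observations yields a beta posterior, which is the mechanism the method of the present invention builds on. A minimal Python sketch, with prior pseudo-counts and data invented purely for illustration:

```python
from scipy import stats

# Expert prior on a Bernoulli parameter theta, expressed as p(theta) = Beta(alpha, beta).
alpha, beta = 3.0, 3.0            # assumed prior pseudo-counts (illustrative)
k, n = 4, 10                      # observed data D: k ones in n samples (illustrative)

# Beta-Bernoulli conjugacy: the posterior p(theta | D) is again a beta distribution.
posterior = stats.beta(alpha + k, beta + (n - k))
print(posterior.mean())           # posterior point estimate of theta
```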
At present, many scholars have studied Bayesian network parameter learning on small data sets using prior information and achieved some results. For example, Di Ruohai et al., in the article "Discrete Bayesian network parameter learning based on monotonicity constraints" (Systems Engineering and Electronics, Vol. 36, No. 2, February 2014), proposed a parameter learning algorithm based on monotonicity constraints. For the parameter learning problem of the Bayesian estimation method on small-sample data sets, they gave a mathematical model of the monotonicity constraint to express qualitative prior information, integrated the monotonicity constraint into Bayesian estimation in the form of a Dirichlet prior, and performed parameter learning with Bayesian estimation. However, the method in that paper applies only to prior knowledge based on parameter monotonicity constraints and range constraints, which limits the precision of Bayesian network parameter learning, and its requirements on expert prior knowledge are rather demanding, making the cost of sample acquisition high.
Summary of the invention
The object of the present invention is to overcome the shortcomings of the above prior art by proposing a Bayesian network parameter learning method fusing expert prior knowledge, aiming to improve the precision of parameter learning results under small-sample conditions.
To achieve the above object, the technical scheme adopted by the present invention comprises the following steps:
(1) Obtain the normal distribution X ~ N(μ, σ²) representing the possible values of the Bayesian network parameters:
According to the transparency and flexibility of expert prior knowledge and the known Bayesian network structure, obtain the normal distribution X ~ N(μ, σ²) that represents the possible values of the Bayesian network parameters given by the expert prior knowledge, where X denotes the random variable, μ denotes the expectation of X ~ N(μ, σ²), and σ² denotes the variance of X ~ N(μ, σ²);
(2) Obtain the value range of the standard deviation σ of the normal distribution X ~ N(μ, σ²):
Require that the area of X ~ N(μ, σ²) within the range X = μ ± 0.2 account for at least 80% of the total area. Substitute X = μ ± 0.2 into the probability density function f(X) = (1/(√(2π)σ))·exp(−(X−μ)²/(2σ²)), where X denotes the random variable, μ the expectation and σ the standard deviation of X ~ N(μ, σ²); ensure that the difference of the cumulative distribution function values at X = μ + 0.2 and X = μ − 0.2 is greater than or equal to 80% and less than or equal to 100%; compute the resulting value range and consult the table of the standard normal distribution function. By calculation, the corresponding value range of the standard deviation σ of X ~ N(μ, σ²) is obtained as 0 ≤ σ ≤ 0.155;
(3) Obtain the objective function M to be solved for approximating the normal distribution X ~ N(μ, σ²) with a beta distribution:
(3a) Integrate X ~ N(μ, σ²) over the interval [0, 1] to obtain the expectation expression E_N(X) of X ~ N(μ, σ²) on [0, 1];
(3b) Substitute the expectation expression E_N(X) on [0, 1] into the functional relation between the expectation expression E_N(X) and the variance expression D_N(X) on [0, 1], obtaining the variance expression D_N(X) of X ~ N(μ, σ²) on [0, 1];
(3c) Under the constraints that the expectation expression E_N(X) of X ~ N(μ, σ²) on [0, 1] equals the expectation expression E_B(X) of the beta distribution, and that the shape parameter α and the location parameter β of the beta distribution are both greater than 1, take as the objective function M, to be solved for approximating X ~ N(μ, σ²) with a beta distribution, the minimum of the sum of the squared difference between the variance expression D_N(X) on [0, 1] and the variance expression D_B(X) of the beta distribution and the squared difference between the expectation μ of X ~ N(μ, σ²) and the mode expression Mode(X) of the beta distribution on [0, 1];
(4) Simplify the expression of the objective function M and judge from the simplified result whether M has a minimum:
(4a) Taking the location parameter β of the beta distribution as the independent variable, simplify the expression of M to obtain the relation between the shape parameter α and the location parameter β of the beta distribution; select a value of the standard deviation σ from the range of X ~ N(μ, σ²) obtained in step (2) and substitute it into the relation between α and β, obtaining the simplified expression of M;
(4b) Judge from the simplified expression whether M has a minimum; if so, execute step (6); otherwise, execute step (5);
(5) Adjust the coefficients in the expression of M up or down in increments of 0.1, and execute step (4);
(6) Compute the values of the location parameter β and the shape parameter α of the beta distribution:
(6a) Compute the value of the independent variable at which M attains its minimum, obtaining the value of the location parameter β of the beta distribution;
(6b) Substitute the value of β into the relation between α and β obtained in step (4a), obtaining the value of the shape parameter α of the beta distribution;
(7) Obtain the Bayesian network parameter learning model fusing expert prior knowledge:
Substitute the values of the shape parameter α and the location parameter β of the beta distribution into the parameter expression of the Bayesian network in the Bayesian estimation method, obtaining the Bayesian network parameter learning model fusing expert prior knowledge;
(8) Compute the probability distribution estimate of each variable of the Bayesian network fusing expert prior knowledge:
Read the small-sample data set D of the Bayesian network; according to the known network structure and the Bayesian network parameter learning model fusing expert prior knowledge, compute the probability distribution estimate of each variable in the Bayesian network using the maximum a posteriori probability estimation of Bayesian estimation.
Compared with the prior art, the present invention has the following advantages:
The present invention fuses normally distributed expert prior knowledge into the Bayesian estimation method during Bayesian network parameter learning, as the prior probability p(θ) of the Bayesian estimation method, so that the prior knowledge available to the Bayesian network to be learned is richer and more accurate. The posterior probability p(θ | D) of the parameter θ to be computed is therefore more accurate, which effectively improves the precision of the computed probability distributions of the variables of the Bayesian network while reducing the cost of sample acquisition.
Brief description of the drawings
Fig. 1 is the implementation flow chart of the present invention;
Fig. 2 is the Bayesian network structure diagram used in the specific embodiment of the present invention;
Fig. 3 is a diagram of the normal distribution model representing the possible values of the Bayesian network parameters given by expert prior knowledge;
Fig. 4(a) is a simulation comparison diagram of the sum of KL divergences between the computed values and the corresponding true values of the 18 posterior probabilities p(C=1), p(C=0), p(R=1|C=1), p(R=0|C=1), p(R=1|C=0), p(R=0|C=0), p(S=1|C=1), p(S=0|C=1), p(S=1|C=0), p(S=0|C=0), p(W=1|C=1,R=1), p(W=0|C=1,R=1), p(W=1|C=1,R=0), p(W=0|C=1,R=0), p(W=1|C=0,R=1), p(W=0|C=0,R=1), p(W=1|C=0,R=0) and p(W=0|C=0,R=0), for the present invention and for the method of "Discrete Bayesian network parameter learning based on monotonicity constraints";
Fig. 4(b) is a simulation comparison diagram of the sum of Euclidean distances between the computed values and the corresponding true values of the same 18 posterior probabilities, for the present invention and for the method of "Discrete Bayesian network parameter learning based on monotonicity constraints".
Embodiment
The present invention is described in further detail below with reference to the drawings and a specific embodiment. It should be noted that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the inventive concept, and these all belong to the protection scope of the present invention.
Referring to Fig. 1, a Bayesian network parameter learning method fusing expert prior knowledge comprises the following steps:
Step 1) Obtain the normal distribution X ~ N(μ, σ²) representing the possible values of the Bayesian network parameters:
According to the transparency and flexibility of expert prior knowledge and the known Bayesian network structure, obtain the normal distribution X ~ N(μ, σ²) that represents the possible values of the Bayesian network parameters given by the expert prior knowledge, where X denotes the random variable, μ the expectation of X ~ N(μ, σ²), and σ² the variance of X ~ N(μ, σ²);
The known Bayesian network structure is the classic lawn-moistening model, shown in Fig. 2. The nodes in the figure are binary nodes whose values default to 0 and 1, where 0 represents false and 1 represents true;
The model of the normal distribution X ~ N(μ, σ²) representing the possible values of the Bayesian network parameters given by expert prior knowledge is shown in Fig. 3; it is a normal distribution X ~ N(μ, σ²) with expectation μ = 0.5, where the vertical axis represents the value y of the normal density function and the horizontal axis represents the value x of the random variable;
The representation of the possible values of the Bayesian network parameters means that, in the Bayesian estimation method, the parameter θ of the Bayesian network is regarded as a random variable, the expert prior knowledge about θ is expressed as a prior probability p(θ) following the normal distribution X ~ N(μ, σ²), and what is computed is the posterior probability p(θ | D) of θ after the data D have been observed. The representation of the possible parameter values given by the expert prior knowledge is reflected in the conditional probability distribution table of the Bayesian network parameters.
Step 2) Obtain the value range of the standard deviation σ of the normal distribution X ~ N(μ, σ²):
Require that the area of X ~ N(μ, σ²) within the range X = μ ± 0.2 account for at least 80% of the total area. Substitute X = μ ± 0.2 into the probability density function f(X) = (1/(√(2π)σ))·exp(−(X−μ)²/(2σ²)), where X denotes the random variable, μ the expectation and σ the standard deviation of X ~ N(μ, σ²); ensure that the difference of the cumulative distribution function values at X = μ + 0.2 and X = μ − 0.2 is greater than or equal to 80% and less than or equal to 100%; compute the resulting value range and consult the table of the standard normal distribution function. By calculation, the corresponding value range of the standard deviation σ of X ~ N(μ, σ²) is obtained as 0 ≤ σ ≤ 0.155;
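The bound on σ can be reproduced numerically: requiring at least 80% of the probability mass within μ ± 0.2 is equivalent to 2Φ(0.2/σ) − 1 ≥ 0.8, i.e. σ ≤ 0.2/Φ⁻¹(0.9) ≈ 0.156. A small sketch using scipy (assumed available):

```python
from scipy.stats import norm

# P(|X - mu| <= 0.2) = 2 * Phi(0.2 / sigma) - 1 >= 0.8  <=>  sigma <= 0.2 / Phi^{-1}(0.9)
sigma_max = 0.2 / norm.ppf(0.9)
print(sigma_max)                       # ~0.1560, consistent with 0 <= sigma <= 0.155

# Direct check at the boundary value used in the embodiment:
sigma = 0.155
mass = norm.cdf(0.2 / sigma) - norm.cdf(-0.2 / sigma)
print(mass >= 0.8)                     # True
```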
Step 3) Obtain the objective function M to be solved for approximating the normal distribution X ~ N(μ, σ²) with a beta distribution:
(3a) Integrate X ~ N(μ, σ²) over the interval [0, 1] to obtain the expectation expression E_N(X) of X ~ N(μ, σ²) on [0, 1], specifically:
E_N(X) = σ/√(2π) + μ/2
where μ denotes the expectation and σ the standard deviation of X ~ N(μ, σ²);
(3b) Substitute the expectation expression E_N(X) on [0, 1] into the functional relation between the expectation expression E_N(X) and the variance expression D_N(X) on [0, 1], obtaining the variance expression D_N(X) of X ~ N(μ, σ²) on [0, 1]:
(i) the functional relation between E_N(X) and D_N(X) on [0, 1], specifically:
D_N(X) = E_N(X²) − E_N²(X)
where D_N(X) denotes the variance expression and E_N(X) the expectation expression of X ~ N(μ, σ²) on [0, 1];
(ii) the variance expression D_N(X) of X ~ N(μ, σ²) on [0, 1], specifically:
D_N(X) = σ²/2
where σ denotes the standard deviation of X ~ N(μ, σ²).
(3c) Under the constraints that the expectation expression E_N(X) of X ~ N(μ, σ²) on [0, 1] equals the expectation expression E_B(X) of the beta distribution, and that the shape parameter α and the location parameter β of the beta distribution are both greater than 1, take as the objective function M the minimum of the sum of the squared difference between the variance expression D_N(X) on [0, 1] and the variance expression D_B(X) of the beta distribution and the squared difference between the expectation μ of X ~ N(μ, σ²) and the mode expression Mode(X) of the beta distribution on [0, 1]:
(i) the expectation expression E_B(X) of the beta distribution, specifically:
E_B(X) = α/(α + β)
where α denotes the shape parameter and β the location parameter of the beta distribution;
(ii) the variance expression D_B(X) of the beta distribution, specifically:
D_B(X) = αβ/((α + β)²(α + β + 1))
where α denotes the shape parameter and β the location parameter of the beta distribution;
(iii) the mode expression Mode(X) of the beta distribution on [0, 1], specifically:
Mode(X) = (α − 1)/(α + β − 2)
where α denotes the shape parameter and β the location parameter of the beta distribution;
(iv) the objective function M to be solved for approximating X ~ N(μ, σ²) with a beta distribution, whose expression is specifically:
M = min[(D_N(X) − D_B(X))² + (μ − Mode(X))²]
s.t. E_N(X) = E_B(X), α > 1, β > 1
where D_N(X) denotes the variance expression of X ~ N(μ, σ²) on [0, 1], D_B(X) the variance expression of the beta distribution, μ the expectation of X ~ N(μ, σ²), Mode(X) the mode expression of the beta distribution on [0, 1], E_N(X) the expectation expression of X ~ N(μ, σ²) on [0, 1], E_B(X) the expectation expression of the beta distribution, α the shape parameter of the beta distribution with α > 1, and β the location parameter of the beta distribution with β > 1.
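The constraint E_N(X) = E_B(X) already determines α as a function of β. Writing e = σ/√(2π) + μ/2, from α/(α + β) = e it follows that α = eβ/(1 − e); multiplying numerator and denominator by 2√(2π) gives α = β(2σ + √(2π)·μ)/(2√(2π) − 2σ − √(2π)·μ), which is exactly the relation between α and β used in step 4) below.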
Step 4) Simplify the expression of the objective function M and judge from the simplified result whether M has a minimum:
(4a) Taking the location parameter β of the beta distribution as the independent variable, simplify the expression of M to obtain the relation between the shape parameter α and the location parameter β; select a value of the standard deviation σ from the range of X ~ N(μ, σ²) obtained in step 2) and substitute it into the relation between α and β, obtaining the simplified expression of M. The relation is specifically:
α = β(2σ + √(2π)·μ) / (2√(2π) − 2σ − √(2π)·μ)
where α denotes the shape parameter and β the location parameter of the beta distribution, μ the expectation and σ the standard deviation of X ~ N(μ, σ²).
This embodiment takes σ = 0.155 as an example;
(4b) Judge from the simplified expression whether M has a minimum; if so, execute step 6); otherwise, execute step 5). The method of judging whether M has a minimum is: differentiate M and solve for the zero point of the derivative; if a zero point exists, M has a minimum; otherwise, M has no minimum.
Step 5) Adjust the coefficients in the expression of M up or down in increments of 0.1, and execute step 4);
Step 6) Compute the values of the location parameter β and the shape parameter α of the beta distribution:
(6a) Compute the value of the independent variable at which M attains its minimum, obtaining the value of the location parameter β of the beta distribution;
(6b) Substitute the value of β into the relation between α and β obtained in step (4a), obtaining the value of the shape parameter α of the beta distribution;
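Steps 4) to 6) can be sketched numerically as follows; the patent locates the minimum by differentiating M and finding the zero point, whereas this sketch substitutes a bounded scalar minimizer, and the search interval for β is an assumption:

```python
import numpy as np
from scipy.optimize import minimize_scalar

mu, sigma = 0.5, 0.155                       # expert prior N(mu, sigma^2); sigma from step 2)

E_N = sigma / np.sqrt(2 * np.pi) + mu / 2.0  # expectation of the normal on [0, 1]
D_N = sigma ** 2 / 2.0                       # variance of the normal on [0, 1]

def alpha_of(beta):
    # Constraint E_N(X) = E_B(X) = alpha / (alpha + beta), solved for alpha.
    return E_N * beta / (1.0 - E_N)

def M(beta):
    a = alpha_of(beta)
    D_B = a * beta / ((a + beta) ** 2 * (a + beta + 1.0))  # beta-distribution variance
    mode = (a - 1.0) / (a + beta - 2.0)                    # beta-distribution mode
    return (D_N - D_B) ** 2 + (mu - mode) ** 2

# Minimize over beta; the upper search bound 50 is an arbitrary assumption,
# and alpha > 1, beta > 1 should be verified on the result.
res = minimize_scalar(M, bounds=(1.0 + 1e-6, 50.0), method="bounded")
beta_hat = res.x
alpha_hat = alpha_of(beta_hat)
print(alpha_hat, beta_hat)
```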
Step 7) Obtain the Bayesian network parameter learning model fusing expert prior knowledge:
Substitute the values of the shape parameter α and the location parameter β of the beta distribution into the parameter expression of the Bayesian network in the Bayesian estimation method, obtaining the Bayesian network parameter learning model fusing expert prior knowledge, whose expression is:
θ_ijk = (α + N_ijk) / (α + β + N_ijk)
where N_ijk denotes the number of samples in the small-sample data set D of the Bayesian network satisfying X_i = k under the parent configuration π(X_i) = j, α denotes the shape parameter and β the location parameter of the beta distribution;
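A sketch of how the learned model is applied to data: the counts N_ijk are tallied from the small-sample data set and plugged into the expression above; the data layout (a list of dicts mapping node names to binary values) is an assumption of this sketch:

```python
from collections import Counter

def learn_parameters(samples, structure, alpha, beta):
    """MAP estimates theta_ijk = (alpha + N_ijk) / (alpha + beta + N_ijk),
    following the learning model above; `samples` is a list of dicts mapping
    node name -> value in {0, 1} (an assumed data layout)."""
    counts = Counter()
    for s in samples:
        for node, parents in structure.items():
            j = tuple(s[p] for p in parents)    # parent configuration pi(X_i) = j
            counts[(node, j, s[node])] += 1     # accumulate N_ijk
    return {key: (alpha + n_ijk) / (alpha + beta + n_ijk)
            for key, n_ijk in counts.items()}
```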
Step 8) Compute the probability distribution estimate of each variable of the Bayesian network fusing expert prior knowledge:
Read the small-sample data set D of the Bayesian network; according to the known network structure and the Bayesian network parameter learning model fusing expert prior knowledge, compute the probability distribution estimate of each variable in the Bayesian network using the maximum a posteriori probability estimation of Bayesian estimation.
The technical effect of the present invention is illustrated below in conjunction with simulation experiments:
1. Simulation conditions and content: The classic lawn-moistening model Bayesian network structure is used, as shown in Fig. 2, with data sets of 15, 30, 50 and 100 samples for Bayesian network parameter learning. The simulation environment is MATLAB R2014a on an Intel(R) Pentium(R) CPU G2020 @ 2.90GHz under the 64-bit Windows 7 operating system.
Simulation content:
Simulation 1: From the Bayesian network parameter learning results, compute, for the present invention, the sum of KL divergences between the computed values and the true values of the 18 posterior probabilities listed above; compute the same quantity for the method of "Discrete Bayesian network parameter learning based on monotonicity constraints"; and compare the two. The KL divergence expression is specifically:
KL = Σ_{k=1}^{n} p(x_k) log(p(x_k)/q(x_k))
where p(x) denotes the true value of each posterior probability, q(x) denotes the value of each posterior probability computed by the present invention, and n denotes the total number of posterior probabilities of all variables;
Simulation 2: From the Bayesian network parameter learning results, compute, for the present invention, the sum of Euclidean distances between the computed values and the true values of the same 18 posterior probabilities; compute the same quantity for the method of "Discrete Bayesian network parameter learning based on monotonicity constraints"; and compare the two. The expression for the Euclidean distance between two points of an n-dimensional space is specifically:
d = √( Σ_{k=1}^{n} (x_1k − x_2k)² )
where x_1k denotes the true value of each posterior probability, x_2k denotes the value of each posterior probability computed by the present invention, and n denotes the total number of posterior probabilities of all variables;
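Both evaluation indices reduce to a few lines over the vectors of the 18 true and computed posterior probabilities; a sketch (the natural logarithm is assumed for the KL divergence):

```python
import numpy as np

def kl_sum(p_true, q_est):
    # Sum over the posterior probabilities of p * log(p / q).
    p, q = np.asarray(p_true, float), np.asarray(q_est, float)
    return float(np.sum(p * np.log(p / q)))

def euclidean(p_true, q_est):
    # Euclidean distance between the vectors of true and computed values.
    p, q = np.asarray(p_true, float), np.asarray(q_est, float)
    return float(np.sqrt(np.sum((p - q) ** 2)))
```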
2. Analysis of simulation results:
Referring to Fig. 4(a), the vertical axis represents the sum of KL divergences between the computed values obtained by Bayesian network parameter learning and the true values of the 18 posterior probabilities, and the horizontal axis represents the sample size. Taking this KL divergence sum as the index, the precision of the parameter learning results of the present invention is compared with that of the method of "Discrete Bayesian network parameter learning based on monotonicity constraints". The KL divergence is a function describing the fit between two distributions: the smaller the KL divergence, the better the fit of the two distributions, i.e. the higher the parameter learning precision. When the sample size is not greater than 100, the KL divergence sum of the present invention is smaller than that of the method of "Discrete Bayesian network parameter learning based on monotonicity constraints"; therefore, the present invention is superior in precision to that method.
Referring to Fig. 4(b), the vertical axis represents the sum of Euclidean distances between the computed values obtained by Bayesian network parameter learning and the true values of the 18 posterior probabilities, and the horizontal axis represents the sample size. Taking this Euclidean distance sum as the index, the precision of the parameter learning results of the present invention is compared with that of the method of "Discrete Bayesian network parameter learning based on monotonicity constraints". The Euclidean distance measures the actual distance between two points in an n-dimensional space: the smaller the Euclidean distance, the better the fit of the two distributions, i.e. the higher the parameter learning precision. When the sample size is not greater than 100, the Euclidean distance sum of the present invention is smaller than that of the method of "Discrete Bayesian network parameter learning based on monotonicity constraints"; therefore, the present invention is superior in precision to that method.
The simulation results of Fig. 4(a) and Fig. 4(b) show that, when the sample size is not greater than 100, the precision of Bayesian network parameter learning with the present invention is higher than that of parameter learning with the existing method. Thus, under small-sample conditions and compared with the prior art, the present invention can fuse normally distributed expert prior knowledge well and improve the precision of the Bayesian estimation method.

Claims (8)

1. A Bayesian network parameter learning method fusing expert prior knowledge, characterized by comprising the following steps:
(1) obtaining the normal distribution X ~ N(μ, σ²) representing the possible values of the Bayesian network parameters:
according to the transparency and flexibility of expert prior knowledge and the known Bayesian network structure, obtaining the normal distribution X ~ N(μ, σ²) that represents the possible values of the Bayesian network parameters given by the expert prior knowledge, where X denotes the random variable, μ denotes the expectation of X ~ N(μ, σ²), and σ² denotes the variance of X ~ N(μ, σ²);
(2) obtaining the value range of the standard deviation σ of the normal distribution X ~ N(μ, σ²):
requiring that the area of X ~ N(μ, σ²) within the range X = μ ± 0.2 account for at least 80% of the total area; substituting X = μ ± 0.2 into the probability density function f(X) = (1/(√(2π)σ))·exp(−(X−μ)²/(2σ²)), where X denotes the random variable, μ the expectation and σ the standard deviation of X ~ N(μ, σ²); ensuring that the difference of the cumulative distribution function values at X = μ + 0.2 and X = μ − 0.2 is greater than or equal to 80% and less than or equal to 100%; computing the resulting value range and consulting the table of the standard normal distribution function, obtaining by calculation the corresponding value range of the standard deviation σ of X ~ N(μ, σ²) as 0 ≤ σ ≤ 0.155;
(3) obtaining the objective function M to be solved for approximating the normal distribution X ~ N(μ, σ²) with a beta distribution:
(3a) integrating X ~ N(μ, σ²) over the interval [0, 1] to obtain the expectation expression E_N(X) of X ~ N(μ, σ²) on [0, 1];
(3b) substituting the expectation expression E_N(X) on [0, 1] into the functional relation between the expectation expression E_N(X) and the variance expression D_N(X) on [0, 1], obtaining the variance expression D_N(X) of X ~ N(μ, σ²) on [0, 1];
(3c) under the constraints that the expectation expression E_N(X) of X ~ N(μ, σ²) on [0, 1] equals the expectation expression E_B(X) of the beta distribution, and that the shape parameter α and the location parameter β of the beta distribution are both greater than 1, taking as the objective function M, to be solved for approximating X ~ N(μ, σ²) with a beta distribution, the minimum of the sum of the squared difference between the variance expression D_N(X) on [0, 1] and the variance expression D_B(X) of the beta distribution and the squared difference between the expectation μ of X ~ N(μ, σ²) and the mode expression Mode(X) of the beta distribution on [0, 1];
(4) simplifying the expression of the objective function M and judging from the simplified result whether M has a minimum:
(4a) taking the location parameter β of the beta distribution as the independent variable, simplifying the expression of M to obtain the relation between the shape parameter α and the location parameter β of the beta distribution; selecting a value of the standard deviation σ from the range of X ~ N(μ, σ²) obtained in step (2) and substituting it into the relation between α and β, obtaining the simplified expression of M;
(4b) judging from the simplified expression whether M has a minimum; if so, executing step (6); otherwise, executing step (5);
(5) adjusting the coefficients in the expression of M up or down in increments of 0.1, and executing step (4);
(6) computing the values of the location parameter β and the shape parameter α of the beta distribution:
(6a) computing the value of the independent variable at which M attains its minimum, obtaining the value of the location parameter β of the beta distribution;
(6b) substituting the value of β into the relation between α and β obtained in step (4a), obtaining the value of the shape parameter α of the beta distribution;
(7) obtaining the Bayesian network parameter learning model fusing expert prior knowledge:
substituting the values of the shape parameter α and the location parameter β of the beta distribution into the parameter expression of the Bayesian network in the Bayesian estimation method, obtaining the Bayesian network parameter learning model fusing expert prior knowledge;
(8) computing the probability distribution estimate of each variable of the Bayesian network fusing expert prior knowledge:
reading the small-sample data set D of the Bayesian network; according to the known network structure and the Bayesian network parameter learning model fusing expert prior knowledge, computing the probability distribution estimate of each variable in the Bayesian network using the maximum a posteriori probability estimation of Bayesian estimation.
2. The Bayesian network parameter learning method fusing expert prior knowledge according to claim 1, characterized in that the representation of the possible values of the Bayesian network parameters described in step (1) means that, in the Bayesian estimation method, the parameter θ of the Bayesian network is regarded as a random variable and the expert prior knowledge about θ is expressed as a prior probability p(θ) following the normal distribution X ~ N(μ, σ²).
3. The Bayesian network parameter learning method fusing expert prior knowledge according to claim 1, characterized in that the expectation expression E_N(X) of the normal distribution X ~ N(μ, σ²) on the interval [0, 1] described in step (3a) is specifically:
E_N(X) = σ/√(2π) + μ/2
where μ denotes the expectation of X ~ N(μ, σ²) and σ denotes the standard deviation of X ~ N(μ, σ²).
4. The Bayesian network parameter learning method fusing expert prior knowledge according to claim 1, characterized in that the functional relation between the expectation expression E_N(X) and the variance expression D_N(X) of the normal distribution X ~ N(μ, σ²) on [0, 1] described in step (3b), and the variance expression D_N(X) of X ~ N(μ, σ²) on [0, 1], are respectively:
(i) the functional relation between E_N(X) and D_N(X) on [0, 1], specifically:
D_N(X) = E_N(X²) − E_N²(X)
where D_N(X) denotes the variance expression and E_N(X) the expectation expression of X ~ N(μ, σ²) on [0, 1];
(ii) the variance expression D_N(X) of X ~ N(μ, σ²) on [0, 1], specifically:
D_N(X) = σ²/2
where σ denotes the standard deviation of X ~ N(μ, σ²).
5. The Bayesian network parameter learning method fusing expert prior knowledge according to claim 1, characterized in that the expectation expression E_B(X) of the beta distribution, the variance expression D_B(X) of the beta distribution, the mode expression Mode(X) of the beta distribution on [0, 1], and the objective function M to be solved for approximating the normal distribution X ~ N(μ, σ²) with a beta distribution, described in step (3c), are respectively:
(i) the expectation expression E_B(X) of the beta distribution, specifically:
E_B(X) = α/(α + β)
where α denotes the shape parameter and β the location parameter of the beta distribution;
(ii) the variance expression D_B(X) of the beta distribution, specifically:
D_B(X) = αβ/((α + β)²(α + β + 1))
where α denotes the shape parameter and β the location parameter of the beta distribution;
(iii) the mode expression Mode(X) of the beta distribution on [0, 1], specifically:
Mode(X) = (α − 1)/(α + β − 2)
where α denotes the shape parameter and β the location parameter of the beta distribution;
(iv) the objective function M to be solved for approximating X ~ N(μ, σ²) with a beta distribution, whose expression is specifically:
M = min[(D_N(X) − D_B(X))² + (μ − Mode(X))²]
s.t. E_N(X) = E_B(X), α > 1, β > 1
where D_N(X) denotes the variance expression of X ~ N(μ, σ²) on [0, 1], D_B(X) the variance expression of the beta distribution, μ the expectation of X ~ N(μ, σ²), Mode(X) the mode expression of the beta distribution on [0, 1], E_N(X) the expectation expression of X ~ N(μ, σ²) on [0, 1], E_B(X) the expectation expression of the beta distribution, α the shape parameter of the beta distribution with α > 1, and β the location parameter of the beta distribution with β > 1.
6. The Bayesian network parameter learning method fusing expert prior knowledge according to claim 1, characterized in that the simplified expression of the objective function M described in step (4a) is obtained through the relation between the shape parameter α and the location parameter β, specifically:
α = β(2σ + √(2π)·μ) / (2√(2π) − 2σ − √(2π)·μ)
where α denotes the shape parameter and β the location parameter of the beta distribution, μ the expectation and σ the standard deviation of X ~ N(μ, σ²).
7. The Bayesian network parameter learning method fusing expert prior knowledge according to claim 1, characterized in that the judging of whether the objective function M has a minimum described in step (4b) proceeds as follows: differentiate M and solve for the zero point of the derivative; if a zero point exists, M has a minimum; otherwise, M has no minimum.
8. The Bayesian network parameter learning method fusing expert prior knowledge according to claim 1, characterized in that the Bayesian network parameter learning model fusing expert prior knowledge described in step (7) has the expression:
θ_ijk = (α + N_ijk) / (α + β + N_ijk)
where N_ijk denotes the number of samples in the small-sample data set D of the Bayesian network satisfying X_i = k under the parent configuration π(X_i) = j, α denotes the shape parameter and β the location parameter of the beta distribution.
CN201710865219.1A 2017-09-22 2017-09-22 A Bayesian network parameter learning method fusing expert prior knowledge Pending CN107679566A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710865219.1A CN107679566A (en) 2017-09-22 2017-09-22 A Bayesian network parameter learning method fusing expert prior knowledge

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710865219.1A CN107679566A (en) 2017-09-22 2017-09-22 A Bayesian network parameter learning method fusing expert prior knowledge

Publications (1)

Publication Number Publication Date
CN107679566A true CN107679566A (en) 2018-02-09

Family

ID=61136517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710865219.1A Pending CN107679566A (en) 2017-09-22 2017-09-22 A kind of Bayesian network parameters learning method for merging expert's priori

Country Status (1)

Country Link
CN (1) CN107679566A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063845B (en) * 2018-07-15 2021-12-07 大国创新智能科技(东莞)有限公司 Deep learning method based on generated samples and robot system
CN109063845A (en) * 2018-07-15 2018-12-21 大国创新智能科技(东莞)有限公司 Based on the deep learning method and robot system for generating sample
CN109214450A (en) * 2018-08-28 2019-01-15 北京航空航天大学 A kind of unmanned systems resource allocation methods based on Bayes's programmed instruction programmed learning algorithm
CN109214450B (en) * 2018-08-28 2022-05-10 北京航空航天大学 Unmanned system resource allocation method based on Bayesian program learning algorithm
CN109886944A (en) * 2019-02-02 2019-06-14 浙江大学 A kind of white matter high signal intensity detection and localization method based on multichannel chromatogram
CN110008350A (en) * 2019-03-06 2019-07-12 杭州哲达科技股份有限公司 A kind of pump Ankang knowledge base lookup method based on Bayesian inference
CN110705132A (en) * 2019-10-31 2020-01-17 哈尔滨工业大学 Guidance control system performance fusion evaluation method based on multi-source heterogeneous data
CN110705132B (en) * 2019-10-31 2023-04-28 哈尔滨工业大学 Guidance control system performance fusion evaluation method based on multi-source heterogeneous data
CN111913887A (en) * 2020-08-19 2020-11-10 中国人民解放军军事科学院国防科技创新研究院 Software behavior prediction method based on beta distribution and Bayesian estimation
CN111913887B (en) * 2020-08-19 2022-11-11 中国人民解放军军事科学院国防科技创新研究院 Software behavior prediction method based on beta distribution and Bayesian estimation
CN112163373A (en) * 2020-09-23 2021-01-01 中国民航大学 Radar system performance index dynamic evaluation method based on Bayesian machine learning
CN113139604A (en) * 2021-04-26 2021-07-20 东南大学 Heart rate fusion labeling method and system based on Bayesian prior probability
CN113780348A (en) * 2021-08-09 2021-12-10 浙江工业大学 Expert knowledge constraint-based Bayesian network model parameter learning method for state evaluation of high-voltage switch cabinet
CN115456220A (en) * 2022-09-29 2022-12-09 江苏佩捷纺织智能科技有限公司 Intelligent factory architecture method and system based on digital model
CN115456220B (en) * 2022-09-29 2024-03-15 江苏佩捷纺织智能科技有限公司 Intelligent factory architecture method and system based on digital model
CN117113524A (en) * 2023-07-17 2023-11-24 武汉理工大学 Sampling method, system, equipment and terminal integrating design knowledge
CN117113524B (en) * 2023-07-17 2024-03-26 武汉理工大学 Sampling method, system, equipment and terminal integrating design knowledge

Similar Documents

Publication Publication Date Title
CN107679566A (en) A Bayesian network parameter learning method fusing expert prior knowledge
Falk et al. A flexible full-information approach to the modeling of response styles.
Roberts et al. Characteristics of MML/EAP parameter estimates in the generalized graded unfolding model
Dolan et al. Fitting multivariate normal finite mixtures subject to structural equation modeling
Gatta et al. On finite sample performance of confidence intervals methods for willingness to pay measures
CN102393881B (en) A kind of high-precision detecting method of real-time many sensing temperatures data fusion
Venanzi et al. Trust-based fusion of untrustworthy information in crowdsourcing applications
CN109492076B (en) Community question-answer website answer credible evaluation method based on network
CN105260390A (en) Group-oriented project recommendation method based on joint probability matrix decomposition
CN103617259A (en) Matrix decomposition recommendation method based on Bayesian probability with social relations and project content
Taylor et al. Survival estimation and testing via multiple imputation
Fu et al. Objective Bayesian analysis of Pareto distribution under progressive Type-II censoring
CN111026877A (en) Knowledge verification model construction and analysis method based on probability soft logic
Nieto Modeling bivariate threshold autoregressive processes in the presence of missing data
Zhu et al. Autoregressive optimal transport models
CN101299218B (en) Method and device for searching three-dimensional model
CN106651427A (en) Data association method based on user behavior
CN110716998B (en) Fine scale population data spatialization method
CN109977131A (en) A kind of house type matching system
Anderson et al. Log-multiplicative association models as item response models
Kim Parameter estimation in stochastic volatility models with missing data using particle methods and the EM algorithm
Chen et al. Monte Carlo simulation in financial engineering
Fu et al. Estimation of multinomial probit-kernel integrated choice and latent variable model: comparison on one sequential and two simultaneous approaches
Wang et al. A new method on decision-making using fuzzy linguistic assessment variables and fuzzy preference relations
Yuan et al. Online Calibration in Multidimensional Computerized Adaptive Testing with Polytomously Scored Items

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180209