CN107766884A - Bayes fusion evaluation method based on representative point optimization - Google Patents

Bayes fusion evaluation method based on representative point optimization

Info

Publication number
CN107766884A
CN107766884A (application CN201710974547.5A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710974547.5A
Other languages
Chinese (zh)
Other versions
CN107766884B (en)
Inventor
段晓君
刘博文
晏良
徐琎
张胜迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN201710974547.5A
Publication of CN107766884A
Application granted
Publication of CN107766884B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/29 Graphical models, e.g. Bayesian networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Complex Calculations (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

The invention discloses a Bayes fusion evaluation method based on representative point optimization, which comprises the following steps: 1. construct a classical Bayes point estimation model and calculate the Bayes posterior estimates of the parameters (μ, D); 2. partition the prior sample data into n_r classes (n_r ≥ 4) using a clustering algorithm, and calculate the posterior estimates of the parameters with the n_r cluster centers as representative points; 3. construct an optimization function, comprising a deviation function and an information loss function, and calculate its value; 4. screen the optimal representative points according to the optimization function value to obtain the Bayes posterior estimates based on the representative points. The method uses prior information reasonably and effectively, quantifies the influence of the number of representative points on information loss and fusion efficiency, and thereby improves the precision of test evaluation.

Description

A Bayes fusion evaluation method based on representative point optimization
Technical field
The present invention relates mainly to the field of accuracy evaluation for guidance systems and positioning systems, and in particular to a Bayes fusion evaluation method based on representative point optimization.
Background technology
In accuracy tests for evaluating a guidance or positioning system, the sample size of each staged test is small, and evaluating the field data directly yields results of low precision. Reference [1] (Tang Xuemei, Zhang Jinhuai, Shao Fengchang, et al. Test Analysis and Evaluation of Weapon Systems under Small-Sample Conditions [M]. Beijing: National Defense Industry Press, 2001: 1-3) points out that, to overcome the small sample size in test evaluation, Bayes methods are frequently adopted. Their advantage is that prior information from various heterogeneous tests can be fully fused, improving the accuracy of the evaluation; however, traditional Bayes fusion evaluation faces two main difficulties: first, distortion of the prior information; second, an excessive number of prior samples.
When the prior information is distorted, directly fusing it with the live test data introduces a large deviation, and many publications have discussed this problem. Reference [2] (Huang Hanyan, Duan Xiaojun, Wang Zhengming. Posterior-weighted Bayes estimation considering the credibility of prior information [J]. Acta Aeronautica et Astronautica Sinica, 2008, 29(5): 1245-1251) analyzes different credibility measures, including credibility analysis based on data compatibility tests and credibility analysis combining physical priors with data priors, proposes a criterion for credibility-fused evaluation, and from that criterion derives a Bayes estimation method that accounts for the credibility of the prior information. For the problem of an excessive number of prior samples, the national military standard normalizes all prior information to a single test point; this to some extent mitigates the risk that a large prior sample "floods" the scarce field test data, and also reduces the risk of large deviations in the Bayes estimate when the empirical prior information is distorted. However, when prior information is normalized to a single test point, the relationship between its information loss and the fusion efficiency still lacks theoretical support.
Reference [3] (Flury, B. Principal points [J]. Biometrika, 1990, 77(1): 33-41) proposes the concept of representative points: so-called representative points (RPs) are a set of points that optimally characterize a distribution function under the MSE (mean square error) criterion, and are also known as principal points. Reference [4] (Hartigan J A, Wong M A. Algorithm AS 136: A K-Means Clustering Algorithm [J]. Applied Statistics, 1979, 28(1): 100-108) and reference [5] (Tarpey, T. A parametric k-means algorithm [J]. Computational Statistics, 2007, 22: 71-89) give common methods for selecting representative points, typically by finding them directly with a clustering algorithm such as k-means. Selecting representative points avoids the risk incurred by the national military standard's normalization of the prior information to a single test point; however, when the prior data are biased, estimating the representative points directly with a clustering algorithm may fail: when the number of representative points is too large, the inaccurate prior may strongly affect the evaluation result, and when it is too small, the information loss is excessive.
Summary of the invention
In view of the problems in the prior art, the problem to be solved by the present invention is to provide a Bayes fusion evaluation method based on representative point optimization with higher evaluation accuracy.
In order to solve the above technical problems, the present invention adopts the following technical scheme:
A Bayes fusion evaluation method based on representative point optimization comprises the following steps. Step 1: construct a classical Bayes point estimation model and calculate the Bayes posterior estimates of the parameters (μ, D). Step 2: partition the prior sample data into n_r classes (n_r ≥ 4) using a clustering algorithm, and calculate the posterior estimates of the parameters with the n_r cluster centers of those classes as representative points. Step 3: construct an optimization function, comprising a deviation function and an information loss function, and calculate its value. Step 4: choose the optimal representative points according to the optimization function value, and obtain the Bayes posterior estimates based on the representative points.
As a further improvement on the present invention:
The optimization function in step 3 is

F_{n_r} = D_{n_r} + L_{n_r}

where n_r is the number of selected representative points, D_{n_r} is the deviation function under that number of representative points, and L_{n_r} is the corresponding information loss function.
The deviation function is:
The information loss function is:
Further, the method of choosing the optimal representative points in step 4 is: increase n_r by 1 and repeat steps 2 and 3; when the value of n_r exceeds twice the second-stage sample size, stop the loop and select the class number n_r that minimizes the optimization function F_{n_r}; the n_r cluster centers corresponding to that value are the chosen optimal representative points.
Further, the detailed process of step 1 is:
Step 1.1: obtain the first-stage prior sample X^(1) = {X_1^(1), ..., X_{n_1}^(1)} and calculate the estimates of the normal-inverse-Gamma distribution parameters of the first-stage prior sample X^(1);
Step 1.2: obtain the sample X^(2) = {X_1^(2), ..., X_{n_2}^(2)} from the second-stage test and calculate the estimates of the normal-inverse-Gamma distribution parameters of the second-stage sample X^(2);
Step 1.3: the Bayes posterior estimates are obtained as:
Further, after the first-stage prior sample X^(1) is obtained in step 1.1, the sample mean X̄^(1) and sample variance S_(1)² of X^(1) are:
Further, the sample X^(2) of the second stage is obtained in step 1.2, and the sample mean X̄^(2) and sample variance S_(2)² of the second-stage sample X^(2) are calculated.
Further, the detailed process of step 2 is:
Step 2.1: using the K-means clustering algorithm, partition the prior sample into n_r classes; the i-th class of samples is denoted X_i^(1) = {X_{i(1)}^(1), ..., X_{i(n_i)}^(1)}, where n_i is the number of samples in the i-th class; take the cluster centers of the n_r classes as the preset representative points, denoted ξ_1^(1), ..., ξ_{n_r}^(1); calculate the sample variance of each class as shown in formula (7);
Step 2.2: calculate the sample mean X̄_{n_r}^(1) and sample variance S_{n_r}^(1)² of the representative points as shown in formula (8);
The estimates of the hyperparameters of the first-stage normal-inverse-Gamma distribution based on the representative points are then obtained;
Step 2.3: combining the sample mean X̄^(2) and sample variance S_(2)² of the second-stage sample X^(2) calculated in step 1.2, the estimates of the normal-inverse-Gamma distribution parameters of the second-stage sample X^(2) based on the representative points are calculated;
The Bayes posterior estimates based on the representative points are then obtained.
Compared with the prior art, the advantages of the present invention are:
The present invention is a Bayes fusion evaluation method based on representative point optimization. By constructing an optimization function that accounts for both information loss and systematic deviation, calculating its value, and screening the optimal representative points according to that value, the Bayes posterior fusion estimate based on those representative points is obtained. Prior information is thereby used reasonably and effectively, the influence of the number of representative points on information loss and fusion efficiency is quantified, and the precision of test evaluation is improved.
Brief description of the drawings
Fig. 1 is the system flow chart of the present invention;
Fig. 2 shows the relationship between the deviation function, the information loss function, and the number of representative points.
Embodiment
Fig. 1 shows an embodiment of the Bayes fusion evaluation method based on representative point optimization of the present invention. The method comprises the following steps. Step 1: construct a classical Bayes point estimation model and calculate the Bayes posterior estimates of the parameters (μ_x, σ_x). Step 2: partition the prior sample data into n_r classes (n_r ≥ 4) using a clustering algorithm, and calculate the posterior estimates of the parameters with the n_r cluster centers as representative points. Step 3: construct an optimization function, comprising a deviation function and an information loss function, and calculate its value. Step 4: screen the optimal representative points according to the optimization function value, and obtain the Bayes posterior fusion estimate based on those representative points. By constructing an optimization function that balances information loss and systematic deviation, calculating its value, and then choosing the optimal representative points according to that value, the present invention uses prior information reasonably and effectively, quantifies the influence of the number of representative points on information loss and fusion efficiency, and thereby improves the precision of test evaluation.
Taking the X-direction deviation as an example, the probability density function of X is

f(x) = (1 / (√(2π)·σ_x)) · exp(−(x − μ_x)² / (2σ_x²))

where σ_x is the standard deviation, μ_x is the mean, and the variance is denoted D_x = σ_x².
Suppose the two-stage deviation data samples obtained by the accuracy test are X^(1) = {X_1^(1), ..., X_{n_1}^(1)} and X^(2) = {X_1^(2), ..., X_{n_2}^(2)}, and take the first-stage data X^(1) as the prior sample.
In this embodiment, the specific method for calculating the Bayes posterior estimates of the parameters (μ_x, σ_x) in step 1 is:
Step 1.1: obtain the first-stage prior sample X^(1) and calculate the sample mean X̄^(1) and sample variance S_(1)² of the first-stage prior sample X^(1):
The estimates of the hyperparameters of the normal-inverse-Gamma distribution are then obtained:
Step 1.2: obtain the sample X^(2) from the second-stage test and calculate the sample mean X̄^(2) and sample variance S_(2)² of the second-stage sample X^(2):
The estimates of the normal-inverse-Gamma distribution parameters of the second-stage sample X^(2) are then obtained.
Step 1.3: the Bayes posterior estimates are:
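To make the step-1 computation concrete, the conjugate normal-inverse-Gamma updates (equations (2)-(4) in the claims) can be sketched in Python. This is an illustrative sketch rather than part of the patent, and the function names are my own:

```python
import numpy as np

def stage1_hyperparams(x1):
    """First-stage hyperparameters from the prior sample (cf. equation (2))."""
    x1 = np.asarray(x1, dtype=float)
    n1 = len(x1)
    xbar1 = x1.mean()
    s1_sq = np.mean((x1 - xbar1) ** 2)        # biased sample variance S_(1)^2
    # alpha1, beta1, mu1, k1
    return n1 * s1_sq / 2.0, (n1 - 1) / 2.0, xbar1, n1

def stage2_update(alpha1, beta1, mu1, k1, x2):
    """Conjugate update with the second-stage field sample (cf. equation (3))."""
    x2 = np.asarray(x2, dtype=float)
    n2 = len(x2)
    xbar2 = x2.mean()
    s2_sq = np.mean((x2 - xbar2) ** 2)
    alpha2 = (alpha1 + n2 * s2_sq / 2.0
              + n2 * k1 * (xbar2 - mu1) ** 2 / (2.0 * (n2 + k1)))
    beta2 = beta1 + n2 / 2.0
    mu2 = (k1 * mu1 + n2 * xbar2) / (k1 + n2)
    return alpha2, beta2, mu2, k1 + n2        # alpha2, beta2, mu2, k2

def bayes_posterior(alpha2, beta2, mu2):
    """Posterior point estimates (cf. equation (4)): fused mean and variance."""
    return mu2, alpha2 / (beta2 - 1.0)
```

Note that the fused mean μ_2 is a sample-size-weighted average of the prior mean and the field mean, which is why a large, biased prior sample can dominate the scarce field data.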
Step 2: partition the prior sample data into n_r classes using a clustering algorithm, and calculate the posterior estimates of the parameters with the cluster centers of the n_r classes as representative points.
Step 2.1: using the K-means clustering algorithm, partition the prior sample into n_r classes; the i-th class of samples is denoted X_i^(1) = {X_{i(1)}^(1), ..., X_{i(n_i)}^(1)}, where n_i is the number of samples in the i-th class; the n_r cluster centers of these classes are the preset representative points, denoted ξ_1^(1), ..., ξ_{n_r}^(1); calculate the sample variance of each class:
Step 2.2: calculate the sample mean X̄_{n_r}^(1) and sample variance S_{n_r}^(1)² of the representative points:
The estimates of the hyperparameters of the first-stage normal-inverse-Gamma distribution based on the representative points are then:
Step 2.3: combining the sample mean X̄^(2) and sample variance S_(2)² of the second-stage sample X^(2) calculated in step 1.2, the estimates of the normal-inverse-Gamma distribution parameters of the second-stage sample X^(2) based on the representative points are:
This yields the Bayes posterior estimates based on the representative points:
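Steps 2.1-2.2 can likewise be sketched. The minimal 1-D k-means below and the helper name are my own illustrative choices (any standard K-means implementation would serve), with the cluster center playing the role of each class mean as in formulas (7) and (8):

```python
import numpy as np

def kmeans_1d(x, k, iters=100, seed=0):
    """Tiny 1-D Lloyd-style k-means: returns centres and point labels."""
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.choice(np.unique(x), size=k, replace=False))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new = np.array([x[labels == j].mean() if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    return centers, labels

def representative_points(x1, n_r):
    """Steps 2.1-2.2: cluster the prior sample into n_r classes, take the
    cluster centres xi_1,...,xi_{n_r} as representative points, and compute
    their sample mean and biased sample variance (cf. formula (8))."""
    x1 = np.asarray(x1, dtype=float)
    centers, labels = kmeans_1d(x1, n_r)
    xbar_r = centers.mean()
    s_r_sq = np.mean((centers - xbar_r) ** 2)
    # per-class variances around each centre, cf. formula (7)
    class_vars = np.array([np.mean((x1[labels == i] - centers[i]) ** 2)
                           if np.any(labels == i) else 0.0
                           for i in range(n_r)])
    return centers, xbar_r, s_r_sq, class_vars
```

The mean and variance of the n_r centers can then be fed into the same conjugate updates as in step 1, with k_1 replaced by the number of representative points.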
Step 3: construct the optimization function, comprising the deviation function and the information loss function; calculate the deviation value D_{n_r}, the information loss value L_{n_r}, and the optimization function value F.
Optimization function:
Deviation value:
Information loss value:
The deviation is measured by the difference between the Bayes posterior mean based on the representative points and the classical Bayes fused mean. The information loss of the representative points is calculated following Cox (A note on grouping [J]. Journal of the American Statistical Association, 1957, 52: 543-547). In summary, the constructed optimization function is F_{n_r} = D_{n_r} + L_{n_r}.
Step 4: screen the optimal representative points according to the optimization function value, and obtain the Bayes posterior estimates based on those representative points.
The method of choosing the optimal representative points is: increase n_r by 1 and repeat steps 2 and 3; when the value of n_r exceeds twice the second-stage sample size, stop the loop; select the class number n_r that minimizes the optimization function F_{n_r}; the n_r cluster centers corresponding to that value are the chosen representative points, and the Bayes posterior estimates based on them are calculated according to step 2.
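The step-4 loop can be sketched generically. Since the closed forms of the deviation function D and the information loss function L appear only in the drawings, they are passed in here as callables; this is an assumption of the illustrative sketch, not the patent's code:

```python
def select_n_r(deviation_fn, info_loss_fn, n2, n_r_min=4):
    """Step 4: starting from n_r = n_r_min, evaluate F(n_r) = D(n_r) + L(n_r)
    and stop once n_r exceeds twice the second-stage sample size n2; return
    the n_r that minimises F together with the minimal value."""
    best_n_r, best_f = None, float("inf")
    for n_r in range(n_r_min, 2 * n2 + 1):
        f = deviation_fn(n_r) + info_loss_fn(n_r)
        if f < best_f:
            best_n_r, best_f = n_r, f
    return best_n_r, best_f

# toy shapes consistent with Fig. 2: the deviation grows with n_r (a biased
# prior is retained more fully), while the information loss shrinks with n_r,
# so F has an interior minimum
n_r_opt, f_opt = select_n_r(lambda k: 0.01 * k, lambda k: 1.0 / k, n2=10)
```

With these toy shapes the trade-off is visible directly: too few points lose information, too many let a biased prior dominate.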
It is verified theoretically below that, when the prior information and the real data have a systematic deviation, the Bayes fusion evaluation method based on representative point optimization is with high probability better than the Bayes credibility evaluation method, and is comprehensively better than the classical Bayes method. The analysis uses the estimation problem of the normal population parameters of the landing-point deviation; both theory and example confirm the advantage of this method.
The error of a Bayes estimate is generally measured by its posterior variance. Let the true posterior distribution be f(θ | X) and let its posterior variance serve as the posterior error estimate; the posterior variance of any other estimate θ* is:
The classical Bayes estimate does not consider the credibility of the prior information; for the estimate θ_2, the estimated posterior variance is:
Let θ′_2 and θ_2 denote the posterior error estimates of the parameter θ under a non-informative prior and under a fully credible prior sample, respectively; the point estimate provided by the credibility factor can be expressed as:
Its posterior variance is:
Wherein:
Similarly, the posterior variance of the estimate obtained by the representative-point method is:
In the case of no systematic deviation, the two coincide. In the presence of systematic deviation, however, since selecting representative points discards part of the systematic deviation, after representative-point selection the difference between fully trusting the prior and using a non-informative prior decreases with the reduction of the prior sample size.
Since k_1 < n_1, it can be deduced that

(θ′_{2Rp} − θ_2)² ≤ (θ′_2 − θ_2)²   (18)

Comparing (13), (14), and (15), the conclusion follows, with equality if and only if λ_0 = 1, i.e., the prior credibility p = 1.
It can be seen that, when the distribution functions of the different stages have a systematic deviation, the estimation accuracy of the method based on representative point optimization is higher than that of the credibility-based Bayes estimation method, and the classical Bayes estimation method has the lowest accuracy. The accuracy difference between the representative-point-optimization method and the improved Bayes estimation method is λ_0(1 − λ_0)[(θ′_2 − θ_2)² − (θ′_{2Rp} − θ_2)²], and the accuracy difference between the improved Bayes estimation method and the classical Bayes method is (1 − λ_0)²(θ′_2 − θ_2)²; as the credibility decreases, these gaps become more pronounced.
The beneficial effects of the present invention are further verified below with an example.
In the first-stage prior test and the second-stage field test, n_1 = 200 and n_2 = 10 sample points are chosen, respectively. The simulations use the normal distributions N(η, D_0) and N(0, D); the mean bias between the two distributions is η. Three methods are compared: the classical Bayes method, the credibility-based Bayes method, and the representative-point-optimized Bayes method of this paper. In the credibility Bayes method combined with representative points, the optimal number of representative points is selected by computation based on formula (12). The simulation results are shown in Table 1, whose last column, Opt, is the optimal number of representative points.
Table 1: comparison of the computational results of the three methods
From Table 1 it can be seen that: 1) when there is no difference between the prior-sample and actual-sample distributions, i.e. when η = 0, the classical Bayes method is the optimal choice; 2) when the difference between the prior-sample and actual-sample distributions is large, the Bayes fusion evaluation method based on representative point optimization reduces the influence of the systematic deviation in the unreliable prior information, outperforms both the credibility Bayes method and the classical Bayes method, and gives more reasonable results.
Regarding the computation of the optimization function, taking D = 1, D_0 = 1, η = 1 as an example, Fig. 2 plots the deviation function and the information loss function against the number of representative points; from Fig. 2 it can be seen that the optimal number of representative points in this example is 15.
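The behaviour reported in Table 1 for biased priors can be reproduced qualitatively with a small self-contained simulation. Everything below is an illustrative sketch rather than the patent's code: it uses the example's settings n_1 = 200, n_2 = 10, η = 1, D = D_0 = 1, a quantile-seeded 1-D k-means, n_r = 15 representative points (the optimum found in Fig. 2), and only the posterior-mean update μ_2 = (k_1μ_1 + n_2X̄^(2))/(k_1 + n_2), with k_1 set to the number of prior points used:

```python
import numpy as np

n1, n2, eta = 200, 10, 1.0        # stage sizes and mean bias, as in the example

def fused_mean(prior_points, field):
    """mu_2 = (k1*mu_1 + n2*xbar_2)/(k1 + n2), with k1 = len(prior_points)."""
    k1 = len(prior_points)
    return (k1 * prior_points.mean() + len(field) * field.mean()) / (k1 + len(field))

def kmeans_centres(x, k, iters=50):
    """1-D k-means seeded at quantiles (an illustrative initialisation)."""
    c = np.quantile(x, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        lab = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        c = np.array([x[lab == j].mean() if np.any(lab == j) else c[j]
                      for j in range(k)])
    return c

classic_errs, rp_errs = [], []
for seed in range(20):                       # 20 Monte Carlo replications
    rng = np.random.default_rng(seed)
    x1 = rng.normal(eta, 1.0, n1)            # biased prior sample from N(eta, D0)
    x2 = rng.normal(0.0, 1.0, n2)            # field sample from N(0, D), true mean 0
    classic_errs.append(abs(fused_mean(x1, x2)))                 # all 200 prior points
    rp_errs.append(abs(fused_mean(kmeans_centres(x1, 15), x2)))  # 15 representative points

classic_err, rp_err = np.mean(classic_errs), np.mean(rp_errs)
# with a biased prior, down-weighting it via representative points should give
# a smaller mean absolute error of the fused mean
print(classic_err, rp_err)
```

This only illustrates the bias-reduction mechanism; the patent's full method additionally screens n_r by minimizing F_{n_r} = D_{n_r} + L_{n_r} rather than fixing it at 15.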
The above are only preferred embodiments of the present invention, and the protection scope of the present invention is not limited to the above embodiments; all technical schemes falling under the idea of the present invention belong to its protection scope. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications made without departing from the principles of the present invention should also be regarded as falling within the protection scope of the present invention.

Claims (7)

1. A Bayes fusion evaluation method based on representative point optimization, characterized by comprising the following steps:
Step 1: construct a classical Bayes point estimation model and calculate the Bayes posterior estimates of the parameters (μ, D);
Step 2: partition the prior sample data into n_r classes (n_r ≥ 4) using a clustering algorithm, and calculate the posterior estimates of the parameters with the n_r cluster centers as representative points;
Step 3: construct an optimization function, comprising a deviation function and an information loss function, and calculate the optimization function value;
Step 4: screen the optimal representative points according to the optimization function value, and obtain the Bayes posterior estimates based on the representative points.
2. The Bayes fusion evaluation method based on representative point optimization according to claim 1, characterized in that the optimization function of step 3 is shown in formula (1):
F_{n_r} = D_{n_r} + L_{n_r}   (1)
where D_{n_r} is the deviation function under the given number of representative points and L_{n_r} is the corresponding information loss function;
The deviation function is:
The information loss function is:
3. The Bayes fusion evaluation method based on representative point optimization according to claim 2, characterized in that the method of screening the optimal representative points in step 4 is: increase n_r by 1 and repeat steps 2 and 3; when the value of n_r exceeds twice the second-stage sample size, stop the loop and select the class number n_r that minimizes the optimization function F_{n_r}; the n_r cluster centers corresponding to that value are the chosen optimal representative points.
4. The Bayes fusion evaluation method based on representative point optimization according to claim 1, characterized in that the detailed process of step 1 is:
Step 1.1: obtain the first-stage prior sample X^(1) = {X_1^(1), ..., X_{n_1}^(1)} and calculate the estimates of the normal-inverse-Gamma distribution parameters of X^(1):
α_1 = Σ_{i=1}^{n_1} (X_i^(1) − X̄^(1))² / 2 = n_1·S_(1)² / 2
β_1 = (n_1 − 1) / 2
μ_1 = X̄^(1)
k_1 = n_1
(2)
Step 1.2: obtain the sample X^(2) = {X_1^(2), ..., X_{n_2}^(2)} from the second-stage test and calculate the estimates of the distribution parameters of the normal-inverse-Gamma distribution of X^(2):
α_2 = α_1 + n_2·S_(2)²/2 + n_2·k_1·(X̄^(2) − μ_1)² / (2(n_2 + k_1))
β_2 = β_1 + n_2/2
μ_2 = (k_1·μ_1 + n_2·X̄^(2)) / (k_1 + n_2)
k_2 = k_1 + n_2
(3)
Step 1.3: the Bayes posterior estimates are:
μ̂_Bayes = μ_2,  D̂_Bayes = α_2 / (β_2 − 1)
(4)
5. The Bayes fusion evaluation method based on representative point optimization according to claim 4, characterized in that after the first-stage prior sample X^(1) = {X_1^(1), ..., X_{n_1}^(1)} is obtained in step 1.1, the sample mean X̄^(1) and sample variance S_(1)² of X^(1) are calculated as:
X̄^(1) = (1/n_1) Σ_{i=1}^{n_1} X_i^(1)
S_(1)² = (1/n_1) Σ_{i=1}^{n_1} (X_i^(1) − X̄^(1))²
(5)
6. The Bayes fusion evaluation method based on representative point optimization according to claim 5, characterized in that the sample X^(2) = {X_1^(2), ..., X_{n_2}^(2)} of the second stage is obtained in step 1.2, and the sample mean X̄^(2) and sample variance S_(2)² of the second-stage sample X^(2) are calculated as:
X̄^(2) = (1/n_2) Σ_{i=1}^{n_2} X_i^(2)
S_(2)² = (1/n_2) Σ_{i=1}^{n_2} (X_i^(2) − X̄^(2))²
(6)
7. The Bayes fusion evaluation method based on representative point optimization according to any one of claims 1 to 6, characterized in that the detailed process of step 2 is:
Step 2.1: Using the K-means clustering algorithm, divide the prior sample into $n_r$ classes; denote the $i$-th class sample by $X_{i(k)}^{(1)}$, $k = 1, \dots, n_i$, where $n_i$ is the number of samples in the $i$-th class. Take the $n_r$ cluster centres of these $n_r$ classes as the preset representative points, denoted $\xi_k^{(1)}$, $k = 1, \dots, n_r$. The sample variance of each class is calculated as shown in formula (7):
$$S_i^2 = \frac{1}{n_i}\sum_{k=1}^{n_i}\left(X_{i(k)}^{(1)}-\bar{X}_i^{(1)}\right)^2 \qquad (7)$$

where $\bar{X}_i^{(1)}$ is the mean of the $i$-th class.
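The clustering step can be sketched with a tiny hand-rolled one-dimensional K-means (a sketch only: the patent does not prescribe an implementation, and the data below is synthetic). The converged cluster centres play the role of the representative points $\xi_k^{(1)}$, and the per-class variances follow formula (7):

```python
import numpy as np

def kmeans_1d(x, n_r, iters=50, seed=0):
    """Minimal 1-D K-means: returns cluster centres (the representative
    points) and per-class biased variances S_i^2 as in formula (7)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    centres = rng.choice(x, size=n_r, replace=False)  # init from data points
    for _ in range(iters):
        # assign each point to its nearest centre, then recompute centres
        labels = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
        centres = np.array([x[labels == j].mean() for j in range(n_r)])
    variances = np.array([((x[labels == j] - centres[j]) ** 2).mean()
                          for j in range(n_r)])
    return centres, variances

x1 = np.array([0.9, 1.0, 1.2, 4.9, 5.0, 5.1])   # two well-separated groups
centres, variances = kmeans_1d(x1, n_r=2)
print(sorted(centres))
```

After convergence each centre equals its class mean, so subtracting `centres[j]` in the variance line matches formula (7) exactly.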
Step 2.2: Calculate the sample mean $\bar{X}_{n_r}^{(1)}$ and sample variance $S_{n_r}^{(1)2}$ of the representative points $\xi_k^{(1)}$, as shown in formula (8):
$$\left\{\begin{aligned} \bar{X}_{n_r}^{(1)} &= \frac{1}{n_r}\sum_{k=1}^{n_r}\xi_k^{(1)} \\ S_{n_r}^{(1)2} &= \frac{1}{n_r}\sum_{k=1}^{n_r}\left(\xi_k^{(1)}-\bar{X}_{n_r}^{(1)}\right)^2 \end{aligned}\right. \qquad (8)$$
The parameter estimates of the first-stage normal-inverse-Gamma distribution based on the representative points are then obtained as:
$$\left\{\begin{aligned} \alpha_{n_r} &= \frac{n_r S_{n_r}^{(1)2}}{2} \\ \beta_{n_r} &= \frac{n_r-1}{2} \\ \mu_{n_r} &= \bar{X}_{n_r}^{(1)} \\ k_{n_r} &= n_r \end{aligned}\right. \qquad (9)$$
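Formulas (8) and (9) turn the representative points into the hyperparameters of the normal-inverse-Gamma prior. A hedged sketch (the representative-point values and the function name are illustrative, not from the patent):

```python
import numpy as np

def prior_from_rep_points(xi):
    """Normal-inverse-Gamma prior hyperparameters from the representative
    points xi_k^(1), following formulas (8)-(9)."""
    xi = np.asarray(xi, dtype=float)
    n_r = xi.size
    mean_r = xi.mean()                   # X-bar_{n_r}^(1), formula (8)
    var_r = ((xi - mean_r) ** 2).mean()  # S_{n_r}^(1)2, biased 1/n_r form
    return {"alpha": n_r * var_r / 2.0,  # formula (9)
            "beta": (n_r - 1) / 2.0,
            "mu": mean_r,
            "k": n_r}

prior = prior_from_rep_points([9.7, 10.0, 10.4])  # hypothetical cluster centres
print(prior)
```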
Step 2.3: Combining the sample mean $\bar{X}^{(2)}$ and sample variance $S^{(2)2}$ of the second-stage sample $X^{(2)}$ calculated in step 1.2, the parameter estimates of the normal-inverse-Gamma distribution based on the representative points and the second-stage sample $X^{(2)}$ are calculated as:
$$\left\{\begin{aligned} \alpha_{2r} &= \alpha_{n_r} + \frac{n_2 S^{(2)2}}{2} + \frac{n_2 k_{n_r}\left(\bar{X}^{(2)}-\mu_{n_r}\right)^2}{2\left(k_{n_r}+n_2\right)} \\ \beta_{2r} &= \beta_{n_r} + n_2/2 \\ \mu_{2r} &= \frac{k_{n_r}\mu_{n_r}+n_2\bar{X}^{(2)}}{k_{n_r}+n_2} \\ k_{2r} &= k_{n_r}+n_2 \end{aligned}\right. \qquad (10)$$
The Bayes posterior estimates based on the representative points are then obtained:
$$\hat{\mu}_{Rp\text{-}Bayes} = \mu_{2r}, \qquad \hat{D}_{Rp\text{-}Bayes} = \frac{\alpha_{2r}}{\beta_{2r}-1} \qquad (11)$$
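Formulas (10) and (11) are a conjugate normal-inverse-Gamma update with the second-stage sample statistics, followed by the fused posterior point estimates. A sketch with illustrative numbers (none are from the patent; the dictionary layout mirrors the previous sketch):

```python
def rp_bayes_posterior(prior, n2, mean2, var2):
    """Conjugate update of the representative-point prior with the
    second-stage sample statistics (formula (10)), then the fused
    posterior mean/variance estimates (formula (11))."""
    k, mu = prior["k"], prior["mu"]
    alpha_2r = (prior["alpha"] + n2 * var2 / 2.0
                + n2 * k * (mean2 - mu) ** 2 / (2.0 * (k + n2)))
    beta_2r = prior["beta"] + n2 / 2.0
    mu_2r = (k * mu + n2 * mean2) / (k + n2)
    k_2r = k + n2                          # last line of formula (10); unused below
    mu_hat = mu_2r                         # formula (11), Rp-Bayes mean
    D_hat = alpha_2r / (beta_2r - 1.0)     # formula (11), Rp-Bayes variance
    return mu_hat, D_hat

# Illustrative prior (in the shape produced by formula (9)) and field stats
prior = {"alpha": 0.12, "beta": 1.0, "mu": 10.0, "k": 3}
mu_hat, D_hat = rp_bayes_posterior(prior, n2=4, mean2=10.2, var2=0.05)
print(mu_hat, D_hat)
```

The posterior mean is the precision-weighted blend of the prior mean and the second-stage sample mean, as formula (10) prescribes.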
CN201710974547.5A 2017-10-19 2017-10-19 Bayes fusion evaluation method based on representative point optimization Active CN107766884B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710974547.5A CN107766884B (en) 2017-10-19 2017-10-19 Bayes fusion evaluation method based on representative point optimization


Publications (2)

Publication Number Publication Date
CN107766884A true CN107766884A (en) 2018-03-06
CN107766884B CN107766884B (en) 2019-12-31

Family

ID=61269676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710974547.5A Active CN107766884B (en) 2017-10-19 2017-10-19 Bayes fusion evaluation method based on representative point optimization

Country Status (1)

Country Link
CN (1) CN107766884B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111077769A (en) * 2018-10-19 2020-04-28 Robert Bosch GmbH Method for controlling or regulating a technical system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106202929A (en) * 2016-07-11 2016-12-07 National University of Defense Technology An accuracy assessment method based on a Bayes mixed model
CN106570281A (en) * 2016-11-08 2017-04-19 Shanghai Radio Equipment Research Institute Bayesian reliability evaluation method for small samples based on similar product information
US20170278113A1 (en) * 2016-03-23 2017-09-28 Dell Products, Lp System for Forecasting Product Sales Using Clustering in Conjunction with Bayesian Modeling


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DUAN Xiao-jun et al.: "Heterogeneous information fusion of Bayesian model with applications in accuracy evaluation of carrier rocket", Journal of Astronautics *
MENG Fan-rong et al.: "An incremental clustering algorithm based on representative points", Application Research of Computers *
DUAN Xiao-jun et al.: "Derivation of a class of Bayes formulas and its application in small-sample evaluation", Journal of Hunan University of Technology *
HUANG Han-yan et al.: "Posterior weighted Bayes estimation considering the credibility of prior information", Acta Aeronautica et Astronautica Sinica *


Also Published As

Publication number Publication date
CN107766884B (en) 2019-12-31

Similar Documents

Publication Publication Date Title
Cannamela et al. Controlled stratification for quantile estimation
CN106202929B (en) An accuracy assessment method based on a Bayes mixed model
CN108182511A (en) A demand-side response reserve value assessment method based on the rank-sum ratio method
CN103902798B (en) Data preprocessing method
CN107766884A (en) Bayes fusion evaluation method based on representative point optimization
CN103970651A (en) Software architecture safety assessment method based on module safety attributes
CN105373473B (en) CDR accuracy testing method and test system based on original signaling decoding
CN106611339B (en) Seed user screening method, and product user influence evaluation method and device
CN104915430B (en) A MapReduce-based method for obtaining constraint-relation rough set rules
TW200921445A (en) Circuit analysis method
Robiolo et al. Transactions and paths: Two use case based metrics which improve the early effort estimation
CN110008972A (en) Method and apparatus for data enhancing
Woods Consequences of ignoring guessing when estimating the latent density in item response theory
CN107491576B (en) Missile component reliability analysis method based on performance degradation data
CN107292213B (en) Handwriting quantitative inspection and identification method
CN109327476A (en) Method and system for evaluating risk of Web attack on information system
CN108629454A (en) A method for predicting college and university admission lines using second and third application choices
CN111242433B (en) Power data identification method and device, computer equipment and storage medium
Zhang et al. Digital finance in the context of common wealth helps regional economic development of high quality
Fair Information content of DSGE forecasts
CN113656267A (en) Method and device for calculating energy efficiency of equipment, electronic equipment and storage medium
CN106250593A (en) Reliability estimation method based on similar product information
Asgharzadeh et al. The generalized exponential distribution as a lifetime model under different loss functions
CN111652515A (en) Method, device and equipment for evaluating operation efficiency of regional power distribution network
de Santi et al. Graph neural networks for robust parameter inference in cosmology: the first steps before real data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant