CN109951327A - A kind of network failure data synthesis method based on Bayesian mixture models - Google Patents
Publication number: CN109951327A
Application number: CN201910165006.7A
Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a network fault data synthesis method based on a Bayesian mixture model. The method addresses the drop in prediction performance that existing network fault prediction suffers because fault data are scarce. Starting from a collected network data set with an imbalanced class distribution, the minority-class fault data in the data set are represented with a Bayesian mixture model; the model parameters are then estimated, and the trained model is used to generate minority-class fault samples so that the fault and non-fault classes become balanced. With the method of the invention, the characteristics of an imbalanced network data set can be captured accurately, and the accuracy of network fault prediction is effectively improved.
Description
Technical field
The present invention relates to a network fault data synthesis method based on a Bayesian mixture model, and belongs to the technical field of imbalanced-data processing.
Background art
With the development of Internet technology, more and more users rely on a variety of network services. Network operators strive to provide users with higher-quality, more stable streaming-media video services. When network faults occur, the quality of the user experience easily degrades. Conversely, if an operator can accurately predict network faults in advance and take measures to resolve the problems likely to occur in the network, the user experience can be improved effectively. Accurate prediction and timely handling of user-affecting faults are therefore essential for network operators.
In practical systems, network fault data make up a relatively small share of the full data set a system collects; in other words, the probability of a network fault is far lower than the probability of normal network operation. The network data set therefore has an imbalanced class distribution. An imbalanced data set is one in which one class of data has markedly fewer samples than the other classes. Here, the volume of network fault data (the minority class) is far smaller than the volume of normal network data (the majority class). When a traditional binary classifier is trained on imbalanced data, it is usually biased: prediction accuracy is very high on the majority class and very low on the minority class. Methods for handling imbalanced data sets are usually based on sampling, which changes the distribution of the data so that the imbalanced data set becomes balanced.
Most existing methods handle imbalanced data by directly generating new minority-class samples from the available samples, e.g. the Synthetic Minority Oversampling Technique (SMOTE). These approaches are intuitive, but because they do not model the underlying distribution of the minority class, the generated samples are not necessarily helpful for classification and can even harm it. The root cause is that the newly generated minority-class samples are not representative, and so they cannot be applied well to network fault prediction.
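For illustration, the SMOTE idea criticized above (interpolate between a minority sample and one of its nearest minority neighbours) can be sketched in a few lines of numpy. This is a minimal sketch, not the patent's method; the function name, `k`, and the toy data are illustrative:

```python
import numpy as np

def smote_sample(minority, n_new, k=3, rng=None):
    """Generate n_new synthetic samples by interpolating between a randomly
    chosen minority point and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    minority = np.asarray(minority, dtype=float)
    n = len(minority)
    # pairwise distances within the minority class
    d = np.linalg.norm(minority[:, None, :] - minority[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                      # exclude self-matches
    neighbours = np.argsort(d, axis=1)[:, :k]        # k nearest neighbours
    base = rng.integers(0, n, size=n_new)            # random base points
    nb = neighbours[base, rng.integers(0, k, size=n_new)]
    gap = rng.random((n_new, 1))                     # interpolation factor in [0, 1)
    return minority[base] + gap * (minority[nb] - minority[base])

minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new = smote_sample(minority, n_new=10, rng=0)
```

Because every synthetic point is a convex combination of two existing points, SMOTE never leaves the convex hull of the minority samples — which is exactly why it cannot capture the distributional structure the patent models explicitly.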
Summary of the invention
The object of the present invention is to overcome the defects of existing network fault data processing by proposing a network fault data synthesis method based on a Bayesian mixture model. The method uses a Bayesian mixture model to describe the distribution of the network fault data (the minority class): the existing fault data are first modeled with the Bayesian mixture model, the model parameters are then estimated, and finally new fault data are generated from the fitted model, so that the imbalanced data set becomes relatively balanced.
The technical scheme of the invention is a network fault data synthesis method based on a Bayesian mixture model, comprising the following steps:
Step 1: Let the collected network data set be X = {x_n}, n = 1, …, N, where each x_n consists of six attributes: packet loss rate, terminal download rate, transmission delay, jitter, video transmission quality, and end-user experience score. The corresponding label set is Y = {y_n}, with y_n = 0 or 1; that is, X carries two class labels, where y_n = 0 labels the network-normal class and y_n = 1 the network-fault class. Since the normal class contains far more samples than the fault class, the set of x_n with y_n = 1 is defined as the minority class X_alm, whose elements x_i^alm are the minority-class samples and whose size is N_alm, and the set of x_i with y_i = 0 as the majority class X_maj, whose elements x_i^maj are the majority-class samples and whose size is N_maj;
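The partition of step 1 can be sketched with a boolean mask. The attribute values below are toy illustrations, not data from the patent:

```python
import numpy as np

# x: N samples with the six attributes listed above (packet loss rate,
# download rate, delay, jitter, video quality, user-experience score);
# y: 0 = network normal, 1 = network fault
x = np.array([[0.1, 5.0, 20.0, 2.0, 4.5, 4.0],
              [0.2, 4.8, 25.0, 3.0, 4.2, 3.9],
              [5.0, 0.5, 90.0, 9.0, 1.2, 1.0]])
y = np.array([0, 0, 1])

x_alm = x[y == 1]   # minority (fault) class, N_alm samples
x_maj = x[y == 0]   # majority (normal) class, N_maj samples
```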
Step 2: A Bayesian mixture model is selected to represent the distribution of X_alm; its probability distribution function is

p(x_n) = Σ_{j=1}^{K} π_j(V) · t(x_n | μ_j, Λ_j, ν_j),

where K is the number of mixture components, and π_j(V), μ_j, Λ_j and ν_j denote the weight, mean, covariance matrix and degrees-of-freedom parameter of the j-th mixture component. t(x_n | μ_j, Λ_j, ν_j) is the probability density function of the t distribution; it can be expressed through a Gaussian scaled by a Gamma-distributed mixing variable, where N(·) and Gam(·) denote the Gaussian and Gamma distribution functions respectively and u_nj is the latent variable associated with x_n and the j-th mixture component. The weights π_j(V) satisfy Σ_{j=1}^{K} π_j(V) = 1 and follow the stick-breaking construction

π_j(V) = V_j ∏_{l=1}^{j−1} (1 − V_l).

The variable V_j obeys a Beta distribution, i.e. p(V_j) = Beta(V_j | 1, α), where α is the hyperparameter of the Beta distribution. In addition, (μ_j, Λ_j) obeys a joint Gaussian-Wishart distribution, i.e. the product of a Gaussian distribution and a Wishart distribution, N(·)W(·):

p(μ_j, Λ_j) = N(μ_j | m_j, λ_j Λ_j) W(Λ_j | W_j, ρ_j),

where m_j, λ_j, ρ_j and W_j are the hyperparameters of the joint Gaussian-Wishart distribution: m_j is a six-dimensional column vector, λ_j and ρ_j are scalars, and W_j is a 6 × 6 matrix. A latent variable z_n = (z_n1, …, z_nK) is introduced to indicate which component of the t mixture generated the current data point x_n; z_nj = 1 when x_n is generated by the j-th mixture component. Based on the above, the hyperparameters of the whole model are {m_j, λ_j, ρ_j, W_j, ν_j, α}.
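A small sketch of the stick-breaking weights of step 2, assuming the standard Dirichlet-process construction π_j = V_j · ∏_{l&lt;j}(1 − V_l) implied by the Beta(V_j | 1, α) prior; the closing stick V_K = 1 is an assumption made here so the weights sum to one:

```python
import numpy as np

def stick_breaking_weights(v):
    """pi_j = V_j * prod_{l<j} (1 - V_l); with V_K = 1 the weights sum to 1."""
    v = np.asarray(v, dtype=float)
    # remaining stick length before breaking piece j: prod_{l<j} (1 - V_l)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

v = np.array([0.5, 0.5, 1.0])   # V_1, V_2, and the closing stick V_3 = 1
pi = stick_breaking_weights(v)  # [0.5, 0.25, 0.25]
```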
Step 3: Use X_alm to estimate the parameters of the mixture model, as follows:
3-1) Generate N_alm random integers uniformly distributed on the interval [1, K] and count the probability with which each integer occurs; that is, if the integer j is produced N_j times, then δ_j = N_j/N_alm. For each x_i^alm, the corresponding latent variable z_n is initially distributed according to these probabilities;
z_n is a K-dimensional vector whose component z_nj (j = 1, …, K) takes a value in {0, 1};
3-2) Set the initial values of the hyperparameters {m_j, λ_j, ρ_j, W_j, ν_j} and α: for all j (j = 1, …, K), m_j = 0, λ_j = 1, ρ_j is any number between 3 and 20, W_j = 10I with I the identity matrix, ν_j is any number between 1 and 100, and α is any number between 1 and 10; in addition, set the iteration counter itr = 1;
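The initialization of steps 3-1) and 3-2) can be sketched as follows; K and N_alm are illustrative values chosen here, not prescribed by the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N_alm = 5, 200

# step 3-1: uniform random component labels on [1, K] ...
labels = rng.integers(1, K + 1, size=N_alm)
# ... occurrence probabilities delta_j = N_j / N_alm ...
delta = np.bincount(labels, minlength=K + 1)[1:] / N_alm
# ... and one-hot initial z_n vectors with z_nj in {0, 1}
z = np.eye(K)[labels - 1]

# step 3-2: hyperparameter initial values (rho_j, nu_j, alpha from the stated ranges)
m = np.zeros((K, 6))
lam = np.ones(K)
rho = np.full(K, 10.0)          # any value in [3, 20]
W = 10.0 * np.eye(6)            # W_j = 10 I
nu = np.full(K, 50.0)           # any value in [1, 100]
alpha = 5.0                     # any value in [1, 10]
itr = 1                         # iteration counter
```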
3-3) Update the distribution of the latent variables u_nj, i.e. q(u_nj), by updating its hyperparameters; in the first iteration, the expectations required here are taken from the initial distributions of step 3-1);
3-4) Update the distribution of the random variables (μ_j, Λ_j), i.e. q(μ_j, Λ_j), by updating the corresponding hyperparameters;
3-5) Update the distribution of the random variables V_j, i.e. q(V_j), by updating the corresponding hyperparameters;
3-6) Update the distribution of the latent variables z_n. The expectations ⟨·⟩ appearing in the update are computed using the standard gamma function Γ(·) and its derivative Γ(·)′; the expectations ⟨u_nj⟩ and ⟨ln u_nj⟩ are computed as given in steps 3-3) and 3-4);
3-7) Update the degrees-of-freedom parameters ν_j; that is, solve the equation containing ν_j, for which a common numerical method such as Newton's method yields the solution ν_j;
3-8) Compute the likelihood value LIK_itr after the current iteration, where itr is the current iteration count;
3-9) Compute the difference between the likelihood values after the current and previous iterations, ΔLIK = LIK_itr − LIK_itr−1. If ΔLIK ≤ δ, the parameter-estimation procedure ends; otherwise, increase itr by 1 and go to step 3-3) for the next iteration. The threshold δ takes a value in the range 10^−5 to 10^−4;
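The outer loop of step 3 — iterate the variational updates until the likelihood gain drops below δ — can be sketched generically. `update_step` here is a toy stand-in for steps 3-3) through 3-8), not the patent's actual update equations:

```python
import numpy as np

def run_until_converged(update_step, init_state, delta=1e-4, max_iter=500):
    """Repeat update_step until the likelihood gain is <= delta (step 3-9)."""
    state, prev_lik = init_state, -np.inf
    for itr in range(1, max_iter + 1):
        state, lik = update_step(state)
        if lik - prev_lik <= delta:   # Delta-LIK convergence test
            break
        prev_lik = lik
    return state, itr

# toy update whose likelihood saturates at 10, standing in for steps 3-3 .. 3-8
state, itr = run_until_converged(lambda s: (s + 1, 10.0 - 10.0 / (s + 1)), 0)
```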
Step 4: Use the estimated Bayesian mixture model to generate a new network data set (X_alm)′. Let the amount of data to be generated be N′:
4-1) Generate a random number ε uniformly distributed between 0 and 1;
4-2) Generate random draws obeying the estimated distribution q(V_j);
4-3) Compute the weights π_j(V) from these draws;
4-4) Generate random draws obeying the estimated distribution q(μ_j, Λ_j);
4-5) Using the estimated parameters: if ε ∈ [0, π_1], generate one sample obeying the t distribution t(μ_1, Λ_1, ν_1); if ε ∈ (Σ_{j=1}^{k−1} π_j, Σ_{j=1}^{k} π_j] for some k = 2, …, K−1, generate one sample obeying t(μ_k, Λ_k, ν_k); if ε ∈ (Σ_{j=1}^{K−1} π_j, 1], generate one sample obeying t(μ_K, Λ_K, ν_K);
4-6) Repeat steps 4-1) to 4-5) N′ times to obtain (X_alm)′. The final network fault data set is X_alm ∪ (X_alm)′, and the full data set after synthesis is X_maj ∪ X_alm ∪ (X_alm)′.
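The sampling of step 4 can be sketched as follows. Two assumptions are made here that the patent does not spell out: Λ_j is treated as the scale (covariance) matrix of each t component, and each t variate is drawn as a Gaussian draw divided by the square root of a chi-square mixing variable — a standard construction, but a sketch rather than the patent's exact procedure:

```python
import numpy as np

def sample_t_mixture(weights, mus, Lambdas, nus, n, rng=None):
    """Pick component j by comparing a uniform draw against the cumulative
    weights (step 4-5), then draw from t(mu_j, Lambda_j, nu_j)."""
    rng = np.random.default_rng(rng)
    cum = np.cumsum(weights)
    dim = len(mus[0])
    out = np.empty((n, dim))
    for i in range(n):
        j = np.searchsorted(cum, rng.random())            # epsilon -> component index
        g = rng.multivariate_normal(np.zeros(dim), Lambdas[j])
        u = rng.chisquare(nus[j]) / nus[j]                # gamma/chi-square mixing variable
        out[i] = mus[j] + g / np.sqrt(u)                  # multivariate t draw
    return out

weights = [0.7, 0.3]
mus = [np.zeros(2), np.full(2, 5.0)]
Lambdas = [np.eye(2), np.eye(2)]
x_new = sample_t_mixture(weights, mus, Lambdas, [5.0, 5.0], n=200, rng=1)
```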
The invention has the following advantages:
1. By generating network fault data, the present invention resolves the inaccurate classification of imbalanced data that arises in network fault prediction tasks.
2. The present invention models the distribution of the network fault data with a Bayesian mixture model and thereby captures the characteristics of the data well; compared with traditional methods, the newly generated network fault data are more representative and more discriminative for classification.
3. The Bayesian mixture model designed by the present invention can adaptively determine the optimal model structure from the minority-class data.
Description of the drawings
Fig. 1 is the flow chart of the method of the present invention.
Fig. 2 shows the distribution after fitting artificially generated samples with the Bayesian mixture model of the present invention.
Fig. 3 shows the change curve during the iterative process of the Bayesian mixture model of the present invention.
Fig. 4 compares the G values of the Kmeans-SMOTE method, the GMM oversampling method and the method of the present invention.
Specific embodiment
The invention is described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, the present invention provides a network fault data synthesis method based on a Bayesian mixture model, comprising the following steps:
Step 1: Let the collected network data set be X = {x_n}, n = 1, …, N, where each x_n consists of six attributes: packet loss rate, terminal download rate, transmission delay, jitter, video transmission quality, and end-user experience score. The corresponding label set is Y = {y_n}, with y_n = 0 or 1; that is, X carries two class labels, where y_n = 0 labels the network-normal class and y_n = 1 the network-fault class. Since the normal class contains far more samples than the fault class, the set of x_n with y_n = 1 is defined as the minority class X_alm, whose elements x_i^alm are the minority-class samples and whose size is N_alm, and the set of x_i with y_i = 0 as the majority class X_maj, whose elements x_i^maj are the majority-class samples and whose size is N_maj;
Step 2: A Bayesian mixture model is selected to represent the distribution of X_alm; its probability distribution function is

p(x_n) = Σ_{j=1}^{K} π_j(V) · t(x_n | μ_j, Λ_j, ν_j),

where K is the number of mixture components, and π_j(V), μ_j, Λ_j and ν_j denote the weight, mean, covariance matrix and degrees-of-freedom parameter of the j-th mixture component. t(x_n | μ_j, Λ_j, ν_j) is the probability density function of the t distribution; it can be expressed through a Gaussian scaled by a Gamma-distributed mixing variable, where N(·) and Gam(·) denote the Gaussian and Gamma distribution functions respectively and u_nj is the latent variable associated with x_n and the j-th mixture component. The weights π_j(V) satisfy Σ_{j=1}^{K} π_j(V) = 1 and follow the stick-breaking construction

π_j(V) = V_j ∏_{l=1}^{j−1} (1 − V_l).

The variable V_j obeys a Beta distribution, i.e. p(V_j) = Beta(V_j | 1, α), where α is the hyperparameter of the Beta distribution. In addition, (μ_j, Λ_j) obeys a joint Gaussian-Wishart distribution (i.e. the product of a Gaussian distribution and a Wishart distribution, N(·)W(·)):

p(μ_j, Λ_j) = N(μ_j | m_j, λ_j Λ_j) W(Λ_j | W_j, ρ_j),

where m_j, λ_j, ρ_j and W_j are the hyperparameters of the joint Gaussian-Wishart distribution: m_j is a six-dimensional column vector, λ_j and ρ_j are scalars, and W_j is a 6 × 6 matrix. A latent variable z_n = (z_n1, …, z_nK) also needs to be introduced to indicate which component of the t mixture generated the current data point x_n; z_nj = 1 when x_n is generated by the j-th mixture component. Based on the above, the hyperparameters of the whole model are {m_j, λ_j, ρ_j, W_j, ν_j, α}.
Step 3: Use X_alm to estimate the parameters of the mixture model, as follows:
(3-1) Generate N_alm random integers uniformly distributed on the interval [1, K] and count the probability with which each integer occurs; that is, if the integer j is produced N_j times, then δ_j = N_j/N_alm. For each x_i^alm, the corresponding latent variable z_n is initially distributed according to these probabilities. It should be noted that z_n is a K-dimensional vector whose component z_nj (j = 1, …, K) takes a value in {0, 1};
(3-2) Set the initial values of the hyperparameters {m_j, λ_j, ρ_j, W_j, ν_j} and α: for all j (j = 1, …, K), m_j = 0, λ_j = 1, ρ_j can be any number between 3 and 20, W_j = 10I with I the identity matrix, ν_j any number between 1 and 100, and α any number between 1 and 10; in addition, set the iteration counter itr = 1;
(3-3) Update the distribution of the latent variables u_nj, i.e. q(u_nj), by updating its hyperparameters; in the first iteration, the expectations required here are taken from the initial distributions of step (3-1);
(3-4) Update the distribution of the random variables (μ_j, Λ_j), i.e. q(μ_j, Λ_j), by updating the corresponding hyperparameters;
(3-5) Update the distribution of the random variables V_j, i.e. q(V_j), by updating the corresponding hyperparameters;
(3-6) Update the distribution of the latent variables z_n. The expectations ⟨·⟩ appearing in the update are computed using the standard gamma function Γ(·) and its derivative Γ(·)′; the expectations ⟨u_nj⟩ and ⟨ln u_nj⟩ are computed as given in steps (3-3) and (3-4);
(3-7) Update the degrees-of-freedom parameters ν_j; that is, solve the equation containing ν_j, for which a common numerical method such as Newton's method quickly yields the solution ν_j;
(3-8) Compute the likelihood value LIK_itr after the current iteration, where itr is the current iteration count;
(3-9) Compute the difference between the likelihood values after the current and previous iterations, ΔLIK = LIK_itr − LIK_itr−1. If ΔLIK ≤ δ, the parameter-estimation procedure ends; otherwise, increase itr by 1 and go to step (3-3) for the next iteration. The threshold δ takes a value in the range 10^−5 to 10^−4.
Step 4: Use the estimated Bayesian mixture model to generate a new network data set (X_alm)′. Let the amount of data to be generated be N′:
(4-1) Generate a random number ε uniformly distributed between 0 and 1;
(4-2) Generate random draws obeying the estimated distribution q(V_j);
(4-3) Compute the weights π_j(V) from these draws;
(4-4) Generate random draws obeying the estimated distribution q(μ_j, Λ_j);
(4-5) Using the estimated parameters: if ε ∈ [0, π_1], generate one sample obeying the t distribution t(μ_1, Λ_1, ν_1); if ε ∈ (Σ_{j=1}^{k−1} π_j, Σ_{j=1}^{k} π_j] for some k = 2, …, K−1, generate one sample obeying t(μ_k, Λ_k, ν_k); if ε ∈ (Σ_{j=1}^{K−1} π_j, 1], generate one sample obeying t(μ_K, Λ_K, ν_K);
(4-6) Repeat steps (4-1) to (4-5) N′ times to obtain (X_alm)′. The final network fault data set is X_alm ∪ (X_alm)′, and the full data set after synthesis is X_maj ∪ X_alm ∪ (X_alm)′.
Performance comparison:
The clustering performance of the Bayesian mixture model (DPMM) is tested first. The idea is as follows: take a number of samples drawn from several clusters whose cluster labels are withheld, run unsupervised learning with the DPMM clustering algorithm, and finally compare the clustering result with the original sample labels to show the classification quality.
In this experiment, 1000 three-dimensional samples were generated from three single Gaussian models, and the experiment ran for 200 iterations. The probability distribution of the sample points after fitting with the Bayesian mixture model designed by the present invention is shown in Fig. 2. 942 samples were correctly classified, a fitting accuracy of 94.2%. Fig. 3 shows the curve of the number of categories over the 200 iterations; the mixture-component count K of the model generally fluctuates around 3 and converges after roughly 160 iterations. The experimental results show that the Bayesian mixture model can automatically determine the model structure from the described samples.
A replication experiment of the method of the present invention was then carried out on network data provided by a network operator. New samples synthesized with the method of the invention were added to the minority class so that the new data set became relatively balanced; a naive Bayes classifier was then trained on the new data set as the base classifier and evaluated on a test data set. The traditional Kmeans-SMOTE method and a GMM oversampling method were used for comparison. The test data set uses the original data; minority-to-majority ratios of 1:30, 1:60 and 1:89 were selected for training and testing. The experimental results are shown in Fig. 4: relative to the Kmeans-SMOTE algorithm and the GMM oversampling method, the G value of the method of the present invention (DPMM) improves by 16% and 4.8% respectively. The method of the present invention therefore effectively improves classification prediction on imbalanced network data.
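Assuming the G value compared in Fig. 4 is the geometric mean of the per-class recalls (the usual G-mean for imbalanced classification; the patent does not define it explicitly), it can be computed as:

```python
import numpy as np

def g_mean(y_true, y_pred):
    """G value = sqrt(sensitivity * specificity) over the two classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    sens = tp / max(np.sum(y_true == 1), 1)   # recall on the fault class
    spec = tn / max(np.sum(y_true == 0), 1)   # recall on the normal class
    return np.sqrt(sens * spec)

# toy labels: 2 faults (one caught), 4 normals (one false alarm)
g = g_mean([1, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 1])
```

Unlike plain accuracy, the G-mean collapses to zero when either class is ignored entirely, which is why it is the standard yardstick for imbalanced classifiers.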
Claims (1)
1. A network fault data synthesis method based on a Bayesian mixture model, characterized in that the method comprises the following steps:
Step 1: Let the collected network data set be X = {x_n}, n = 1, …, N, where each x_n consists of six attributes: packet loss rate, terminal download rate, transmission delay, jitter, video transmission quality, and end-user experience score. The corresponding label set is Y = {y_n}, with y_n = 0 or 1; that is, X carries two class labels, where y_n = 0 labels the network-normal class and y_n = 1 the network-fault class. Since the normal class contains far more samples than the fault class, the set of x_n with y_n = 1 is defined as the minority class X_alm, whose elements x_i^alm are the minority-class samples and whose size is N_alm, and the set of x_i with y_i = 0 as the majority class X_maj, whose elements x_i^maj are the majority-class samples and whose size is N_maj;
Step 2: A Bayesian mixture model is selected to represent the distribution of X_alm; its probability distribution function is

p(x_n) = Σ_{j=1}^{K} π_j(V) · t(x_n | μ_j, Λ_j, ν_j),

where K is the number of mixture components, and π_j(V), μ_j, Λ_j and ν_j denote the weight, mean, covariance matrix and degrees-of-freedom parameter of the j-th mixture component. t(x_n | μ_j, Λ_j, ν_j) is the probability density function of the t distribution; it can be expressed through a Gaussian scaled by a Gamma-distributed mixing variable, where N(·) and Gam(·) denote the Gaussian and Gamma distribution functions respectively and u_nj is the latent variable associated with x_n and the j-th mixture component. The weights π_j(V) satisfy Σ_{j=1}^{K} π_j(V) = 1 and follow the stick-breaking construction

π_j(V) = V_j ∏_{l=1}^{j−1} (1 − V_l).

The variable V_j obeys a Beta distribution, i.e. p(V_j) = Beta(V_j | 1, α), where α is the hyperparameter of the Beta distribution. In addition, (μ_j, Λ_j) obeys a joint Gaussian-Wishart distribution, i.e. the product of a Gaussian distribution and a Wishart distribution, N(·)W(·):

p(μ_j, Λ_j) = N(μ_j | m_j, λ_j Λ_j) W(Λ_j | W_j, ρ_j),

where m_j, λ_j, ρ_j and W_j are the hyperparameters of the joint Gaussian-Wishart distribution: m_j is a six-dimensional column vector, λ_j and ρ_j are scalars, and W_j is a 6 × 6 matrix. A latent variable z_n = (z_n1, …, z_nK) is introduced to indicate which component of the t mixture generated the current data point x_n; z_nj = 1 when x_n is generated by the j-th mixture component. Based on the above, the hyperparameters of the whole model are {m_j, λ_j, ρ_j, W_j, ν_j, α};
Step 3: Use X_alm to estimate the parameters of the mixture model, as follows:
3-1) Generate N_alm random integers uniformly distributed on the interval [1, K] and count the probability with which each integer occurs; that is, if the integer j is produced N_j times, then δ_j = N_j/N_alm. For each x_i^alm, the corresponding latent variable z_n is initially distributed according to these probabilities;
z_n is a K-dimensional vector whose component z_nj (j = 1, …, K) takes a value in {0, 1};
3-2) Set the initial values of the hyperparameters {m_j, λ_j, ρ_j, W_j, ν_j} and α: for all j, j = 1, …, K, m_j = 0, λ_j = 1, ρ_j is any number between 3 and 20, W_j = 10I with I the identity matrix, ν_j is any number between 1 and 100, and α is any number between 1 and 10; in addition, set the iteration counter itr = 1;
3-3) Update the distribution of the latent variables u_nj, i.e. q(u_nj), by updating its hyperparameters; in the first iteration, the expectations required here are taken from the initial distributions of step 3-1);
3-4) Update the distribution of the random variables (μ_j, Λ_j), i.e. q(μ_j, Λ_j), by updating the corresponding hyperparameters;
3-5) Update the distribution of the random variables V_j, i.e. q(V_j), by updating the corresponding hyperparameters;
3-6) Update the distribution of the latent variables z_n. The expectations ⟨·⟩ appearing in the update are computed using the standard gamma function Γ(·) and its derivative Γ(·)′; the expectations ⟨u_nj⟩ and ⟨ln u_nj⟩ are computed as given in steps 3-3) and 3-4);
3-7) Update the degrees-of-freedom parameters ν_j; that is, solve the equation containing ν_j, for which a common numerical method such as Newton's method yields the solution ν_j;
3-8) Compute the likelihood value LIK_itr after the current iteration, where itr is the current iteration count;
3-9) Compute the difference between the likelihood values after the current and previous iterations, ΔLIK = LIK_itr − LIK_itr−1. If ΔLIK ≤ δ, the parameter-estimation procedure ends; otherwise, increase itr by 1 and go to step 3-3) for the next iteration. The threshold δ takes a value in the range 10^−5 to 10^−4;
Step 4: Use the estimated Bayesian mixture model to generate a new network data set (X_alm)′. Let the amount of data to be generated be N′:
4-1) Generate a random number ε uniformly distributed between 0 and 1;
4-2) Generate random draws obeying the estimated distribution q(V_j);
4-3) Compute the weights π_j(V) from these draws;
4-4) Generate random draws obeying the estimated distribution q(μ_j, Λ_j);
4-5) Using the estimated parameters: if ε ∈ [0, π_1], generate one sample obeying the t distribution t(μ_1, Λ_1, ν_1); if ε ∈ (Σ_{j=1}^{k−1} π_j, Σ_{j=1}^{k} π_j] for some k = 2, …, K−1, generate one sample obeying t(μ_k, Λ_k, ν_k); if ε ∈ (Σ_{j=1}^{K−1} π_j, 1], generate one sample obeying t(μ_K, Λ_K, ν_K);
4-6) Repeat steps 4-1) to 4-5) N′ times to obtain (X_alm)′. The final network fault data set is X_alm ∪ (X_alm)′, and the full data set after synthesis is X_maj ∪ X_alm ∪ (X_alm)′.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910165006.7A CN109951327B (en) | 2019-03-05 | 2019-03-05 | Network fault data synthesis method based on Bayesian hybrid model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109951327A true CN109951327A (en) | 2019-06-28 |
CN109951327B CN109951327B (en) | 2021-08-20 |
Family
ID=67008458
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910165006.7A Active CN109951327B (en) | 2019-03-05 | 2019-03-05 | Network fault data synthesis method based on Bayesian hybrid model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109951327B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110688484A (en) * | 2019-09-24 | 2020-01-14 | 北京工商大学 | Microblog sensitive event speech detection method based on unbalanced Bayesian classification |
CN111652375A (en) * | 2020-06-02 | 2020-09-11 | 中南大学 | Intelligent detection and diagnosis method and device for cooling coil faults based on Bayesian inference and virtual sensing |
CN115037634A (en) * | 2022-05-30 | 2022-09-09 | 中电信数智科技有限公司 | K8s network fault prediction method based on Markov chain and Bayesian network |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103226595A (en) * | 2013-04-17 | 2013-07-31 | 南京邮电大学 | Clustering method for high dimensional data based on Bayes mixed common factor analyzer |
CN103955709A (en) * | 2014-05-13 | 2014-07-30 | 西安电子科技大学 | Weighted synthetic kernel and triple markov field (TMF) based polarimetric synthetic aperture radar (SAR) image classification method |
CN107180246A (en) * | 2017-04-17 | 2017-09-19 | 南京邮电大学 | A kind of IPTV user's report barrier data synthesis method based on mixed model |
US20180081914A1 (en) * | 2016-09-16 | 2018-03-22 | Oracle International Corporation | Method and system for adaptively imputing sparse and missing data for predictive models |
CN109327404A (en) * | 2018-09-30 | 2019-02-12 | 武汉思普崚技术有限公司 | P2P prediction technique and system, server and medium based on Naive Bayes Classification Algorithm |
Also Published As
Publication number | Publication date |
---|---|
CN109951327B (en) | 2021-08-20 |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||