CN111612160A - Incremental Bayesian network learning method based on particle swarm optimization algorithm - Google Patents
- Publication number
- CN111612160A (application CN202010453090.5A)
- Authority
- CN
- China
- Prior art keywords
- bayesian network
- particle
- particle swarm
- optimal
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06N20/00—Machine learning
- G06N3/006—Computing arrangements based on biological models; artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
Abstract
The invention relates to the technical field of network optimization, and in particular to an incremental Bayesian network learning method based on a particle swarm optimization algorithm, comprising at least the following steps: adopting a Bayesian network coding method resembling a two-dimensional array, different from the traditional adjacency matrix and adjacency list; fully exploiting the advantages of the discrete particle swarm optimization algorithm in optimum search, and combining it appropriately with the Bayesian network to carry out the search for the optimal structure during Bayesian network structure learning; and applying the particle swarm algorithm to Bayesian network learning, simulating the real environment of growing data, turning the usual one-shot learning process into an incremental one, and dynamically updating the structure and parameters of the network model over time to accommodate continuously arriving new data.
Description
Technical Field
The invention belongs to the technical field of network optimization, and particularly relates to an incremental Bayesian network learning method based on a particle swarm optimization algorithm.
Background
With the rapid development of big data, statistical relational models based on probability have received wide attention and become a research hotspot. The Bayesian network is an important research tool: combining probability theory and graph theory, it is a classical probabilistic graphical model for describing causal relationships among variables. Compared with deep learning, a Bayesian network can explain a decision result through the causal relationships encoded in its graph structure, can easily incorporate prior knowledge, and can handle incomplete data; in recent years it has become an important tool for handling uncertainty in artificial intelligence and has attracted great attention from both industry and academia.
However, in the real world much data is generated continuously and incrementally by data sources. Incremental learning is an online learning process that updates learning results as new data arrive in sequence: existing results are not discarded, but are continuously updated and refined with the new data, which makes it well suited to this setting. In recent years many scholars have studied incremental learning methods for Bayesian networks. However, traditional Bayesian network incremental learning methods usually assume that the target probability distribution is unchanged, whereas real-world problems are inherently dynamic; when the target probability distribution changes, the existing methods struggle to handle it effectively.
The particle swarm optimization algorithm has a simple structure, few parameters, fast convergence, and good robustness in high-dimensional optimization, but its performance degrades when it is applied directly to discrete problems; the invention therefore adapts the particle swarm optimization algorithm to the discrete problem of Bayesian network structure learning.
Disclosure of Invention
The invention aims to provide an incremental Bayesian network learning method based on a particle swarm optimization algorithm, in which the particle swarm algorithm is applied to Bayesian network learning, the real environment of growing data is simulated, the usual one-shot learning process is turned into an incremental one, and the structure and parameters of the network model are updated dynamically over time to accommodate continuously arriving new data.
To achieve this aim, the invention adopts the following technical scheme:
the invention provides an incremental Bayesian network learning method based on a particle swarm optimization algorithm, which at least comprises the following steps:
adopting a Bayesian network coding method resembling a two-dimensional array, different from the traditional adjacency matrix and adjacency list;
fully exploiting the advantages of the discrete particle swarm optimization algorithm in optimum search, and combining it appropriately with the Bayesian network to carry out the search for the optimal structure during Bayesian network structure learning;
when data grow incrementally, generating in each batch the Bayesian network that best matches the current data set, and generating the next batch's network from the current batch's network together with the newly input data; after the last batch of data has been input and processed by the particle swarm algorithm, the resulting Bayesian network is the final target network; the network's parameters, namely the conditional probability distributions and the marginal probability distributions, are then computed from the finally generated Bayesian network structure;
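The patent does not spell out the exact layout of the two-dimensional-array-like coding it contrasts with the adjacency matrix; as one plausible, purely illustrative reading, a particle's position can be a jagged parent-list structure, where row j holds the parent indices of node j (the helper name `matrix_to_parent_lists` is ours, not the patent's):

```python
def matrix_to_parent_lists(matrix):
    """Convert an adjacency matrix (matrix[i][j] == 1 means edge i -> j)
    into a jagged parent-list encoding: row j lists the parents of node j."""
    n = len(matrix)
    return [[i for i in range(n) if matrix[i][j]] for j in range(n)]

# The network 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3 in both encodings:
adj_matrix = [[0, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]]
parents = matrix_to_parent_lists(adj_matrix)
print(parents)  # [[], [0], [0], [1, 2]]
```

Unlike the dense n x n matrix, the jagged form stores only the edges that exist, which is convenient when scoring each node against its parent set.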
firstly, an initial Bayesian network is obtained with the particle swarm algorithm, whose optimization process comprises the following basic steps:
step 1: initializing the positions and velocities of the particle swarm from the data read in;
step 2: computing each particle's optimum according to the evaluation function, storing each particle's individual best value and best position, and storing the swarm's best position;
step 3: obtaining the swarm's best value by comparison, and updating the particles' velocities and positions according to the update formulas;
step 4: checking whether the termination condition is met; if so, the Bayesian network result based on particle swarm optimization is obtained, otherwise going to step 2;
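The four basic steps above follow the textbook particle swarm loop. As a hedged sketch, the version below optimizes a continuous toy function, since the patent's discrete Bayesian-network variant (positions as network structures) is not fully specified here; all parameter values and names are illustrative:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Textbook continuous PSO illustrating steps 1-4; the patent's discrete
    variant would replace positions with Bayesian network structures."""
    rng = random.Random(seed)
    # Step 1: initialize positions and velocities.
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    # Step 2: per-particle best value/position and the swarm best.
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):  # Step 4: stop after a fixed iteration budget.
        for i in range(n_particles):
            for d in range(dim):  # Step 3: standard velocity/position update.
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
print(best_val)  # close to 0 for the sphere function
```

In the structure-learning setting, `f` would be the (negated) BIC score and the velocity/position update would act on edge sets rather than real vectors.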
after the initial Bayesian network has been obtained with the particle swarm optimization algorithm, the position and velocity of each particle are further updated and refined by the incremental learning method as new data arrive, until the final required Bayesian network is obtained;
The concrete implementation steps of the method are as follows:
step 1: reading in the original data set, converting the data into a form the program can process, and dividing the processed data evenly into several groups;
step 2: inputting a group of data and randomly initializing a valid particle population, including the particles' initial positions and velocities;
step 3: in the first network structure learning, calculating each particle's fitness value according to the BIC scoring function, and obtaining the best network structure represented by each particle and the global best network structure over all particles;
step 4: inputting a new group of data to start a new round of incremental learning, taking the per-particle best network structures and the global best network structure obtained in the previous round as the particles' initialization values;
step 5: updating each particle's position and velocity with the particle swarm algorithm;
step 6: recalculating the BIC scoring function for each particle;
step 7: obtaining the global best network structure over all particles by comparison;
step 8: checking whether the program termination condition is met, namely whether a good enough Bayesian network has been computed or the maximum number of iterations has been reached; if so, ending the current batch's incremental learning, outputting the global optimal solution, calculating the Bayesian network's parameters, namely the conditional probability distributions and the marginal probability distributions, and terminating the program; if not, continuing the current program flow;
step 9: updating the best network structure represented by each particle and jumping to step 4.
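The fitness used in steps 3 and 6 is the BIC scoring function. The patent gives no formula, so the sketch below uses the standard decomposable BIC for discrete Bayesian networks (maximum-likelihood log-likelihood minus (log N)/2 times the number of free parameters); the function and argument names are our own assumptions:

```python
import math
from collections import Counter

def bic_score(data, parents, arity):
    """Standard BIC of a discrete Bayesian network.
    `data` is a list of tuples, `parents[j]` the parent indices of variable j,
    `arity[j]` the number of states of variable j. Higher is better."""
    n = len(data)
    score = 0.0
    for j, pa in enumerate(parents):
        # Counts of (parent configuration, child value) and of parent configs.
        joint = Counter((tuple(row[p] for p in pa), row[j]) for row in data)
        marg = Counter(tuple(row[p] for p in pa) for row in data)
        # Maximum-likelihood log-likelihood contribution of node j.
        ll = sum(c * math.log(c / marg[cfg]) for (cfg, _), c in joint.items())
        # Free parameters: (arity_j - 1) per parent configuration.
        n_params = (arity[j] - 1) * math.prod(arity[p] for p in pa)
        score += ll - 0.5 * math.log(n) * n_params
    return score

# Perfectly correlated toy data: the edge 0 -> 1 should outscore no edge.
data = [(0, 0), (0, 0), (1, 1), (1, 1)]
print(bic_score(data, parents=[[], [0]], arity=[2, 2]))
print(bic_score(data, parents=[[], []], arity=[2, 2]))
```

The penalty term is what the invention's redesigned fitness would rebalance between old-data fit and new-data fit.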
The invention, an incremental Bayesian network learning method based on a particle swarm optimization algorithm, has the following beneficial effects:
1. The fitness function of the particle swarm optimization algorithm is redesigned by drawing on the Bayesian information criterion (BIC) scoring idea, balancing two aspects of the solution vector: the fit of the Bayesian network to the old data and its fit to the new data.
2. The particle swarm optimization algorithm converges quickly early in the iteration but slows down and loses precision in the later stage; to mitigate this, a threshold is set and the particles are randomly perturbed when they are trapped in a local optimum.
3. The particle swarm algorithm is applied to Bayesian network learning, the real environment of growing data is simulated, the usual one-shot learning process is turned into an incremental one, and the structure and parameters of the network model are updated dynamically over time to accommodate continuously arriving new data.
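Beneficial effect 2 describes a stagnation threshold with random perturbation but gives no details, so the following is a minimal sketch under our own assumptions (uniform noise, a caller-maintained stagnation counter; all names are illustrative):

```python
import random

def perturb_if_stuck(positions, stagnation, threshold, rng, scale=1.0):
    """Randomly kick every particle once the global best has stalled for
    `threshold` iterations; returns the (possibly reset) stagnation counter."""
    if stagnation < threshold:
        return stagnation  # still making progress: leave the particles alone
    for p in positions:
        for d in range(len(p)):
            p[d] += rng.uniform(-scale, scale)  # random disturbance
    return 0  # the counter restarts after the kick

rng = random.Random(1)
swarm_positions = [[0.0, 0.0], [1.0, -1.0]]
counter = perturb_if_stuck(swarm_positions, stagnation=5, threshold=5, rng=rng)
print(counter)  # 0: the swarm was kicked and the counter reset
```

In the main loop the caller would increment the stagnation counter whenever the global best fails to improve and reset it to zero on improvement.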
Drawings
FIG. 1 is an overall flow diagram of the present invention;
FIG. 2 is a block diagram of a Bayesian network calculation process by a particle swarm algorithm in the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
referring to fig. 1-2, the invention adopts a Bayesian network coding method different from the traditional adjacency matrix, applies the particle swarm algorithm to Bayesian network learning, simulates the real environment of growing data, turns the usual one-shot learning process into an incremental one, and dynamically updates the structure and parameters of the network model over time to accommodate continuously arriving new data. The specific steps are as follows:
Step 1: read in the original data set, convert the data to numerical type, and divide the preprocessed data into several batches to simulate data growth in reality.
Step 2: import one batch of data and randomly initialize each particle's position and velocity (the structure of the Bayesian network).
Step 3: in the first network structure learning, compute each particle's fitness value according to the BIC scoring function, obtaining the global best and local best network structures.
Step 4: import a new batch of data, take the best network structure each particle found on the old data as that particle's initial position, and randomly initialize the velocities.
Step 5: update each particle's position and velocity with the particle swarm algorithm, continuously optimizing the Bayesian network structure.
Step 6: compute the BIC score of each particle.
Step 7: obtain, by traversal, the global best network structure with the highest BIC score.
Step 8: judge whether the Bayesian network structure computation has finished (i.e. a Bayesian network matching the data has been obtained or the iterations are exhausted); if so, output the global best Bayesian network structure and start the network's parameter learning.
Step 9: update the local best network structures and jump to Step 4.
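The nine steps above amount to a batch-wise driver loop that warm-starts each round from the swarm learned on the previous batch. A minimal skeleton, with the PSO machinery abstracted into caller-supplied hooks (`init_swarm` and `pso_step` are our names, not the patent's):

```python
def incremental_learn(batches, init_swarm, pso_step, iters_per_batch=50):
    """Skeleton of the incremental loop: each new data batch is appended to
    the data seen so far, the swarm learned on old data serves as the warm
    start, and PSO iterations refine the global best structure."""
    seen = []      # data arrives in batches, not all at once
    swarm = None
    best = None
    for batch in batches:
        seen.extend(batch)                    # Step 4: a new batch arrives
        if swarm is None:
            swarm = init_swarm(seen)          # Step 2: random init, first batch only
        for _ in range(iters_per_batch):      # Steps 5-7: update, rescore, compare
            swarm, best = pso_step(swarm, seen)
        # Step 8 would test termination and fit parameters for `best` here
    return best
```

With real hooks, `pso_step` would perform one velocity/position update and return the swarm plus the current global best network; here the hooks are whatever the caller supplies.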
In the first network learning, the initial Bayesian network is obtained by combining the particle swarm algorithm with the Bayesian network, fully exploiting the particle swarm algorithm's advantages in search; the basic steps are:
Step 1: initialize the positions and velocities of the particle swarm from the data read in.
Step 2: compute each particle's optimum according to the evaluation function, storing each particle's individual best value and best position, and the swarm's best position.
Step 3: obtain the swarm's best value by comparison, and update the particles' velocities and positions according to the update formulas.
Step 4: check whether the termination condition is met; if so, the Bayesian network result based on particle swarm optimization is obtained, otherwise go to Step 2.
Claims (3)
1. An incremental Bayesian network learning method based on a particle swarm optimization algorithm is characterized by comprising the following steps:
adopting a Bayesian network coding method resembling a two-dimensional array, different from the traditional adjacency matrix and adjacency list;
fully exploiting the advantages of the discrete particle swarm optimization algorithm in optimum search, and combining it appropriately with the Bayesian network to carry out the search for the optimal structure during Bayesian network structure learning;
when data grow incrementally, generating in each batch the Bayesian network that best matches the current data set, and generating the next batch's network from the current batch's network together with the newly input data; after the last batch of data has been input and processed by the particle swarm algorithm, the resulting Bayesian network is the final target network; the network's parameters, namely the conditional probability distributions and the marginal probability distributions, are then computed from the finally generated Bayesian network structure;
firstly, solving an initial Bayesian network with the particle swarm algorithm;
after the initial Bayesian network has been obtained with the particle swarm optimization algorithm, the positions and velocities of all particles are further updated and refined by the incremental learning method as new data arrive, until the finally required Bayesian network is obtained.
2. The incremental Bayesian network learning method based on the particle swarm optimization algorithm according to claim 1, wherein the particle swarm optimization algorithm comprises the following basic steps:
step 1: initializing the positions and velocities of the particle swarm from the data read in;
step 2: computing each particle's optimum according to the evaluation function, storing each particle's individual best value and best position, and storing the swarm's best position;
step 3: obtaining the swarm's best value by comparison, and updating the particles' velocities and positions according to the update formulas;
step 4: checking whether the termination condition is met; if so, the Bayesian network result based on particle swarm optimization is obtained, otherwise going to step 2.
3. The incremental Bayesian network learning method based on the particle swarm optimization algorithm according to claim 1, wherein the concrete implementation steps of the incremental learning method are as follows:
step 1: reading in the original data set, converting the data into a form the program can process, and dividing the processed data evenly into several groups;
step 2: inputting a group of data and randomly initializing a valid particle population, including the particles' initial positions and velocities;
step 3: in the first network structure learning, calculating each particle's fitness value according to the BIC scoring function, and obtaining the best network structure represented by each particle and the global best network structure over all particles;
step 4: inputting a new group of data to start a new round of incremental learning, taking the per-particle best network structures and the global best network structure obtained in the previous round as the particles' initialization values;
step 5: updating each particle's position and velocity with the particle swarm algorithm;
step 6: recalculating the BIC scoring function for each particle;
step 7: obtaining the global best network structure over all particles by comparison;
step 8: checking whether the program termination condition is met, namely whether a good enough Bayesian network has been computed or the maximum number of iterations has been reached; if so, ending the current batch's incremental learning, outputting the global optimal solution, calculating the Bayesian network's parameters, namely the conditional probability distributions and the marginal probability distributions, and terminating the program; if not, continuing the current program flow;
step 9: updating the best network structure represented by each particle and jumping to step 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010453090.5A CN111612160A (en) | 2020-05-26 | 2020-05-26 | Incremental Bayesian network learning method based on particle swarm optimization algorithm |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111612160A (en) | 2020-09-01 |
Family
ID=72200621
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112911705A (en) * | 2021-02-02 | 2021-06-04 | 湖南大学 | Bayesian iteration improved particle swarm optimization algorithm-based indoor positioning method |
CN114464291A (en) * | 2021-12-22 | 2022-05-10 | 北京理工大学 | MDI dosage suggestion system based on Bayesian optimization |
CN114861834A (en) * | 2022-07-04 | 2022-08-05 | 深圳新闻网传媒股份有限公司 | Method for continuously updating data information of big data storage system |
CN114861834B (en) * | 2022-07-04 | 2022-09-30 | 深圳新闻网传媒股份有限公司 | Method for continuously updating data information of big data storage system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200901 |