CN114219069B - Brain effect connection network learning method based on an automatic variational autoencoder

Brain effect connection network learning method based on an automatic variational autoencoder

Info

Publication number
CN114219069B
CN114219069B (granted publication of application CN202111356966.5A)
Authority
CN
China
Prior art keywords
network
brain
fmri data
effect connection
latent variables
Prior art date
Legal status
Active
Application number
CN202111356966.5A
Other languages
Chinese (zh)
Other versions
CN114219069A (en)
Inventor
冀俊忠
邹爱笑
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN202111356966.5A
Publication of CN114219069A
Application granted
Publication of CN114219069B
Status: Active

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent


Abstract

A brain effect connection network learning method based on an automatic variational autoencoder, belonging to the field of deep learning algorithms. The method first initializes the model parameters, then uses the encoding network of an automatic variational autoencoder to learn latent variables from the fMRI data of each brain region, and obtains generated fMRI data from the latent variables through a decoding network. When the generated fMRI data closely match the real fMRI data, the model learns an optimal brain effect connection network over the course of iterative training. By fusing a proportional-integral controller into the variational autoencoder, the invention adaptively adjusts the model parameters and automatically and accurately learns the effect connection network of the human brain in an end-to-end training process. The method therefore has few hyperparameters, high accuracy, and strong generalization ability, and effectively addresses the difficulty of manual parameter tuning in existing deep learning methods for brain effect connection networks.

Description

Brain effect connection network learning method based on an automatic variational autoencoder
Technical Field
The invention belongs to the fields of brain science research and neural network deep learning theory and application, and particularly relates to a brain effect connection network learning method based on an automatic variational autoencoder.
Background
Human brain connectome studies attempt to build brain network atlases at multiple levels that characterize the function and structure of the living human brain. A brain effect connection network is a graph model composed of nodes, typically defined as brain regions, and directed edges that describe the causal influence that neural activity in one brain region exerts on another. Learning brain effect connection networks from human functional magnetic resonance imaging (fMRI) data with computational methods has become a leading research hotspot in this field.
In recent years, with the continuing convergence of information science and neuroscience, many conventional machine learning and data mining methods have been successfully applied to learning brain effect connection networks. However, these methods are limited by shallow models and learning mechanisms and struggle to extract deep features from fMRI data, which greatly limits their development.
With the rapid development of deep learning and its great success in fields such as image and speech processing, several deep learning methods have been explored for learning brain effect connection networks from fMRI data, for example multi-layer perceptron neural networks, recurrent neural network Granger causality, and brain effect connection learning based on generative adversarial networks. Experimental results show that these methods achieve better performance than traditional machine learning methods. However, they currently require many manually set hyperparameters, and algorithm performance depends heavily on these settings: once the parameters are set unreasonably, the algorithm can hardly learn the brain effect connection network accurately.
Disclosure of Invention
Aiming at the challenges faced by brain effect connection network learning, the invention provides a brain effect connection network learning method based on an automatic variational autoencoder. The method automatically adjusts the model parameters, so that the model adaptively learns the brain effect connection network while generating brain-region fMRI data.
To achieve the above purpose, the technical scheme adopted by the invention is a brain effect connection network learning method based on an automatic variational autoencoder. The method first uses an encoding network to learn latent variables from the fMRI data of each brain region, and then generates fMRI data for each brain region through a decoding network based on the latent variables. When the generated fMRI data closely match the real fMRI data, the model learns an optimal brain effect connection network during iterative training.
The brain effect connection network learning method based on an automatic variational autoencoder is characterized by comprising the following steps:
Step (1): parameter setting: the brain region data comprises the number n of brain regions, an initialized brain effect connection parameter matrix A (the Pearson correlation coefficient between brain regions is calculated as the initialized brain effect connection matrix), a super parameter lambda of a network sparse loss function, an expected KL divergence value V KL, a proportional controller coefficient K P and an integral controller coefficient K I.
Step (2): the encoder is used for learning latent variables from fMRI data, and the specific steps are as follows:
Step (2.1): the brain effect connection parameter matrix and brain region fMRI data are encoded into latent variables by using a structural equation model, and the expression is as follows:
Z = (I - A^T)X (1)
where Z = [z_1, ..., z_n] denotes the latent variables, I the identity matrix, A the brain effect connection parameter matrix, and X = [x_1, ..., x_n] the fMRI data of each brain region.
Step (2.2): a multi-layer perceptron-based coding network is designed, which consists of a 3-layer neural network structure (comprising an input layer, a hidden layer and an output layer), and then the mean and variance of the posterior distribution of latent variables are estimated through the coding network. Assuming that the latent variable obeys normal distribution, the posterior distribution of the latent variable can be deduced from the obtained mean and variance. Next, through monte carlo sampling and re-parameterization techniques, the latent variables corresponding to each brain region can be sampled from the distribution of the latent variables.
Step (3): brain region fMRI data is generated from the obtained latent variables using a decoding network. Designing a decoding network based on a multi-layer perceptron, wherein the decoding network consists of a 3-layer neural network structure (comprising an input layer, a hidden layer and an output layer) and is used for obtaining the distribution of generated fMRI data of each brain region from the latent variables obtained in the step (2), and the expression is as follows:
p_θ(X|Z) = ReLU((I - A^T)^{-1} Z θ) (2)
where p_θ(X|Z) denotes the distribution of brain-region fMRI data learned from the latent variables, ReLU the activation function, and θ the weight coefficients learned by the decoding network during the model's forward and backward propagation.
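Continuing the sketch above, a hedged reading of equation (2): the decoder applies the learned weights θ (realized here as a small MLP, an assumption) and the mixing matrix (I - A^T)^{-1}, implemented with a linear solve rather than an explicit inverse for numerical stability:

```python
class SEMDecoder(nn.Module):
    """Decoder of step (3): X_hat = ReLU((I - A^T)^{-1} (Z θ)), Eq. (2)."""
    def __init__(self, encoder: SEMEncoder, t_len: int, hidden: int = 64):
        super().__init__()
        self.encoder = encoder             # shares the same A matrix
        self.theta = nn.Sequential(        # θ: decoding weights (3 layers)
            nn.Linear(t_len, hidden), nn.ReLU(), nn.Linear(hidden, t_len))

    def forward(self, Z):                  # Z: (n_regions, t_len)
        A = self.encoder.A
        I = torch.eye(A.shape[0], device=Z.device)
        # solve (I - A^T) X_hat = Z θ instead of inverting (I - A^T)
        return torch.relu(torch.linalg.solve(I - A.T, self.theta(Z)))
```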
Step (4): a penalty function is designed that includes fMRI data generation penalty terms and network sparseness penalty terms. The goal is to minimize the loss function so that the model automatically learns a brain effect connection network during iterative training. The specific steps for constructing the loss function are as follows:
Step (4.1): constructing brain region fMRI data generates loss terms. In order to make the generated fMRI data approximate to real fMRI data, a lower bound of evidence is used as an objective function for data generation, and the expression is as follows:
Wherein L ELBO represents a brain region fMRI data generation loss term, Representing the resulting fMRI data, phi and theta representing the network weight coefficients learned by the encoder and decoder during the model's forward and backward propagation, respectively, p (Z) representing the true distribution of the latent variable, q φ (z|x) representing the posterior distribution of the latent variable resulting from step (2.2), and/>Representing the distribution of fMRI data of each brain region learned from latent variables,/>D KL(qφ (z|x) |p (Z)) is a KL divergence value indicating the expectation of the generated fMRI data, and represents the error of the generated fMRI data from the actual fMRI data.
Step (4.2): since the KL divergence plays an important role in the data generation process, too large or too small KL divergence value can influence the learning performance of the model. Therefore, a proportional-integral controller is designed to enable the model to automatically adjust the magnitude of the KL divergence. The calculation formula of the proportional-integral controller is as follows:
Wherein β (T) represents a proportional-integral controller, K P =0.005 represents a proportional control coefficient, K I =0.01 represents an integral control coefficient, e (T) represents an error between a KL divergence value estimated by a model and a desired KL divergence value at time T, T represents a time when model training is completed once, the desired KL divergence value v KL =1.5, and an actual KL divergence value obtained by running an algorithm
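A sketch of this controller with the stated coefficients; the clamping range for β is an assumption (the patent's figure is not reproduced here), and the functional form follows equations (4)-(5):

```python
import math

class PIController:
    """PI controller of step (4.2): adapts the KL weight β(T), Eqs. (4)-(5)."""
    def __init__(self, k_p=0.005, k_i=0.01, v_kl=1.5, beta_min=0.0, beta_max=1.0):
        self.k_p, self.k_i, self.v_kl = k_p, k_i, v_kl
        self.beta_min, self.beta_max = beta_min, beta_max
        self.e_sum = 0.0                      # running integral of the error

    def step(self, kl_value: float) -> float:
        e = self.v_kl - kl_value              # Eq. (5): e(T) = v_KL - actual KL
        self.e_sum += e
        beta = self.k_p / (1.0 + math.exp(e)) - self.k_i * self.e_sum  # Eq. (4)
        return min(max(beta, self.beta_min), self.beta_max)
```

Note the control direction: when the actual KL divergence exceeds v_KL, e(T) is negative, both terms push β upward, and the strengthened KL penalty drives the divergence back toward the desired value.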
Step (4.3): introducing the designed proportional-integral controller into equation (1), as shown in fig. 2, a new brain region fMRI data generation loss term can be obtained, which is expressed as follows:
Where M represents the number of monte carlo samples of the latent variable, M represents the mth time of monte carlo samples of the latent variable, σ z represents the variance of the posterior distribution of the latent variable, and μ z represents the mean of the posterior distribution of the latent variable.
Step (4.4): in order to construct a sparse brain effect connection network structure, a network sparsity loss function for maintaining the sparsity of the brain effect connection network is designed, and the expression is as follows:
Wherein L S represents a network sparsity loss function, λ represents a hyper-parameter of the network sparsity loss function, i and j represent any two brain regions, and a ij represents an effector connection between brain region i and brain region j.
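The loss terms of steps (4.3)-(4.4) can be sketched as follows; using mean-squared error as the reconstruction log-likelihood surrogate is an assumption (it corresponds to a Gaussian observation model up to constants):

```python
import torch
import torch.nn.functional as F

def vae_loss(X, encoder, decoder, beta: float, lam: float = 0.5, M: int = 1):
    """Joint loss L = -L_ELBO + L_S of Eqs. (6)-(8); returns (loss, KL value)."""
    _, mu, logvar = encoder(X)
    std = torch.exp(0.5 * logvar)
    recon = 0.0
    for _ in range(M):                        # M Monte Carlo samples, Eq. (6)
        Z_m = mu + std * torch.randn_like(std)
        recon = recon + F.mse_loss(decoder(Z_m), X)
    recon = recon / M
    # closed-form KL between N(mu, sigma^2) and the standard normal prior
    kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0)
    l_s = lam * encoder.A.abs().sum()         # Eq. (7): sparsity penalty
    loss = recon + beta * kl + l_s            # Eq. (8): L = -L_ELBO + L_S
    return loss, kl.item()
```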
Step (4.5): generating a loss term and a network sparse loss term according to the designed fMRI data, wherein the expression of a loss function L constructed by the invention is as follows:
L=-LELBO+LS (8)
The model can automatically learn a brain effect connection network in the iterative training process with the aim of minimizing the joint loss function L.
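An end-to-end training sketch tying the pieces above together; the epoch count, learning rate, optimizer, and final binarization threshold are assumptions for illustration, not values given by the patent:

```python
def learn_effect_connection(X, A_init=None, n_epochs=500, lr=1e-3, thr=0.3):
    """Returns a binarized estimate of the brain effect connection network."""
    n, t_len = X.shape
    enc = SEMEncoder(n, t_len)
    if A_init is not None:                    # step (1): Pearson initialization
        enc.A.data.copy_(torch.as_tensor(A_init, dtype=torch.float32))
    dec = SEMDecoder(enc, t_len)
    pi = PIController()
    opt = torch.optim.Adam(dec.parameters(), lr=lr)   # dec contains enc
    beta = 0.0
    for _ in range(n_epochs):
        opt.zero_grad()
        loss, kl = vae_loss(X, enc, dec, beta)
        loss.backward()
        opt.step()
        beta = pi.step(kl)                    # PI controller re-weights the KL
    A = enc.A.detach().abs()
    return (A > thr).float()                  # estimated effect connections
```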
The beneficial effects of the invention are as follows. The invention provides a brain effect connection network learning method based on an automatic variational autoencoder, which uses an encoding network to encode brain-region fMRI data and the brain effect connection matrix into latent variables, and then obtains generated fMRI data from the latent variables through a decoding network. Because a proportional-integral controller is introduced into the loss function, the method adaptively adjusts the model parameters and automatically learns an optimal effect connection network during iterative training. The method therefore has few hyperparameters, high accuracy, and strong generalization ability, and effectively addresses the need for extensive manual parameter tuning in existing deep learning methods for brain effect connection networks.
Drawings
Fig. 1: a brain effect connection network learning method schematic diagram based on an automatic variation self-encoder.
Fig. 2: the proportional-integral controller generates a tuning effect of the loss function on the fMRI data of the brain region.
Detailed Description
The Sim3 dataset is selected from the Smith simulation datasets, fMRI data of 15 brain regions are taken as input, and an automatic variational autoencoder is used to learn the effect connection network among the brain regions. The basic structure of the method is shown in Fig. 1, and the specific implementation steps are as follows:
Step (1): parameter setting: the brain effect connection network parameter matrix A is initialized through the Pearson correlation coefficient of the brain interval, the super parameter lambda=0.5 of the network sparse loss function, the expected KL divergence value V KL =1.5, the proportional controller coefficient K P =0.005 and the integral controller coefficient K I =0.01.
Step (2): the encoder is used for learning latent variables from fMRI data, and the specific steps are as follows:
Step (2.1): the brain effect connection parameter matrix and brain region fMRI data are encoded into latent variables by using a structural equation model, and the expression is as follows:
Z=(I-AT)X (1)
wherein z= [ Z 1,...,zn ] represents a latent variable, I represents an identity matrix, a represents a brain effect connection parameter matrix, and x= [ X 1,...,xn ] represents fMRI data of each brain region.
Step (2.2): a multi-layer perceptron-based coding network is designed, which consists of a 3-layer neural network structure (comprising an input layer, a hidden layer and an output layer), and then the mean and variance of the posterior distribution of latent variables are estimated through the coding network. Assuming that the latent variable obeys normal distribution, the posterior distribution of the latent variable can be deduced from the obtained mean and variance. Next, through monte carlo sampling and re-parameterization techniques, the latent variables corresponding to each brain region can be sampled from the distribution of the latent variables.
Step (3): brain region fMRI data is generated from the obtained latent variables using a decoding network. A decoding network based on a multi-layer perceptron is designed, the decoding network consists of a 3-layer neural network structure (comprising an input layer, a hidden layer and an output layer) and is used for obtaining the distribution of generated fMRI data of each brain region from the latent variables obtained in the step (2), and the expression is as follows:
pθ(X|Z)=ReLU((I-AT)-1Zθ) (2)
where p θ (X|Z) represents the distribution of fMRI data of each brain region learned from the latent variables, reLU represents the activation function, and θ represents the network weight coefficients that the model decodes the network learned during forward and backward propagation.
Step (4): a penalty function is designed that includes fMRI data generation penalty terms and network sparseness penalty terms. The goal is to minimize the loss function so that the model automatically learns a brain effect connection network during iterative training. The specific steps for constructing the loss function are as follows:
Step (4.1): constructing brain region fMRI data generates loss terms. In order to make the generated fMRI data approximate to real fMRI data, a lower bound of evidence is used as an objective function for data generation, and the expression is as follows:
Wherein L ELBO represents a brain region fMRI data generation loss term, Representing the resulting fMRI data, phi and theta representing the weight coefficients learned by the encoder and decoder during forward and backward propagation of the model, respectively, p (Z) representing the true distribution of the latent variable, q φ (z|x) representing the posterior distribution of the latent variable resulting from step (2.2), v >, respectivelyRepresenting the distribution of fMRI data of each brain region learned from latent variables,/>D KL(qφ (z|x) |p (Z)) is a KL divergence value indicating the expectation of the generated fMRI data, and represents the error of the generated fMRI data from the actual fMRI data.
Step (4.2): since the KL divergence plays an important role in the data generation process, too large or too small KL divergence value can influence the learning performance of the model. Therefore, a proportional-integral controller is designed to enable the model to automatically adjust the magnitude of the KL divergence. The calculation formula of the proportional-integral controller is as follows:
Wherein β (T) represents a proportional-integral controller, K P =0.005 represents a proportional control coefficient, K I =0.01 represents an integral control coefficient, e (T) represents an error between a KL divergence value estimated by a model and a desired KL divergence value at time T, T represents a time when model training is completed once, the desired KL divergence value v KL =1.5, and an actual KL divergence value obtained by running an algorithm
Step (4.3): introducing the designed proportional-integral controller into equation (1), as shown in fig. 2, a new brain region fMRI data generation loss term can be obtained, which is expressed as follows:
Where M represents the number of monte carlo samples of the latent variable, M represents the mth time of monte carlo samples of the latent variable, σ z represents the variance of the posterior distribution of the latent variable, and μ z represents the mean of the posterior distribution of the latent variable.
Step (4.4): in order to construct a sparse brain effect connection network structure, a network sparsity loss function for maintaining the sparsity of the brain effect connection network is designed, and the expression is as follows:
Wherein L S represents a network sparsity loss function, λ represents a hyper-parameter of the network sparsity loss function, i and j represent any two brain regions, and a ij represents an effector connection between brain region i and brain region j.
Step (4.5): generating a loss term and a network sparse loss term according to the designed fMRI data, wherein the expression of a loss function L constructed by the invention is as follows:
L=-LELBO+LS (8)
The model can automatically learn a brain effect connection network in the iterative training process with the aim of minimizing the joint loss function L.
Table 1 compares the performance of the automatic variational autoencoder based method with several other typical algorithms on the Sim3 simulation dataset. In Table 1, Patel denotes a conditional-dependence-based method, RNN-GC a recurrent neural network Granger causality method, EC-GAN a brain effect connection learning method based on a generative adversarial network, and AVAEEC the learning method proposed by the invention.
To compare the learning performance of AVAEEC with that of the other algorithms, four evaluation indices are adopted: precision, recall, accuracy, and F1 score. The results in Table 1 are the mean and variance obtained over 50 runs of each algorithm. As Table 1 shows, the learning method based on the automatic variational autoencoder achieves better performance than the other three algorithms in precision, recall, accuracy, and F1 score.
Table 1. Learning performance of AVAEEC and the comparison algorithms on Sim3

Algorithm   Precision    Recall       Accuracy     F1 score
Patel       0.76±0.03    0.70±0.03    0.95±0.01    0.73±0.02
RNN-GC      0.77±0.04    0.81±0.09    0.96±0.01    0.80±0.06
EC-GAN      0.80±0.03    0.78±0.03    0.96±0.01    0.79±0.03
STGCMEC     0.81±0.03    0.81±0.03    0.97±0.01    0.81±1.10
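For reference, a sketch of how the four indices in Table 1 can be computed from an estimated binary connection matrix and the ground truth; excluding the diagonal is an assumption about the evaluation protocol:

```python
import numpy as np

def edge_metrics(A_est: np.ndarray, A_true: np.ndarray):
    """Precision, recall, accuracy, and F1 over off-diagonal directed edges."""
    mask = ~np.eye(A_true.shape[0], dtype=bool)      # ignore self-connections
    est, true = A_est[mask].astype(bool), A_true[mask].astype(bool)
    tp = np.sum(est & true);  fp = np.sum(est & ~true)
    fn = np.sum(~est & true); tn = np.sum(~est & ~true)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1
```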

Claims (1)

1. A brain effect connection network learning method based on an automatic variational autoencoder, characterized by comprising the following steps:
(1): parameter setting: the brain effect connection parameter matrix A is constructed by computing the Pearson correlation coefficients between brain regions; the hyperparameter λ of the network sparsity loss function, the desired KL divergence value v_KL, the proportional controller coefficient K_P, and the integral controller coefficient K_I are set;
(2): latent variables are learned from the fMRI data with the encoder, specifically:
① the brain effect connection parameter matrix and the brain-region fMRI data are encoded into latent variables using a structural equation model:
Z = (I - A^T)X (1)
where Z = [z_1, ..., z_n] denotes the latent variables, I an n×n identity matrix, A the brain effect connection parameter matrix, and X = [x_1, ..., x_n] the fMRI data of each brain region;
② an encoding network based on a multi-layer perceptron is designed, consisting of a 3-layer neural network structure with an input layer, a hidden layer, and an output layer; the mean and variance of the posterior distribution of the latent variables are then estimated through the encoding network; assuming the latent variables follow a normal distribution, their posterior distribution is derived from the obtained mean and variance; next, through Monte Carlo sampling and the reparameterization trick, the latent variable corresponding to each brain region is sampled from this distribution;
(3): brain-region fMRI data are generated from the obtained latent variables using a decoding network; a decoding network based on a multi-layer perceptron is designed, consisting of a 3-layer network structure with an input layer, a hidden layer, and an output layer; the decoding network obtains the distribution of generated fMRI data for each brain region from the latent variables of step (2):
p_θ(X|Z) = ReLU((I - A^T)^{-1} Z θ) (2)
where p_θ(X|Z) denotes the distribution of brain-region fMRI data learned from the latent variables, ReLU the activation function, and θ the weight coefficients learned by the decoding network during training;
(4): a loss function comprising an fMRI data generation loss term and a network sparsity loss term is designed; with the goal of minimizing this loss function, the model automatically learns a brain effect connection network during iterative training;
① the brain-region fMRI data generation loss term is constructed; to make the generated fMRI data approximate the real fMRI data, the evidence lower bound is used as the objective function for data generation:
L_ELBO = E_{q_φ(Z|X)}[log p_θ(X̂|Z)] - D_KL(q_φ(Z|X) ‖ p(Z)) (3)
where L_ELBO denotes the brain-region fMRI data generation loss term, X̂ the generated fMRI data, φ and θ the weight coefficients automatically learned by the encoder and decoder during training, p(Z) the true (prior) distribution of the latent variables, q_φ(Z|X) the posterior distribution of the latent variables obtained in step (2), p_θ(X̂|Z) the distribution of brain-region fMRI data learned from the latent variables, E_{q_φ(Z|X)}[·] the expectation of the log-likelihood of the generated fMRI data, and D_KL(q_φ(Z|X) ‖ p(Z)) the KL divergence, which measures the error between the generated and real fMRI data;
② a proportional-integral controller is designed so that the model automatically adjusts the KL divergence:
β(T) = K_P / (1 + exp(e(T))) - K_I Σ_{t=1}^{T} e(t) (4)
e(T) = v_KL - v̂_KL(T) (5)
where β(T) denotes the output of the proportional-integral controller, K_P = 0.005 the proportional control coefficient, K_I = 0.01 the integral control coefficient, exp the exponential function, e(T) the error at step T between the desired KL divergence value and the KL divergence value estimated by the model, T the number of completed training iterations, v_KL = 1.5 the desired KL divergence value, and v̂_KL(T) the actual KL divergence value obtained by running the algorithm;
③ introducing the designed proportional-integral controller into formula (3) yields a new brain-region fMRI data generation loss term:
L_ELBO = (1/M) Σ_{m=1}^{M} log p_θ(X̂|Z^{(m)}) - β(T)·D_KL(q_φ(Z|X) ‖ p(Z)) (6)
where M denotes the number of Monte Carlo samples of the latent variables, m the m-th Monte Carlo sample, σ_z the variance of the posterior distribution of the latent variables, and μ_z its mean;
④ to construct a sparse brain effect connection network structure, a network sparsity loss function that maintains the sparsity of the brain effect connection network is designed:
L_S = λ Σ_{i,j} |a_ij| (7)
where L_S denotes the network sparsity loss function, λ = 0.5 the hyperparameter of the sparsity loss, i and j any two brain regions, and a_ij the effect connection from brain region i to brain region j;
⑤ combining the designed fMRI data generation loss term and the network sparsity loss term, the constructed loss function L is:
L = -L_ELBO + L_S (8)
with the goal of minimizing the joint loss function L, the model automatically learns a brain effect connection network during iterative training.
CN202111356966.5A 2021-11-16 2021-11-16 Brain effect connection network learning method based on an automatic variational autoencoder Active CN114219069B (en)

Priority Applications (1)

Application Number: CN202111356966.5A; Priority/Filing Date: 2021-11-16; Title: Brain effect connection network learning method based on an automatic variational autoencoder


Publications (2)

Publication Number Publication Date
CN114219069A (en) 2022-03-22
CN114219069B (en) 2024-04-26

Family

Family ID: 80697281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111356966.5A Active CN114219069B (en) 2021-11-16 2021-11-16 Brain effect connection network learning method based on an automatic variational autoencoder

Country Status (1)

Country Link
CN (1) CN114219069B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034360A (en) * 2018-07-13 2018-12-18 北京工业大学 A kind of ant colony method constructing brain effective connectivity network from fMRI and DTI data
CN110889496A (en) * 2019-12-11 2020-03-17 北京工业大学 Human brain effect connection identification method based on confrontation generation network
WO2021007812A1 (en) * 2019-07-17 2021-01-21 深圳大学 Deep neural network hyperparameter optimization method, electronic device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11615317B2 (en) * 2020-04-10 2023-03-28 Samsung Electronics Co., Ltd. Method and apparatus for learning stochastic inference models between multiple random variables with unpaired data


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A survey of human brain effective connectivity network identification methods based on functional magnetic resonance imaging; 冀俊忠 et al.; Acta Automatica Sinica; 2021-02-28; Vol. 47, No. 2; pp. 278-296 *

Also Published As

Publication number Publication date
CN114219069A (en) 2022-03-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant