CN117808040B - Method and device for predicting low forgetting hot events based on brain map - Google Patents


Info

Publication number
CN117808040B
CN117808040B (application CN202410232308.2A)
Authority
CN
China
Prior art keywords
bnn
similarity
brain
graph
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410232308.2A
Other languages
Chinese (zh)
Other versions
CN117808040A (en)
Inventor
荣欢
徐瑞洋
杨启坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202410232308.2A priority Critical patent/CN117808040B/en
Publication of CN117808040A publication Critical patent/CN117808040A/en
Application granted granted Critical
Publication of CN117808040B publication Critical patent/CN117808040B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10: Complex mathematical operations
    • G06F 17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G06F 18/25: Fusion techniques
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/042: Knowledge-based neural networks; Logical representations of neural networks
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06N 5/00: Computing arrangements using knowledge-based models
    • G06N 5/02: Knowledge representation; Symbolic representation
    • G06N 5/022: Knowledge engineering; Knowledge acquisition
    • G06N 5/04: Inference or reasoning models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for predicting low-forgetting hot events based on a brain map. The method inputs a social network hot-event stream to be predicted into a constructed BNN model and uses that model to predict the subsequent development of the events. The BNN model is constructed as follows: extract fMRI images of the brain regions involved in inferential prediction; convert the brain region fMRI images into brain graph networks and fuse them; replace the nodes of the fusion graph with a neuron model to obtain the original BNN; input the events of several event streams into the original BNN one by one in batches, and after each batch updates the weights of the original BNN, apply similarity-based thresholding to obtain groups BNN1, BNN2 … BNNn; fuse the groups BNN1, BNN2 … BNNn in sequence and use similarity calculation to perform experience replay on the events that need replaying, completing construction of the BNN model. The method and device improve the prediction accuracy for hot events.

Description

Method and device for predicting low forgetting hot events based on brain map
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method and a device for predicting a low-forgetting hot event based on a brain map.
Background
With the advent of the information age, all kinds of news events reach people through social networks, naturally raising public attention to social network hot events. To predict the subsequent development of these hot events, lifelong learning techniques are increasingly being applied.
Lifelong learning, also known as continual learning, incremental learning, or never-ending learning, aims to mimic the human ability to continuously learn and transfer, acquire, and refine knowledge throughout life. A lifelong learning model should be able to identify similarities between incoming knowledge and previous knowledge. When a new task is very similar to previous experience, the learning agent should modify the existing model to perform better; when a new task has little similarity or relevance to previous experience, the system should transfer knowledge to solve the new task. Most popular deep learning algorithms cannot incrementally learn incoming new information, which leads to catastrophic forgetting or interference: new information overwrites previous knowledge, degrading the model on old tasks. Catastrophic forgetting refers to the sudden and severe loss of previously acquired information when a learning system acquires new information, and it has long been a fatal weakness of standard artificial neural networks. A learning system must be stable enough to preserve prior knowledge over long periods while retaining enough plasticity to accommodate new learning; these two requirements conflict, constituting the "stability-plasticity dilemma". Current artificial neural networks, including deep neural networks, generally have excessive plasticity, which leads to severe catastrophic forgetting.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing a brain graph-based low-forgetting hot event prediction method and device, which address the catastrophic forgetting caused by the continuous input of event streams over time when predicting social network hot events.
In order to solve the technical problems, the invention is realized by adopting the following scheme:
the invention provides a brain map-based low-forgetting hot event prediction method, comprising the following steps:
inputting a social network hot-event stream to be predicted into a constructed BNN model, and predicting the subsequent development of the events with the constructed BNN model;
the BNN model is constructed as follows:
extracting fMRI images of three brain regions involved in inferential prediction;
converting the three brain region fMRI images into brain graph networks and fusing them to obtain a fusion graph;
replacing the nodes of the fusion graph with a neuron model to obtain an original BNN;
inputting the events of several event streams into the original BNN one by one in batches, one event stream per batch; after each batch updates the weights of the original BNN, applying similarity-based thresholding to obtain groups BNN1, BNN2 … BNNn;
fusing the groups BNN1, BNN2 … BNNn in sequence and performing experience replay on the events that need replaying using similarity calculation, obtaining the constructed BNN model.
Further, converting the three brain region fMRI images into brain graph networks comprises:
converting the fMRI images into time-series data, and constructing a shrinkage covariance matrix from the time-series data;
converting the shrinkage covariance matrix into a correlation coefficient matrix, and constructing a functional brain region network from the correlation coefficient matrix;
visualizing the functional brain region network using the Kamada-Kawai layout algorithm;
for a randomly selected node m in the fMRI image, solving for the minimum of the single functional brain region network system energy $E$ with the two-dimensional Newton-Raphson method;
drawing the brain graph network from the parameters obtained at the minimum of the system energy $E$;
wherein the shrinkage covariance matrix is
$$\hat{\Sigma} = (1-\lambda)S + \lambda D,$$
where $\hat{\Sigma}$ is the shrinkage covariance matrix, $\lambda$ is the smoothing parameter, $S = \frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})(x_i-\bar{x})^{\mathsf T}$ is the covariance matrix of the time-series data, n is the number of samples of the time-series data, $x_i$ is the i-th sample, $\bar{x}$ is the sample mean, and D is a diagonal matrix whose off-diagonal elements are 0 and whose diagonal elements are $\operatorname{tr}(S)/p$, with p the dimension of the matrix S;
the correlation coefficient matrix is
$$r_{ij} = \frac{\hat{\Sigma}_{ij}}{\sigma_i\,\sigma_j},$$
where $\sigma_i = \sqrt{\hat{\Sigma}_{ii}}$ and $\sigma_j = \sqrt{\hat{\Sigma}_{jj}}$ are the standard deviations of variables i and j; the elements of the correlation coefficient matrix take values in [-1, 1], and when $|r_{ij}|$ is larger than a set value, node i is determined to be linked with node j.
Further, when the maximum value of the gradient is smaller than a set threshold, the minimum of the single functional brain region network system energy $E$ is obtained;
wherein the single functional brain region network system energy is calculated as
$$E = \sum_{i<j} \frac{1}{2}\, k \left( \sqrt{(x_i-x_j)^2 + (y_i-y_j)^2} - l_{ij} \right)^2,$$
where x and y are the coordinates of the functional brain region network nodes, k is a set constant, and the squared term is the difference between the actual length and the equilibrium length $l_{ij}$;
the gradient is calculated as
$$\Delta_m = \sqrt{\left(\frac{\partial E}{\partial x_m}\right)^2 + \left(\frac{\partial E}{\partial y_m}\right)^2},$$
where the partial derivatives of $E$ with respect to $x_m$ and $y_m$ are
$$\frac{\partial E}{\partial x_m} = \sum_{i \neq m} k\left[(x_m - x_i) - \frac{l_{mi}\,(x_m - x_i)}{\sqrt{(x_m-x_i)^2+(y_m-y_i)^2}}\right], \qquad \frac{\partial E}{\partial y_m} = \sum_{i \neq m} k\left[(y_m - y_i) - \frac{l_{mi}\,(y_m - y_i)}{\sqrt{(x_m-x_i)^2+(y_m-y_i)^2}}\right].$$
further, the fusion of the brain graph networks and the successive fusion of the groups BNN1, BNN2 … BNNn are performed with a consensus iteration algorithm, comprising:
Step a: take the union of the initial graphs to obtain a consensus graph;
Step b: compute the similarity between each initial graph and the consensus graph, and assign each initial graph a weight according to its similarity;
Step c: take the union of the weighted initial graphs and fuse a new consensus graph;
Step d: repeat steps b and c until the set number of iterations K is reached, obtaining the final consensus graph $G_K$;
Step e: according to set edge and node thresholds, erase from the final consensus graph $G_K$ the edges and nodes whose weight is below the threshold;
the consensus graph is computed as
$$G_c = G_1 \cup G_2 \cup \cdots \cup G_t,$$
where $G_c$ is the consensus graph and $G_1, \ldots, G_t$ are the initial graphs;
the similarity $S_i$ between each initial graph i and the consensus graph is computed from the overlap of the edges $E_i$ and nodes $V_i$ of initial graph i with the edges $E_c$ and nodes $V_c$ of the consensus graph, where a, a constant, is the set initial weight of every edge and node of the initial graphs and the consensus graph;
the weight assigned to each initial graph is
$$w_i = \frac{S_i}{\sum_j S_j},$$
where $w_i$ is the weight of initial graph i and $\sum_j S_j$ is the sum of the similarities of the initial graphs;
the weight of each edge and node of the new consensus graph is
$$w_i^{c} = \sum_j w_j\, w_{ij},$$
where $w_i^{c}$ is the weight of edge or node i in the new consensus graph, $w_j$ is the weight corresponding to initial graph j, and $w_{ij}$ is the weight of edge or node i in initial graph j.
Further, the thresholding with similarity calculation after each weight update of the original BNN comprises:
updating the weights of the original BNN using a cross-entropy loss function to obtain a new BNN;
computing the similarity of each node and edge between the new BNN and the original BNN, and deleting the edges or nodes below a set similarity threshold;
the cross-entropy loss function and the weight update formula are
$$L = -\sum_i y_i \log \hat{y}_i, \qquad w_{\text{new}} = w_{\text{old}} - \eta\, g,$$
where $y_i$ is the true label, $\hat{y}_i$ is the prediction output, $w_{\text{new}}$ and $w_{\text{old}}$ are the new and old weights, $\eta$ is the learning rate, and g is the gradient of the loss function with respect to the weight;
the similarity of each edge and node between the new BNN and the original BNN is computed from the new and old weights: the similarity $S_{e_i}$ of edge i from the new and old weights of edge i together with the new and old weights of the nodes connected to edge i, and the similarity $S_{v_j}$ of node j from the new and old weights of node j together with the new and old weights of the edges connected to node j.
Further, the experience replay of events that need replaying using similarity calculation comprises:
for each group BNN1, BNN2 … BNNn, computing the similarity of each of BNN1, BNN2 … BNNn to the original BNN, ranking the similarities, selecting the several BNNi whose similarity satisfies a set threshold, storing the events corresponding to those BNNi in a preset replay pool, and appending them to the end of the corresponding event stream;
each time events corresponding to several BNNi are stored in the replay pool, ranking the similarities of those BNNi together with the similarities of the events already stored in the replay pool, keeping the events with higher similarity, and updating the replay pool so that the number of events stored in it stays unchanged;
the similarity between each of BNN1, BNN2 … BNNn and the original BNN is calculated as
$$S_{\mathrm{BNN}_k} = \frac{1}{n+m}\left(\sum_{i=1}^{n} S_{e_i} + \sum_{j=1}^{m} S_{v_j}\right),$$
where $S_{\mathrm{BNN}_k}$ is the similarity of BNNk to the original BNN, $S_{e_i}$ is the similarity of edge i, $S_{v_j}$ is the similarity of node j, and n and m are the total numbers of edges and nodes respectively.
The invention also provides a device for predicting low forgetting hot events based on a brain map, comprising:
a prediction module for inputting the social network hot-event stream to be predicted into the constructed BNN model and predicting the subsequent development of the events with the constructed BNN model;
wherein the BNN model is constructed as follows:
extracting fMRI images of three brain regions involved in inferential prediction;
converting the three brain region fMRI images into brain graph networks and fusing them to obtain a fusion graph;
replacing the nodes of the fusion graph with a neuron model to obtain an original BNN;
inputting the events of several event streams into the original BNN one by one in batches, one event stream per batch; after each batch updates the weights of the original BNN, applying similarity-based thresholding to obtain groups BNN1, BNN2 … BNNn;
fusing the groups BNN1, BNN2 … BNNn in sequence and performing experience replay on the events that need replaying using similarity calculation, obtaining the constructed BNN model.
The invention also provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the brain map-based low-forgetting hot event prediction method.
The invention also provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the brain map-based low-forgetting hot event prediction method.
Compared with the prior art, the invention has the following beneficial effects: improving on traditional anti-forgetting methods, and noting that the essence of catastrophic forgetting is forgetting previously learned content, the invention draws on the working principle of the human brain and combines it with a progressive network in a dynamic architecture, thereby mitigating the catastrophic forgetting caused by the continuous input of event streams over time when predicting social network hot events and improving the accuracy of hot event prediction; the replay pool solves the excessive storage consumption of traditional experience replay; and the design of the new graph fusion algorithm improves the rationality of graph fusion.
Drawings
FIG. 1 is a flowchart of a low forgetting hot event prediction method based on brain map provided by an embodiment of the present invention;
Fig. 2 is a flowchart of step 40 in a low forgetting hot event prediction method based on brain map according to an embodiment of the present invention;
Fig. 3 is a flowchart of a consensus iterative algorithm adopted by a low forgetting hot event prediction method based on a brain map according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
Examples
As shown in fig. 1, the embodiment provides a method for predicting a low forgetting hot event based on brain map, which includes the following steps:
Step 10: extracting three brain region fMRI images which are predicted by reasoning, and converting the three brain region fMRI images into a brain map network.
And extracting a precondition processing stage in the cerebral reasoning prediction process, and constructing a functional brain region by performing fMRI imaging of three brain regions in a precondition integration stage and a verification stage. In fMRI imaging of the brain region, each voxel (voxel) in space may be assigned to a specific brain region, which regions may be referred to as "labels", whereas in data processing we need to convert these labels into a binary mask (mask) in order to extract the signal of the corresponding region from fMRI imaging of the brain region.
For each voxel vin image: if the label value of v is equal to the label value of the brain region of interest, the corresponding position of the voxel in the mask is marked as 1, otherwise the corresponding position of the voxel in the mask is marked as 0; if the value in the mask is 1, indicating that the voxel or point in time belongs to the region of interest, we can preserve the corresponding data and discard it otherwise. And averaging voxel signals in each reserved data area to obtain time sequence signals representing the area, namely time sequence data.
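The masking and averaging described above can be sketched in a few lines of NumPy; `region_time_series` and the toy volume below are illustrative names and data, not part of the patent:

```python
import numpy as np

def region_time_series(fmri, labels, region_label):
    """Average the voxel signals of one labelled region into a single
    time series. `fmri` has shape (x, y, z, t); `labels` has shape (x, y, z)."""
    mask = (labels == region_label)   # binary mask: True inside the region
    voxels = fmri[mask]               # (n_voxels, t) signals of kept voxels
    return voxels.mean(axis=0)        # (t,) regional time series

# toy example: 2x2x1 volume, 5 time points, region label 7 covers two voxels
fmri = np.arange(2 * 2 * 1 * 5, dtype=float).reshape(2, 2, 1, 5)
labels = np.array([[[7], [7]], [[0], [3]]])
ts = region_time_series(fmri, labels, 7)
```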
The time-series data are fed to a shrinkage covariance estimator to construct a covariance matrix. The sample covariance matrix of the time-series data is:
$$S = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})^{\mathsf T} \quad (1.1)$$
where n is the number of samples of the time-series data, $x_i$ is the i-th sample, and $\bar{x}$ is the sample mean.
The shrinkage covariance matrix is constructed as:
$$\hat{\Sigma} = (1-\lambda)S + \lambda D \quad (1.2)$$
where $\hat{\Sigma}$ is the shrinkage covariance matrix, $\lambda$ is the smoothing parameter, S is the covariance matrix of the sample data, and D is a diagonal matrix whose off-diagonal elements are 0 and whose diagonal elements are $\operatorname{tr}(S)/p$, with p the dimension of the matrix S.
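Equations (1.1)-(1.2) amount to the following NumPy sketch; the value of the smoothing parameter `lam` is an assumed setting:

```python
import numpy as np

def shrinkage_covariance(X, lam=0.1):
    """Shrinkage estimate (1-lam)*S + lam*D, where S is the sample
    covariance of X (rows = samples) and D = (tr(S)/p) * I, matching
    equations (1.1)-(1.2)."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)        # sample covariance (1.1), 1/(n-1) scaling
    D = (np.trace(S) / p) * np.eye(p)  # diagonal shrinkage target
    return (1.0 - lam) * S + lam * D   # shrunk estimate (1.2)

X = np.random.default_rng(0).normal(size=(50, 4))
Sigma = shrinkage_covariance(X, lam=0.2)
```

Note that the shrinkage leaves the trace of the estimate unchanged, since tr(D) = tr(S) by construction.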
The shrinkage covariance matrix is converted into a correlation coefficient matrix:
$$r_{ij} = \frac{\hat{\Sigma}_{ij}}{\sigma_i\,\sigma_j} \quad (1.3)$$
where $\sigma_i = \sqrt{\hat{\Sigma}_{ii}}$ and $\sigma_j = \sqrt{\hat{\Sigma}_{jj}}$ are the standard deviations of variables i and j; the elements of the correlation coefficient matrix take values in [-1, 1].
When $|r_{ij}|$ (with $r_{ij}$ and $r_{ji}$ equal by symmetry) is greater than 0.5, node i is considered connected to node j, which constructs the functional brain region network.
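A minimal sketch of equation (1.3) and the 0.5-thresholding, assuming the (shrinkage) covariance matrix is already available:

```python
import numpy as np

def correlation_graph(Sigma, threshold=0.5):
    """Convert a covariance matrix into the correlation matrix
    r_ij = Sigma_ij / (sigma_i * sigma_j) and threshold |r_ij| to get
    an adjacency matrix, as in equation (1.3)."""
    sd = np.sqrt(np.diag(Sigma))
    R = Sigma / np.outer(sd, sd)             # correlation coefficients in [-1, 1]
    A = (np.abs(R) > threshold).astype(int)  # link i-j when |r_ij| > threshold
    np.fill_diagonal(A, 0)                   # no self-loops
    return R, A

Sigma = np.array([[2.0, 1.8, 0.1],
                  [1.8, 2.0, 0.0],
                  [0.1, 0.0, 1.0]])
R, A = correlation_graph(Sigma)
```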
The geometric distance between two brain regions in the drawing space approximates the shortest path length between them. The functional brain region network is visualized with the Kamada-Kawai layout algorithm to obtain the topological structure of the brain graph network. The Kamada-Kawai layout algorithm is inspired by the elastic potential energy of springs: the topology of the brain graph network is obtained by minimizing the energy (elastic potential energy) of the whole functional brain region network system. For a randomly selected node m in the brain region fMRI image, the minimum of the single functional brain region network system energy $E$ is found with the two-dimensional Newton-Raphson method.
The energy of the single functional brain region network system is calculated as:
$$E = \sum_{i<j} \frac{1}{2}\, k \left( \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2} - l_{ij} \right)^2 \quad (1.4)$$
where x and y are the coordinates of the functional brain region network nodes, k is a set constant (the elastic coefficient of the spring), and the squared term is the difference between the actual length and the equilibrium length $l_{ij}$ (by analogy with the properties of a physical spring).
When $E$ is at a local minimum, the partial derivatives of $E$ with respect to $x_m$ and $y_m$ are both 0:
$$\frac{\partial E}{\partial x_m} = 0, \qquad \frac{\partial E}{\partial y_m} = 0 \quad (1.5)$$
The partial derivatives of $E$ with respect to $x_m$ and $y_m$ are:
$$\frac{\partial E}{\partial x_m} = \sum_{i \neq m} k\left[(x_m - x_i) - \frac{l_{mi}\,(x_m - x_i)}{\sqrt{(x_m - x_i)^2 + (y_m - y_i)^2}}\right] \quad (1.6)$$
$$\frac{\partial E}{\partial y_m} = \sum_{i \neq m} k\left[(y_m - y_i) - \frac{l_{mi}\,(y_m - y_i)}{\sqrt{(x_m - x_i)^2 + (y_m - y_i)^2}}\right] \quad (1.7)$$
A local minimum of $E$ is obtained from the above formulas.
When the gradient maximum $\Delta_m$ is smaller than a set threshold (manually defined and adjusted as needed, e.g. $10^{-5}$), the minimum of the single functional brain region network system energy $E$ is obtained. The gradient maximum $\Delta_m$ is calculated as:
$$\Delta_m = \sqrt{\left(\frac{\partial E}{\partial x_m}\right)^2 + \left(\frac{\partial E}{\partial y_m}\right)^2} \quad (1.8)$$
Checking the gradient maximum $\Delta_m$ at the local minima of $E$ verifies whether the obtained minimum of $E$ is in fact the smallest.
The x and y parameters obtained at the minimum of the single functional brain region network system energy $E$ are then used to draw the brain graph network.
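The energy minimization can be sketched as follows. This is a simplified stand-in: plain gradient descent on one node replaces the two-dimensional Newton-Raphson step of the method, with a single spring constant `k` and illustrative target lengths:

```python
import numpy as np

def kk_energy(pos, k, l):
    """Spring energy E = sum_{i<j} (k/2) * (|p_i - p_j| - l_ij)^2,
    matching equation (1.4) with a single spring constant k."""
    E = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = np.linalg.norm(pos[i] - pos[j])
            E += 0.5 * k * (d - l[i, j]) ** 2
    return E

def relax_node(pos, m, k, l, lr=0.1, tol=1e-5, steps=5000):
    """Move node m downhill until the gradient magnitude Delta_m (1.8)
    falls below `tol`; the gradient follows (1.6)-(1.7)."""
    for _ in range(steps):
        g = np.zeros(2)
        for j in range(len(pos)):
            if j == m:
                continue
            diff = pos[m] - pos[j]
            d = np.linalg.norm(diff)
            g += k * (d - l[m, j]) * diff / d  # dE/dx_m, dE/dy_m
        if np.linalg.norm(g) < tol:            # Delta_m below threshold
            break
        pos[m] = pos[m] - lr * g
    return pos

# three nodes, equilibrium length 1 between all pairs; relax the third node
l = np.ones((3, 3))
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 0.2]])
before = kk_energy(pos, 1.0, l)
pos = relax_node(pos, 2, 1.0, l)
after = kk_energy(pos, 1.0, l)
```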
Step 20: the fusion of the three brain map networks in the step 10 is completed by utilizing a consensus iterative algorithm, and a fusion map is obtained; the process flow is the same as step 50 described below, see step 50 for details.
Step 30: and replacing nodes in the fusion map with a neuron model by using an adaptive exponential integration-release model to obtain an original BNN (Brain Network).
An adaptive exponential integration-issuance model was introduced, a model for describing the electrical activity of neurons, which is an extension of the traditional exponential integration-issuance model (Exponential Integrate-and-Fire, EIF model), and an adaptive mechanism was introduced to better model the behavior of neurons, whose equations are generally expressed as follows:
(1.9)
where V is the membrane potential of the neuron, Is the resting membrane potential, R is the membrane resistance, I is the input current,/>Is the film time constant,/>Is a threshold potential,/>Is transmembrane potential sensitivity,/>Is an adaptive current component.
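A forward-Euler sketch of the AdEx dynamics in (1.9). The numerical parameter values and the reset rule (reset V to `V_reset` and increment the adaptation current by `b` on each spike) are standard illustrative choices, not values taken from the patent:

```python
import numpy as np

def adex_trace(I, dt=0.1, tau_m=10.0, E_L=-70.0, R=1.0,
               V_T=-50.0, delta_T=2.0, V_peak=0.0, V_reset=-70.0,
               tau_w=100.0, a=0.0, b=5.0):
    """Forward-Euler integration of the adaptive exponential
    integrate-and-fire equation (1.9): membrane potential V with an
    exponential spike term and an adaptation current w."""
    V, w = E_L, 0.0
    spikes, trace = 0, []
    for i_t in I:
        dV = (-(V - E_L) + delta_T * np.exp((V - V_T) / delta_T)
              + R * i_t - R * w) / tau_m
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:       # spike: reset the potential, strengthen adaptation
            V = V_reset
            w += b
            spikes += 1
        trace.append(V)
    return np.array(trace), spikes

# constant suprathreshold drive produces repetitive, adapting spiking
trace, n_spikes = adex_trace(I=np.full(2000, 30.0))
```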
Step 40: as shown in fig. 2, the events of the event streams are input into the original BNN one by one in batches (one event stream per batch); after each batch updates the weights of the original BNN, similarity-based thresholding is applied, yielding groups BNN1, BNN2 … BNNn.
The cross-entropy loss function used for the subsequent weight updates is constructed as:
$$L = -\sum_i y_i \log \hat{y}_i \quad (2.1)$$
$$w_{\text{new}} = w_{\text{old}} - \eta\, g \quad (2.2)$$
where $y_i$ is the true label, $\hat{y}_i$ is the prediction output of the network, $w_{\text{new}}$ and $w_{\text{old}}$ are the new and old weights, $\eta$ is the learning rate, and g is the gradient of the loss function with respect to the weight.
After the new BNN is obtained by updating the weights of the original BNN with the cross-entropy loss, the new BNN is compared with the original BNN: the similarity of each node and edge between the new BNN and the original BNN is computed, and according to the set lower similarity threshold, the edges or nodes below the threshold are deleted (since deleting a node has a larger impact, the node threshold is adjusted appropriately). The similarity $S_{e_i}$ of edge i (3.1) is computed from the new and old weights of edge i together with the new and old weights of the nodes connected to edge i; the similarity $S_{v_j}$ of node j (3.2) is computed from the new and old weights of node j together with the new and old weights of the edges connected to node j. The smaller the change in these weights, the higher the similarity.
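The update-then-prune step can be sketched as below. Note that the similarity measure used here (inverse absolute weight change) is only an assumed stand-in for the patent's formulas (3.1)-(3.2), which are not reproduced in this text:

```python
import numpy as np

def cross_entropy(y_true, y_pred):
    """L = -sum_i y_i * log(y_hat_i), equation (2.1)."""
    return -float(np.sum(y_true * np.log(y_pred)))

def update_and_prune(edges, grads, lr=0.1, sim_threshold=0.5):
    """One gradient step w_new = w_old - lr * g (2.2), then prune edges
    whose similarity to their old weight falls below a threshold.
    The similarity form here is an assumption for illustration."""
    kept = {}
    for e, w_old in edges.items():
        w_new = w_old - lr * grads[e]
        sim = 1.0 / (1.0 + abs(w_new - w_old))  # small change -> high similarity
        if sim >= sim_threshold:
            kept[e] = w_new
    return kept

edges = {("a", "b"): 1.0, ("b", "c"): 0.5}
grads = {("a", "b"): 0.2, ("b", "c"): 50.0}  # large gradient -> large change -> pruned
kept = update_and_prune(edges, grads)
```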
Step 50: and successively fusing the BNN1 and BNN2 … BNNn by using a graph fusion method of a consensus iterative algorithm. For each group of fusion of BNN1 and BNN2 … BNNn, fusion of BNN1 and BNN2 is performed to obtain BNN 1; then fusing the obtained BNN1 with BNN3 to obtain BNN2, and obtaining BNNn-1 by pushing the same; for the fusion of different sets of BNN1 and BNN2 … BNNn, BNNn-1 obtained by the fusion of the previous set is fused with the BNN1 of the next set, and a plurality of sets of BNN1 and BNN2 … BNNn are fused successively.
Using a consensus iterative algorithm, obtaining consensus graphs through union processing for each graph to be fused, and then respectively distributing weights of each graph to be fused through similarity between each graph to be fused and the consensus graph; performing union processing on the two images to be fused again through the weights, so as to obtain a new consensus image; iterating K times to obtain a final consensus diagram; and (5) carrying out threshold processing on the final consensus diagram to obtain a fusion diagram.
As shown in fig. 3, the method specifically comprises the following steps:
step 501: performing union processing on the initial graph to obtain a consensus graph:
(4.1)
Wherein, Is a consensus diagram,/>Is the initial graph.
Step 502: and calculating the similarity between each initial graph and the consensus graph, wherein the calculation formula is as follows:
(4.2)
Wherein, For the similarity of initial graph i,/>And/>For the edges and nodes of the initial graph i,/>And/>And a is the initial weight of each side and each node of the set initial graph and the consensus graph, and is a constant.
According to the calculated similarity, the weight corresponding to each initial graph is distributed:
(4.3)
Wherein, Is the weight of the initial graph,/>Is the sum of the similarity of the initial graphs.
Step 503: and performing union processing on the initial graph again through the weight, and fusing a new consensus graph, wherein for each edge and each node of the new consensus graph, the method comprises the following steps:
(4.4)
Wherein, Weights for edges or nodes i in the new consensus graph,/>Is the weight corresponding to the initial graph j,/>Is the weight of the edge or node i in the initial graph j.
Step 504: repeating step 502 and step 503 until reaching the set iteration number K to obtain the final consensus diagram
Step 505: the final consensus diagram is obtained through the set threshold value of the edge and the nodeAnd erasing the edges and nodes with the weight lower than the threshold value to obtain a fusion graph.
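Steps 501-505 can be sketched as follows. The Jaccard overlap used for graph-to-consensus similarity is an assumed stand-in for (4.2), and graphs are represented as `{edge: weight}` dicts for brevity:

```python
def fuse_graphs(graphs, iters=3, edge_threshold=0.2):
    """Consensus-iteration fusion over graphs given as {edge: weight}
    dicts: union the graphs into a consensus, weight each graph by its
    similarity to the consensus, rebuild the consensus as the weighted
    sum (4.4), iterate, then erase low-weight edges (step 505)."""
    weights = [1.0 / len(graphs)] * len(graphs)
    consensus = {}
    for _ in range(iters):
        # weighted union, per (4.1)/(4.4)
        consensus = {}
        for gw, g in zip(weights, graphs):
            for e, w in g.items():
                consensus[e] = consensus.get(e, 0.0) + gw * w
        # similarity of each graph to the consensus (assumed Jaccard form),
        # then normalise into graph weights per (4.3)
        sims = [len(set(g) & set(consensus)) / len(set(g) | set(consensus))
                for g in graphs]
        total = sum(sims)
        weights = [s / total for s in sims]
    return {e: w for e, w in consensus.items() if w >= edge_threshold}

g1 = {("a", "b"): 1.0, ("b", "c"): 1.0}
g2 = {("a", "b"): 1.0, ("c", "d"): 0.1}
fused = fuse_graphs([g1, g2])
```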
Step 60: and performing experience replay on the events needing to be replayed by using similarity calculation to obtain a constructed BNN model.
For each group BNN1, BNN2 … BNNn, the similarity of BNN1 to the original BNN, of BNN2 to the original BNN, …, and of BNNn to the original BNN is computed in turn using the node and edge similarity formulas of step 40, namely:
$$S_{\mathrm{BNN}_k} = \frac{1}{n+m}\left(\sum_{i=1}^{n} S_{e_i} + \sum_{j=1}^{m} S_{v_j}\right) \quad (5.1)$$
where $S_{\mathrm{BNN}_k}$ is the similarity of BNNk to the original BNN, $S_{e_i}$ is the similarity of edge i, $S_{v_j}$ is the similarity of node j, and n and m are the total numbers of edges and nodes respectively.
In the summation, to weaken the influence of any single node or edge similarity on the global value, a mapping function is applied to each term (5.2).
For each group BNN1, BNN2 … BNNn, the n similarities are sorted, and the several BNNi with the highest similarity (ranked at the front and meeting a set threshold) are chosen; the events corresponding to those BNNi are experience-replayed, i.e. stored in a pre-prepared replay pool, and the events in the replay pool are appended to the end of the event stream. Each time the events corresponding to several BNNi are stored in the replay pool, the similarities of those BNNi and of the events already stored in the pool are sorted together and the events with higher similarity are kept; the replay pool is updated and the number of events stored in it stays unchanged.
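The fixed-size replay pool update can be sketched as a merge-sort-truncate; `capacity` and the `(similarity, event)` tuples are illustrative choices:

```python
def update_replay_pool(pool, candidates, capacity=5):
    """Keep the `capacity` most similar events: merge the current pool
    with new (similarity, event) candidates, sort by similarity in
    descending order, and truncate, so the pool size stays fixed."""
    merged = sorted(pool + candidates, key=lambda x: x[0], reverse=True)
    return merged[:capacity]

pool = [(0.9, "evt1"), (0.7, "evt2"), (0.6, "evt3"), (0.5, "evt4"), (0.4, "evt5")]
candidates = [(0.8, "evt6"), (0.3, "evt7")]
pool = update_replay_pool(pool, candidates)  # evt6 enters, evt5 and evt7 drop out
```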
Step 70: and (5) obtaining a constructed BNN model through the steps (50) and (60), inputting a social network hot event stream to be predicted into the constructed BNN model, and predicting the subsequent development of the event by using the constructed BNN model.
In summary, improving on traditional anti-forgetting methods, and noting that the essence of catastrophic forgetting is forgetting previously learned content, the invention draws on the working principle of the human brain and combines it with a progressive network in a dynamic architecture, thereby mitigating the catastrophic forgetting caused by the continuous input of event streams over time when predicting social network hot events and improving the accuracy of hot event prediction; the replay pool solves the excessive storage consumption of traditional experience replay; and the design of the new graph fusion algorithm improves the rationality of graph fusion.

Claims (9)

1. A method for predicting low forgetting hot events based on a brain map, characterized by comprising the following steps:
inputting a social network hot event stream to be predicted into a constructed BNN model, and predicting the subsequent development of the events by using the constructed BNN model;
wherein the BNN model is built as follows:
extracting fMRI images of three brain regions involved in inferential prediction;
converting the three brain region fMRI images into brain map networks and fusing them to obtain a fusion graph;
replacing the nodes in the fusion graph with a neuron model to obtain an original BNN;
taking one event stream as one batch, inputting the events of a plurality of event streams into the original BNN batch by batch and one by one, and after each batch updates the weights of the original BNN, performing threshold processing using similarity calculation to obtain several groups BNN1, BNN2, …, BNNn;
sequentially fusing the several groups BNN1, BNN2, …, BNNn, and performing experience replay on the events that need replaying using similarity calculation, to obtain the constructed BNN model.
2. The brain map-based low forgetting hot event prediction method of claim 1, wherein converting the three brain region fMRI images into brain map networks comprises:
converting the fMRI images into time-series data, and constructing a shrinkage covariance matrix from the time-series data;
converting the shrinkage covariance matrix into a correlation coefficient matrix, and constructing a functional brain region network from the correlation coefficient matrix;
visualizing the functional brain region network using the Kamada-Kawai layout algorithm;
for a randomly selected node m in the fMRI imaging, solving for the minimum of the single functional brain region network system energy $E$ using the two-dimensional Newton-Raphson method;
drawing the brain map network from the parameters obtained at the minimum of the system energy $E$;
wherein the shrinkage covariance matrix is:
$\Sigma^{*} = (1-\lambda)\,S + \lambda\,D$, with $S = \frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})(x_i-\bar{x})^{T}$,
where $\Sigma^{*}$ is the shrinkage covariance matrix, $\lambda$ is the smoothing parameter, $S$ is the covariance matrix of the time-series data, $n$ is the number of samples of the time-series data, $x_i$ is the i-th sample, $\bar{x}$ is the sample mean, and $D$ is a diagonal matrix whose off-diagonal elements are 0 and whose diagonal elements are $D_{ii} = \operatorname{tr}(S)/p$, $p$ being the dimension of the matrix $S$;
the correlation coefficient matrix is:
$R_{ij} = \Sigma^{*}_{ij} \,/\, (\sigma_i\,\sigma_j)$,
where $\sigma_i$ and $\sigma_j$ are the standard deviations of variable i and variable j, $\sigma_i = \sqrt{\Sigma^{*}_{ii}}$ and $\sigma_j = \sqrt{\Sigma^{*}_{jj}}$; the elements of the correlation coefficient matrix take values in $[-1, 1]$, and when $|R_{ij}|$ is larger than a set value, node i is determined to be linked with node j.
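As a concrete illustration of the claim-2 pipeline from time series to adjacency matrix, the sketch below implements the shrinkage covariance, the correlation conversion, and the threshold linking in NumPy. The diagonal target $\operatorname{tr}(S)/p$ follows the claim's description of $D$; the smoothing parameter `lam` and the link threshold are assumed inputs, and the function names are illustrative.

```python
# Sketch of claim 2's pipeline: time series -> shrinkage covariance ->
# correlation matrix -> thresholded adjacency. `lam` and `threshold`
# are assumed inputs; names are illustrative, not from the patent.
import numpy as np

def shrinkage_covariance(X, lam):
    """X: (n_samples, p) time-series matrix; lam: smoothing parameter."""
    S = np.cov(X, rowvar=False)            # sample covariance of the series
    p = S.shape[0]
    D = np.eye(p) * (np.trace(S) / p)      # diagonal target, zero off-diagonal
    return (1.0 - lam) * S + lam * D

def correlation_adjacency(X, lam, threshold):
    sigma = shrinkage_covariance(X, lam)
    sd = np.sqrt(np.diag(sigma))           # standard deviations of each variable
    R = sigma / np.outer(sd, sd)           # correlation coefficients in [-1, 1]
    A = (np.abs(R) > threshold).astype(int)  # link i-j when |R_ij| exceeds threshold
    np.fill_diagonal(A, 0)                 # no self-links
    return A
```

Two identical time series come out strongly correlated and therefore linked, while a weakly related series does not, which matches the claim's linking rule.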
3. The brain map-based low forgetting hot spot event prediction method according to claim 2, wherein the minimum of the single functional brain region network system energy $E$ is obtained when the maximum gradient value is smaller than a set threshold;
the single functional brain region network system energy is calculated as:
$E = \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} \frac{1}{2}\,k\,l_{ij}^{2}$, with $l_{ij} = \sqrt{(x_i-x_j)^{2}+(y_i-y_j)^{2}} - d_{ij}$,
where $x$ and $y$ are the coordinates of the functional brain region network nodes, $k$ is a set constant, and $l_{ij}$ is the difference between the length and its equilibrium value $d_{ij}$;
the gradient is calculated as:
$\Delta_m = \sqrt{\left(\partial E/\partial x_m\right)^{2} + \left(\partial E/\partial y_m\right)^{2}}$,
where $\partial E/\partial x_m$ and $\partial E/\partial y_m$ are the partial derivatives of $E$ with respect to $x_m$ and $y_m$:
$\partial E/\partial x_m = \sum_{i \neq m} k\,(x_m-x_i)\left(1 - \dfrac{d_{mi}}{\sqrt{(x_m-x_i)^{2}+(y_m-y_i)^{2}}}\right)$, and analogously for $\partial E/\partial y_m$ with $(y_m-y_i)$ in place of $(x_m-x_i)$.
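The claim-3 quantities can be written down directly in the standard Kamada-Kawai form (an assumption here, since the patent gives the formulas only as images): the system energy sums a spring term over node pairs, and node m is at a local minimum once the gradient magnitude falls below the set threshold.

```python
# Sketch of the claim-3 energy and its gradient for one node m, in the
# standard Kamada-Kawai form (assumed). k[i, j] are spring constants and
# d[i, j] the equilibrium (ideal) pairwise lengths, both given inputs.
import numpy as np

def kk_energy(pos, k, d):
    """pos: (n, 2) node coordinates; returns the total system energy E."""
    E = 0.0
    n = len(pos)
    for i in range(n - 1):
        for j in range(i + 1, n):
            length = np.linalg.norm(pos[i] - pos[j])
            E += 0.5 * k[i, j] * (length - d[i, j]) ** 2
    return E

def kk_gradient(pos, k, d, m):
    """(dE/dx_m, dE/dy_m); node m is at a local minimum when its norm is small."""
    g = np.zeros(2)
    for i in range(len(pos)):
        if i == m:
            continue
        diff = pos[m] - pos[i]
        length = np.linalg.norm(diff)
        g += k[m, i] * diff * (1.0 - d[m, i] / length)
    return g
```

Two nodes placed exactly at their equilibrium distance give zero energy and zero gradient; stretching them raises the energy and produces a restoring gradient.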
4. The brain map-based low forgetting hot event prediction method according to claim 1, wherein the fusion of the brain map networks and the sequential fusion of the several groups BNN1, BNN2, …, BNNn are both performed with a consensus iterative algorithm, comprising:
step a: taking the union of the initial graphs to obtain a consensus graph;
step b: calculating the similarity between each initial graph and the consensus graph, and assigning each initial graph a weight according to its similarity;
step c: taking the union of the weighted initial graphs and fusing them into a new consensus graph;
step d: repeating step b and step c until the set number of iterations K is reached, obtaining the final consensus graph;
step e: according to set edge and node thresholds, erasing from the final consensus graph the edges and nodes whose weight is below the threshold;
the consensus graph is calculated as:
$G_{c} = \bigcup_{i} G_{i}$,
where $G_c$ is the consensus graph and $G_i$ is an initial graph;
the similarity between each initial graph and the consensus graph is calculated from the overlap between the edges $E_i$ and nodes $V_i$ of initial graph i and the edges $E_c$ and nodes $V_c$ of the consensus graph, where $s_i$ denotes the similarity of initial graph i and a is the set initial weight of every edge and node of the initial and consensus graphs, a constant;
the weight assigned to each initial graph is:
$w_i = s_i \big/ \sum_{j} s_j$,
where $w_i$ is the weight of initial graph i and $\sum_j s_j$ is the sum of the similarities of the initial graphs;
the weight of each edge and node of the new consensus graph is:
$\omega_i = \sum_{j} w_j\,\omega_{ij}$,
where $\omega_i$ is the weight of edge or node i in the new consensus graph, $w_j$ is the weight corresponding to initial graph j, and $\omega_{ij}$ is the weight of edge or node i in initial graph j.
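The claim-4 consensus iteration can be sketched over graphs represented simply as dicts mapping elements (nodes or edges) to weights. The overlap-fraction similarity used in step b is an assumed concrete choice, since the patent's similarity formula survives only as an image; `a`, `iters`, and `threshold` stand in for the claim's initial weight, iteration count K, and erasure threshold.

```python
# Sketch of claim 4's consensus iterative fusion. Graphs are dicts
# {element: weight} where an element is a node or an edge label. The
# overlap-fraction similarity is an assumption, not the patent's formula.
def fuse_consensus(initial, a=1.0, iters=3, threshold=0.2):
    """initial: list of dicts {element: weight}; returns the final consensus dict."""
    # Step a: union of all initial graphs, every element at the initial weight a.
    consensus = {e: a for g in initial for e in g}
    for _ in range(iters):
        # Step b: similarity of each initial graph to the consensus
        # (fraction of consensus elements it contains), then per-graph weights.
        sims = [len(set(g) & set(consensus)) / max(len(consensus), 1)
                for g in initial]
        total = sum(sims) or 1.0
        weights = [s / total for s in sims]
        # Step c: new consensus weight of each element is the
        # graph-weighted sum of its weights in the initial graphs.
        consensus = {}
        for w, g in zip(weights, initial):
            for e, we in g.items():
                consensus[e] = consensus.get(e, 0.0) + w * we
    # Step e: erase elements whose weight falls below the threshold.
    return {e: w for e, w in consensus.items() if w >= threshold}
```

For two toy graphs sharing node "A", the shared element keeps full weight while elements seen in only one graph are down-weighted and, if weak enough, erased by the threshold.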
5. The brain map-based low forgetting hot event prediction method according to claim 1, wherein performing threshold processing using similarity calculation after the weights of the original BNN are updated comprises:
updating the weights of the original BNN using a cross-entropy loss function to obtain a new BNN;
calculating the similarity between each node and edge of the new BNN and of the original BNN, and deleting the edges or nodes whose similarity is below a set similarity threshold;
the cross-entropy loss function and the weight update formula are:
$L = -\sum_{i} y_i \log \hat{y}_i$ and $w_{\mathrm{new}} = w_{\mathrm{old}} - \eta\,g$,
where $y_i$ is the true label, $\hat{y}_i$ is the prediction output, $w_{\mathrm{new}}$ and $w_{\mathrm{old}}$ are the new and old weights, $\eta$ is the learning rate, and $g$ is the gradient of the loss function with respect to the weight;
the similarity between each node and edge of the new BNN and the original BNN is calculated from the corresponding new and old weights: $s_{e_i}$, the similarity of edge i, is computed from the new and old weights of edge i together with the new and old weights of the nodes connected to edge i; $s_{v_j}$, the similarity of node j, is computed from the new and old weights of node j together with the new and old weights of the edges connected to node j.
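The claim-5 update rule is ordinary gradient descent on a cross-entropy loss. The sketch below shows both formulas in NumPy, with the BNN forward pass abstracted away: the gradient `g` is passed in directly, and the clipping epsilon is an added numerical-stability assumption not stated in the claim.

```python
# Sketch of claim 5's update: cross-entropy loss L = -sum_i y_i * log(y_hat_i)
# and one gradient step w_new = w_old - lr * g. The BNN forward pass is
# abstracted away; the epsilon clipping is an added stability assumption.
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy between true labels and predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0)     # avoid log(0)
    return -float(np.sum(y_true * np.log(y_pred)))

def update_weights(w_old, g, lr):
    """Plain gradient descent: w_new = w_old - lr * g."""
    return w_old - lr * g
```

A uniform two-class prediction against a one-hot label gives the familiar loss of ln 2, and a step with learning rate 0.1 on gradient 0.5 moves a weight from 1.0 to 0.95.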
6. The brain map-based low forgetting hot event prediction method according to claim 1, wherein performing experience replay on the events that need replaying using similarity calculation comprises:
for each group BNN1, BNN2, …, BNNn, calculating the similarity of each BNNi to the original BNN, sorting the similarities, selecting the several BNNi whose similarity meets a set threshold, storing the events corresponding to these BNNi in a pre-prepared replay pool, and appending them to the end of the corresponding event stream;
each time events corresponding to several BNNi are stored in the replay pool, sorting their similarities together with the similarities of the events already stored in the replay pool, keeping the events with higher similarity, and updating the replay pool so that the number of events it holds remains unchanged;
the similarity between each of BNN1, BNN2, …, BNNn and the original BNN is calculated as:
$S_k = \dfrac{\sum_{i=1}^{n} s_{e_i} + \sum_{j=1}^{m} s_{v_j}}{n + m}$,
where $S_k$ is the similarity of BNNk to the original BNN, $s_{e_i}$ is the similarity of edge i, $s_{v_j}$ is the similarity of node j, and n and m are the total numbers of edges and nodes, respectively.
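Reading the claim-6 glosses (n edge similarities, m node similarities, one network-level score), a natural aggregate is their mean; the one-liner below assumes that averaging form, which the patent's image-only formula does not confirm.

```python
# Assumed network-level similarity for claim 6: the mean of all n edge
# similarities and m node similarities. The averaging form is an assumption.
def bnn_similarity(edge_sims, node_sims):
    n, m = len(edge_sims), len(node_sims)
    return (sum(edge_sims) + sum(node_sims)) / (n + m)
```

For example, two edges with similarities 1.0 and 0.5 plus two nodes with 0.5 and 1.0 average to a network similarity of 0.75.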
7. A brain map-based low forgetting hot spot event prediction device, comprising:
a prediction module, configured to input a social network hot event stream to be predicted into a constructed BNN model and to predict the subsequent development of the events using the constructed BNN model;
wherein the BNN model is built as follows:
extracting fMRI images of three brain regions involved in inferential prediction;
converting the three brain region fMRI images into brain map networks and fusing them to obtain a fusion graph;
replacing the nodes in the fusion graph with a neuron model to obtain an original BNN;
taking one event stream as one batch, inputting the events of a plurality of event streams into the original BNN batch by batch and one by one, and after each batch updates the weights of the original BNN, performing threshold processing using similarity calculation to obtain several groups BNN1, BNN2, …, BNNn;
sequentially fusing the several groups BNN1, BNN2, …, BNNn, and performing experience replay on the events that need replaying using similarity calculation, to obtain the constructed BNN model.
8. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the brain map-based low forgetting hot event prediction method of any one of claims 1 to 6.
9. A computer readable storage medium having stored thereon a computer program, which when executed by a processor, implements the steps of the brain map based low forgetting hot event prediction method of any one of claims 1 to 6.
CN202410232308.2A 2024-03-01 2024-03-01 Method and device for predicting low forgetting hot events based on brain map Active CN117808040B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410232308.2A CN117808040B (en) 2024-03-01 2024-03-01 Method and device for predicting low forgetting hot events based on brain map


Publications (2)

Publication Number Publication Date
CN117808040A CN117808040A (en) 2024-04-02
CN117808040B true CN117808040B (en) 2024-05-14




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant