CN117808040A - Method and device for predicting low forgetting hot events based on brain map

Info

Publication number: CN117808040A
Authority: CN (China)
Prior art keywords: bnn, similarity, brain, graph, original
Legal status: Granted; currently Active
Application number: CN202410232308.2A
Other languages: Chinese (zh)
Other versions: CN117808040B (en)
Inventors: 荣欢, 徐瑞洋, 杨启坤
Current Assignee: Nanjing University of Information Science and Technology
Original Assignee: Nanjing University of Information Science and Technology
Application filed by Nanjing University of Information Science and Technology
Priority: CN202410232308.2A, priority date 2024-03-01; granted as CN117808040B

Abstract

The invention discloses a method and a device for predicting low-forgetting hot events based on a brain map. The method comprises: inputting a social network hot event stream to be predicted into a constructed BNN model, and using the constructed model to predict the subsequent development of events. The BNN model is constructed as follows: extract fMRI images of the brain regions involved in inferential prediction; convert the brain region fMRI images into brain graph networks and fuse them; replace the nodes of the fusion graph with a neuron model to obtain the original BNN; input the events of several event streams into the original BNN one by one in batches, and after each batch updates the weights of the original BNN, apply thresholding based on similarity calculation to obtain several groups of BNN1, BNN2, …, BNNn; fuse the several groups of BNN1, BNN2, …, BNNn in sequence, experience-replay the events that need replaying using similarity calculation, and complete construction of the BNN model. The method and device improve the prediction accuracy of hot events.

Description

Method and device for predicting low forgetting hot events based on brain map
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a method and a device for predicting low-forgetting hot events based on a brain map.
Background
With the advent of the information age, all kinds of news events reach people through social networks, naturally raising public attention to social network hot events. To predict the subsequent development of these hot events, lifelong learning is increasingly being applied.
Lifelong learning, also known as continual learning, incremental learning, or never-ending learning, aims to mimic the human ability to continuously learn and transfer, acquire, and refine knowledge throughout life. A lifelong learning model should be able to identify similarities between incoming knowledge and prior knowledge. When a new task closely resembles previous experience, the learning agent should modify the existing model to perform better; when a new task has little similarity or relevance to previous experience, the system should transfer knowledge to solve the new task. Most popular deep learning algorithms cannot incrementally learn new incoming information, which leads to catastrophic forgetting, or interference: new information can interfere with prior knowledge, degrading the model on old tasks as the new information overwrites it. Catastrophic forgetting refers to the sudden and severe loss of previously acquired information when a learning system acquires new information, and it has long been a fatal weakness of standard artificial neural networks. A learning system must be stable enough to preserve prior knowledge over long periods while retaining enough plasticity to accommodate new learning; these two requirements conflict, constituting the "stability-plasticity dilemma". Current artificial neural networks, including deep neural networks, generally have excessive plasticity, which leads to severe catastrophic forgetting.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a brain-map-based low-forgetting hot event prediction method and device, which solve the catastrophic forgetting caused by the continuous input of event streams and the passage of time when predicting social network hot events.
To solve the above technical problems, the invention adopts the following scheme:
The invention provides a low-forgetting hot event prediction method based on a brain map, comprising the following steps:
inputting a social network hot event stream to be predicted into a constructed BNN model, and predicting the subsequent development of events with the constructed BNN model;
wherein the BNN model is constructed by the following steps:
extracting fMRI images of three brain regions involved in inferential prediction;
converting the three brain region fMRI images into brain graph networks and fusing them to obtain a fusion graph;
replacing the nodes of the fusion graph with a neuron model to obtain the original BNN;
inputting the events of several event streams into the original BNN one by one in batches, one event stream per batch, and after each batch updates the weights of the original BNN, thresholding with similarity calculation to obtain several groups of BNN1, BNN2, …, BNNn;
fusing the several groups of BNN1, BNN2, …, BNNn in sequence, and experience-replaying the events that need replaying using similarity calculation, to obtain the constructed BNN model.
Further, converting the three brain region fMRI images into brain graph networks comprises:
converting the fMRI images into time series data, and constructing a Shrinkage covariance matrix from the time series data;
converting the Shrinkage covariance matrix into a correlation coefficient matrix, and constructing a functional brain region network from the correlation coefficient matrix;
visualizing the functional brain region network with the Kamada-Kawai layout algorithm;
for a randomly selected node $m$ in the fMRI images, solving for the minimum of the energy $E$ of the single functional brain region network system with the two-dimensional Newton-Raphson method;
drawing the brain graph network with the parameters obtained when the energy $E$ of the single functional brain region network system is minimal;
wherein the Shrinkage covariance matrix is:

$\hat{\Sigma} = (1 - \alpha) S + \alpha D$

where $\hat{\Sigma}$ is the Shrinkage covariance matrix, $\alpha$ is the smoothing parameter, $S$ is the covariance matrix of the time series data, $n$ is the number of samples of the time series data, $x_i$ is the $i$-th sample, $\bar{x}$ is the sample mean, and $D$ is a diagonal matrix whose off-diagonal elements are 0 and whose diagonal elements are $D_{jj} = \operatorname{tr}(S)/p$, with $p$ the dimension of the matrix $S$;
the correlation coefficient matrix is:

$\rho_{ij} = \dfrac{\hat{\Sigma}_{ij}}{\sigma_i \sigma_j}$

where $\sigma_i = \sqrt{\hat{\Sigma}_{ii}}$ and $\sigma_j = \sqrt{\hat{\Sigma}_{jj}}$ are the standard deviations of variables $i$ and $j$; the elements of the correlation coefficient matrix take values in $[-1, 1]$; when $|\rho_{ij}|$ is larger than the set value, node $i$ and node $j$ are judged to be connected.
Further, when the maximum value of the gradient is smaller than the set threshold, the minimum of the energy $E$ of the single functional brain region network system is obtained;
wherein the energy of the single functional brain region network system is calculated as:

$E = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \frac{1}{2} k_{ij} \left( \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2} - l_{ij} \right)^2$

where $x$, $y$ are the coordinates of the functional brain region network nodes, $k$ is the set constant, and $l$ is the equilibrium (natural) spring length between nodes;
the gradient is calculated as:

$\Delta_m = \sqrt{\left( \dfrac{\partial E}{\partial x_m} \right)^2 + \left( \dfrac{\partial E}{\partial y_m} \right)^2}$

where $\partial E / \partial x_m$ and $\partial E / \partial y_m$ are the partial derivatives of $E$ with respect to $x_m$ and $y_m$:

$\dfrac{\partial E}{\partial x_m} = \sum_{i \neq m} k_{mi} \left[ (x_m - x_i) - \dfrac{l_{mi}(x_m - x_i)}{\sqrt{(x_m - x_i)^2 + (y_m - y_i)^2}} \right]$

$\dfrac{\partial E}{\partial y_m} = \sum_{i \neq m} k_{mi} \left[ (y_m - y_i) - \dfrac{l_{mi}(y_m - y_i)}{\sqrt{(x_m - x_i)^2 + (y_m - y_i)^2}} \right]$
further, the fusion of the brain graph network and the sequential fusion of the sets of BNN1, BNN2 … BNN are all performed using a consensus iterative algorithm, comprising:
step a: performing union processing on the initial graph to obtain a consensus graph;
step b: calculating the similarity between each initial image and the consensus image, and distributing the weight corresponding to each initial image according to the similarity;
step c: performing union processing on the initial graphs after weight distribution and fusing a new consensus graph;
step d: repeating the step b and the step c until the set iteration times are reachedKObtaining a final consensus diagram
Step e: according to the set threshold value of the edge and the node, the final consensus diagram is obtainedErasing edges and nodes with the weight lower than a threshold value;
the calculation formula of the consensus diagram is as follows:
in the method, in the process of the invention,for consensus diagram, add->Is an initial graph;
the calculation formula of the similarity between each initial graph and the consensus graph is as follows:
in the method, in the process of the invention,is the initial diagramiSimilarity of->And->Is the initial diagramiEdges and nodes of->And->To co-identify the edges and nodes of the graph,athe initial weight of each side and each node of the set initial graph and the consensus graph is constant;
the weight distribution formula corresponding to each initial graph is as follows:
in the method, in the process of the invention,is the initial diagramiWeight of->Is the sum of the similarity of the initial graphs.
The weights of each edge and each node of the new consensus graph are:
in the method, in the process of the invention,for edges or nodes in a new consensus graphiWeight of->Is the initial diagramjWeight corresponding to->Is the initial diagramjMiddle edge or nodeiIs a weight of (2).
Further, thresholding with similarity calculation after the weight update of the original BNN comprises:
updating the weights of the original BNN with the cross entropy loss function to obtain a new BNN;
calculating the similarity of each node and edge between the new BNN and the original BNN, and deleting the edges or nodes below the set similarity threshold;
the cross entropy loss function and the weight update formula are:

$L = -\sum_i y_i \log \hat{y}_i$

$w_{\text{new}} = w_{\text{old}} - \eta g$

where $y$ is the real label, $\hat{y}$ is the prediction output, $w_{\text{new}}$ and $w_{\text{old}}$ are the new and old weights, $\eta$ is the learning rate, and $g$ is the gradient of the loss function with respect to the weight;
the similarity of each edge $i$ between the new BNN and the original BNN is calculated from the new and old weights of edge $i$ together with the new and old weights of the nodes connected to edge $i$; the similarity of each node $j$ is calculated from the new and old weights of node $j$ together with the new and old weights of the edges connected to node $j$.
Further, experience-replaying the events that need replaying using similarity calculation comprises:
for each group BNN1, BNN2, …, BNNn, calculating the similarity of each of BNN1, BNN2, …, BNNn to the original BNN, sorting the similarities, selecting the several BNNi whose similarity meets the set threshold, storing the events corresponding to these BNNi in a preset replay pool, and appending them to the tail of the corresponding event stream;
when events corresponding to new BNNi arrive at an already-filled replay pool, sorting the similarities of the incoming BNNi together with those of the events already stored, keeping the events with higher similarity, and updating the replay pool while keeping the number of stored events unchanged;
the similarity $S_k$ of each BNNk to the original BNN is calculated by averaging the edge similarities $s_i$ over the $n$ edges and the node similarities $s_j$ over the $m$ nodes, where $n$ and $m$ are the total numbers of edges and nodes, respectively.
The invention also provides a device for predicting low-forgetting hot events based on a brain map, comprising:
a prediction module, configured to input the social network hot event stream to be predicted into a constructed BNN model and predict the subsequent development of events with the constructed BNN model;
wherein the BNN model is constructed by the following steps:
extracting fMRI images of three brain regions involved in inferential prediction;
converting the three brain region fMRI images into brain graph networks and fusing them to obtain a fusion graph;
replacing the nodes of the fusion graph with a neuron model to obtain the original BNN;
inputting the events of several event streams into the original BNN one by one in batches, one event stream per batch, and after each batch updates the weights of the original BNN, thresholding with similarity calculation to obtain several groups of BNN1, BNN2, …, BNNn;
fusing the several groups of BNN1, BNN2, …, BNNn in sequence, and experience-replaying the events that need replaying using similarity calculation, to obtain the constructed BNN model.
The invention also provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the brain-map-based low-forgetting hot event prediction method.
The present invention also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the brain-map-based low-forgetting hot event prediction method.
Compared with the prior art, the invention has the following beneficial effects: since the essence of catastrophic forgetting is forgetting previously learned content, the invention improves on traditional anti-forgetting methods by drawing on the working principle of the human brain and combining it with a progressive network in a dynamic architecture, thereby alleviating the catastrophic forgetting caused by the continuous input of event streams and the passage of time when predicting social network hot events and improving the accuracy of hot event prediction; the replay pool solves the excessive storage consumption of traditional experience replay; and the design of the new graph fusion algorithm improves the rationality of graph fusion.
Drawings
Fig. 1 is a flowchart of the brain-map-based low-forgetting hot event prediction method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of step 40 of the brain-map-based low-forgetting hot event prediction method provided by an embodiment of the present invention;
Fig. 3 is a flowchart of the consensus iterative algorithm adopted by the brain-map-based low-forgetting hot event prediction method provided by an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following embodiments serve only to illustrate the technical solution of the invention more clearly and are not intended to limit its scope of protection.
Examples
As shown in Fig. 1, this embodiment provides a brain-map-based low-forgetting hot event prediction method comprising the following steps:
Step 10: extract fMRI images of three brain regions involved in inferential prediction, and convert the three brain region fMRI images into brain graph networks.
The premise processing stage of the brain's inferential prediction process is extracted, and the functional brain regions are constructed from fMRI images of three brain regions taken in the premise integration stage and the verification stage. In fMRI imaging of the brain, each voxel in space can be assigned to a specific brain region; these regions can be referred to as "labels". In data processing, these labels must be converted into a binary mask in order to extract the signal of the corresponding region from the brain region fMRI image.
For each voxel v in the image: if the label value of v equals the label value of the brain region of interest, the corresponding position of the voxel in the mask is marked 1; otherwise it is marked 0. A mask value of 1 indicates that the voxel or time point belongs to the region of interest, so the corresponding data is kept; otherwise it is discarded. The voxel signals in each retained region are averaged to obtain the time series signal representing that region, i.e., the time series data.
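For illustration, a minimal sketch of this masking and averaging step in Python (the names `fmri`, `labels`, and `extract_region_timeseries` are illustrative, and a plain NumPy array stands in for a real fMRI volume):

```python
import numpy as np

def extract_region_timeseries(fmri, labels, region_label):
    """Average the fMRI signal over all voxels of one labelled brain region.

    fmri:   4-D array (x, y, z, t) of voxel signal values.
    labels: 3-D array (x, y, z) assigning each voxel a brain region label.
    region_label: label value of the brain region of interest.
    Returns a 1-D array of length t, the region's time series.
    """
    mask = labels == region_label      # binary mask: 1 inside the region, 0 outside
    region_voxels = fmri[mask]         # (n_voxels, t): data kept where the mask is 1
    return region_voxels.mean(axis=0)  # average voxel signals -> regional time series
```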
The time series data is input into a Shrinkage covariance estimator to construct a covariance matrix. The sample covariance matrix of the time series data is calculated as:

$S = \dfrac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^{T}$ (1.1)

where $n$ is the number of samples of the time series data, $x_i$ is the $i$-th sample, and $\bar{x}$ is the sample mean.
The Shrinkage covariance matrix is constructed as:

$\hat{\Sigma} = (1 - \alpha) S + \alpha D$ (1.2)

where $\hat{\Sigma}$ is the Shrinkage covariance matrix, $\alpha$ is the smoothing parameter, $S$ is the sample covariance matrix, and $D$ is a diagonal matrix whose off-diagonal elements are 0 and whose diagonal elements are $D_{jj} = \operatorname{tr}(S)/p$, with $p$ the dimension of the matrix $S$.
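A NumPy sketch of equations (1.1)-(1.2). The diagonal target $D_{jj} = \operatorname{tr}(S)/p$ follows the usual shrunk-covariance convention, an assumption where the source's rendering of $D$ is unclear:

```python
import numpy as np

def shrinkage_covariance(X, alpha=0.1):
    """Shrinkage covariance estimate, equations (1.1)-(1.2).

    X:     (n, p) array of n time points of p regional time series.
    alpha: smoothing (shrinkage) parameter in [0, 1].
    """
    n, p = X.shape
    centered = X - X.mean(axis=0)
    S = centered.T @ centered / (n - 1)      # sample covariance matrix (1.1)
    D = np.eye(p) * np.trace(S) / p          # diagonal matrix, off-diagonals 0
    return (1 - alpha) * S + alpha * D       # shrunk estimate (1.2)
```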
The Shrinkage covariance matrix is converted into a correlation coefficient matrix:

$\rho_{ij} = \dfrac{\hat{\Sigma}_{ij}}{\sigma_i \sigma_j}$ (1.3)

where $\sigma_i = \sqrt{\hat{\Sigma}_{ii}}$ and $\sigma_j = \sqrt{\hat{\Sigma}_{jj}}$ are the standard deviations of variables $i$ and $j$; the elements of the correlation coefficient matrix take values in $[-1, 1]$, with $\rho_{ij} = \rho_{ji}$.
When $|\rho_{ij}|$ (equal to $|\rho_{ji}|$, the two being the same) is greater than 0.5, node $i$ and node $j$ are considered connected, and the functional brain region network is thus constructed.
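The conversion to correlations and the 0.5 thresholding can be sketched as follows (building on `shrinkage_covariance` above):

```python
import numpy as np

def correlation_adjacency(sigma_hat, threshold=0.5):
    """Equation (1.3) plus thresholding: covariance -> correlations -> adjacency."""
    sd = np.sqrt(np.diag(sigma_hat))       # standard deviations sigma_i
    corr = sigma_hat / np.outer(sd, sd)    # rho_ij = Sigma_ij / (sigma_i * sigma_j)
    adj = np.abs(corr) > threshold         # connect nodes i, j when |rho_ij| > 0.5
    np.fill_diagonal(adj, False)           # drop self-connections
    return adj
```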
In the drawing space, the geometric distance between two brain regions approximates the shortest path length between them. The functional brain region network is visualized with the Kamada-Kawai layout algorithm to obtain the topological structure of the brain graph network. The Kamada-Kawai layout algorithm is inspired by the elastic potential energy of springs: the topology of the brain graph network is obtained by minimizing the energy (elastic potential energy) of the whole functional brain region network system. For a randomly selected node $m$ in the brain region fMRI image, the two-dimensional Newton-Raphson method solves for the minimum of the energy $E$ of the single functional brain region network system.
The energy of the single functional brain region network system is calculated as:

$E = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \frac{1}{2} k_{ij} \left( \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2} - l_{ij} \right)^2$ (1.4)

where $x$ and $y$ are the coordinates of the functional brain region network nodes, $k$ is the set constant (the spring stiffness), and $l$ is the equilibrium (natural) spring length between nodes.
When $E$ is at a local minimum with respect to node $m$, the partial derivatives of $E$ in $x_m$ and $y_m$ are both 0:

$\dfrac{\partial E}{\partial x_m} = \dfrac{\partial E}{\partial y_m} = 0$ (1.5)

The partial derivatives of $E$ are:

$\dfrac{\partial E}{\partial x_m} = \sum_{i \neq m} k_{mi} \left[ (x_m - x_i) - \dfrac{l_{mi}(x_m - x_i)}{\sqrt{(x_m - x_i)^2 + (y_m - y_i)^2}} \right]$ (1.6)

$\dfrac{\partial E}{\partial y_m} = \sum_{i \neq m} k_{mi} \left[ (y_m - y_i) - \dfrac{l_{mi}(y_m - y_i)}{\sqrt{(x_m - x_i)^2 + (y_m - y_i)^2}} \right]$ (1.7)

The local minimum of $E$ is found from the above formulas.
The gradient of node $m$ is calculated as:

$\Delta_m = \sqrt{\left( \dfrac{\partial E}{\partial x_m} \right)^2 + \left( \dfrac{\partial E}{\partial y_m} \right)^2}$ (1.8)

When the maximum of $\Delta_m$ over all nodes is smaller than a set threshold (manually defined, e.g. $10^{-5}$, adjusted as needed), the minimum of the energy $E$ of the single functional brain region network system is obtained. The local minimum of $E$ found above is thus verified, via the maximum gradient, to be an actual minimum.
With the $x$, $y$ parameters obtained when the single functional brain region network system energy $E$ is minimal, the brain graph network is drawn.
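In practice the Kamada-Kawai layout is available off the shelf; the sketch below uses networkx, whose `kamada_kawai_layout` minimises the same spring energy as (1.4), though internally with a different optimiser than the node-wise Newton-Raphson described above (a small random adjacency stands in for the real functional brain region network):

```python
import numpy as np
import networkx as nx

# Stand-in for the thresholded adjacency matrix built in step 10.
rng = np.random.default_rng(0)
series = rng.normal(size=(50, 8))                  # 50 time points, 8 brain regions
corr = np.corrcoef(series, rowvar=False)
adj = (np.abs(corr) > 0.2) & ~np.eye(8, dtype=bool)

G = nx.from_numpy_array(adj.astype(int))           # functional brain region network
pos = nx.kamada_kawai_layout(G, dim=2)             # {node: (x, y)} at minimal energy
```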
Step 20: the three brain graph networks of step 10 are fused with a consensus iterative algorithm to obtain the fusion graph; the processing flow is the same as step 50 below, see step 50 for details.
Step 30: the nodes in the fusion graph are replaced with a neuron model, using the adaptive exponential integrate-and-fire model, to obtain the original BNN (Brain Network).
The adaptive exponential integrate-and-fire model is introduced: a model describing the electrical activity of neurons that extends the traditional exponential integrate-and-fire model (Exponential Integrate-and-Fire, EIF) with an adaptation mechanism to better model neuronal behavior. Its equations are generally expressed as:

$\tau_m \dfrac{dV}{dt} = -(V - E_L) + \Delta_T \exp\!\left( \dfrac{V - V_T}{\Delta_T} \right) + R (I - w)$ (1.9)

$\tau_w \dfrac{dw}{dt} = a (V - E_L) - w$

where $V$ is the membrane potential of the neuron, $E_L$ is the resting membrane potential, $R$ is the membrane resistance, $I$ is the input current, $\tau_m$ is the membrane time constant, $V_T$ is the threshold potential, $\Delta_T$ is the transmembrane potential sensitivity (slope factor), and $w$ is the adaptive current component, with time constant $\tau_w$ and coupling parameter $a$.
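A minimal Euler-integration sketch of the AdEx dynamics in (1.9); all parameter values are illustrative, not values from the patent:

```python
import numpy as np

def adex_step(V, w, I, dt=0.1, E_L=-70.0, R=10.0, tau_m=20.0, V_T=-50.0,
              delta_T=2.0, a=2.0, b=0.5, tau_w=100.0, V_peak=0.0, V_reset=-65.0):
    """One Euler step of the adaptive exponential integrate-and-fire neuron.

    V: membrane potential (mV), w: adaptation current, I: input current.
    Returns the updated (V, w, spiked).
    """
    dV = (-(V - E_L) + delta_T * np.exp((V - V_T) / delta_T) + R * (I - w)) / tau_m
    dw = (a * (V - E_L) - w) / tau_w
    V, w = V + dt * dV, w + dt * dw
    if V >= V_peak:                     # spike emitted: reset potential, bump adaptation
        return V_reset, w + b, True
    return V, w, False
```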
Step 40: as shown in Fig. 2, the events in the event streams are input into the original BNN one by one in batches (one event stream per batch); after each batch updates the weights of the original BNN, thresholding with similarity calculation is performed, yielding several groups of BNN1, BNN2, …, BNNn.
The cross entropy loss function is constructed as follows, and the weights are subsequently updated:

$L = -\sum_i y_i \log \hat{y}_i$ (2.1)

$w_{\text{new}} = w_{\text{old}} - \eta g$ (2.2)

where $y$ is the real label, $\hat{y}$ is the prediction output of the network, $w_{\text{new}}$ and $w_{\text{old}}$ are the new and old weights, $\eta$ is the learning rate, and $g$ is the gradient of the loss function with respect to the weight.
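Equations (2.1)-(2.2) in NumPy form (a sketch; `eps` guards the logarithm and is not part of the patent's formulas):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """L = -sum_i y_i * log(y_hat_i), equation (2.1)."""
    return -np.sum(y_true * np.log(y_pred + eps))

def update_weight(w_old, g, eta=0.01):
    """w_new = w_old - eta * g, equation (2.2)."""
    return w_old - eta * g
```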
After the weights of the original BNN are updated with the cross entropy loss function to obtain the new BNN, the new BNN is compared with the original BNN: the similarity of each node and edge between the new BNN and the original BNN is calculated, and according to the set lower similarity threshold, edges or nodes below the threshold are deleted (since deleting a node has a larger impact, the node threshold is adjusted appropriately). The similarities of edges and nodes (the smaller the change in weights, the higher the similarity) are given by equations (3.1) and (3.2): the similarity $s_i$ of edge $i$ is computed from the new and old weights of edge $i$ together with the new and old weights of the nodes connected to edge $i$; the similarity $s_j$ of node $j$ is computed from the new and old weights of node $j$ together with the new and old weights of the edges connected to node $j$.
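Since the exact formulas (3.1)-(3.2) are given only as images in the source, the pruning step can be sketched with a stand-in similarity -- one minus the mean relative weight change of an element and its neighbours -- which is an assumption, not the patent's exact formula:

```python
import numpy as np

def prune_bnn(edge_new, edge_old, node_new, node_old, endpoints,
              edge_thr=0.3, node_thr=0.5):
    """Delete edges/nodes whose similarity to the pre-update BNN is below threshold.

    endpoints: (n_edges, 2) array of the two node ids of each edge.
    Returns boolean keep-masks for edges and nodes. The node threshold is set
    higher, mirroring the text's note that deleting a node has larger impact.
    """
    def change(new, old):                      # relative weight change
        return np.abs(new - old) / (np.abs(old) + 1e-12)

    e_chg, v_chg = change(edge_new, edge_old), change(node_new, node_old)

    # edge similarity: own change blended with its two endpoint nodes' changes
    edge_sim = 1.0 - 0.5 * (e_chg + v_chg[endpoints].mean(axis=1))
    keep_edges = edge_sim >= edge_thr

    # node similarity: own change blended with the mean change of incident edges
    keep_nodes = np.ones(len(v_chg), dtype=bool)
    for j in range(len(v_chg)):
        incident = (endpoints == j).any(axis=1)
        mean_e = e_chg[incident].mean() if incident.any() else 0.0
        keep_nodes[j] = 1.0 - 0.5 * (v_chg[j] + mean_e) >= node_thr
    return keep_edges, keep_nodes
```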
Step 50: the several groups of BNN1, BNN2, …, BNNn are fused in sequence by the graph fusion method of the consensus iterative algorithm. Within each group BNN1, BNN2, …, BNNn: BNN1 and BNN2 are fused to obtain BNN1′; the resulting BNN1′ is then fused with BNN3 to obtain BNN2′, and so on, until BNN(n−1)′ is obtained. Across the different groups, the BNN(n−1)′ obtained from the previous group's fusion is fused with the BNN1 of the next group, completing the fusion of the several groups in sequence.
Using the consensus iterative algorithm, a consensus graph is first obtained from the graphs to be fused by union processing; then each graph to be fused is assigned a weight according to its similarity to the consensus graph; the graphs to be fused are union-processed again with these weights to obtain a new consensus graph. This is iterated $K$ times to obtain the final consensus graph, and thresholding the final consensus graph yields the fusion graph.
As shown in Fig. 3, the procedure comprises the following steps:
step 501: performing union processing on the initial graph to obtain a consensus graph:
(4.1)
wherein,for consensus diagram, add->Is the initial graph.
Step 502: the similarity between each initial graph and the consensus graph is calculated (4.2) from the edges $E_i$ and nodes $V_i$ of initial graph $i$ and the edges $E_c$ and nodes $V_c$ of the consensus graph, where $S_i$ denotes the similarity of initial graph $i$ and $a$ is a set constant, the initial weight of every edge and node of the initial graphs and the consensus graph.
According to the calculated similarities, each initial graph is assigned a weight:

$W_i = \dfrac{S_i}{\sum_j S_j}$ (4.3)

where $W_i$ is the weight of initial graph $i$ and $\sum_j S_j$ is the sum of the similarities of the initial graphs.
Step 503: union processing is performed on the initial graphs again using these weights, fusing a new consensus graph; each edge and node of the new consensus graph has weight:

$\omega_i = \sum_j W_j \, \omega_i^{(j)}$ (4.4)

where $\omega_i$ is the weight of edge or node $i$ in the new consensus graph, $W_j$ is the weight of initial graph $j$, and $\omega_i^{(j)}$ is the weight of edge or node $i$ in initial graph $j$.
Step 504: steps 502 and 503 are repeated until the set number of iterations $K$ is reached, yielding the final consensus graph.
Step 505: the edges and nodes whose weights are lower than the set edge and node thresholds are erased from the final consensus graph, yielding the fusion graph.
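Steps 501-505 can be sketched end to end with dictionary-based graphs (edge/node identifiers mapped to weights). The overlap-based similarity in step 502 is an assumed stand-in for formula (4.2), which the source gives only as an image:

```python
def consensus_fusion(graphs, K=5, a=1.0, threshold=0.4):
    """Consensus iterative graph fusion, steps 501-505.

    graphs: list of dicts mapping an edge/node identifier to its weight.
    """
    elements = {e for g in graphs for e in g}
    consensus = {e: a for e in elements}        # step 501: union, initial weight a (4.1)

    for _ in range(K):                          # step 504: iterate K times
        # step 502: similarity of each initial graph to the consensus (stand-in measure)
        sims = [sum(consensus[e] for e in g) for g in graphs]
        total = sum(sims) or 1.0
        W = [s / total for s in sims]           # W_i = S_i / sum_j S_j  (4.3)
        # step 503: weighted union -> weights of the new consensus graph (4.4)
        consensus = {e: sum(W[j] * g.get(e, 0.0) for j, g in enumerate(graphs))
                     for e in elements}
    # step 505: erase edges/nodes whose weight is below the set threshold
    return {e: w for e, w in consensus.items() if w >= threshold}

# Example: fuse three small graphs sharing some elements.
fused = consensus_fusion([{"e1": 1.0, "n1": 1.0}, {"e1": 0.8, "n2": 0.6},
                          {"e1": 0.9, "n1": 0.7, "n2": 0.4}])
```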
Step 60: experience replay is performed on the events that need replaying using similarity calculation, obtaining the constructed BNN model.
For each group BNN1, BNN2, …, BNNn, the similarity of BNN1 to the original BNN, of BNN2 to the original BNN, …, of BNNn to the original BNN is calculated in turn with the node and edge similarity formulas of step 40, namely (5.1): the similarity $S_k$ of BNNk to the original BNN is obtained by averaging the edge similarities $s_i$ over the $n$ edges and the node similarities $s_j$ over the $m$ nodes, where $n$ and $m$ are the total numbers of edges and nodes, respectively.
In the summation, to keep the similarity of a single node or edge from being diluted in the global measure, a mapping function is applied to each similarity term, namely (5.2).
for each group BNN1, BNN2 … BNN, for the abovenSorting the similarity, selecting a plurality of BNNI corresponding to the higher similarity (sorting is positioned in front, and the selected BNNI meets a set threshold), and performing experience replay on a plurality of events corresponding to the plurality of BNNI, namely storing the events corresponding to the plurality of BNNI into a pre-prepared replay pool, and adding the events in the replay pool to the tail of an event stream; and when the events corresponding to the BNNI are stored in the replay pool, sorting the similarities corresponding to the BNNI and the similarities corresponding to the events stored in the replay pool, selecting the events with higher similarities for storing, updating the replay pool and keeping the number of the events stored in the replay pool unchanged.
Step 70: the constructed BNN model is obtained through steps 50 and 60; the social network hot event stream to be predicted is input into the constructed BNN model, which predicts the subsequent development of events.
In summary, since the essence of catastrophic forgetting is forgetting previously learned content, the invention improves on traditional anti-forgetting methods by drawing on the working principle of the human brain and combining it with a progressive network in a dynamic architecture, alleviating the catastrophic forgetting caused by the continuous input of event streams over time when predicting social network hot events and improving the accuracy of hot event prediction; the replay pool solves the excessive storage consumption of traditional experience replay; and the design of the new graph fusion algorithm improves the rationality of graph fusion.

Claims (9)

1. A method for predicting low-forgetting hot events based on a brain map, characterized by comprising the following steps:
inputting a social network hot event stream to be predicted into a constructed BNN model, and predicting the subsequent development of events with the constructed BNN model;
wherein the BNN model is constructed by the following steps:
extracting fMRI images of three brain regions involved in inferential prediction;
converting the three brain region fMRI images into brain graph networks and fusing them to obtain a fusion graph;
replacing the nodes of the fusion graph with a neuron model to obtain the original BNN;
inputting the events of several event streams into the original BNN one by one in batches, one event stream per batch, and after each batch updates the weights of the original BNN, thresholding with similarity calculation to obtain several groups of BNN1, BNN2, …, BNNn;
fusing the several groups of BNN1, BNN2, …, BNNn in sequence, and experience-replaying the events that need replaying using similarity calculation, to obtain the constructed BNN model.
2. The method for predicting low-forgetting hot events based on a brain map according to claim 1, characterized in that converting the three brain region fMRI images into brain graph networks comprises:
converting the fMRI images into time series data, and constructing a Shrinkage covariance matrix from the time series data;
converting the Shrinkage covariance matrix into a correlation coefficient matrix, and constructing a functional brain region network from the correlation coefficient matrix;
visualizing the functional brain region network with the Kamada-Kawai layout algorithm;
for a randomly selected node $m$ in the fMRI images, solving for the minimum of the energy $E$ of the single functional brain region network system with the two-dimensional Newton-Raphson method;
drawing the brain graph network with the parameters obtained when the energy $E$ of the single functional brain region network system is minimal;
wherein the Shrinkage covariance matrix is:

$\hat{\Sigma} = (1 - \alpha) S + \alpha D$

where $\hat{\Sigma}$ is the Shrinkage covariance matrix, $\alpha$ is the smoothing parameter, $S$ is the covariance matrix of the time series data, $n$ is the number of samples of the time series data, $x_i$ is the $i$-th sample, $\bar{x}$ is the sample mean, and $D$ is a diagonal matrix whose off-diagonal elements are 0 and whose diagonal elements are $D_{jj} = \operatorname{tr}(S)/p$, with $p$ the dimension of the matrix $S$;
the correlation coefficient matrix is:

$\rho_{ij} = \dfrac{\hat{\Sigma}_{ij}}{\sigma_i \sigma_j}$

where $\sigma_i = \sqrt{\hat{\Sigma}_{ii}}$ and $\sigma_j = \sqrt{\hat{\Sigma}_{jj}}$ are the standard deviations of variables $i$ and $j$; the elements of the correlation coefficient matrix take values in $[-1, 1]$; when $|\rho_{ij}|$ is larger than the set value, node $i$ and node $j$ are judged to be connected.
3. The method for predicting low-forgetting hot events based on a brain map according to claim 2, characterized in that when the maximum value of the gradient is smaller than the set threshold, the minimum of the energy $E$ of the single functional brain region network system is obtained;
the energy of the single functional brain region network system being calculated as:

$E = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \frac{1}{2} k_{ij} \left( \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2} - l_{ij} \right)^2$

where $x$, $y$ are the coordinates of the functional brain region network nodes, $k$ is the set constant, and $l$ is the equilibrium (natural) spring length between nodes;
the gradient being calculated as:

$\Delta_m = \sqrt{\left( \dfrac{\partial E}{\partial x_m} \right)^2 + \left( \dfrac{\partial E}{\partial y_m} \right)^2}$

where $\partial E / \partial x_m$ and $\partial E / \partial y_m$ are the partial derivatives of $E$ with respect to $x_m$ and $y_m$:

$\dfrac{\partial E}{\partial x_m} = \sum_{i \neq m} k_{mi} \left[ (x_m - x_i) - \dfrac{l_{mi}(x_m - x_i)}{\sqrt{(x_m - x_i)^2 + (y_m - y_i)^2}} \right]$

$\dfrac{\partial E}{\partial y_m} = \sum_{i \neq m} k_{mi} \left[ (y_m - y_i) - \dfrac{l_{mi}(y_m - y_i)}{\sqrt{(x_m - x_i)^2 + (y_m - y_i)^2}} \right]$
4. The method for predicting low-forgetting hot events based on a brain map according to claim 1, characterized in that the fusion of the brain graph networks and the successive fusion of the several groups of BNN1, BNN2, …, BNNn are both performed with a consensus iterative algorithm, comprising:
step a: performing union processing on the initial graphs to obtain a consensus graph;
step b: calculating the similarity between each initial graph and the consensus graph, and assigning each initial graph a weight according to its similarity;
step c: performing union processing on the weighted initial graphs and fusing a new consensus graph;
step d: repeating step b and step c until the set number of iterations $K$ is reached, obtaining the final consensus graph;
step e: erasing from the final consensus graph the edges and nodes whose weights are lower than the set edge and node thresholds;
the consensus graph being calculated as:

$G_c = \bigcup_{i=1}^{N} G_i$

where $G_c$ is the consensus graph and $G_i$ are the initial graphs;
the similarity between each initial graph and the consensus graph being calculated from the edges $E_i$ and nodes $V_i$ of initial graph $i$ and the edges $E_c$ and nodes $V_c$ of the consensus graph, where $S_i$ is the similarity of initial graph $i$ and $a$ is a set constant, the initial weight of every edge and node of the initial graphs and the consensus graph;
the weight assigned to each initial graph being:

$W_i = \dfrac{S_i}{\sum_j S_j}$

where $W_i$ is the weight of initial graph $i$ and $\sum_j S_j$ is the sum of the similarities of the initial graphs;
the weight of each edge and node of the new consensus graph being:

$\omega_i = \sum_j W_j \, \omega_i^{(j)}$

where $\omega_i$ is the weight of edge or node $i$ in the new consensus graph, $W_j$ is the weight of initial graph $j$, and $\omega_i^{(j)}$ is the weight of edge or node $i$ in initial graph $j$.
5. The method for predicting low-forgetting hot events based on a brain map according to claim 1, characterized in that thresholding with similarity calculation after the weight update of the original BNN comprises:
updating the weights of the original BNN with the cross entropy loss function to obtain a new BNN;
calculating the similarity of each node and edge between the new BNN and the original BNN, and deleting the edges or nodes below the set similarity threshold;
the cross entropy loss function and the weight update formula being:

$L = -\sum_i y_i \log \hat{y}_i$

$w_{\text{new}} = w_{\text{old}} - \eta g$

where $y$ is the real label, $\hat{y}$ is the prediction output, $w_{\text{new}}$ and $w_{\text{old}}$ are the new and old weights, $\eta$ is the learning rate, and $g$ is the gradient of the loss function with respect to the weight;
the similarity of each edge $i$ between the new BNN and the original BNN being calculated from the new and old weights of edge $i$ together with the new and old weights of the nodes connected to edge $i$, and the similarity of each node $j$ being calculated from the new and old weights of node $j$ together with the new and old weights of the edges connected to node $j$.
6. The method for predicting low-forgetting hot events based on a brain map according to claim 1, characterized in that experience-replaying the events that need replaying using similarity calculation comprises:
for each group BNN1, BNN2, …, BNNn, calculating the similarity of each of BNN1, BNN2, …, BNNn to the original BNN, sorting the similarities, selecting the several BNNi whose similarity meets the set threshold, storing the events corresponding to these BNNi in a preset replay pool, and appending them to the tail of the corresponding event stream;
when events corresponding to new BNNi arrive at an already-filled replay pool, sorting the similarities of the incoming BNNi together with those of the events already stored, keeping the events with higher similarity, and updating the replay pool while keeping the number of stored events unchanged;
the similarity $S_k$ of each BNNk to the original BNN being calculated by averaging the edge similarities $s_i$ over the $n$ edges and the node similarities $s_j$ over the $m$ nodes, where $n$ and $m$ are the total numbers of edges and nodes, respectively.
7. A device for predicting low-forgetting hot events based on a brain map, characterized by comprising:
a prediction module, configured to input the social network hot event stream to be predicted into a constructed BNN model and predict the subsequent development of events with the constructed BNN model;
wherein the BNN model is constructed by the following steps:
extracting fMRI images of three brain regions involved in inferential prediction;
converting the three brain region fMRI images into brain graph networks and fusing them to obtain a fusion graph;
replacing the nodes of the fusion graph with a neuron model to obtain the original BNN;
inputting the events of several event streams into the original BNN one by one in batches, one event stream per batch, and after each batch updates the weights of the original BNN, thresholding with similarity calculation to obtain several groups of BNN1, BNN2, …, BNNn;
fusing the several groups of BNN1, BNN2, …, BNNn in sequence, and experience-replaying the events that need replaying using similarity calculation, to obtain the constructed BNN model.
8. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the brain-map-based low-forgetting hot event prediction method of any of claims 1 to 6.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the brain-map-based low-forgetting hot event prediction method of any of claims 1 to 6.
CN202410232308.2A 2024-03-01 Method and device for predicting low forgetting hot events based on brain map Active CN117808040B (en)

Priority Applications (1)

Application Number: CN202410232308.2A (CN117808040B), Priority Date: 2024-03-01
Title: Method and device for predicting low forgetting hot events based on brain map

Publications (2)

Publication Number  Publication Date
CN117808040A  2024-04-02
CN117808040B  2024-05-14


Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109871259A (en) * 2013-08-16 2019-06-11 运软网络科技(上海)有限公司 A kind of imitative brain calculates the method and system of virtualization
WO2018175698A1 (en) * 2017-03-22 2018-09-27 Larsx Continuously learning and optimizing artificial intelligence (ai) adaptive neural network (ann) computer modeling methods and systems
US20200410299A1 (en) * 2018-04-02 2020-12-31 King Abdullah University Of Science And Technology Incremental learning method through deep learning and support data
US20210383158A1 (en) * 2020-05-26 2021-12-09 Lg Electronics Inc. Online class-incremental continual learning with adversarial shapley value
US20220027672A1 (en) * 2020-07-27 2022-01-27 Nvidia Corporation Label Generation Using Neural Networks
CN112115998A (en) * 2020-09-11 2020-12-22 昆明理工大学 Method for overcoming catastrophic forgetting based on anti-incremental clustering dynamic routing network
WO2022248676A1 (en) * 2021-05-27 2022-12-01 Deepmind Technologies Limited Continual learning neural network system training for classification type tasks
CN116805158A (en) * 2022-03-23 2023-09-26 间尼赛码(深圳)科技有限公司 Development network model for learning consciousness in biological brain
WO2023225037A1 (en) * 2022-05-17 2023-11-23 Pisner Derek Connectome ensemble transfer learning
CN115100142A (en) * 2022-06-22 2022-09-23 迈格生命科技(深圳)有限公司 Image processing method, apparatus and computer-readable storage medium
CN115393269A (en) * 2022-07-13 2022-11-25 中国科学院大学 Extensible multi-level graph neural network model based on multi-modal image data
CN115721861A (en) * 2022-12-06 2023-03-03 北京理工大学 Multi-level neuron transcranial magnetic stimulation method oriented to brain atlas
CN116542320A (en) * 2023-05-06 2023-08-04 广东工业大学 Small sample event detection method and system based on continuous learning
CN117035073A (en) * 2023-08-16 2023-11-10 南京信息工程大学 Future meteorological event prediction method based on hierarchical event development mode induction
CN117196908A (en) * 2023-09-21 2023-12-08 浙江师范大学 Multi-mode mixed teaching resource construction method and system based on cognitive neuroscience
CN117058514A (en) * 2023-10-12 2023-11-14 之江实验室 Multi-mode brain image data fusion decoding method and device based on graph neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XUEJIAN HUANG et al.: "An effective multimodal representation and fusion method for multimodal intent recognition", Neurocomputing, vol. 548, 6 June 2023, pages 1-15, XP087345869, DOI: 10.1016/j.neucom.2023.126373 *
YU TAKAGI et al.: "High-resolution image reconstruction with latent diffusion models from human brain activity", 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 22 August 2023, pages 14453-14463 *
汪建基 et al.: "Fragmented knowledge processing and networked artificial intelligence" (碎片化知识处理与网络化人工智能), Scientia Sinica Informationis (中国科学: 信息科学), vol. 47, no. 02, 13 February 2017, pages 171-192 *


Legal Events

Code  Title
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant