CN116756496A - Cortical brain network construction method based on causal convolution graph neural network - Google Patents

Cortical brain network construction method based on causal convolution graph neural network

Info

Publication number
CN116756496A
CN116756496A (application CN202310718382.0A)
Authority
CN
China
Prior art keywords
layer
cortical
cortex
network
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310718382.0A
Other languages
Chinese (zh)
Inventor
Xu Peng (徐鹏)
Chen Wanjun (陈婉钧)
Zhang Shuhan (张舒涵)
Yi Chanlin (易婵林)
Li Cunbo (李存波)
Li Fali (李发礼)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202310718382.0A
Publication of CN116756496A
Legal status: Pending (current)

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316 Modalities, i.e. specific diagnostic methods
    • A61B 5/369 Electroencephalography [EEG]
    • A61B 5/372 Analysis of electroencephalograms
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 Details of waveform analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/18 Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/10 Pre-processing; Data cleansing
    • G06F 18/15 Statistical pre-processing, e.g. techniques for normalisation or restoring missing data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/27 Regression, e.g. linear or logistic regression
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/042 Knowledge-based neural networks; Logical representations of neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/02 Preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/08 Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12 Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Databases & Information Systems (AREA)
  • Algebra (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Physiology (AREA)
  • Signal Processing (AREA)

Abstract

The invention provides a cortical brain network construction method based on a causal convolution graph neural network, belonging to the field of cortical brain network construction. The method mines cortical source interaction patterns directly from the acquired scalp (head-surface) EEG signals, constructing the cortical network from the scalp EEG in an end-to-end fashion. Based on deep learning, the invention learns the implicit end-to-end relation between scalp EEG signals and the cortical source-space brain network and directly estimates the directed cortical network pattern corresponding to the scalp-recorded EEG, thereby avoiding the ill-posedness of source imaging, instability, and model and hypothesis constraints faced by the traditional explicit solving framework, in which the network is constructed from source-reconstructed EEG signals using conventional causal network estimation methods. The method can robustly estimate physiologically meaningful cortical causal network patterns from scalp EEG signals, which is of great significance for the study of fine-grained causal brain interactions in both the temporal and spatial dimensions.

Description

Cortical brain network construction method based on causal convolution graph neural network
Technical Field
The invention belongs to the field of cortical brain network construction, and particularly relates to a cortical brain network construction method based on a causal convolution graph neural network.
Background
Electroencephalography (EEG) reflects the electrical activity of neurons within the brain conducted to the scalp surface. It offers high temporal resolution and relatively inexpensive, noninvasive acquisition, and serves as a powerful tool for studying the brain's neural activity. A brain network is a network of links between different brain areas, typically obtained by analyzing multichannel EEG time series; it can be analyzed in scalp space (scalp-space network) or in cortical space (cortical network). Because of the inherently low spatial resolution of scalp EEG, volume-conduction effects, and interference from head movements and blinks, scalp-space networks are unreliable and ambiguous and cannot accurately describe cortical activity patterns; cortical network analysis is therefore the mainstream approach in neuroscience. Currently, one feasible method is to invert the scalp signals to the cortical source space by solving the EEG inverse problem (source imaging) and to build a cortical brain network by modeling the reconstructed source signals. The accuracy of such strategies relies on the solution of the EEG inverse problem and on the modeling of cortical source interactions. However, the EEG inverse problem is severely underdetermined: different source activities on the cortex can produce the same scalp activity. Traditional source-imaging methods, such as LORETA, SOMP, LASSO, and MNE, solve it by imposing different physical and physiological constraints. Owing to noise sensitivity, physiological priors that do not match reality, and regularization parameters, instability easily arises and produces neurophysiologically unreasonable results. Accordingly, brain networks constructed from such reconstructed signals are also unreliable. Meanwhile, traditional brain network construction methods also suffer from prior constraints and explicit solving. In recent years, the strong data-mining capability of deep learning has attracted attention; causal convolutional neural networks for time-series problems and graph neural networks for mining interaction relations are widely applied in many fields, and the present invention therefore proposes a graph neural network based on causal convolution.
Disclosure of Invention
In order to avoid the prior constraints and noise effects of solving the source-imaging equations and of traditional explicit network construction, the cortical source interaction pattern is mined directly from the acquired scalp EEG signals. The invention provides a cortical brain network construction method and system based on a causal convolution graph neural network.
The invention provides a cortical brain network construction method based on a causal convolution graph neural network, which comprises the following steps:
Step S1: preprocess the scalp-recorded EEG signals as follows:
re-reference the EEG signals using the reference electrode standardization technique, filter out noise components by band-pass filtering, downsample the signals, segment them, and remove signal segments containing artifacts;
Step S2: randomly select preprocessed EEG signals with a fixed number of channels as part of the simulated cortical source neural activity signals, simulate the neural activity signals of the remaining cortical sources with a multivariate autoregressive equation, and construct the cortical brain network connection matrix;
The cortical source neural activity and the cortical brain network connection matrix are generated as follows:
Step S21: let the simulated cortical source signal over T time steps be S(t), with m+n cortical sources in total; scalp EEG signals from randomly selected channels serve as the activity signals S_1(t) of m cortical sources, and the activity signals S_2(t) of the remaining n cortical sources are initialized to amplitude 0;
Step S22: perform a multivariate linear regression fit to S_1(t) and solve the parameters A_p of the linear autoregressive model by least squares; summing the absolute values of the entries of the model parameters A_p at corresponding positions over all orders yields the cortical brain network connection matrix D_1 of S_1(t); the model order is P ∈ [1,3];
Step S23: generate a Gaussian-distributed matrix W with mean 0 and retain only the top α% of its entries by magnitude, so that B = [W O]; the state-space system matrix is then K = [B A]^T, where O is the zero matrix 0^(P×m×n);
Step S24: compute the eigenvalues of the state-space system matrix; if any eigenvalue modulus is not smaller than 1, repeat step S23;
Step S25: generate the cortical source signal S_2(t) according to the multivariate autoregressive equation
S_2(t) = Σ_{p=1}^{P} A^(p) S(t−p) + ε(t),
where ε(t) is the system noise at time t and A^(p) is the coefficient matrix corresponding to order p;
Step S26: the cortical brain network connection matrix corresponding to S_2(t) is D_2; let Q = [D_1 O]; the cortical brain network connection matrix corresponding to the full cortical source signal is then D = [Q D_2]^T, where O is the zero matrix 0^(m×n);
Step S3: forward-model the cortical source activity signals of step S2 to obtain simulated scalp EEG signals;
Step S4: divide the scalp EEG signals obtained in step S3 into a training set and a validation set, add random noise, and apply channel-wise normalization; in each training pass, input the simulated scalp EEG of the training set into the causal convolution graph neural network to obtain a cortical brain network connection matrix, compute the loss function and its gradient with an optimization algorithm, and update the parameters of the graph neural network; determine the optimal graph neural network parameters with the validation set according to the loss value;
Step S5: load the optimal graph neural network parameters determined after training, process the scalp EEG signals to be analyzed as in step S1, and input them into the trained graph neural network to obtain the cortical brain network connection matrix;
Step S6: set a threshold to obtain the significant directed connections between cortical sources.
Further, the specific process of obtaining the scalp EEG signals by forward modeling in step S3: compute the scalp signals as y = Hs + e, where s is the current density in the cortical source space, H is the mapping transfer matrix from the cortical space to the scalp, and e is noise.
Further, the causal convolution graph neural network of step S4 is constructed as follows:
The graph neural network structure consists, in sequence, of a cortical source signal feature extraction module and a cortical source interaction mining module. The cortical source signal feature extraction module is a spatiotemporal coding feature extractor that learns the feature mapping of a time series from the scalp to the deep cortical space, yielding the feature sequences of the cortical space sources. The cortical source interaction mining module mines the interaction relations between cortical sources from these feature sequences, yielding a directed connection matrix. Each element of the connection matrix represents the connection weight between two cortical sources; the row index of the connection matrix is the transmitting node and the column index is the receiving node.
Further, the training process of the causal convolution graph neural network in step S4 is as follows:
Step S41: divide the data set into a training set and a validation set, and set the number of iterations and the initial model hyperparameters (including the learning rate);
Step S42: add random noise to the training set, normalize it channel-wise, and input it to the cortical source signal feature extraction module to obtain the cortical feature signals Ŝ; then obtain the cortical brain network connection matrix D̂ through the cortical source interaction mining module. The loss function is the sum of the average mean square errors of the connection matrices and of the cortical signals,
Loss = (1/(bsize·(m+n)²)) Σ_{z=1}^{bsize} Σ_{i,j} (D^{nB}_{z,ij} − D̂^{nB}_{z,ij})² + (1/(bsize·(m+n)·T)) Σ_{z=1}^{bsize} Σ_{i,j} (S^{nB}_{z,ij} − Ŝ^{nB}_{z,ij})²,
and the model parameters of the two modules are updated synchronously with a joint training strategy, where bsize is the number of samples per training batch, T is the sequence length, D^{nB}_{z,ij} is the directed connection strength from the ith source signal to the jth source signal in the zth sample of the nBth batch, D̂^{nB}_{z,ij} is the corresponding strength predicted by the model, S^{nB}_{z,ij} is the amplitude of the jth time step in the ith channel of the nBth batch, and Ŝ^{nB}_{z,ij} is the corresponding amplitude predicted by the model;
Step S43: after each training iteration, input the validation set into the neural network model of the corresponding iteration round to obtain the cortical feature signals Ŝ and the cortical brain network connection matrix D̂, and compute the loss function; if the loss value is smaller than the recorded minimum validation loss, save the current model parameters and set the minimum validation loss to the current loss value;
Step S44: repeat steps S42 and S43 at each iteration until the number of iterations reaches the initially set value, completing the training.
Further, the cortical source signal feature extraction module in the graph neural network structure is constructed as follows:
The cortical source signal feature extraction module comprises a temporal convolution network and a linear mapping layer. The temporal convolution network consists of several residual connection modules; the output features of each residual connection module are obtained by adding the module input to the output of its last feature sub-module, and each feature sub-module consists, in sequence, of a one-dimensional dilated causal convolution layer, a weight normalization layer, an activation function layer, and a random zeroing (dropout) layer;
The residual connection module: o = f(x + F(x)), where f is the activation function operation, F denotes the composition of the feature sub-modules, and x is the input;
The one-dimensional dilated causal convolution operation: given a one-dimensional input sequence x ∈ R^T and a filter f: {0, 1, …, r−1} → R, the dilated convolution operation F on sequence element s is
F(s) = Σ_{i=0}^{r−1} f(i)·x_{s−d·i},
where d is the dilation factor, r is the filter length, and s − d·i indexes the historical features. To satisfy the time-series causality property and keep the input and output sequences the same length, the sequence is zero-padded before the convolution with padding length (r−1)·d, after which a conventional one-dimensional sequence convolution is applied.
Further, the cortical source interaction mining module in the graph neural network structure is constructed as follows:
The module adopts an encoder-decoder structure, where the encoder consists, in sequence, of: a first reshaping layer, a first temporal convolution network, a first pooling layer, a second temporal convolution network, a second pooling layer, a first edge interactive coding layer, a second reshaping layer, a first convolution network, a first average feature layer, a third reshaping layer, a node coding layer, a second edge interactive coding layer, a fourth reshaping layer, a second convolution network, a second average feature layer, and a splicing layer; the outputs of the first and second average feature layers are input into the splicing layer for feature concatenation;
Wherein, the edge interactive coding layer operation:
each connection mode between cortical sources is divided into a transmitting end and a receiving end; using one-hot encoding, the sequence numbers of the transmitting ends and receiving ends are encoded into matrices; the layer input signal is matrix-multiplied with the transmitting-end and receiving-end encoding matrices to extract the transmitting-end and receiving-end feature sequences, and the transmitting-end and receiving-end feature sequences of each connection mode are concatenated; this transmitter-receiver feature sequence is the feature corresponding to that connection mode;
Node coding layer operation:
transpose the encoding matrix of the receiving-end sequence numbers and matrix-multiply it with the layer input; divide the resulting feature sequence by the number of cortical sources excluding the source itself, so as to obtain the average feature flowing into each cortical source from all other cortical sources;
The convolution network structure is as follows:
it consists, in sequence, of a one-dimensional conventional convolution layer, a weight normalization layer, and a temporal convolution network;
the average feature layer operation:
averaging the elements along the row dimension of each sample;
Splicing layer operation:
concatenate the sequences along the column dimension of each sample;
The decoder structure is as follows:
the decoder is an artificial neural network composed of several multilayer perceptron modules, a linear mapping layer, and a parametric rectified linear unit activation function; each multilayer perceptron module consists, in sequence, of a fully connected layer with nonlinear activation, a random zeroing (dropout) layer, a fully connected layer with nonlinear activation, and a layer normalization layer;
The parametric rectified linear unit activation function:
PReLU(x) = max(0, x) + λ·min(0, x),
wherein λ is a weight factor and a constant term;
The layer normalization layer operation:
compute the mean μ and standard deviation σ of the input data along the row dimension and compute, according to the formula,
y = γ·(x − μ)/(σ + ε) + β,
where ε is 1e−6 and the parameters γ and β are trainable.
In a second aspect, the present invention provides a cortical brain network construction system based on a causal convolution graph neural network, comprising:
a preprocessing module: preprocesses the scalp EEG signals acquired by the sensors to be analyzed, reducing noise interference;
a data generation module: simulates cortical source neural activity signals and constructs the cortical brain network and the corresponding scalp EEG signals, providing samples for training the model and verifying its performance;
a training module: inputs training samples into the graph neural network to obtain the cortical feature signals Ŝ and the cortical brain network connection matrix D̂; computes the average mean square error between the predefined and estimated cortical brain network connection matrices, the average mean square error between the cortical source neural activity signal S and the model-learned cortical feature signal Ŝ, and the gradient of the model; updates the model with a stochastic optimization algorithm with adaptive momentum; after each training round, uses validation samples to evaluate the model and determine the graph neural network parameters;
an analysis module: analyzes the EEG signal data set with the trained graph neural network to obtain the directed brain network matrix, and obtains the significant directed connection results in the data set according to the set threshold.
The invention has the following advantages:
Based on causal convolution, features of future information cannot leak into the current features, simulating the spatiotemporal transmission characteristics of inverting scalp signals to the cortical source space; compared with a recurrent neural network, the method can reconstruct the electrical activity signals of multiple cortical sources quickly and in parallel. The causal convolution graph neural network encodes temporal features based on the interactions between cortical sources, so the cortical network is constructed with higher accuracy. After supervised learning, the method can mine the interaction network patterns of deep cortical sources from electrical signals acquired at the scalp.
Drawings
Fig. 1 is a schematic diagram of steps of a cortical brain network construction method based on a causal convolution graph neural network.
Fig. 2 is a schematic diagram of a cortical source signal feature extraction module in the neural network structure according to the present invention.
Fig. 3 is a schematic diagram of a cortex source interaction mining module in the neural network structure according to the present invention.
Fig. 4 is a cortical brain network connection topology in a resting state.
Detailed Description
Embodiments of the present invention are further described below with reference to the accompanying drawings.
In a first aspect, according to an embodiment of the present invention, a cortical brain network construction method based on a causal convolution graph neural network is provided; referring to fig. 1, it includes the following steps:
Step S1: preprocess the scalp-recorded EEG signals as follows: re-reference the signals using the reference electrode standardization technique, filter out information in other frequency bands by band-pass filtering, reduce the signal sampling rate, segment the signals, and remove signal segments containing artifacts;
Step S2: randomly select preprocessed EEG signals with a fixed number of channels as part of the simulated cortical source neural activity signals, simulate the neural activity signals of the other cortical sources with a multivariate autoregressive equation, and construct the cortical brain network connection matrix;
Step S3: forward-model the cortical source activity signals of step S2 with a linear model to obtain simulated scalp EEG signals;
Step S4: divide the scalp EEG signals obtained in step S3 into a training set and a validation set, add random noise, and apply channel-wise normalization; in each training pass, input the simulated scalp EEG of the training set into the causal convolution graph neural network to obtain a cortical brain network connection matrix, compute the loss function and its gradient with an optimization algorithm, update the parameters of the graph neural network, and determine the optimal parameters with the validation set according to the loss value;
Step S5: load the optimal graph neural network parameters determined after training, process the scalp EEG signals to be analyzed as in step S1, and input them into the trained graph neural network to obtain the cortical brain network connection matrix;
Step S6: set a threshold to obtain the significant directed connections between cortical sources; in this example, the threshold is set to the sum of the mean and 0.5 times the variance (see the thresholding sketch below).
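As an illustration of this screening step, a minimal numpy sketch follows; the function name and the exclusion of self-connections are assumptions, while the mean-plus-0.5-times-variance rule of the embodiment is kept as stated.

```python
import numpy as np

def significant_connections(D, k=0.5):
    """Keep directed connections whose weight exceeds the screening
    threshold mean + k * variance of the connection matrix (k = 0.5
    in the embodiment). Zeroing the diagonal is an assumption."""
    D = D.copy()
    np.fill_diagonal(D, 0.0)          # ignore self-connections
    thr = D.mean() + k * D.var()      # mean + 0.5 * variance
    return D * (D > thr)              # zero out non-significant edges
```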
Further, the step S1 specifically includes:
step S11: re-referencing the brain electrical signals using a reference electrode normalization technique (Reference Electrode Standardization Technique, REST);
step S12: the band-pass filtering is adopted to screen signals in a specific frequency band and filter power frequency noise interference, and in the embodiment, the upper limit of the filtering of the electroencephalogram signals for generating data is 30Hz, and the lower limit of the filtering is 1Hz;
step S13: the electroencephalogram signal is downsampled, and in the embodiment, the sampling rate of the electroencephalogram signal used for generating data is 1000Hz, and the sampling rate after processing is 256Hz;
step S14: segmenting potential signals continuously collected for a period of time in a non-overlapping mode at certain same intervals L, wherein the window length is 2s in the embodiment;
step S15: and removing the artifact sections of the signal sections obtained in the step S14. Namely: comparing each time point in the signal segment with a set artifact threshold one by one, and removing the signal segment if a certain time point of the segment exceeds the threshold, wherein the artifact threshold is set to 75 mu V in the embodiment;
Further, the step S2 specifically includes:
Step S21: let the simulated cortical source signal over T time steps be S(t), with m+n cortical sources in total; scalp EEG signals from randomly selected channels serve as the activity signals S_1(t) of m cortical sources, and the activity signals S_2(t) of the remaining n cortical sources are initialized to amplitude 0;
Step S22: perform a multivariate linear regression analysis of S_1(t) and solve the parameters A_p of the linear autoregressive model by least squares (the model order P ∈ [1,3] is determined before modeling using the AIC criterion); summing the absolute values of the entries of the model parameters A_p at corresponding positions over all orders yields the cortical brain network connection matrix D_1 of S_1(t);
Step S23: generate a Gaussian-distributed matrix W with mean 0 and retain only the top α% of its entries by magnitude, so that B = [W O]; the state-space system matrix is then K = [B A]^T, where O is the zero matrix 0^(P×m×n);
Step S24: compute the eigenvalues of the state-space system matrix; if any eigenvalue modulus is not smaller than 1, repeat step S23;
Step S25: generate the cortical source signal S_2(t) according to the multivariate autoregressive equation
S_2(t) = Σ_{p=1}^{P} A^(p) S(t−p) + ε(t),
where ε(t) is the system noise at time t and A^(p) is the coefficient matrix corresponding to order p;
Step S26: the cortical brain network connection matrix corresponding to S_2(t) is D_2; let Q = [D_1 O]; the cortical brain network connection matrix corresponding to the full cortical source signal is then D = [Q D_2]^T, where O is the zero matrix 0^(m×n). A simulation sketch of steps S21-S26 follows.
Further, the specific process of obtaining the scalp EEG signals by forward modeling in step S3: compute the scalp signals as y = Hs + e, where s is the current density in the cortical source space, H is the conduction (lead-field) matrix from the cortical source space to the scalp, and e is the noise introduced during EEG recording. In this example, the cortical sources are selected as the dorsal medial prefrontal cortex (dMPFC, MNI coordinates [x y z]: [0 52 26]), the anterior medial prefrontal cortex (aMPFC, MNI coordinates: [−6 52 −2]), the ventral medial prefrontal cortex (vMPFC, MNI coordinates: [0 26 −18]), the left lateral temporal cortex (LLTC, MNI coordinates: [−60 −24 −18]), the right lateral temporal cortex (RLTC, MNI coordinates: [60 −24 −18]), and the posterior cingulate cortex (PCC, MNI coordinates: [0 −58 27]). A forward-projection sketch follows;
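A minimal sketch of this forward projection is shown below; the lead-field matrix H is assumed to be precomputed from a head model, and the SNR-based noise scaling is an assumption.

```python
import numpy as np

def forward_project(S, H, snr_db=10.0, seed=0):
    """Compute scalp signals y = H s + e: project cortical source
    activity S (n_src, T) through the lead field H (n_chan, n_src)
    and add white Gaussian noise at the requested SNR."""
    rng = np.random.default_rng(seed)
    Y = H @ S
    noise = rng.standard_normal(Y.shape)
    scale = np.linalg.norm(Y) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    return Y + scale * noise
```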
Further, the causal convolution graph neural network of step S4 is constructed as follows:
the graph neural network structure consists, in sequence, of a cortical source signal feature extraction module and a cortical source interaction mining module. The cortical source signal feature extraction module is designed as a spatiotemporal coding feature extractor that learns the feature mapping of a time series from the scalp (sensor acquisition space) to the deep cortical space, yielding the feature sequences of the cortical space sources. The cortical source interaction mining module mines the interaction relations between cortical sources from these feature sequences, yielding a directed connection matrix whose element values represent connection weights; the matrix row index is the transmitting node and the matrix column index is the receiving node;
Further, referring to fig. 2, the cortical source signal feature extraction module in the graph neural network structure is constructed as follows:
the cortical source signal feature extraction module comprises a temporal convolution network architecture and a linear mapping layer. The convolution operations in the temporal convolution network make information transfer conform to causal logic (that is, the current information is determined only by features of historical information, and features of future information cannot leak in). The temporal convolution network consists of several residual connection modules; the output features of each residual connection module are obtained by adding the module input to the output of its last feature sub-module, and each feature sub-module consists, in sequence, of a one-dimensional dilated causal convolution layer, a weight normalization layer, an activation function layer, and a random zeroing (dropout) layer;
Further, the residual connection module: o = f(x + F(x)), where f is the activation function operation, F denotes the composition of the feature sub-modules, and x is the input;
Further, the one-dimensional dilated causal convolution operation: given a one-dimensional input sequence x ∈ R^T and a filter f: {0, 1, …, r−1} → R, the dilated convolution operation F on sequence element s is
F(s) = Σ_{i=0}^{r−1} f(i)·x_{s−d·i},
where d is the dilation factor, r is the filter length, and s − d·i indexes the historical features. To satisfy the time-series causality property and keep the input and output sequences the same length, the sequence is zero-padded before the convolution with padding length (r−1)·d, after which a conventional one-dimensional sequence convolution is applied (see the sketch below);
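A minimal PyTorch sketch of this operation follows; the module name and interface are illustrative. Left-padding by (r−1)·d zeros keeps the output the same length as the input and ensures position s only sees x[s − d·i].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """One-dimensional dilated causal convolution: zero-pad the past
    by (r - 1) * d so output length equals input length and no future
    samples leak into the current position."""
    def __init__(self, in_ch, out_ch, r, d):
        super().__init__()
        self.pad = (r - 1) * d
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size=r, dilation=d)

    def forward(self, x):              # x: (batch, channels, T)
        x = F.pad(x, (self.pad, 0))    # pad the left (past) side only
        return self.conv(x)
```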
Further, referring to fig. 3, the cortical source interaction mining module in the graph neural network structure is constructed as follows:
the module adopts an encoder-decoder structure, where the encoder consists, in sequence, of a reshaping layer, a temporal convolution network, a pooling layer, a temporal convolution network, a pooling layer, an edge interactive coding layer, a reshaping layer, a convolution network, an average feature layer (yielding shallow coding features of the interactions between nodes), a reshaping layer, a node coding layer, an edge interactive coding layer, a reshaping layer, a convolution network, an average feature layer (yielding deep coding features of the interactions between nodes), and a splicing layer; the outputs of the two average feature layers are input into the splicing layer for feature concatenation;
Further, the edge interactive coding layer operation:
each connection mode between cortical sources (that is, information flowing from cortical source 1 to cortical source 2, or from cortical source 2 to cortical source 1) is divided into a transmitting end and a receiving end; using one-hot encoding, the sequence numbers of the transmitting ends and receiving ends are encoded into matrices; the layer input signal is matrix-multiplied with the transmitting-end and receiving-end encoding matrices to extract the transmitting-end and receiving-end feature sequences, and the transmitting-end and receiving-end feature sequences of each connection mode are concatenated; this transmitter-receiver feature sequence is the feature of the corresponding connection mode;
Further, the node coding layer operation:
transpose the encoding matrix of the receiving-end sequence numbers and matrix-multiply it with the layer input; divide the resulting feature sequence by the number of cortical sources excluding the source itself, so as to obtain the average feature flowing into each cortical source from all other cortical sources (a sketch of both coding layers follows);
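The two coding layers can be sketched in PyTorch as below, in the style of neural relational inference; the tensor shapes, the edge enumeration, and the function name are assumptions rather than the patent's exact layout.

```python
import torch

def edge_node_coding(x, rel_send, rel_rec):
    """Edge interactive coding and node coding with one-hot matrices.
    x: (batch, n_src, F) per-source feature sequences.
    rel_send, rel_rec: (n_edges, n_src) one-hot matrices, one row per
    directed connection mode (n_edges = n_src * (n_src - 1))."""
    senders = torch.matmul(rel_send, x)              # (batch, n_edges, F)
    receivers = torch.matmul(rel_rec, x)             # (batch, n_edges, F)
    edges = torch.cat([senders, receivers], dim=-1)  # per-edge features
    # node coding: aggregate features flowing into each source and
    # divide by the number of sources excluding the source itself
    nodes = torch.matmul(rel_rec.t(), edges) / (x.size(1) - 1)
    return edges, nodes
```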
Further, the convolution network structure is as follows:
it consists, in sequence, of a one-dimensional conventional convolution layer, a weight normalization layer, and a temporal convolution network;
Further, the average feature layer operation:
average the elements along the row dimension of each sample;
Further, the splicing layer operation:
concatenate the sequences along the column dimension of each sample;
Further, the decoder has the following structure:
the decoder is an artificial neural network composed of several multilayer perceptron (MLP) modules, a linear mapping layer, and a parametric rectified linear unit (PReLU) activation function; each MLP module consists, in sequence, of a fully connected layer with nonlinear activation, a random zeroing (dropout) layer, a fully connected layer with nonlinear activation, and a layer normalization layer. In this example, the nonlinearly activated fully connected layers use the ELU activation function:
ELU(x) = x for x > 0, and ζ·(e^x − 1) for x ≤ 0,
wherein ζ is a weight factor and a constant term;
Further, the parametric rectified linear unit activation function:
PReLU(x) = max(0, x) + λ·min(0, x),
wherein λ is a weight factor and a constant term;
Further, the layer normalization layer operation:
compute the mean μ and standard deviation σ of the input data along the row dimension and compute, according to the formula,
y = γ·(x − μ)/(σ + ε) + β,
where ε is 1e−6 and the parameters γ and β are trainable; a sketch of one MLP module follows;
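One decoder MLP module can be sketched in PyTorch as follows; the hidden width and dropout rate are illustrative, ELU stands in for the unspecified nonlinearity, and the layer-normalization eps is set to the stated 1e−6.

```python
import torch.nn as nn

class MLPBlock(nn.Module):
    """One multilayer perceptron module of the decoder: FC + nonlinear
    activation, dropout (random zeroing), FC + nonlinear activation,
    then layer normalization, in sequence."""
    def __init__(self, dim, hidden=128, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ELU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden, dim), nn.ELU(),
            nn.LayerNorm(dim, eps=1e-6),
        )

    def forward(self, x):
        return self.net(x)
```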
Further, the training process of step S4 is as follows:
Step S41: divide the data set into a training set and a validation set, and set the number of iterations and the initial model hyperparameters (including the learning rate);
Step S42: add random noise to the training set, normalize it channel-wise, and input it to the cortical source signal feature extraction module to obtain the cortical feature signals Ŝ; then obtain the cortical brain network connection matrix D̂ through the cortical source interaction mining module. The loss function is the sum of the average mean square errors of the connection matrices and of the cortical signals,
Loss = (1/(bsize·(m+n)²)) Σ_{z=1}^{bsize} Σ_{i,j} (D^{nB}_{z,ij} − D̂^{nB}_{z,ij})² + (1/(bsize·(m+n)·T)) Σ_{z=1}^{bsize} Σ_{i,j} (S^{nB}_{z,ij} − Ŝ^{nB}_{z,ij})²,
and the model parameters of the two modules are updated synchronously with a joint training strategy, where bsize is the number of samples per training batch, T is the sequence length, D^{nB}_{z,ij} is the directed connection strength from the ith source signal to the jth source signal in the zth sample of the nBth batch, D̂^{nB}_{z,ij} is the corresponding strength predicted by the model, S^{nB}_{z,ij} is the amplitude of the jth time step in the ith channel of the nBth batch, and Ŝ^{nB}_{z,ij} is the corresponding amplitude predicted by the model (a sketch of this objective follows the training steps);
Step S43: after each training iteration, input the validation set into the neural network model of the corresponding iteration round to obtain the cortical feature signals Ŝ and the cortical brain network connection matrix D̂, and compute the loss function; if the loss value is smaller than the recorded minimum validation loss, save the current model parameters and set the minimum validation loss to the current loss value;
Step S44: repeat steps S42 and S43 at each iteration until the number of iterations reaches the initially set value, completing the training.
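A minimal PyTorch sketch of the joint objective in step S42 follows; equal weighting of the two mean-square-error terms is an assumption.

```python
import torch

def joint_loss(D_hat, D, S_hat, S):
    """Average MSE between predicted and predefined connection matrices
    plus average MSE between learned and true cortical signals.
    D, D_hat: (bsize, N, N); S, S_hat: (bsize, N, T)."""
    return torch.mean((D_hat - D) ** 2) + torch.mean((S_hat - S) ** 2)
```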
This embodiment constructs the average cortical brain network connection matrix of 7 normal subjects in the resting state, demonstrating the feasibility of the invention:
EEG signals of 7 normal subjects in the resting state were collected (about 5 minutes each, sampling rate 1000 Hz) and preprocessed according to step S1 with the specific parameters given above. After channel-wise normalization, the signals were input into the causal convolution graph neural network to obtain each subject's cortical brain network connection matrix. Summing the element values at corresponding positions of all subjects' connection matrices and averaging yields the average brain network connection matrix of the 7 subjects; from its mean and variance, the set connection screening threshold is obtained. Significant connections of the average brain network connection matrix are marked red and non-significant connections gray, with larger weights drawn darker; see fig. 4, which shows that in the resting state the frontal regions (aMPFC, vMPFC, dMPFC) interact internally and also interact with the bilateral temporal and occipital regions.
Therefore, the invention provides a causal convolution graph neural network that directly obtains the interaction relations of deep cortical sources, end to end, from signals acquired by sensors placed on the scalp, thereby avoiding solving the EEG inverse problem and obtaining, through supervised learning, a direct mapping function from scalp EEG signals to the cortical brain network.
The foregoing description is only of the preferred embodiments of the invention, and all changes and modifications that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (7)

1. A cortical brain network construction method based on a causal convolution graph neural network, comprising the following steps:
Step S1: preprocess the scalp-recorded EEG signals as follows:
re-reference the EEG signals using the reference electrode standardization technique, filter out noise components by band-pass filtering, downsample the signals, segment them, and remove signal segments containing artifacts;
Step S2: randomly select preprocessed EEG signals with a fixed number of channels as part of the simulated cortical source neural activity signals, simulate the neural activity signals of the remaining cortical sources with a multivariate autoregressive equation, and construct the cortical brain network connection matrix;
the cortical source neural activity and the cortical brain network connection matrix are generated as follows:
Step S21: let the simulated cortical source signal over T time steps be S(t), with m+n cortical sources in total; scalp EEG signals from randomly selected channels serve as the activity signals S_1(t) of m cortical sources, and the activity signals S_2(t) of the remaining n cortical sources are initialized to amplitude 0;
Step S22: perform a multivariate linear regression fit to S_1(t) and solve the parameters A_p of the linear autoregressive model by least squares; summing the absolute values of the entries of the model parameters A_p at corresponding positions over all orders yields the cortical brain network connection matrix D_1 of S_1(t); the model order is P ∈ [1,3];
Step S23: generate a Gaussian-distributed matrix W with mean 0 and retain only the top α% of its entries by magnitude, so that B = [W O]; the state-space system matrix is then K = [B A]^T, where O is the zero matrix 0^(P×m×n);
Step S24: compute the eigenvalues of the state-space system matrix; if any eigenvalue modulus is not smaller than 1, repeat step S23;
Step S25: generate the cortical source signal S_2(t) according to the multivariate autoregressive equation
S_2(t) = Σ_{p=1}^{P} A^(p) S(t−p) + ε(t),
where ε(t) is the system noise at time t and A^(p) is the coefficient matrix corresponding to order p;
Step S26: the cortical brain network connection matrix corresponding to S_2(t) is D_2; let Q = [D_1 O]; the cortical brain network connection matrix corresponding to the full cortical source signal is then D = [Q D_2]^T, where O is the zero matrix 0^(m×n);
Step S3: forward-model the cortical source activity signals of step S2 to obtain simulated scalp EEG signals;
Step S4: divide the scalp EEG signals obtained in step S3 into a training set and a validation set, add random noise, and apply channel-wise normalization; in each training pass, input the simulated scalp EEG of the training set into the causal convolution graph neural network to obtain a cortical brain network connection matrix, compute the loss function and its gradient with an optimization algorithm, update the parameters of the graph neural network, and determine the optimal parameters with the validation set according to the loss value;
Step S5: load the optimal graph neural network parameters determined after training, process the scalp EEG signals to be analyzed as in step S1, and input them into the trained graph neural network to obtain the cortical brain network connection matrix;
Step S6: set a threshold to obtain the significant directed connections between cortical sources.
2. The cortical brain network construction method based on a causal convolution graph neural network according to claim 1, wherein the specific process of obtaining the scalp EEG signals by forward modeling in step S3 is: compute the scalp signals as y = Hs + e, where s is the current density in the cortical source space, H is the mapping transfer matrix from the cortical space to the scalp, and e is noise.
3. The cortical brain network construction method based on a causal convolution graph neural network according to claim 1, wherein the causal convolution graph neural network of step S4 is constructed as follows:
the graph neural network structure consists, in sequence, of a cortical source signal feature extraction module and a cortical source interaction mining module; the cortical source signal feature extraction module is a spatiotemporal coding feature extractor that learns the feature mapping of a time series from the scalp to the deep cortical space, yielding the feature sequences of the cortical space sources; the cortical source interaction mining module mines the interaction relations between cortical sources from these feature sequences, yielding a directed connection matrix; each element of the connection matrix represents the connection weight between two cortical sources, the row index of the connection matrix being the transmitting node and the column index the receiving node.
4. The cortical brain network construction method based on a causal convolution graph neural network according to claim 3, wherein the training process of the causal convolution graph neural network in step S4 is as follows:
Step S41: divide the data set into a training set and a validation set, and set the number of iterations and the initial model hyperparameters (including the learning rate);
Step S42: add random noise to the training set, normalize it channel-wise, and input it to the cortical source signal feature extraction module to obtain the cortical feature signals Ŝ; then obtain the cortical brain network connection matrix D̂ through the cortical source interaction mining module. The loss function is the sum of the average mean square errors of the connection matrices and of the cortical signals,
Loss = (1/(bsize·(m+n)²)) Σ_{z=1}^{bsize} Σ_{i,j} (D^{nB}_{z,ij} − D̂^{nB}_{z,ij})² + (1/(bsize·(m+n)·T)) Σ_{z=1}^{bsize} Σ_{i,j} (S^{nB}_{z,ij} − Ŝ^{nB}_{z,ij})²,
and the model parameters of the two modules are updated synchronously with a joint training strategy, where bsize is the number of samples per training batch, T is the sequence length, D^{nB}_{z,ij} is the directed connection strength from the ith source signal to the jth source signal in the zth sample of the nBth batch, D̂^{nB}_{z,ij} is the corresponding strength predicted by the model, S^{nB}_{z,ij} is the amplitude of the jth time step in the ith channel of the nBth batch, and Ŝ^{nB}_{z,ij} is the corresponding amplitude predicted by the model;
Step S43: after each training iteration, input the validation set into the neural network model of the corresponding iteration round to obtain the cortical feature signals Ŝ and the cortical brain network connection matrix D̂, and compute the loss function; if the loss value is smaller than the recorded minimum validation loss, save the current model parameters and set the minimum validation loss to the current loss value;
Step S44: repeat steps S42 and S43 at each iteration until the number of iterations reaches the initially set value, completing the training.
5. The cortical brain network construction method based on a causal convolution graph neural network according to claim 4, wherein the cortical source signal feature extraction module in the graph neural network structure is constructed as follows:
the cortical source signal feature extraction module comprises a temporal convolution network and a linear mapping layer; the temporal convolution network consists of several residual connection modules; the output features of each residual connection module are obtained by adding the module input to the output of its last feature sub-module, and each feature sub-module consists, in sequence, of a one-dimensional dilated causal convolution layer, a weight normalization layer, an activation function layer, and a random zeroing (dropout) layer;
the residual connection module: o = f(x + F(x)), where f is the activation function operation, F denotes the composition of the feature sub-modules, and x is the input;
the one-dimensional dilated causal convolution operation: given a one-dimensional input sequence x ∈ R^T and a filter f: {0, 1, …, r−1} → R, the dilated convolution operation F on sequence element s is F(s) = Σ_{i=0}^{r−1} f(i)·x_{s−d·i}, where d is the dilation factor, r is the filter length, and s − d·i indexes the historical features; to satisfy the time-series causality property and keep the input and output sequences the same length, the sequence is zero-padded before the convolution with padding length (r−1)·d, after which a conventional one-dimensional sequence convolution is applied.
6. The cortical brain network construction method based on a causal convolution graph neural network according to claim 4, wherein the cortical source interaction mining module in the graph neural network structure is constructed as follows:
the module adopts an encoder-decoder structure, where the encoder consists, in sequence, of: a first reshaping layer, a first temporal convolution network, a first pooling layer, a second temporal convolution network, a second pooling layer, a first edge interactive coding layer, a second reshaping layer, a first convolution network, a first average feature layer, a third reshaping layer, a node coding layer, a second edge interactive coding layer, a fourth reshaping layer, a second convolution network, a second average feature layer, and a splicing layer; the outputs of the first and second average feature layers are input into the splicing layer for feature concatenation;
wherein, the edge interactive coding layer operation:
each connection mode between cortical sources is divided into a transmitting end and a receiving end; using one-hot encoding, the sequence numbers of the transmitting ends and receiving ends are encoded into matrices; the layer input signal is matrix-multiplied with the transmitting-end and receiving-end encoding matrices to extract the transmitting-end and receiving-end feature sequences, and the transmitting-end and receiving-end feature sequences of each connection mode are concatenated; this transmitter-receiver feature sequence is the feature corresponding to that connection mode;
node coding layer operation:
transpose the encoding matrix of the receiving-end sequence numbers and matrix-multiply it with the layer input; divide the resulting feature sequence by the number of cortical sources excluding the source itself, so as to obtain the average feature flowing into each cortical source from all other cortical sources;
the convolution network structure is as follows:
it consists, in sequence, of a one-dimensional conventional convolution layer, a weight normalization layer, and a temporal convolution network;
the average feature layer operation:
averaging the elements along the row dimension of each sample;
splicing layer operation:
concatenate the sequences along the column dimension of each sample;
the decoder structure is as follows:
the decoder is an artificial neural network composed of several multilayer perceptron modules, a linear mapping layer, and a parametric rectified linear unit activation function; each multilayer perceptron module consists, in sequence, of a fully connected layer with nonlinear activation, a random zeroing (dropout) layer, a fully connected layer with nonlinear activation, and a layer normalization layer;
the parametric rectified linear unit activation function:
PReLU(x) = max(0, x) + λ·min(0, x),
wherein λ is a weight factor and a constant term;
the layer normalization layer operation:
compute the mean μ and standard deviation σ of the input data along the row dimension and compute, according to the formula,
y = γ·(x − μ)/(σ + ε) + β,
where ε is 1e−6 and the parameters γ and β are trainable.
7. A cortical brain network construction system implementing the causal convolution graph neural network-based method of claim 1, the system comprising:
a preprocessing module: preprocesses the scalp EEG signals acquired by the sensors to be analyzed, reducing noise interference;
a data generation module: simulates cortical source neural activity signals and constructs the cortical brain network and the corresponding scalp EEG signals, providing samples for training the model and verifying its performance;
a training module: inputs training samples into the graph neural network to obtain the cortical feature signals Ŝ and the cortical brain network connection matrix D̂; computes the average mean square error between the predefined and estimated cortical brain network connection matrices, the average mean square error between the cortical source neural activity signal S and the model-learned cortical feature signal Ŝ, and the gradient of the model; updates the model with a stochastic optimization algorithm with adaptive momentum; after each training round, uses validation samples to evaluate the model and determine the graph neural network parameters;
an analysis module: analyzes the EEG signal data set with the trained graph neural network to obtain the directed brain network matrix, and obtains the significant directed connection results in the data set according to the set threshold.
CN202310718382.0A 2023-06-16 2023-06-16 Cortical brain network construction method based on causal convolution graph neural network Pending CN116756496A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310718382.0A CN116756496A (en) 2023-06-16 2023-06-16 Cortical brain network construction method based on causal convolution graph neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310718382.0A CN116756496A (en) 2023-06-16 2023-06-16 Cortical brain network construction method based on causal convolution graph neural network

Publications (1)

Publication Number Publication Date
CN116756496A 2023-09-15

Family

ID=87958457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310718382.0A Pending CN116756496A (en) 2023-06-16 2023-06-16 Cortical brain network construction method based on causal convolution graph neural network

Country Status (1)

Country Link
CN (1) CN116756496A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117952195A (en) * 2024-03-26 2024-04-30 博睿康医疗科技(上海)有限公司 Brain network construction method and display equipment based on task related brain electrical activity

Similar Documents

Publication Publication Date Title
Seal et al. DeprNet: A deep convolution neural network framework for detecting depression using EEG
CN110090017B (en) Electroencephalogram signal source positioning method based on LSTM
Pancholi et al. Source aware deep learning framework for hand kinematic reconstruction using EEG signal
US11693071B2 (en) Systems and methods for mapping neuronal circuitry and clinical applications thereof
CN110522412B (en) Method for classifying electroencephalogram signals based on multi-scale brain function network
CN112957014A (en) Pain detection and positioning method and system based on brain waves and neural network
Xie et al. Physics-constrained deep learning for robust inverse ecg modeling
CN116756496A (en) Cortical brain network construction method based on causal convolution graph neural network
Jafarian et al. Structure learning in coupled dynamical systems and dynamic causal modelling
Prakarsha et al. Time series signal forecasting using artificial neural networks: An application on ECG signal
Ghaderi-Kangavari et al. A general integrative neurocognitive modeling framework to jointly describe EEG and decision-making on single trials
Onak et al. Effects of a priori parameter selection in minimum relative entropy method on inverse electrocardiography problem
Wein et al. Forecasting brain activity based on models of spatiotemporal brain dynamics: A comparison of graph neural network architectures
Vijayvargiya et al. PC-GNN: Pearson Correlation-Based Graph Neural Network for Recognition of Human Lower Limb Activity Using sEMG Signal
Chu et al. Ahed: A heterogeneous-domain deep learning model for IoT-enabled smart health with few-labeled EEG data
Nazari et al. A new approach to detect the coding rule of the cortical spiking model in the information transmission
Zhu et al. Spatio-Temporal Graph Hubness Propagation Model for Dynamic Brain Network Classification
CN116596046A (en) Method for reconstructing image by utilizing electroencephalogram signals and visual features
CN116662742A (en) Brain electrolysis code method based on hidden Markov model and mask empirical mode decomposition
CN114947740A (en) Method for predicting epileptic seizure based on quantum CNN-GRU
Faye et al. Electroencephalogram Channel Selection using Deep Q-Network
Pentari et al. A study on the effect of distinct adjacency matrices for graph signal denoising
Jin et al. Uncertainty-Aware Denoising Network for Artifact Removal in EEG Signals
Luo et al. Mapping effective connectivity by virtually perturbing a surrogate brain
Kuzmanov et al. Transformer Models for Processing Biological Signal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination