CN112347984A - Olfactory stimulus-based EEG (electroencephalogram) acquisition and emotion recognition method and system - Google Patents


Info

Publication number
CN112347984A
Authority
CN
China
Prior art keywords: stimulation, video, EEG, olfactory, subject
Legal status: Pending (the listed status is an assumption, not a legal conclusion)
Application number
CN202011364249.2A
Other languages
Chinese (zh)
Inventor
吕钊
薛敬怡
薛冰
吴敏超
胡世昂
李平
裴胜兵
张超
吴小培
Current Assignee
Anhui University
Original Assignee
Anhui University
Priority date
Filing date
Publication date
Application filed by Anhui University
Priority to CN202011364249.2A
Publication of CN112347984A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/08 Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/70 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12 Classification; Matching

Abstract

The invention discloses an EEG acquisition and emotion recognition method and system based on olfactory stimulation, comprising the following modules: an experimental paradigm design module, which designs a novel paradigm in which olfactory stimulation is added alongside visual stimulation to strengthen the subject's emotional experience; a data acquisition module, which selects suitable subjects, acquires their EEG signals and records each subject's preliminary emotion feedback label; a preprocessing module, which cleans the acquired EEG signals to obtain purer EEG signals; a feature extraction module, which extracts four different emotional features (PSD, DE, DASM and RASM) from the processed EEG signals; and a classification and identification module, which divides the extracted features into a training set and a test set and feeds them to a support vector machine (SVM) classifier for learning and classification. The invention compares brain topographic maps and recognition rates under pure video stimulation and mixed video-odor stimulation to confirm that olfactory stimulation induces the subject's emotions more strongly.

Description

Olfactory stimulus-based EEG (electroencephalogram) acquisition and emotion recognition method and system
Technical Field
The invention relates to the technical field of electroencephalogram (EEG) signal processing, in particular to EEG signal acquisition under olfactory stimulation, machine learning, combined olfactory and visual stimulation, and multi-label classification, and specifically provides an olfactory-stimulus-based EEG acquisition and emotion recognition method and system.
Background
Emotion plays an important role in human-computer interaction and social activity, and emotion recognition is an important field of current artificial intelligence with broad application prospects in psychotherapy, disease prevention and related areas. Cognitive theory and psychological experiments show that brain activity is closely related to emotion, so emotion recognition based on EEG signals has aroused strong interest among researchers.
Conventional multimedia currently stimulates two human senses: (1) vision and (2) hearing. Visual stimulation is delivered mainly by showing different pictures or videos, and auditory stimulation by playing audio. Research shows that adding olfactory stimulation can enrich the multimedia experience, better reflect reality and raise the level of emotion induction in users. Moreover, smell is the oldest human sense; odors are strongly linked to memory and inseparable from emotion, and thus play an important role in people's emotional lives. Furthermore, olfactory impairment has been described in the preclinical stage of certain neurodegenerative diseases and may be related to disease progression, so further exploring how olfactory stimulation acts on the brain is of great significance for the study of some clinical diseases. However, olfactory stimulation is subject to adaptation and its effect is less lasting than that of visual stimulation, which makes data acquisition difficult. In addition, no olfactory-stimulus-based EEG database exists internationally, which severely limits the study of the emotional mechanisms induced by olfactory stimuli. It is therefore desirable to provide an EEG signal acquisition method and a suitable emotion recognition method for emotions induced by olfactory stimulation.
Disclosure of Invention
The invention aims to solve the technical problem of providing an olfactory stimulus-based EEG acquisition and emotion recognition method and system, which can obtain EEG signals with rich information so as to obtain more advantageous recognition results in the emotion recognition process.
In order to solve the technical problems, the invention adopts a technical scheme that: an olfactory stimulus based EEG acquisition and emotion recognition method is provided, comprising the steps of:
S1: designing an experimental paradigm: the subject receives various smells and feeds back an emotional state label for each; according to the label corresponding to each smell, the smells are classified into positive, negative and neutral smells suited to the subject, and each classified smell is combined with video clips of the same emotional character to serve as the mixed video and odor stimuli;
S2: data acquisition: acquiring the subject's EEG signals under two stimulation modes, pure video stimulation and mixed video-odor stimulation, with the subject feeding back an emotional state label after each stimulus; then cutting the useful data, from the event-start index point to the event-end index point, out of all the acquired EEG data;
S3: data preprocessing: preprocessing the EEG data cut out in step S2 to obtain a preprocessed EEG signal;
S4: feature extraction: cutting single experimental samples of different durations from the preprocessed EEG signal and extracting four features: power spectral density (PSD), differential entropy (DE), differential asymmetry (DASM) and rational asymmetry (RASM);
S5: learning and classification: the features extracted under the two stimulation modes are each divided into a training set and a test set; the training set and the corresponding emotional state labels are fed to an SVM classifier for learning and classification to obtain a trained model; the model then predicts the emotional state labels of the test set to yield the emotion recognition rate, and the results obtained under the two stimulation modes are compared and analyzed.
In a preferred embodiment of the present invention, in step S2, pure video stimulation presents only video stimuli to the subject; the video stimuli are obtained by having several assessors perform a three-class emotion assessment on multiple video segments and selecting the videos with unanimous assessments. Mixed video and odor stimulation presents video stimulation and olfactory stimulation of the same type to the subject simultaneously, with the two stimulation types alternating every 20 s over the duration of the video.
In a preferred embodiment of the present invention, in step S2, pure video stimulation and mixed olfactory-video stimulation induce three emotions in the subject (positive, neutral and negative), yielding a valid 32-lead EEG signal and the actual emotional state labels L = (L1, L2, …, Lq), where q is the number of emotional state labels and each label takes a value Lk ∈ {-7, -6, …, 0, …, 6, 7}, with k = 1, 2, …, q indexing the k-th label. The emotion labels are organized along the Valence and Arousal dimensions: Valence is expressed as positive or negative, Arousal as a magnitude 1, 2, 3, …, 7; labels from -7 to -3 are negative emotions, -2 to 2 neutral emotions, and 3 to 7 positive emotions.
In a preferred embodiment of the present invention, in step S3, the data preprocessing removes the 50 Hz power-frequency interference, the mean and the trend from the EEG data obtained in step S2, and uses the Infomax ICA algorithm to separate and remove ocular artifacts.
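As an illustrative sketch only (not the patent's implementation), the filtering portion of this preprocessing step might look as follows in Python; the Infomax ICA stage for ocular-artifact removal is omitted, and the function name and parameter choices (notch quality factor, sampling rate) are assumptions:

```python
import numpy as np
from scipy.signal import detrend, filtfilt, iirnotch

def preprocess(eeg, fs=250.0):
    """Remove 50 Hz power-frequency interference, mean and linear trend.

    eeg: array of shape (channels, samples). The Infomax ICA step for
    ocular artifacts described in the text is not included here.
    """
    b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)      # narrow 50 Hz notch filter
    out = filtfilt(b, a, eeg, axis=1)            # zero-phase filtering
    out = detrend(out, axis=1, type="linear")    # remove linear trend
    out = out - out.mean(axis=1, keepdims=True)  # remove per-channel mean
    return out
```

Zero-phase filtering (`filtfilt`) is used so the notch does not shift event timing relative to the stimulus markers.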
In a preferred embodiment of the present invention, the step S4 includes the following steps:
single experimental samples are divided from the preprocessed data using window lengths of 2 s, 3 s and 4 s, each with 50% overlap, and the PSD, DE, DASM and RASM features are extracted respectively:
PSD = (1/N) ∑_{i=1}^{N} |fft(y)_i|²
DE=log2(PSD)
DASM=PSD(Xleft)-PSD(Xright)
RASM=PSD(Xleft)/PSD(Xright)
where fft(y)_i denotes the value of the i-th point after the time-domain signal is transformed into the frequency domain, i indexes the sample points, X_left denotes the electrodes over the left half of the head, and X_right denotes the corresponding electrodes over the right half, X_left and X_right forming symmetric pairs.
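A minimal sketch of the four feature computations on one windowed trial, assuming a (channels, samples) array; the rfft-based PSD estimate and the function name are illustrative assumptions rather than the patent's exact implementation:

```python
import numpy as np

def extract_features(window, left_idx, right_idx):
    """Compute PSD, DE, DASM and RASM for one trial window.

    window: (channels, samples) array; left_idx/right_idx select a pair
    of symmetric electrodes (index choices are illustrative).
    """
    n = window.shape[1]
    spec = np.fft.rfft(window, axis=1)         # time domain -> frequency domain
    psd = (np.abs(spec) ** 2).sum(axis=1) / n  # PSD = (1/N) sum |fft(y)_i|^2
    de = np.log2(psd)                          # DE = log2(PSD)
    dasm = psd[left_idx] - psd[right_idx]      # differential asymmetry
    rasm = psd[left_idx] / psd[right_idx]      # rational asymmetry
    return psd, de, dasm, rasm
```

In practice the PSD would be computed per frequency band before taking DE and the asymmetry features; this sketch keeps only the broadband form given by the formulas above.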
In a preferred embodiment of the present invention, the step S5 includes the following steps:
the features extracted under the two stimulation types are divided into 5 parts using five-fold cross-validation, with 4 parts as the training set and 1 part as the test set; the extracted features are learned and classified with a Libsvm classifier using a radial basis function as the kernel:
K(x, x_i) = exp(-|x - x_i|²/σ²)
using a grid optimization algorithm to carry out parameter optimization on a penalty coefficient c and a parameter gamma in an SVM algorithm in an interval of [ -10, 10] with the step length of 0.5 to obtain an optimal classification result, wherein the input is as follows:
I_train = [I_t1, I_t2, …, I_tn], I_label = [I_l1, I_l2, …, I_ln] and I_test = [I_test1, I_test2, …, I_testh]
where I_train is the training set, I_label is the training label set, I_test is the test set, n is the number of training samples, h is the number of test samples, and n = 4h.
The output is the predicted label set: I_p_label = [I_p1, I_p2, …, I_ph].
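As a hedged sketch of this learning step, scikit-learn's SVC can stand in for the Libsvm classifier (both use the same RBF kernel). The patent searches the penalty c and gamma over [-10, 10] in steps of 0.5, read here as base-2 exponents; a much coarser grid keeps the example fast, and all names are assumptions:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_emotion_classifier(features, labels):
    """Five-fold cross-validated grid search over an RBF-kernel SVM.

    features: (n_samples, n_features) array; labels: (n_samples,).
    Returns the refit best model and its mean cross-validation accuracy.
    """
    grid = {"C": [2.0 ** p for p in (-2, 0, 2, 4)],        # penalty coefficient c
            "gamma": [2.0 ** p for p in (-4, -2, 0)]}       # RBF kernel width
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)    # five-fold CV
    search.fit(features, labels)
    return search.best_estimator_, search.best_score_
```

`best_score_` corresponds to the recognition rate reported per fold-averaged test set in the patent's evaluation.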
In order to solve the above technical problems, the second technical solution adopted by the present invention is: there is provided an olfactory stimulus based EEG acquisition and emotion recognition system comprising:
the experimental paradigm design module is used for designing a standard automatic emotion inducing material presentation process and alternately presenting pure video stimulus and video odor mixed stimulus required by an experiment;
the data acquisition module is used for acquiring the subject's EEG signals under two stimulation modes, pure video stimulation and mixed video-odor stimulation, with the subject feeding back an emotional state label after each stimulus, and for cutting the useful data out of all the acquired EEG data;
the data preprocessing module is used for preprocessing the EEG data obtained by the data acquisition module to obtain a preprocessed EEG signal;
the feature extraction module is used for cutting single experimental samples of different durations from the EEG signal produced by the data preprocessing module and extracting four features: power spectral density (PSD), differential entropy (DE), differential asymmetry (DASM) and rational asymmetry (RASM);
and the learning classification module is used for training and learning the features extracted by the feature extraction module under the two stimuli to obtain the recognition rate of emotion recognition and comparing and analyzing the results obtained by the two stimulus modes.
In a preferred embodiment of the present invention, the pure video stimulation designed by the experimental paradigm design module presents only video stimuli to the subject; the video stimuli are obtained by having several assessors perform a three-class emotion assessment on multiple video segments and selecting the videos with unanimous assessments. For the scent stimuli, the subject receives a number of scents and feeds back emotional state labels, and the scents are classified into positive, negative and neutral scents suited to the subject according to the label corresponding to each scent. Mixed video and odor stimulation presents video stimulation and olfactory stimulation of the same type to the subject simultaneously, with the two stimulation types alternating every 20 s over the duration of the video.
In order to solve the above technical problems, the third technical solution adopted by the present invention is: there is provided an olfactory stimulus based EEG acquisition and emotion recognition apparatus comprising a memory for storing at least one program and a processor for loading said at least one program to perform a method as described in any one of the above.
In order to solve the technical problems, the fourth technical scheme adopted by the invention is as follows: there is provided a storage medium having stored therein processor-executable instructions for performing the method of any one of the above when executed by a processor.
The invention has the beneficial effects that:
(1) the data collected by the invention has higher accuracy:
At present, emotion recognition is mainly realized through facial expressions and speech, but both are easily controlled by the subject's will, so objective results cannot be obtained and accuracy is low. The EEG signal records real brain activity, cannot be altered subjectively, and objectively reflects the subject's emotional state at that moment. For signal acquisition, the invention uses 32 electrodes to collect EEG data, covering the relevant areas of the brain more completely. In addition, the method asks the subject to feed back an emotional state label every 20 s, so feedback is obtained in near real time, reducing the generation of outlier samples; a more accurate model can then be trained and the recognition rate improved;
(2) the invention lays a foundation for further exploring the olfactory emotion-inducing mechanism:
Existing EEG emotion databases are induced by pictures, videos or audio; no olfactory-induced emotional EEG database has been published, and without such data as a basis, the influence of different stimuli on human emotion and the brain's emotional mechanisms cannot be explored further. The invention therefore runs experiments with a novel, independently designed experimental paradigm, acquires EEG signals from 8 subjects, and provides a data basis for further exploring the mechanism of olfactory-evoked emotion;
(3) the invention has wide application prospects:
Studies have shown that the gradual loss or impairment of olfactory function is considered one of the early symptoms of Alzheimer's disease, Parkinson's disease, mesial temporal lobe epilepsy, schizophrenia and similar conditions, and is significantly related to them; olfactory impairment has been described in the preclinical stages of certain neurodegenerative diseases and may be associated with disease progression. The invention collects EEG signals under olfactory stimulation, processes and recognizes them systematically, and judges the degree to which smell stimulates the brain by drawing brain topographic maps and comparing the final recognition rates, from which it can be inferred whether olfactory function is normal. The analysis of results is intuitive and easy to understand, and the data acquisition process has no adverse effect on the brain, so the invention has broad application prospects.
Drawings
FIG. 1 is a flow chart of the olfactory stimulus based EEG acquisition and emotion recognition method of the present invention;
FIG. 2 is a schematic diagram of the data acquisition system;
FIG. 3 is a schematic view of the collection electrode distribution;
FIG. 4 is a schematic diagram of the experimental collection environment;
FIG. 5 is a schematic diagram of the experimental paradigm;
FIG. 6 is a graph of emotional state tag offsets under different stimuli;
FIG. 7 is a comparison graph of the recognition results for different sample lengths, different features, and different stimuli;
FIG. 8 is a graph of the frequency spectra of Fp2 and T7 electrodes under purely video stimulation;
fig. 9 is a graph of the frequency spectra of Fp2 and T7 electrodes under video scent mixing stimulation.
The parts in the drawings are numbered as follows: 1, electrode cap; 2, EEG synchronous amplifier; 3, wireless receiver; 4, interface expander; 5, connecting wire; 6, acquisition computer; 7, stimulation computer.
Detailed Description
The following detailed description of preferred embodiments of the invention, taken together with the accompanying drawings, is intended to make the advantages and features of the invention easier for those skilled in the art to understand, and thereby to define the scope of the invention clearly.
Referring to fig. 1, an embodiment of the present invention includes:
An olfactory stimulus based EEG acquisition and emotion recognition method, comprising the steps of:
S1: designing the experimental paradigm:
First, a pre-experiment is carried out: before the experiment begins, 9 clearly distinct smells are each presented to the subject for 10 s, the subject feeds back an emotional state label for each, and the smells are classified into positive, negative and neutral smells suited to the subject according to the label corresponding to each smell. Specifically, the nine odors are sweet orange, rose, alcohol, water, vinegar, durian, black odor, mint and myrtle, with each odor's type assigned according to each subject's feedback during the pre-experiment;
Secondly, the experimental paradigm is designed: three assessors perform a three-class emotion assessment on 66 video segments, the 32 videos with unanimous assessments are selected as the video stimuli, and the classified smells are combined with video segments of the same character to serve as the mixed video and odor stimuli.
S2: data acquisition: acquiring the subject's EEG signals under two stimulation modes, pure video stimulation and mixed video-odor stimulation, with the subject feeding back an emotional state label after each stimulus; then cutting the useful data, from the event-start index point to the event-end index point, out of all the acquired EEG data;
the stimulation is divided into two modes, wherein one mode is pure video stimulation, namely the stimulation only presents video stimulation to the testee, the other mode is video odor mixed stimulation, namely the stimulation simultaneously presents video stimulation and olfactory stimulation of the same type to the testee, the two types of stimulation are presented alternately every 20s in the video with the average length of 83s, the emotional induction difference caused by different videos is avoided, the EEG signal of the testee is collected, and the emotional tag is fed back by the testee after the presentation of each type of stimulation is finished, namely, the emotional state tag is fed back every 20 s.
Pure video stimulation and mixed olfactory-video stimulation can effectively induce the subject's three emotions (positive, neutral and negative), yielding a valid 32-lead EEG signal and the actual emotional state labels L = (L1, L2, …, Lq), where q is the number of emotional state labels and each label takes a value Lk ∈ {-7, -6, …, 0, …, 6, 7}, with k = 1, 2, …, q indexing the k-th label. The emotion labels are organized along the Valence and Arousal dimensions: Valence is expressed as positive or negative, Arousal as a magnitude 1, 2, 3, …, 7; in subsequent processing, labels from -7 to -3 are negative emotions, -2 to 2 neutral emotions, and 3 to 7 positive emotions.
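The three-class grouping of the feedback labels described above can be sketched with a hypothetical helper:

```python
def label_to_class(lk):
    """Map a feedback label Lk in {-7, ..., 7} to the three-class scheme:
    -7..-3 negative, -2..2 neutral, 3..7 positive."""
    if lk <= -3:
        return "negative"
    if lk >= 3:
        return "positive"
    return "neutral"
```

The sign carries the Valence dimension and the magnitude the Arousal dimension, so this mapping discards Arousal within each class.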
S3: data preprocessing: preprocessing the EEG data intercepted and obtained in the step S2 to obtain a preprocessed EEG signal;
the data preprocessing process comprises the steps of removing 50Hz power frequency interference, mean value and trend of EEG data obtained by intercepting in the step S2, and separating and removing eye movement artifacts by using an ICA Infmax algorithm. The interference of the electromyographic signals is effectively removed by removing the 50Hz power frequency interference.
S4: feature extraction: cutting single experimental samples of different durations from the preprocessed EEG signal and extracting four features: power spectral density (PSD), differential entropy (DE), differential asymmetry (DASM) and rational asymmetry (RASM);
Single experimental samples are divided from the preprocessed data using window lengths of 2 s, 3 s and 4 s, each with 50% overlap, and the PSD, DE, DASM and RASM features are extracted respectively:
PSD = (1/N) ∑_{i=1}^{N} |fft(y)_i|²
DE=log2(PSD)
DASM=PSD(Xleft)-PSD(Xright)
RASM=PSD(Xleft)/PSD(Xright)
where fft(y)_i denotes the value of the i-th point after the time-domain signal is transformed into the frequency domain, i indexes the sample points, X_left denotes the electrodes over the left half of the head, and X_right denotes the corresponding electrodes over the right half, X_left and X_right forming symmetric pairs.
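The segmentation into 2 s, 3 s and 4 s windows with 50% overlap described in this step can be sketched as follows; the function name and the (channels, samples) layout are assumptions:

```python
import numpy as np

def sliding_windows(trial, fs=250, win_s=2.0, overlap=0.5):
    """Split one trial of shape (channels, samples) into overlapping windows.

    Returns an array of shape (n_windows, channels, window_samples).
    """
    win = int(win_s * fs)              # window length in samples
    step = int(win * (1.0 - overlap))  # hop size: 50% overlap -> half a window
    starts = range(0, trial.shape[1] - win + 1, step)
    return np.stack([trial[:, s:s + win] for s in starts])
```

At the 250 Hz sampling rate used here, a 10 s trial cut into 2 s windows with 50% overlap yields 9 windows.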
S5: learning and classification: the features extracted under the two stimulation types are divided into five equal parts by five-fold cross-validation, with four parts as the training set and one as the test set. The training set and the corresponding label set are fed to an SVM classifier for learning, and the SVM parameters are optimized with a grid search to obtain the trained model; the test set is then fed to the trained model to predict labels, which are compared with the true labels to obtain the emotion recognition accuracy, and the recognition rates obtained under the two stimulation modes are compared and analyzed. The specific steps are as follows:
the features extracted under the two stimulation types are divided into 5 parts using five-fold cross-validation, with 4 parts as the training set and 1 part as the test set; the extracted features are learned and classified with a Libsvm classifier using a radial basis function as the kernel:
K(x, x_i) = exp(-|x - x_i|²/σ²)
using a grid optimization algorithm to carry out parameter optimization on a penalty coefficient c and a parameter gamma in an SVM algorithm in an interval of [ -10, 10] with the step length of 0.5 to obtain an optimal classification result, wherein the input is as follows:
I_train = [I_t1, I_t2, …, I_tn], I_label = [I_l1, I_l2, …, I_ln] and I_test = [I_test1, I_test2, …, I_testh]
where I_train is the training set, I_label is the training label set, I_test is the test set, n is the number of training samples, h is the number of test samples, and n = 4h.
The output is the predicted label set: I_p_label = [I_p1, I_p2, …, I_ph].
The method and the advantages thereof are concretely described in the following with reference to the accompanying drawings and the specific experiments and experimental results thereof:
referring to fig. 2, a schematic structural diagram of the data acquisition system in the present embodiment is illustrated. All the acquisition devices of the experiment are a 34-lead electrode cap 1 (comprising a reference electrode ref and a grounding electrode GND), an electroencephalogram synchronous amplifier 2, a wireless receiver 3, an interface expander 4, a connecting wire 5, a computer (an exciting computer 7) provided with E-prime software for presenting stimulation and a computer (an acquisition computer 6) provided with Vision Recorder software, wherein the acquisition devices and the connecting wires are shown in figure 2, the electroencephalogram synchronous amplifier 2 is directly connected with the special wearable electrode cap 1 and the interface expander 4 respectively, and then the interface expander 4 is connected with the acquisition computer 6 and the exciting computer 7 respectively. By the device, stimulation time, event generation nodes and stimulation type labels provided by the E-prime in the experimental process can be transmitted to the Vision Recorder, and the Vision Recorder simultaneously stores the data, the position information (X1, X2 and X3) of the electroencephalogram synchronous amplifier 2 and the corresponding electroencephalogram signals and displays the data on the display interface of the software. The data collected was 32 channels of electroencephalogram signals (excluding the reference electrode, ground electrode, and amplifier locations) with a sampling frequency of 250 Hz.
Figure 3 is an electrode distribution diagram of a standard 32 lead actiCAP electrode cap 1 used for acquisition.
Referring to fig. 4, a schematic diagram of the experimental environment of the acquisition experiment in this example is illustrated. The subject sits in front of the computer presenting the video stimuli while the experimenter runs a program written in E-prime that plays the stimulus videos in order. The screen announces each change of stimulation mode 1 s in advance: when olfactory stimulation is to be added, the cap of the container holding the odor is opened and the container is placed 1 cm in front of the subject's nose, which prevents the odor from spreading widely through the experiment room and affecting the next olfactory stimulus, while ensuring the subject fully receives the stimulation; when the screen announces pure video stimulation 1 s in advance, the experimenter begins to close the container. During the experiment the subject's head must be kept as still as possible and facial expressions suppressed as far as possible, to avoid electromyographic signals interfering with the EEG.
Referring to fig. 5, a schematic diagram of the experimental paradigm used in this example is illustrated. Before the formal acquisition experiment begins, 11 smells are tested in two groups of pre-experiments, each group testing six smells (the neutral smell, water, is used twice). Each smell is presented for 10 s with 30 s between groups; the subject's self-assessment label and EEG signal for each smell are recorded, and the smells are divided into positive, neutral and negative classes according to the self-assessment labels. The formal acquisition experiment comprises 32 trials, each consisting of four parts: a 3 s start prompt, video clip playback, 5 s of emotional state label feedback from the subject, and 15 s of rest. During video playback, to avoid differences in the arousal levels of different videos, each video is divided into four parts (playback remains continuous and uninterrupted) and the two stimulation modes are presented alternately: pure video stimulation and mixed olfactory-video stimulation, i.e. an odor of the same character as the video is added every 20 s, with the computer screen announcing each switch in turn. The remaining trials proceed in the same manner.
Referring to fig. 6, the shift of the emotional state labels for different subjects under different stimuli is illustrated. In the pre-experiment stage, the subject evaluates and gives feedback on 11 smells with labels ranging from -7 to 7, and the smells are divided into positive, negative and neutral according to the fed-back emotion-inducing labels. For each of the positive, negative and neutral smells, the average of the collected feedback labels of each subject is computed, and the average label obtained under pure video stimulation is subtracted from that obtained under mixed olfactory-video stimulation to obtain the offset. According to the tallied emotion label results, after olfactory stimulation is received the Arousal of all three emotions shifts: the positive emotion shifts positively by 1.06, the neutral emotion by 0.17, and the negative emotion shifts negatively by 1.68. In terms of recognition rate, adding olfactory stimulation improves the recognition rate by 1.25%. Adding olfactory stimulation therefore effectively raises the degree of emotion induction, enhances the user's emotional experience and increases brain activity, with the influence being larger on the polar (non-neutral) emotions.
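The offset described above, the mean feedback label under mixed stimulation minus the mean label under pure video stimulation, amounts to a simple difference of means; a hypothetical helper with toy numbers:

```python
import numpy as np

def label_offset(mixed_labels, video_labels):
    """Shift of the mean feedback label when olfactory stimulation is added.

    Positive values indicate a shift toward higher (more positive/aroused)
    labels under mixed olfactory-video stimulation.
    """
    return float(np.mean(mixed_labels) - np.mean(video_labels))
```

Applied per subject and per smell class, these offsets are what fig. 6 visualizes.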
Referring to fig. 7, a comparison of the recognition results of the different features and stimulation modes at different sample lengths is shown. The average recognition rates of the four features extracted from the eight subjects are computed for each sample length, and the results under the two stimulation modes are compared. Overall, the recognition rates are high, indicating both that the acquired EEG signals contain rich emotional information and that the emotional state labels fed back by the subjects are comparatively accurate. Comparing sample lengths, the recognition rate for the 2 s time window is lower than for 3 s and 4 s, since a short sample loses part of the information. Comparing the two stimulation modes, the recognition rate of three-class emotion recognition with olfactory stimulation increases markedly, especially for the DASM and RASM features; the best average recognition results are 98.95% for video-odor mixed stimulation and 97.77% for pure video stimulation, i.e. adding olfactory stimulation raises the best result by 1.18%, indicating that it successfully enhanced the subjects' emotional experience. Comparing the features, PSD and DE outperform the left-right hemisphere asymmetry features DASM and RASM under both stimuli. These results verify the effectiveness and practicality of the present invention.
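The 2 s / 3 s / 4 s samples compared above can be cut from the continuous recording with overlapping windows (claim 5 later specifies 50% overlap). One possible implementation, assuming the data is held as a channels-by-time NumPy array:

```python
import numpy as np

def segment_samples(eeg, fs, win_s, overlap=0.5):
    """Split a (channels, time) EEG array into windows of win_s seconds with
    the given fractional overlap; returns an array of shape
    (n_windows, channels, window_length)."""
    win = int(win_s * fs)
    step = int(win * (1 - overlap))
    starts = range(0, eeg.shape[1] - win + 1, step)
    return np.stack([eeg[:, s:s + win] for s in starts])
```

Shorter windows yield more samples per recording but, as noted above, each sample carries less information.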
Referring to fig. 8, the spectrograms of the Fp2 and T7 electrodes of one subject under pure video stimulation are shown, with the abscissa representing the sample point number and the ordinate the normalized frequency. Spectra evoked by different emotions show different patterns; for both electrodes the pattern separation is more stable in the high frequency band than in the low frequency band, and the spectrogram of the T7 electrode, located over the temporal lobe, shows a clearer pattern separation than that of the Fp2 electrode over the frontal lobe, whose patterns are more blurred and carry lower energy. This indicates that high-frequency EEG oscillations are more relevant to emotion, and that the temporal lobe has a stronger association with emotion.
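Spectrograms of the kind compared in figs. 8 and 9 can be computed with a short-time Fourier transform. A sketch using SciPy; the 1 s window with 50% overlap is an assumption for illustration, not a value taken from the patent:

```python
import numpy as np
from scipy.signal import spectrogram

def channel_spectrogram(x, fs):
    """Time-frequency representation of one EEG channel (e.g. Fp2 or T7).
    Returns frequencies, segment times, and power in dB for display."""
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=fs, noverlap=fs // 2)
    return f, t, 10 * np.log10(Sxx + 1e-12)  # dB scale, epsilon avoids log(0)
```

Higher overall energy and sharper band-wise structure in such plots are what the text reads as stronger emotional engagement.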
Referring to fig. 9, the spectrograms of the Fp2 and T7 electrodes of the same subject under olfactory-video mixed stimulation are shown, with the abscissa representing the sample point number and the ordinate the normalized frequency. After olfactory stimulation is added, the overall energy of the spectrograms increases markedly, meaning that brain activity has increased. In addition, the pattern separation of both the Fp2 and T7 electrodes improves, and finer emotion patterns can be read from the images. This further demonstrates that olfactory stimulation effectively strengthens emotion induction and enhances the subject's emotional experience, and it also corroborates the classification results obtained by the present invention.
The embodiment of the invention also provides an EEG acquisition and emotion recognition system based on olfactory stimulation, which comprises:
the experimental paradigm design module is used for designing a standard automatic emotion inducing material presentation process and alternately presenting pure video stimulus and video odor mixed stimulus required by an experiment;
the data acquisition module is used for acquiring EEG signals of the subject under the two stimulation modes of pure video stimulation and video-odor mixed stimulation, the subject feeding back an emotional state label every 20 s, and for intercepting the useful segments of all acquired EEG data;
the data preprocessing module is used for preprocessing the EEG data obtained by the data acquisition module to obtain a preprocessed EEG signal;
the feature extraction module is used for intercepting single experimental samples of different time lengths from the EEG signal obtained by the data preprocessing module, and for respectively extracting four features: power spectral density (PSD), differential entropy (DE), differential asymmetry (DASM) and rational asymmetry (RASM);
and the classification and identification module is used for training and learning the features extracted by the feature extraction module under the two stimuli to obtain the recognition rate of emotion recognition and comparing and analyzing the results obtained by the two stimulus modes.
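Using the feature definitions given later in claim 5 (DE = log2(PSD), with DASM and RASM as differences and ratios of PSD over symmetric left/right electrode pairs), the feature extraction module could be sketched as follows. The PSD normalization shown is one common choice and is an assumption, as are the electrode index arguments:

```python
import numpy as np

def extract_features(seg, left_idx, right_idx):
    """PSD, DE, DASM and RASM for one (channels, time) sample.
    left_idx / right_idx are indices of symmetric left/right-hemisphere
    electrode pairs (assumed known from the montage)."""
    ffty = np.fft.rfft(seg, axis=1)
    psd = np.mean(np.abs(ffty) ** 2, axis=1) / seg.shape[1]  # per-channel PSD
    de = np.log2(psd)                                        # DE = log2(PSD)
    dasm = psd[left_idx] - psd[right_idx]                    # differential asymmetry
    rasm = psd[left_idx] / psd[right_idx]                    # rational asymmetry
    return psd, de, dasm, rasm
```

In practice the PSD would typically be computed per frequency band before forming DE, DASM and RASM; the whole-spectrum version here keeps the sketch short.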
The pure video stimulation designed by the experimental paradigm design module presents only video stimuli to the subject; the video stimuli are obtained by having multiple assessors perform three-class emotion assessment on multiple video clips and selecting the clips on which the assessments agree. The odor stimulation has the subject receive multiple odors and feed back emotional state labels, and the odors are classified, according to the label corresponding to each odor, into positive, negative and neutral odors suited to that subject. The video-odor mixed stimulation presents video stimulation and olfactory stimulation of the same class to the subject simultaneously, the two stimulation modes alternating every 20 s for the duration of the video.
The olfactory stimulus-based EEG acquisition and emotion recognition system can execute the olfactory stimulus-based EEG acquisition and emotion recognition method provided by the method embodiments, can implement the steps of any combination of those method embodiments, and has the corresponding functions and beneficial effects of the method.
The present embodiment also provides an olfactory stimulus based EEG acquisition and emotion recognition apparatus comprising a memory for storing at least one program and a processor for loading said at least one program to perform the method as described above.
The present embodiments also provide a storage medium having stored therein processor-executable instructions for performing the method as described above when executed by a processor.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.

Claims (10)

1. An olfactory stimulus based EEG acquisition and emotion recognition method, comprising the steps of:
s1: designing an experimental paradigm: the method comprises the following steps that a subject receives various smells and feeds back emotional state labels, the smells are classified into positive, negative and neutral smells suitable for the subject according to the emotional state label corresponding to each smell, and the classified smells are combined with video clips with the same properties to serve as video and smell mixed stimuli;
s2: data acquisition: collecting EEG signals of the subject under the two stimulation modes of pure video stimulation and video-odor mixed stimulation, the subject feeding back an emotional state label after receiving each stimulation; and intercepting, from all collected EEG data, the useful data from the event-start index point to the event-end index point;
s3: data preprocessing: preprocessing the EEG data intercepted and obtained in the step S2 to obtain a preprocessed EEG signal;
s4: feature extraction: intercepting single experimental samples of different time lengths from the preprocessed EEG signal, and respectively extracting four features: power spectral density (PSD), differential entropy (DE), differential asymmetry (DASM) and rational asymmetry (RASM);
s5: and (3) learning and classifying: the features extracted under the two stimulation modes are respectively divided into a training set and a testing set, the training set and the corresponding emotional state labels are put into an SVM classifier to be subjected to learning classification to obtain a trained model, the model is used for predicting the emotional state labels of the testing set to obtain the recognition rate of emotion recognition, and the results obtained by the two stimulation modes are compared and analyzed.
2. The method for EEG collection and emotion recognition based on olfactory stimuli as claimed in claim 1, wherein in step S2, said pure video stimuli is presented to the subject only with video stimuli, and the video stimuli is three-classification emotion assessment of the multiple video segments by multiple assessors, and the videos with the same assessment result are selected; the video and odor mixed stimulation means that the video stimulation and the olfactory stimulation of the same type are presented to the subject at the same time, and the two types of stimulation are presented alternately every 20s in the video for a certain time.
3. The olfactory stimulus based EEG collection and emotion recognition method of claim 1, wherein in step S2, the pure video stimulation and the olfactory-video mixed stimulation evoke three emotions, positive, neutral and negative, in the subject, obtaining valid 32-lead EEG signals and the actual emotional state labels L = {L1, L2, …, Lq}, where q represents the number of emotional state labels and each label takes a value Lk ∈ {-7, -6, …, 0, …, 6, 7}, k denoting the k-th label, k = 1, 2, …, q; the emotion labels are classified using the Valence dimension and the Arousal dimension, the Valence dimension being expressed as positive and negative and the Arousal dimension as 1, 2, 3, …, 7, i.e. -7 to -3 are negative emotions, -2 to 2 are neutral emotions, and 3 to 7 are positive emotions.
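The label-to-class mapping stated in claim 3 (-7 to -3 negative, -2 to 2 neutral, 3 to 7 positive) is simple enough to express directly; this sketch is illustrative only:

```python
def label_to_class(label):
    """Map a feedback label in [-7, 7] to one of the three emotion classes:
    -7..-3 -> negative, -2..2 -> neutral, 3..7 -> positive."""
    if label <= -3:
        return "negative"
    if label >= 3:
        return "positive"
    return "neutral"
```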
4. The olfactory stimulus based EEG acquisition and emotion recognition method of claim 1, wherein, in step S3, the data preprocessing comprises removing 50 Hz power-frequency interference from the EEG data intercepted in step S2, removing the mean and detrending, and separating and removing ocular artifacts using the Infomax algorithm of ICA.
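An illustrative preprocessing chain along the lines of claim 4. Note two deliberate simplifications: scikit-learn's FastICA is used here as a stand-in for the Infomax ICA algorithm named in the claim (Infomax itself is available in, e.g., MNE-Python), and the automatic identification of which components are ocular artifacts is omitted:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt, detrend
from sklearn.decomposition import FastICA

def preprocess(eeg, fs):
    """50 Hz notch filter, de-mean/de-trend, then ICA decomposition of a
    (channels, time) EEG array. In practice the ocular components would be
    identified, zeroed, and the signal back-projected."""
    b, a = iirnotch(50.0, Q=30.0, fs=fs)           # 50 Hz power-line notch
    x = filtfilt(b, a, eeg, axis=1)                # zero-phase filtering
    x = detrend(x, axis=1, type="linear")          # removes mean and linear trend
    ica = FastICA(n_components=x.shape[0], random_state=0, max_iter=1000)
    sources = ica.fit_transform(x.T).T             # (components, time)
    return x, ica, sources
```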
5. The olfactory stimulus based EEG collection and emotion recognition method of claim 1, wherein the specific steps of step S4 include:
single experimental samples are divided on the preprocessed data using window lengths of 2 s, 3 s and 4 s with 50% overlap, and the PSD, DE, DASM and RASM features are respectively extracted:
PSD = (1/N) · Σ_{i=1}^{N} |ffty_i|²
DE = log2(PSD)
DASM = PSD(X_left) - PSD(X_right)
RASM = PSD(X_left) / PSD(X_right)
in the formulas, ffty_i represents the signal value corresponding to the i-th point after the time-domain signal is converted into the frequency-domain signal, i denotes a sample point, N the number of sample points, X_left denotes an electrode of the left half of the head and X_right the corresponding electrode of the right half, X_left and X_right being in a symmetric relationship.
6. The olfactory stimulus based EEG collection and emotion recognition method of claim 1, wherein the specific steps of step S5 include:
the features extracted under the two stimulation modes are divided into 5 parts by five-fold cross-validation, 4 parts being the training set and 1 part the test set; the extracted features are learned and classified using the Libsvm classifier, with a radial basis function as the kernel:
K(x, x_i) = exp(-|x - x_i|² / σ²)
a grid optimization algorithm is used to search the penalty coefficient c and the parameter gamma of the SVM algorithm over the range [-10, 10] with a step of 0.5 to obtain the optimal classification result, the input being: I_train = [I_t1, I_t2, …, I_tn], I_label = [I_l1, I_l2, …, I_ln] and I_test = [I_test1, I_test2, …, I_testh],
where I_train is the training set, I_label the training label set, I_test the test set, n the number of training samples and h the number of test samples, with n = 4h.
The output is the predicted label set: I_p_label = [I_p1, I_p2, …, I_ph].
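Claim 6 can be approximated with scikit-learn in place of Libsvm. Interpreting the [-10, 10], step-0.5 search range as base-2 exponents for C and gamma follows common libsvm practice but is an assumption here, as is the added `step` parameter:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

def train_and_score(features, labels, step=0.5):
    """Five-fold SVM classification with an RBF kernel, grid-searching
    C = 2**e and gamma = 2**e for exponents e in [-10, 10] at the given step.
    Returns the best cross-validated accuracy and the chosen parameters."""
    exponents = np.arange(-10, 10 + step, step)
    grid = {"C": 2.0 ** exponents, "gamma": 2.0 ** exponents}
    clf = GridSearchCV(
        SVC(kernel="rbf"), grid,
        cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
    clf.fit(features, labels)
    return clf.best_score_, clf.best_params_
```

With the full step of 0.5 the grid holds 41 × 41 parameter pairs, so the search is sizeable; a coarser step is useful for quick checks.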
7. An olfactory stimulus based EEG acquisition and emotion recognition system, comprising:
the experimental paradigm design module is used for designing a standard automatic emotion inducing material presentation process and alternately presenting pure video stimulus and video odor mixed stimulus required by an experiment;
the data acquisition module is used for acquiring EEG signals of the subject under the two stimulation modes of pure video stimulation and video-odor mixed stimulation, the subject feeding back an emotional state label after receiving each stimulation, and for intercepting the useful segments of all acquired EEG data;
the data preprocessing module is used for preprocessing the EEG data obtained by the data acquisition module to obtain a preprocessed EEG signal;
the feature extraction module is used for intercepting single experimental samples of different time lengths from the EEG signal obtained by the data preprocessing module, and for respectively extracting four features: power spectral density (PSD), differential entropy (DE), differential asymmetry (DASM) and rational asymmetry (RASM);
and the classification and identification module is used for training and learning the features extracted by the feature extraction module under the two stimuli to obtain the recognition rate of emotion recognition and comparing and analyzing the results obtained by the two stimulus modes.
8. The system according to claim 7, wherein the pure video stimulation designed by the experimental paradigm design module presents only video stimuli to the subject; the video stimuli are obtained by having multiple assessors perform three-class emotion assessment on multiple video clips and selecting the clips on which the assessments agree; the odor stimulation has the subject receive multiple odors and feed back emotional state labels, the odors being classified, according to the label corresponding to each odor, into positive, negative and neutral odors suited to that subject; the video-odor mixed stimulation presents video stimulation and olfactory stimulation of the same class to the subject simultaneously, the two stimulation modes alternating every 20 s for the duration of the video.
9. An olfactory stimulus based EEG acquisition and emotion recognition apparatus comprising a memory for storing at least one program and a processor for loading said at least one program to perform the method of any of claims 1 to 6.
10. A storage medium having stored therein processor-executable instructions, which when executed by a processor, are configured to perform the method of any one of claims 1 to 6.
CN202011364249.2A 2020-11-27 2020-11-27 Olfactory stimulus-based EEG (electroencephalogram) acquisition and emotion recognition method and system Pending CN112347984A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011364249.2A CN112347984A (en) 2020-11-27 2020-11-27 Olfactory stimulus-based EEG (electroencephalogram) acquisition and emotion recognition method and system

Publications (1)

Publication Number Publication Date
CN112347984A true CN112347984A (en) 2021-02-09

Family

ID=74366135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011364249.2A Pending CN112347984A (en) 2020-11-27 2020-11-27 Olfactory stimulus-based EEG (electroencephalogram) acquisition and emotion recognition method and system

Country Status (1)

Country Link
CN (1) CN112347984A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113128353A (en) * 2021-03-26 2021-07-16 安徽大学 Emotion sensing method and system for natural human-computer interaction
CN114048806A (en) * 2021-11-09 2022-02-15 安徽大学 Alzheimer disease auxiliary diagnosis model classification method based on fine-grained deep learning
CN117633606A (en) * 2024-01-26 2024-03-01 浙江大学医学院附属第一医院(浙江省第一医院) Consciousness detection method, equipment and medium based on olfactory stimulus and facial expression
CN117633606B (en) * 2024-01-26 2024-04-19 浙江大学医学院附属第一医院(浙江省第一医院) Consciousness detection method, equipment and medium based on olfactory stimulus and facial expression

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050022034A1 (en) * 2003-07-25 2005-01-27 International Business Machines Corporation Method and system for user authentication and identification using behavioral and emotional association consistency
US20120150545A1 (en) * 2009-06-15 2012-06-14 Adam Jay Simon Brain-computer interface test battery for the physiological assessment of nervous system health
CN103210411A (en) * 2010-10-01 2013-07-17 巴洛研株式会社 Emotional matching system and matching method for linking ideal mates
CN103823551A (en) * 2013-03-17 2014-05-28 浙江大学 System and method for realizing multidimensional perception of virtual interaction
CN106526541A (en) * 2016-10-13 2017-03-22 杭州电子科技大学 Sound positioning method based on distribution matrix decision
CN106933353A (en) * 2017-02-15 2017-07-07 南昌大学 A kind of two dimensional cursor kinetic control system and method based on Mental imagery and coded modulation VEP
CN107256393A (en) * 2017-06-05 2017-10-17 四川大学 The feature extraction and state recognition of one-dimensional physiological signal based on deep learning
US20170364929A1 (en) * 2016-06-17 2017-12-21 Sanjiv Ferreira Method and system for identifying, aggregating & transforming emotional states of a user using a temporal phase topology framework
CN108495191A (en) * 2018-02-11 2018-09-04 广东欧珀移动通信有限公司 Video playing control method and related product
CN108494955A (en) * 2018-03-13 2018-09-04 广东欧珀移动通信有限公司 Network connection control method and related product
CN109147279A (en) * 2018-10-19 2019-01-04 燕山大学 A kind of driver tired driving monitoring and pre-alarming method and system based on car networking
US20200000348A1 (en) * 2017-02-28 2020-01-02 Kokon, Inc. Multisensory technology for stress reduction and stress resilience
CN111134666A (en) * 2020-01-09 2020-05-12 中国科学院软件研究所 Emotion recognition method of multi-channel electroencephalogram data and electronic device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
AASIM RAHEEL ET AL: "Emotion recognition in response to traditional and tactile enhanced multimedia using electroencephalography", 《MULTIMEDIA TOOLS AND APPLICATIONS》 *
AASIM RAHEEL ET AL: "Physiological Sensors Based Emotion Recognition While Experiencing Tactile Enhanced Multimedia", 《SENSORS》 *
HUI-RANG HOU ET AL: "Odor-induced emotion recognition based on average frequency band division of EEG signals", 《JOURNAL OF NEUROSCIENCE METHODS》 *
ZHANG GUANHUA ET AL: "A Review of EEG Features for Emotion Recognition" (in Chinese), 《SCIENCE CHINA》 *
ZHAO WEILI: "EEG-Based Virtual Smart Home Control and Tension Emotion Recognition" (in Chinese), 《China Master's Theses Full-text Database, Engineering Science and Technology II》 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination