CN115690528A - Cross-subject electroencephalogram signal aesthetic evaluation processing method, device, medium and terminal - Google Patents

Cross-subject electroencephalogram signal aesthetic evaluation processing method, device, medium and terminal

Info

Publication number
CN115690528A
CN115690528A (application number CN202211082746.2A)
Authority
CN
China
Prior art keywords
aesthetic
electroencephalogram
data
picture
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211082746.2A
Other languages
Chinese (zh)
Inventor
王泽彬
邱国平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202211082746.2A
Publication of CN115690528A
Legal status: Pending

Landscapes

  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses a cross-subject electroencephalogram signal aesthetic evaluation processing method, device, medium and terminal. The method comprises: screening out a plurality of aesthetic pictures with true score labels from a preset aesthetic data set to serve as a picture training set; acquiring, in time sequence, raw electroencephalogram data of a target object while the target object views the aesthetic pictures; preprocessing the raw electroencephalogram data and performing feature calculation to obtain electroencephalogram training data; training an initial aesthetic evaluation model on the electroencephalogram training data and the picture training set to obtain a mature aesthetic evaluation model; and performing aesthetic evaluation on a target picture with the mature aesthetic evaluation model, in which the electroencephalogram data recorded while the target picture is viewed are processed and input into the mature aesthetic evaluation model for calculation to obtain an aesthetic score of the target picture, and a corresponding aesthetic category suggestion is given. The method enhances the objectivity of the aesthetic evaluation process, reduces human interference and improves the accuracy of aesthetic evaluation.

Description

Cross-subject electroencephalogram signal aesthetic evaluation processing method, device, medium and terminal
Technical Field
The invention relates to the field of aesthetic evaluation, and in particular to a cross-subject electroencephalogram signal aesthetic evaluation processing method, device, medium and terminal.
Background
Aesthetic perception is accompanied by emotional activity, and emotion plays a strong role in how humans perceive beauty. In art, for example, a creator works under a certain emotional motivation and expects the audience to share that feeling when viewing the work. The aesthetic evaluation of an image is closely tied to whether the image is visually pleasing, and the emotion an image arouses relates to whether the viewer enjoys looking at it, which makes aesthetic evaluation a highly subjective matter. Conversely, the emotions aroused when a target object views an affective picture can be divided into positive and negative emotions; viewing aesthetically pleasing pictures tends to arouse positive emotion, which benefits physical and mental health. Aesthetics and emotion therefore influence each other.
Human brain activity plays an important role in the generation and course of emotion. As electroencephalogram (EEG) emotion recognition research matures, electroencephalogram signals collected from the brain provide a non-invasive, reliable signal from which information related to changes in emotional state can be detected, and electroencephalogram-based emotion recognition offers good objectivity and high temporal resolution. Since aesthetics can arouse emotion, the corresponding emotional activity leaves detectable feedback in the human electroencephalogram signal.
Most existing aesthetic evaluation is carried out manually, so human factors introduce excessive interference and the evaluation results are not accurate enough.
Disclosure of Invention
In view of the defects of the prior art, the present application aims to provide a cross-subject electroencephalogram signal aesthetic evaluation processing method, device, medium and terminal, so as to solve the problems that the existing aesthetic evaluation process suffers from excessive interference by human factors and produces results that are not accurate enough.
In order to solve the above technical problem, a first aspect of the embodiments of the present application provides a cross-subject electroencephalogram signal aesthetic evaluation processing method, the method comprising:
screening out a plurality of aesthetic pictures with real scoring labels from a preset aesthetic data set to serve as a picture training set;
acquiring electroencephalogram original data of a target object when the target object watches the aesthetic picture according to a time sequence;
preprocessing the electroencephalogram original data and calculating characteristics to obtain electroencephalogram training data;
training an initial aesthetic evaluation model based on the electroencephalogram training data and the picture training set to obtain a mature aesthetic evaluation model;
and performing aesthetic evaluation on the target picture based on the mature aesthetic evaluation model, processing the electroencephalogram data when the target picture is viewed, inputting the processed electroencephalogram data into the mature aesthetic evaluation model for calculation to obtain an aesthetic score of the target picture, and giving a corresponding aesthetic type suggestion to the target picture according to the aesthetic score.
As a further improved technical solution, screening out a plurality of aesthetic pictures with true score labels from a preset aesthetic data set as a picture training set includes:
taking an aesthetic picture data set with both emotion scoring labels and aesthetic scoring labels as the preset aesthetic data set;
and selecting an aesthetic picture from the preset aesthetic data set as a picture training set, wherein emotion score labels of the aesthetic picture in the picture training set are in Gaussian distribution.
As a further improved technical solution, the acquiring electroencephalogram raw data of the target object in time sequence when the target object views the aesthetic picture includes:
acquiring electroencephalogram original data of the target object when a preset number of aesthetic pictures are watched within a preset time according to a time sequence, wherein the time for watching each aesthetic picture is the same.
As a further improved technical scheme, the preprocessing and feature calculation of the electroencephalogram raw data to obtain electroencephalogram training data comprises:
filtering the electroencephalogram original data by using a Butterworth zero-phase lag band-pass filter and a Butterworth zero-phase lag notch filter in sequence to obtain processed electroencephalogram data;
and extracting two characteristics of differential entropy and power spectral density in the processed electroencephalogram data to obtain electroencephalogram training data.
As a further improved technical solution, the training an initial aesthetic evaluation model based on the electroencephalogram training data and the picture training set to obtain a mature aesthetic evaluation model includes:
constructing an initial aesthetic assessment model, wherein the initial aesthetic assessment model comprises a common feature extractor, a domain-specific feature extractor, and a domain-specific classifier;
taking the electroencephalogram data of one subject in the electroencephalogram training data as target domain data and the electroencephalogram data of the remaining subjects as source domain data, and inputting the target domain data and the source domain data into the common feature extractor for extraction to obtain domain-invariant features;
pairing the domain-invariant features of each source domain with the domain-invariant features of the target domain, and inputting each pair into the corresponding domain-specific feature extractor to obtain the corresponding domain-specific features;
and training the domain-specific classifier based on the domain-specific features and the real scoring labels to obtain a mature aesthetic evaluation model.
As a further improved technical solution, the training of the domain-specific classifier based on the domain-specific features and the true score labels to obtain a mature aesthetic evaluation model includes:
inputting the domain specific features into the domain specific classifier for calculation to obtain a predicted value;
and comparing the predicted value with the true value in the true score label; if the predicted value differs from the true value, continuing to train the classifier; if the predicted value equals the true value, outputting the mature aesthetic evaluation model.
As a further improved technical solution, the performing aesthetic evaluation on the target picture based on the mature aesthetic evaluation model, processing electroencephalogram data when the target picture is viewed, inputting the processed electroencephalogram data into the mature aesthetic evaluation model for calculation, obtaining an aesthetic score of the target picture, and giving a corresponding aesthetic category suggestion to the target picture according to the aesthetic score includes:
performing aesthetic evaluation on a target picture based on the mature aesthetic evaluation model, collecting electroencephalogram data when the target picture is watched, and sequentially performing preprocessing and feature calculation on the electroencephalogram data to obtain electroencephalogram feature data;
inputting the electroencephalogram feature data into the mature aesthetic evaluation model for calculation to obtain the aesthetic score of the target picture, and giving corresponding aesthetic category suggestions to the target picture according to the aesthetic score.
A second aspect of the embodiments of the present application provides a cross-subject electroencephalogram signal aesthetic evaluation apparatus, including:
the screening module is used for screening a plurality of aesthetic pictures with real scoring labels from a preset aesthetic data set to serve as a picture training set;
the acquisition module is used for acquiring electroencephalogram original data of a target object when the target object watches the aesthetic picture according to a time sequence;
the preprocessing module is used for preprocessing the electroencephalogram original data and calculating characteristics to obtain electroencephalogram training data;
the training module is used for training an initial aesthetic evaluation model based on the electroencephalogram training data and the picture training set to obtain a mature aesthetic evaluation model;
and the evaluation module is used for performing aesthetic evaluation on the target picture based on the mature aesthetic evaluation model, processing the electroencephalogram data when the target picture is viewed, inputting the processed electroencephalogram data into the mature aesthetic evaluation model for calculation to obtain an aesthetic score of the target picture, and giving a corresponding aesthetic category suggestion to the target picture according to the aesthetic score.
A third aspect of embodiments of the present application provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement steps in a method for processing an aesthetic evaluation of brain electrical signals across a subject scene as described in any one of the above.
A fourth aspect of embodiments of the present application provides a terminal device, including: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
the communication bus realizes the connection communication between the processor and the memory;
the processor, when executing the computer readable program, implements the steps in the cross-subject scene electroencephalogram signal aesthetic assessment processing method as described in any one of the above.
Beneficial effects: compared with the prior art, the cross-subject electroencephalogram signal aesthetic evaluation processing method comprises: screening out a plurality of aesthetic pictures with true score labels from a preset aesthetic data set as a picture training set; acquiring, in time sequence, raw electroencephalogram data of a target object while the target object views the aesthetic pictures; preprocessing the raw electroencephalogram data and performing feature calculation to obtain electroencephalogram training data; training an initial aesthetic evaluation model based on the electroencephalogram training data and the picture training set to obtain a mature aesthetic evaluation model; and performing aesthetic evaluation on a target picture based on the mature aesthetic evaluation model, processing the electroencephalogram data recorded while the target picture is viewed, inputting the processed data into the mature aesthetic evaluation model for calculation to obtain an aesthetic score of the target picture, and giving a corresponding aesthetic category suggestion according to the aesthetic score. With this method, the electroencephalogram signal is used to capture information related to aesthetic perception activity, which enhances the objectivity of the aesthetic evaluation process, reduces human interference and improves the accuracy of aesthetic evaluation; the computational connection between electroencephalogram signals and aesthetic perception activity is obtained through deep-learning techniques, and this connection applies not only to a single individual but can also be generalized to a group commonality.
Drawings
FIG. 1 is a flow chart of an electroencephalogram signal aesthetic evaluation processing method across a subject scene according to the present invention.
Fig. 2 is a schematic structural diagram of a terminal device provided in the present invention.
Fig. 3 is a block diagram of the apparatus provided by the present invention.
FIG. 4 is a flowchart frame diagram of the cross-subject scene electroencephalogram signal aesthetic evaluation processing method of the present invention.
FIG. 5 is an aesthetic evaluation model frame diagram of the cross-subject scene electroencephalogram signal aesthetic evaluation processing method of the present invention.
The implementation, functional features and advantages of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
To facilitate an understanding of the present application, the present application will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present application are given in the accompanying drawings. This application may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
The inventor finds that the prior art has the following problems through research:
(1) Human brain activity plays an important role in the generation and course of emotion. As electroencephalogram emotion recognition research matures, electroencephalogram signals collected from the brain provide a non-invasive, reliable signal from which information related to changes in emotional state can be detected, and electroencephalogram-based emotion recognition offers good objectivity and high temporal resolution. Aesthetics can arouse emotion, and this emotional activity leaves corresponding feedback in the human electroencephalogram signal.
However, in the existing aesthetic evaluation process there is too much interference from human factors; the evaluation is overly subjective and insufficiently objective, so the final evaluation results are not ideal.
(2) Human brain activity carries information about aesthetic perception, but most existing studies explain this from a neuroscience perspective, and no study has discussed the relationship between the two from a computational perspective. Early studies of aesthetic preference were made primarily from a psychological point of view.
The electroencephalogram data of different people obey different marginal distributions, so performing aesthetic evaluation with electroencephalogram signals in a cross-subject scenario is itself a research difficulty. The cross-subject setting is also a research hotspot in electroencephalogram emotion recognition, and improving the generalization ability of an electroencephalogram emotion classification model to samples from different individuals is currently one of the most challenging frontier directions in the field.
As shown in fig. 1, the cross-subject electroencephalogram signal aesthetic evaluation processing method provided by the embodiment of the present application includes the following steps:
s1, screening a plurality of aesthetic pictures with real scoring labels from a preset aesthetic data set to serve as a picture training set;
specifically, the preset aesthetic data set is an aesthetic picture data set carrying both emotion labels and aesthetic labels; the data set was obtained through a rigorous, large-scale statistical experiment, and the scene content of the aesthetic pictures is natural scenery. A plurality of aesthetic pictures whose emotion labels follow a Gaussian distribution are selected from the preset aesthetic data set and used as the picture training set.
Screening out a plurality of aesthetic pictures with true score labels from the preset aesthetic data set as a picture training set comprises the following steps:
s101, taking an aesthetic picture data set with an emotion scoring tag and an aesthetic scoring tag as the preset aesthetic data set;
s102, selecting the aesthetic pictures from the preset aesthetic data set as a picture training set, wherein emotion score labels of the aesthetic pictures in the picture training set are in Gaussian distribution.
The preset aesthetic data set is an aesthetic picture data set carrying both emotion labels and aesthetic labels, obtained through a rigorous, large-scale statistical experiment; the scene content of the aesthetic pictures is natural scenery and the data set contains 20994 pictures. Each picture was given an aesthetic score and an emotion score on a 1-7 scale by 20 different raters, and the averages of these ratings were taken as the aesthetic score and emotion score of the corresponding picture, with a score of 4 serving as the boundary for both. Whether analysed from the perspective of the aesthetic scores or of the emotion scores, the overall scores present a Gaussian distribution, which accords with the general rule in statistics; the aesthetic score and emotion score are used as the aesthetic score label and emotion score label of each aesthetic picture.
Ten groups of aesthetic pictures are screened from the preset aesthetic data set, with 300 pictures in each group; the emotion scores of the 300 pictures in each group follow a Gaussian distribution and the aesthetic scores are distributed across the score interval from 1 to 7. These 10 groups of aesthetic pictures form the picture training set.
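For illustration only, the group selection described above could be sketched roughly as follows; the DataFrame object, its column names ('picture_id', 'aesthetic_score', 'emotion_score') and the bell-shaped target histogram are assumptions made for this sketch and are not part of the disclosed data set.

```python
import numpy as np
import pandas as pd

def select_training_groups(dataset, n_groups=10, group_size=300, seed=0):
    """Draw groups of aesthetic pictures whose emotion scores roughly follow a
    bell-shaped (Gaussian-like) histogram over the 1-7 score range."""
    rng = np.random.default_rng(seed)
    bins = np.arange(1, 8)                               # one-point bins 1..7
    weights = np.exp(-0.5 * ((bins - 4.0) / 1.5) ** 2)   # bell-shaped target
    weights /= weights.sum()
    groups = []
    for _ in range(n_groups):
        picked = []
        for left, w in zip(bins, weights):
            pool = dataset[(dataset["emotion_score"] >= left)
                           & (dataset["emotion_score"] < left + 1)]
            k = min(int(round(w * group_size)), len(pool))
            picked.append(pool.sample(k, random_state=int(rng.integers(1 << 31))))
        groups.append(pd.concat(picked).head(group_size))
    return groups
```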
S2, acquiring electroencephalogram original data of a target object when the target object watches the aesthetic picture according to a time sequence;
specifically, attention is paid to the target object's perception of the aesthetic stimuli, and the raw electroencephalogram data of the target object while viewing the aesthetic pictures are collected in time sequence;
wherein, the collecting of the electroencephalogram original data of the target object in time sequence when the target object watches the aesthetic picture comprises:
acquiring electroencephalogram original data of the target object when a preset number of aesthetic pictures are watched within a preset time according to a time sequence, wherein the time for watching each aesthetic picture is the same.
The target objects are human participants, and the number of participating target objects is set to 10. In the days before collection, the target objects are required to get sufficient sleep and rest and to maintain a good mental state, so that the quality of the collected electroencephalogram signals is not affected. The total collection time for each target object is 30 minutes and a single collection session lasts 10 minutes, so each target object is collected 3 times, with no more than 12 hours between sessions; this ensures that the electroencephalogram signals of a target object follow nearly identical marginal distributions. The sampling frequency of the electroencephalogram equipment is set to 500 Hz, and the electroencephalogram equipment and data processing platform provided by the company IMOTION are used for collection.
During collection the target object, correctly wearing the electroencephalogram equipment, watches aesthetic pictures presented randomly on a computer screen; the viewer only needs to consider whether the picture is beautiful and whether it feels pleasant or unpleasant. Each picture is displayed for 6 seconds, each target object views 300 aesthetic pictures in total, and 100 aesthetic pictures are presented randomly within a single 10-minute session. The 100 pictures are randomly divided into two groups of 50, and an all-white picture is displayed for 30 seconds before each group; within these 30 seconds the target object can settle down, clear the mind and focus attention on the experiment, and the interval also allows the target object to adjust to a state in which the next aesthetic picture can be observed normally and comfortably. The whole collection process is carried out with no other light sources, and because of the sensitivity of electroencephalogram signals the target object should remain as still as possible while the aesthetic pictures are displayed.
S3, preprocessing the electroencephalogram original data and performing characteristic calculation to obtain electroencephalogram training data;
and sequentially preprocessing and calculating the characteristics of the acquired electroencephalogram original data, and extracting accurate electroencephalogram training data for model training.
The method for preprocessing the electroencephalogram original data and calculating the characteristics of the electroencephalogram original data to obtain electroencephalogram training data comprises the following steps:
s301, filtering the electroencephalogram original data by using a Butterworth zero-phase lag band-pass filter and a Butterworth zero-phase lag notch filter in sequence to obtain processed electroencephalogram data;
s302, extracting two characteristics of differential entropy and power spectral density in the processed electroencephalogram data to obtain electroencephalogram training data.
Specifically, after the raw electroencephalogram data are collected they need to be preprocessed: because of the sensitivity of electroencephalography, the raw data contain considerable noise. During preprocessing, a Butterworth zero-phase-lag band-pass filter and a Butterworth zero-phase-lag notch filter are applied in sequence to the raw electroencephalogram data, mainly to filter out signals that interfere with the electroencephalogram during collection, such as those caused by eyelid movement or changes in sitting posture. An artifact threshold also needs to be set, and data points whose absolute value exceeds the preset voltage threshold are rejected to ensure the quality of the collected electroencephalogram; the preprocessed electroencephalogram data are thus obtained.
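A minimal sketch of this preprocessing is given below using SciPy. The band-pass cutoffs (0.5-50 Hz), the 50 Hz notch centre and the 100 µV artifact threshold are assumptions chosen for illustration; the text does not specify these values.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 500  # sampling rate used in the embodiment (Hz)

def zero_phase_bandpass(x, low=0.5, high=50.0, order=4, fs=FS):
    """Butterworth band-pass applied forward and backward, i.e. zero phase lag."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x, axis=-1)

def zero_phase_notch(x, center=50.0, half_width=1.0, order=4, fs=FS):
    """Butterworth band-stop around the power-line frequency, zero phase lag."""
    sos = butter(order, [center - half_width, center + half_width],
                 btype="bandstop", fs=fs, output="sos")
    return sosfiltfilt(sos, x, axis=-1)

def reject_artifacts(x, threshold_uv=100.0):
    """Drop time points where any channel exceeds the preset voltage threshold."""
    keep = np.abs(x).max(axis=0) <= threshold_uv
    return x[:, keep]

# raw: array of shape (n_channels, n_samples)
# eeg = reject_artifacts(zero_phase_notch(zero_phase_bandpass(raw)))
```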
Corresponding features are then extracted from the processed electroencephalogram data, mainly the differential entropy and the power spectral density of the electroencephalogram signal; in the feature extraction process the features are extracted separately for the five frequency bands delta, theta, alpha, beta and gamma. The features are computed with the differential entropy formula and the power spectral density formula below. The length of the time window over the brain waveform also needs to be chosen appropriately: since the electroencephalogram collected while a single target object views one aesthetic picture lasts 6 seconds, moving time windows of 3, 4, 5 and 6 seconds are used to compute the differential entropy and power spectral density features, finally yielding the electroencephalogram training data.
The differential entropy is calculated as:

h(X) = -\int_{X} f(x) \log f(x) \, dx

where f(x) is the probability density function of the electroencephalogram signal and X is its range of values.

The power spectral density is calculated as:

PSD(f) = \lim_{T \to \infty} \frac{1}{T} \left| X(f) \right|^{2}

where x(t) is the acquired electroencephalogram signal over a duration T and X(f) is its continuous Fourier transform.
S4, training an initial aesthetic evaluation model based on the electroencephalogram training data and the picture training set to obtain a mature aesthetic evaluation model;
taking the electroencephalogram training data as the input of the initial aesthetic evaluation model, a predicted value is obtained through the calculation of the initial aesthetic evaluation model; the emotion score labels and aesthetic score labels of the picture training set are used as the true score labels, and the predicted value is compared with the true value in the true score labels. If the predicted value differs from the true value, the classifier continues to be trained; if the predicted value equals the true value, the mature aesthetic evaluation model is output.
Wherein, the training of the initial aesthetic evaluation model based on the electroencephalogram training data and the picture training set to obtain a mature aesthetic evaluation model comprises the following steps:
S401, constructing an initial aesthetic evaluation model, wherein the initial aesthetic evaluation model comprises a common feature extractor, domain-specific feature extractors and domain-specific classifiers;
S402, taking the electroencephalogram data of one subject in the electroencephalogram training data as target domain data and the electroencephalogram data of the remaining subjects as source domain data, and inputting the target domain data and the source domain data into the common feature extractor for extraction to obtain domain-invariant features;
S403, pairing the domain-invariant features of each source domain with the domain-invariant features of the target domain, and inputting each pair into the corresponding domain-specific feature extractor to obtain the corresponding domain-specific features;
S404, training the domain-specific classifiers based on the domain-specific features and the true score labels to obtain a mature aesthetic evaluation model.
Training the domain-specific classifier based on the domain-specific features and the truth scoring labels to obtain a mature aesthetic assessment model comprises the following steps:
inputting the domain specific features into the domain specific classifier for calculation to obtain a predicted value;
and comparing the predicted value with the true value in the true score label; if the predicted value differs from the true value, continuing to train the classifier; if the predicted value equals the true value, outputting the mature aesthetic evaluation model.
Specifically, the picture training set used in this embodiment is aesthetically scored from 1 to 7. During training, an aesthetic picture whose score falls in the interval [1, 3) is defined as not beautiful, one in [3, 5) as average, and one in [5, 7] as beautiful; from a deep-learning perspective, the aesthetic evaluation task performed in this embodiment is therefore essentially a three-class classification task. The constructed initial aesthetic evaluation model accordingly consists mainly of three parts: a common feature extractor, domain-specific feature extractors and domain-specific classifiers.
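As an illustration only, the three parts named above could be organised as in the following PyTorch sketch. The use of PyTorch, the fully connected layers and the layer widths are assumptions made for the sketch; the text only names the three components and the three output classes.

```python
import torch
import torch.nn as nn

class AestheticEvaluationModel(nn.Module):
    """Sketch: one common feature extractor shared by all domains, plus one
    domain-specific feature extractor and one three-class classifier per source
    domain.  Layer widths are illustrative assumptions."""

    def __init__(self, in_dim, n_source_domains, hidden=128, specific=64, n_classes=3):
        super().__init__()
        self.common = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.specific = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, specific), nn.ReLU())
            for _ in range(n_source_domains))
        self.classifiers = nn.ModuleList(
            nn.Linear(specific, n_classes) for _ in range(n_source_domains))

    def forward(self, x, domain_idx):
        """Route a batch through the extractor/classifier pair of one source domain;
        returns (class logits, domain-specific features)."""
        z = self.common(x)
        h = self.specific[domain_idx](z)
        return self.classifiers[domain_idx](h), h
```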
In the training process of the model, the electroencephalogram data of one experimental subject in the electroencephalogram training data is taken as the target domain data and the electroencephalogram data of the other subjects as the source domain data; the target domain data and the source domain data are input into the common feature extractor for extraction to obtain the domain-invariant features.
The domain-invariant features of each source domain are paired with the domain-invariant features of the target domain and sent to the corresponding domain-specific feature extractor, so that the domain-specific features of the target domain and of that source domain are obtained from the same domain-specific feature extractor. The distance between the target domain and each source domain in the deep feature space is then measured with the MMD (maximum mean discrepancy) loss; reducing the MMD loss brings the target domain and the source domain closer in feature space and helps the model make better predictions on the target domain. The MMD loss is defined as:

\zeta_{mmd} = \left\| \frac{1}{N_S} \sum_{i=1}^{N_S} \Phi\left(x_S^{(i)}\right) - \frac{1}{N_T} \sum_{j=1}^{N_T} \Phi\left(x_T^{(j)}\right) \right\|_{H}^{2}

where \Phi is the mapping function, H is the reproducing kernel Hilbert space, x_S^{(i)} is a feature vector from the instance matrix X_S formed by the source domain data, x_T^{(j)} is a feature vector from the instance matrix X_T formed by the target domain data, X_S denotes the data of each source domain, X_T the target domain data, N_S the amount of source domain data and N_T the amount of target domain data.
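The text defines the MMD through a mapping Φ into a reproducing kernel Hilbert space but does not specify the kernel; the sketch below estimates the squared MMD empirically with a single-bandwidth Gaussian (RBF) kernel, which is an illustrative assumption.

```python
import torch

def mmd_loss(h_src, h_tgt, sigma=1.0):
    """Empirical squared MMD between source and target feature batches, using a
    single-bandwidth Gaussian (RBF) kernel as the mapping into the RKHS."""
    def rbf(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return (rbf(h_src, h_src).mean()
            + rbf(h_tgt, h_tgt).mean()
            - 2 * rbf(h_src, h_tgt).mean())
```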
The obtained domain-specific features are input into the corresponding classifiers. For the training of each classifier, its loss is evaluated with the cross entropy defined below, where N classifiers are trained with the data of the N source domains:

\zeta_{cls} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{x \in X_{S_i}} \left[ J\left( \hat{Y}_{S_i}(x), Y_{S_i} \right) \right]

where x is a feature vector from the instance matrix formed by the source domain data, X_{S_i} is the data of the i-th source domain, \hat{Y}_{S_i} is the prediction-label matrix of that source domain, Y_{S_i} is the matrix formed by its true score labels, E denotes the mathematical expectation and J denotes the cross entropy.
If the predictions of the N classifiers are simply averaged as the final prediction, the variance is too large, especially when target-domain samples lie near the decision boundary. To reduce this variance, a discrepancy loss is introduced so that the predictions of the N classifiers converge; it is defined as:

\zeta_{disc} = \frac{2}{N(N-1)} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \mathbb{E}_{x \in X_T} \left[ \left| \hat{Y}_{T_i}(x) - \hat{Y}_{T_j}(x) \right| \right]

where x is a feature vector from the instance matrix X_T formed by the target domain data, \hat{Y}_{T_i} is the prediction-label matrix produced for the target domain data by the i-th classifier, and E denotes the mathematical expectation.
Reducing the MMD loss allows better domain-specific features to be extracted for each pair of source and target domains; reducing the classifier loss makes each classifier predict its own source domain better; and reducing the discrepancy loss makes the prediction results more consistent. The whole model is trained on the final overall loss, defined as:

\zeta = \zeta_{cls} + \alpha \zeta_{mmd} + \beta \zeta_{disc}

where \alpha and \beta are hyper-parameters set during training, \zeta_{cls} is the classification loss, \zeta_{mmd} the MMD loss and \zeta_{disc} the discrepancy loss.
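A possible evaluation of this overall loss is sketched below, reusing the AestheticEvaluationModel and mmd_loss sketched earlier; the values of alpha and beta, the per-domain averaging and the L1 form of the pairwise discrepancy are illustrative assumptions rather than values fixed by the text.

```python
import torch
import torch.nn.functional as F

def total_loss(model, source_batches, target_x, alpha=0.5, beta=0.1):
    """One evaluation of zeta = zeta_cls + alpha*zeta_mmd + beta*zeta_disc.
    `source_batches` is a list of (features, labels) pairs, one per source domain."""
    cls_loss, mmd, target_probs = 0.0, 0.0, []
    n = len(source_batches)
    for i, (x_s, y_s) in enumerate(source_batches):
        logits_s, h_s = model(x_s, i)          # i-th source domain through its own branch
        logits_t, h_t = model(target_x, i)     # target data through the same branch
        cls_loss = cls_loss + F.cross_entropy(logits_s, y_s)
        mmd = mmd + mmd_loss(h_s, h_t)
        target_probs.append(F.softmax(logits_t, dim=1))
    # Discrepancy: mean pairwise L1 distance between the N classifiers' target predictions.
    disc, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            disc = disc + (target_probs[i] - target_probs[j]).abs().mean()
            pairs += 1
    disc = disc / max(pairs, 1)
    return cls_loss / n + alpha * (mmd / n) + beta * disc
```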
And S5, performing aesthetic evaluation on the target picture based on the mature aesthetic evaluation model, processing the electroencephalogram data when the target picture is viewed, inputting the processed electroencephalogram data into the mature aesthetic evaluation model for calculation to obtain an aesthetic score of the target picture, and giving a corresponding aesthetic category suggestion to the target picture according to the aesthetic score.
Performing aesthetic evaluation on the target picture based on the mature aesthetic evaluation model, processing the electroencephalogram data recorded while the target picture is viewed and inputting them into the mature aesthetic evaluation model for calculation to obtain the aesthetic score of the target picture, and giving the corresponding aesthetic category suggestion for the target picture according to the aesthetic score, comprises the following steps:
s501, performing aesthetic evaluation on a target picture based on the mature aesthetic evaluation model, collecting electroencephalogram data when the target picture is watched, and sequentially performing preprocessing and feature calculation on the electroencephalogram data to obtain electroencephalogram feature data;
s502, inputting the electroencephalogram feature data into the mature aesthetic evaluation model for calculation to obtain the aesthetic score of the target picture, and giving corresponding aesthetic category suggestions to the target picture according to the aesthetic score.
Specifically, when aesthetic evaluation is performed on a target picture, the electroencephalogram data of a user viewing the target picture are collected, and the preprocessing and feature calculation of step S3 are applied to these electroencephalogram data to obtain electroencephalogram feature data. The obtained electroencephalogram feature data are input into the mature aesthetic evaluation model for calculation. The model is applicable to the cross-subject experimental scenario, that is, the electroencephalogram feature data of several experimental subjects can be input simultaneously, and from the electroencephalogram data recorded while the user views an aesthetic picture the model yields the aesthetic category of the picture being viewed, namely one of the three categories: not beautiful, average, and beautiful.
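A minimal inference sketch is shown below, continuing the earlier model sketch. Averaging the outputs of the N domain-specific classifiers to produce the final category is an assumption; the text only states that the model outputs one of the three aesthetic categories.

```python
import torch

AESTHETIC_CATEGORIES = ["not beautiful", "average", "beautiful"]   # [1,3), [3,5), [5,7]

@torch.no_grad()
def evaluate_target_picture(model, eeg_features, n_source_domains):
    """Average the predictions of the N domain-specific classifiers for one batch of
    preprocessed EEG feature vectors and map the result to an aesthetic category."""
    model.eval()
    probs = torch.stack([torch.softmax(model(eeg_features, i)[0], dim=1)
                         for i in range(n_source_domains)]).mean(dim=0)
    categories = [AESTHETIC_CATEGORIES[c] for c in probs.argmax(dim=1).tolist()]
    return categories, probs
```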
Based on the above cross-subject electroencephalogram signal aesthetic evaluation processing method, this embodiment provides a cross-subject electroencephalogram signal aesthetic evaluation apparatus, which includes:
the screening module 1 is used for screening a plurality of aesthetic pictures with real scoring labels from a preset aesthetic data set to serve as a picture training set;
the acquisition module 2 is used for acquiring electroencephalogram original data of a target object when the target object watches the aesthetic pictures according to a time sequence;
the preprocessing module 3 is used for preprocessing the electroencephalogram original data and calculating characteristics to obtain electroencephalogram training data;
the training module 4 is used for training an initial aesthetic evaluation model based on the electroencephalogram training data and the picture training set to obtain a mature aesthetic evaluation model;
and the evaluation module 5 is used for performing aesthetic evaluation on the target picture based on the mature aesthetic evaluation model, processing the electroencephalogram data when the target picture is viewed, inputting the processed electroencephalogram data into the mature aesthetic evaluation model for calculation to obtain an aesthetic score of the target picture, and giving a corresponding aesthetic category suggestion to the target picture according to the aesthetic score.
In addition, it is worth noting that the working process of the cross-subject electroencephalogram signal aesthetic evaluation apparatus provided in this embodiment is the same as that of the cross-subject electroencephalogram signal aesthetic evaluation processing method described above; for details, reference may be made to the working process of that method, which is not repeated here.
Based on the above-mentioned cross-subject-scene electroencephalogram signal aesthetic evaluation processing method, the present embodiment provides a computer-readable storage medium, which stores one or more programs that can be executed by one or more processors to implement the steps in the cross-subject-scene electroencephalogram signal aesthetic evaluation processing method according to the above-mentioned embodiment.
As shown in fig. 2, based on the above-mentioned cross-subject scene electroencephalogram signal aesthetic evaluation processing method, the present application also provides a terminal device, which includes at least one processor (processor) 20; a display screen 21; and a memory (memory) 22, and may further include a communication Interface (Communications Interface) 23 and a bus 24. The processor 20, the display 21, the memory 22 and the communication interface 23 can communicate with each other through the bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 may transmit information. Processor 20 may call logic instructions in memory 22 to perform the methods in the embodiments described above.
Furthermore, the logic instructions in the memory 22 may be implemented in software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product.
The memory 22, which is a computer-readable storage medium, may be configured to store a software program, a computer-executable program, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 executes the functional application and data processing, i.e. implements the method in the above-described embodiments, by executing the software program, instructions or modules stored in the memory 22.
The memory 22 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the terminal device. Further, the memory 22 may include high-speed random access memory and may also include non-volatile memory, for example various media capable of storing program code such as a USB disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk; it may also be a transient storage medium.
Compared with the prior art, the cross-subject electroencephalogram signal aesthetic evaluation processing method comprises: screening out a plurality of aesthetic pictures with true score labels from a preset aesthetic data set as a picture training set; acquiring, in time sequence, raw electroencephalogram data of a target object while the target object views the aesthetic pictures; preprocessing the raw electroencephalogram data and performing feature calculation to obtain electroencephalogram training data; training an initial aesthetic evaluation model based on the electroencephalogram training data and the picture training set to obtain a mature aesthetic evaluation model; and performing aesthetic evaluation on a target picture based on the mature aesthetic evaluation model, processing the electroencephalogram data recorded while the target picture is viewed, inputting the processed data into the mature aesthetic evaluation model for calculation to obtain an aesthetic score of the target picture, and giving a corresponding aesthetic category suggestion according to the aesthetic score. With this method, the electroencephalogram signal is used to capture information related to aesthetic perception activity, which enhances the objectivity of the aesthetic evaluation process, reduces human interference and improves the accuracy of aesthetic evaluation; the computational connection between electroencephalogram signals and aesthetic perception activity is obtained through deep-learning techniques, and this connection applies not only to a single individual but can also be generalized to a group commonality. Since appreciating beauty can at times soothe the mind, the method can also, by observing the EEG signal, help select suitable aesthetic content to achieve such a soothing effect.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. A cross-subject electroencephalogram signal aesthetic evaluation processing method, characterized by comprising the following steps:
screening a plurality of aesthetic pictures with real scoring labels from a preset aesthetic data set to serve as a picture training set;
acquiring electroencephalogram original data of a target object when the target object watches the aesthetic picture according to a time sequence;
preprocessing the electroencephalogram original data and performing characteristic calculation to obtain electroencephalogram training data;
training an initial aesthetic evaluation model based on the electroencephalogram training data and the picture training set to obtain a mature aesthetic evaluation model;
and performing aesthetic evaluation on the target picture based on the mature aesthetic evaluation model, processing electroencephalogram data when the target picture is viewed, inputting the processed electroencephalogram data into the mature aesthetic evaluation model for calculation to obtain an aesthetic score of the target picture, and giving a corresponding aesthetic category suggestion to the target picture according to the aesthetic score.
2. The cross-subject electroencephalogram signal aesthetic evaluation processing method as claimed in claim 1, wherein said screening out a plurality of aesthetic pictures with true score labels from a preset aesthetic data set as a picture training set comprises:
taking an aesthetic picture data set with both emotion scoring labels and aesthetic scoring labels as the preset aesthetic data set;
and screening out aesthetic pictures from the preset aesthetic data set to serve as a picture training set, wherein emotion score labels of the aesthetic pictures in the picture training set are in Gaussian distribution.
3. The cross-subject electroencephalogram signal aesthetic evaluation processing method as claimed in claim 1, wherein said acquiring, in time sequence, raw electroencephalogram data of a target object while the target object views the aesthetic pictures comprises:
acquiring electroencephalogram original data of the target object when a preset number of aesthetic pictures are observed within a preset time according to a time sequence, wherein the time for observing each aesthetic picture is the same.
4. The cross-subject electroencephalogram signal aesthetic evaluation processing method as claimed in claim 1, wherein the preprocessing and feature calculation of the raw electroencephalogram data to obtain electroencephalogram training data comprises:
filtering the electroencephalogram original data by using a Butterworth zero-phase lag band-pass filter and a Butterworth zero-phase lag notch filter in sequence to obtain processed electroencephalogram data;
and extracting two characteristics of differential entropy and power spectral density in the processed electroencephalogram data to obtain electroencephalogram training data.
5. The cross-subject electroencephalogram signal aesthetic evaluation processing method as claimed in claim 1, wherein the training of an initial aesthetic evaluation model based on the electroencephalogram training data and the picture training set to obtain a mature aesthetic evaluation model comprises:
constructing an initial aesthetic assessment model, wherein the initial aesthetic assessment model comprises a common feature extractor, a domain-specific feature extractor, and a domain-specific classifier;
taking the electroencephalogram data of one subject in the electroencephalogram training data as target domain data and the electroencephalogram data of the remaining subjects as source domain data, and inputting the target domain data and the source domain data into the common feature extractor for extraction to obtain domain-invariant features;
pairing the domain-invariant features of each source domain with the domain-invariant features of the target domain, and inputting each pair into the corresponding domain-specific feature extractor to obtain the corresponding domain-specific features;
and training the domain-specific classifier based on the domain-specific features and the real scoring labels to obtain a mature aesthetic evaluation model.
6. The cross-subject electroencephalogram signal aesthetic evaluation processing method as claimed in claim 1, wherein said training of said domain-specific classifier based on said domain-specific features and said true score labels to obtain a mature aesthetic evaluation model comprises:
inputting the domain specific features into the domain specific classifier for calculation to obtain a predicted value;
and comparing the predicted value with the true value in the true score label; if the predicted value differs from the true value, continuing to train the classifier; if the predicted value equals the true value, outputting the mature aesthetic evaluation model.
7. The cross-subject electroencephalogram signal aesthetic evaluation processing method as claimed in claim 1, wherein the performing aesthetic evaluation on a target picture based on the mature aesthetic evaluation model, processing the electroencephalogram data recorded while the target picture is viewed and inputting the processed data into the mature aesthetic evaluation model for calculation to obtain an aesthetic score of the target picture, and giving a corresponding aesthetic category suggestion for the target picture according to the aesthetic score comprises:
performing aesthetic evaluation on a target picture based on the mature aesthetic evaluation model, collecting electroencephalogram data when the target picture is watched, and sequentially performing preprocessing and feature calculation on the electroencephalogram data to obtain electroencephalogram feature data;
inputting the electroencephalogram feature data into the mature aesthetic evaluation model for calculation to obtain the aesthetic score of the target picture, and giving corresponding aesthetic category suggestions to the target picture according to the aesthetic score.
8. A cross-subject electroencephalogram signal aesthetic evaluation apparatus, characterized by comprising:
the screening module is used for screening a plurality of aesthetic pictures with real scoring labels from a preset aesthetic data set to serve as a picture training set;
the acquisition module is used for acquiring electroencephalogram original data of a target object when the target object watches the aesthetic picture according to a time sequence;
the preprocessing module is used for preprocessing the electroencephalogram original data and calculating characteristics to obtain electroencephalogram training data;
the training module is used for training an initial aesthetic evaluation model based on the electroencephalogram training data and the picture training set to obtain a mature aesthetic evaluation model;
and the evaluation module is used for performing aesthetic evaluation on the target picture based on the mature aesthetic evaluation model, processing the electroencephalogram data when the target picture is viewed, inputting the processed electroencephalogram data into the mature aesthetic evaluation model for calculation to obtain an aesthetic score of the target picture, and giving a corresponding aesthetic category suggestion to the target picture according to the aesthetic score.
9. A computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps in the cross-subject electroencephalogram signal aesthetic evaluation processing method as claimed in any one of claims 1 to 7.
10. A terminal device, comprising: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
the communication bus realizes connection communication between the processor and the memory;
the processor, when executing the computer readable program, implements the steps in the cross-subject electroencephalogram signal aesthetic evaluation processing method of any one of claims 1 to 7.
CN202211082746.2A 2022-09-06 2022-09-06 Cross-subject electroencephalogram signal aesthetic evaluation processing method, device, medium and terminal Pending CN115690528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211082746.2A CN115690528A (en) 2022-09-06 2022-09-06 Electroencephalogram signal aesthetic evaluation processing method, device, medium and terminal across main body scene

Publications (1)

Publication Number Publication Date
CN115690528A true CN115690528A (en) 2023-02-03

Family

ID=85061373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211082746.2A Pending CN115690528A (en) 2022-09-06 2022-09-06 Electroencephalogram signal aesthetic evaluation processing method, device, medium and terminal across main body scene

Country Status (1)

Country Link
CN (1) CN115690528A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116595456A (en) * 2023-06-06 2023-08-15 之江实验室 Data screening and model training method and device based on brain-computer interface
CN116595456B (en) * 2023-06-06 2023-09-29 之江实验室 Data screening and model training method and device based on brain-computer interface


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination