CN113420591B - Emotion-based OCC-PAD-OCEAN federal cognitive modeling method - Google Patents

Emotion-based OCC-PAD-OCEAN federal cognitive modeling method

Info

Publication number
CN113420591B
CN113420591B (application CN202110523544.6A)
Authority
CN
China
Prior art keywords
emotion
space
personality
occ
pad
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110523544.6A
Other languages
Chinese (zh)
Other versions
CN113420591A (en)
Inventor
刘峰
张嘉淏
王晗阳
沈思源
贾迅
胡静怡
周爱民
齐佳音
李志斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China Normal University
Original Assignee
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China Normal University
Priority to CN202110523544.6A
Publication of CN113420591A
Application granted
Publication of CN113420591B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides an emotion-based OCC-PAD-OCEAN federal cognitive modeling method comprising the following steps: construct a VGG-FACS-OCC model and compute the emotion space vector of the subject video; map the emotion space vector to the PAD mood space according to the parameter-quantized mapping relation between the OCC emotion space and the PAD mood space in the OCC-PAD-OCEAN model, obtaining a mood space vector; and map the mood space vector to the OCEAN personality space, obtaining a personality space vector. The invention maps expressions to the PAD mood space through the established expression-mood mapping relation, then maps the average mood over a period of time through the established mood-personality mapping relation, thereby extracting personality characteristics and finally obtaining personality-space information of statistical significance and credibility.

Description

Emotion-based OCC-PAD-OCEAN federal cognitive modeling method
Technical Field
The invention relates to the technical field of psychological cognitive modeling, in particular to an emotion-based OCC-PAD-OCEAN federal cognitive modeling method.
Background
Psychology is an experimental science: most progress in psychological research is based on psychological experiment paradigms, through which subjective and objective data of subjects are collected and analyzed. In an experiment, the experimenter must observe and record the subjects' behavior in various respects under strictly controlled variables and analyze the collected data. Even so, psychological experiments still suffer from problems such as low reliability of many experimental results and non-repeatable experiments. These problems are partly due to bystander effects and partly due to the limitations of laboratory experiments, so psychological experiments are confined to particular scenes and their conclusions cannot be extrapolated. For these problems, computer technology can precisely control and quantify data, and modern psychometric technology brings several benefits, including avoiding complex reliability measurement, improving construct validity, avoiding exposure effects, and high measurement efficiency. The main objective of most psychological experiments is to explore the principles of human behavior or the patterns of human cognition. From the data-acquisition perspective, collecting observation data through computer technology allows accurate digital control of the whole experimental environment, such as precise acquisition of video signals, audio signals, sensor data, and human motion information. From the perspective of building the experimental environment, computer assistance can give subjects an immersive experience; for example, using virtual reality (VR) technology in studies of emotional psychology can induce emotions more effectively than ordinary picture or language stimuli. Furthermore, computer technology can simulate a hypothetical model and interpret observed behavior, and where experimental conditions are limited, computer simulation can also provide preliminary verification of an idealized hypothesis. Research into human emotional behavior has received increasing attention over the last decade. Affective computing, grounded in psychology and computer science, is a cross-discipline studying how human emotion can be identified, modeled, and even expressed by a computer. Personality computing, an extension of affective computing, can advance all techniques for understanding and predicting human behavior.
Personality is a very important determinant in psychological studies of the direction and prediction of human behavior: it describes stable personal characteristics that can be measured, usually quantitatively, to interpret and predict observable differences in behavior. The Big Five personality model (five-factor model, FFM) is an important theory in today's personality psychology and one of the most influential models in psychological research. Its five factors are openness, conscientiousness, extraversion, agreeableness, and neuroticism. The psychologist Harry Reis described the FFM as "the most scientifically rigorous taxonomy in behavioral science". The Big Five model provides a structure for classifying the personality characteristics of most people; through a set of highly reproducible dimensions, most individual differences can be described briefly and comprehensively. From a computer perspective, the trait model represents personality in numerical form, which is suitable for computer processing. Most personality assessment today takes the form of self-report, assessing personality through statements or adjectives in a scale. The self-report paradigm is simple and clear, but the authenticity of subjects' answers cannot be controlled, and experimental results are influenced by various irrelevant factors and prone to large deviations. One significant limitation of self-assessment is that subjects may bias their scores toward social expectations; in particular, when the assessment may have negative consequences, subjects may hide negative traits, so the results do not reflect reality.
In general, although some current cross-disciplinary research is advancing the computation and quantification of psychological theory, psychological theory is still based largely on traditional qualitative conclusions, which makes it difficult to support a directly quantified model that a computer algorithm can implement. In addition, computer algorithms cannot yet accurately express the emotion theories and emotion models of psychology, and a substantial barrier remains between the two fields. Most existing studies consider the problem only from the perspective of computer science or of psychology, not from the perspective of cross-fusion. Meanwhile, although deep-learning-based facial expression recognition is now well established, research that processes psychological signals with deep learning is still in its infancy. Cognitive modeling methods that deeply fuse emotional psychology theory with algorithms such as deep learning are therefore still relatively lacking, and how to efficiently handle the cognition problem while improving model interpretability is also a key issue.
Disclosure of Invention
The invention provides an emotion-based OCC-PAD-OCEAN federal cognitive modeling method which processes psychological signals with deep learning, addressing the scarcity in the prior art of cognitive modeling methods that deeply fuse deep learning and related algorithms with psychological theory.
To solve the above technical problems, the method provided by the invention comprises the following steps: S1, construct a VGG-FACS-OCC model and compute the emotion space vectors of the subject video; S2, map the emotion space vectors to the PAD mood space according to the parameter-quantized mapping relation between the OCC emotion space and the PAD mood space in the OCC-PAD-OCEAN model, obtaining mood space vectors; S3, map the mood space vectors to the OCEAN personality space, obtaining the personality space vector from the mood space vectors.
Specifically, step S1 comprises the following steps: S11, divide the subject video in time into picture frames Image_t and sample the picture frames at a fixed frequency to obtain sampled frames I_i (i = 1, 2, 3, …, n); S12, preprocess each sampled frame I_i to remove interference information; S13, map the preprocessed sampled frame I_i to the OCC emotion space to obtain the emotion space vector E_i.
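For illustration only, step S11 can be sketched as follows in Python with OpenCV; the video path argument and the one-sample-per-second rate are assumptions of this sketch, not values fixed by the method:

```python
import cv2  # OpenCV for video decoding

def sample_frames(video_path, samples_per_second=1.0):
    """Split a video into frames Image_t and sample them at a fixed
    frequency, returning the sampled frames I_1..I_n (S11)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = max(1, int(round(fps / samples_per_second)))
    frames, t = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if t % step == 0:  # keep every `step`-th frame
            frames.append(frame)
        t += 1
    cap.release()
    return frames
```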
Further, S12 preprocesses the sampled frame I_i with a preprocessing function Pre, and specifically comprises: S121, perform face target detection on sampled frame I_i using the MTCNN face recognition algorithm to obtain a target frame set B = {b_1, b_2, …, b_m}, where b_i = (x_i, y_i, h_i, w_i, p_i); x_i is the upper-left abscissa of the target frame, y_i the upper-left ordinate, h_i the height, w_i the width, and p_i the confidence of the target frame; S122, determine a height threshold h_t, a width threshold w_t, and a confidence threshold p_t, and for any b_i ∈ B retain B' = {b_i : h_i > h_t and w_i > w_t and p_i > p_t}; S123, obtain from B' the target frame b* with the highest confidence p_i, crop I_i according to b*, and resize the cropped I_i to a specific size to obtain Pre(I_i), yielding the emotion space vector E_i = VGG(Pre(I_i)).
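A minimal sketch of the preprocessing function Pre, assuming the facenet-pytorch implementation of MTCNN (the patent does not name a library); the threshold values and the 224×224 output size are likewise assumptions of this illustration:

```python
import numpy as np
from PIL import Image
from facenet_pytorch import MTCNN  # assumed MTCNN implementation

mtcnn = MTCNN(keep_all=True)  # return all detected faces, not just the best one

def pre(frame, h_t=40, w_t=40, p_t=0.9, out_size=224):
    """Pre(I_i): detect face boxes, filter them by the height/width/confidence
    thresholds h_t, w_t, p_t (S121-S122), then crop the highest-confidence
    face b* and resize it (S123). Threshold values here are illustrative."""
    img = Image.fromarray(frame[:, :, ::-1])   # OpenCV BGR -> PIL RGB
    boxes, probs = mtcnn.detect(img)           # boxes as [x1, y1, x2, y2]
    if boxes is None:
        return None                            # no face in this frame
    kept = [((x1, y1, x2, y2), p)
            for (x1, y1, x2, y2), p in zip(boxes, probs)
            if (y2 - y1) > h_t and (x2 - x1) > w_t and p > p_t]  # the set B'
    if not kept:
        return None
    (x1, y1, x2, y2), _ = max(kept, key=lambda bp: bp[1])        # b*: max p_i
    face = img.crop((int(x1), int(y1), int(x2), int(y2)))
    return np.asarray(face.resize((out_size, out_size)))
```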
Specifically, S2 comprises: S21, compute the mood space vector M_i = K × E_i, where K is the conversion matrix of the parameter-quantized mapping relation between the OCC emotion space and the PAD mood space; S22, make the emotion-space-to-mood-space mapping continuous to obtain M_i = K × E'_i; S23, compute the average mood space vector M_v = (1/n) Σ_{i=1}^{n} M_i.
Specifically, S3 computes the personality space vector P_e = Z × M_v, where Z is the conversion matrix from the PAD mood space to the OCEAN personality space.
The above technical scheme has the following advantages or beneficial effects: the invention maps expressions to the PAD mood space through the established expression-mood mapping relation, then maps the average mood over a period of time through the established mood-personality mapping relation, processing psychological signals with deep learning to extract personality characteristics and finally obtain personality-space information of statistical significance and credibility, thereby addressing the scarcity in the prior art of cognitive modeling methods that deeply fuse deep learning and related algorithms.
Drawings
The invention and its features, aspects and advantages will become more apparent from the detailed description of non-limiting embodiments with reference to the following drawings. Like numbers refer to like parts throughout. The drawings are not intended to be drawn to scale, emphasis instead being placed upon illustrating the principles of the invention.
FIG. 1 is a schematic flow chart of an emotion-based OCC-PAD-OCEAN federal cognitive modeling method provided in example 1 of the present invention;
FIG. 2 is a schematic diagram of the model processing of the emotion-based OCC-PAD-OCEAN federal cognitive modeling method provided in example 1 of the present invention;
FIG. 3 is a functional explanatory diagram of various layers of VGG-19 provided in example 1 of the present invention;
FIG. 4 is a FACS-OCC emotion mapping table based on CK+ expression characteristics provided in embodiment 1 of the present invention;
FIG. 5 is a quantized mapping table of PAD mood space and emotion provided in embodiment 1 of the present invention;
FIG. 6 is a schematic diagram of the data processing process of a model of the emotion-based OCC-PAD-OCEAN federal cognitive modeling method provided in example 1 of the present invention;
FIG. 7 is a timing chart of six emotion dimensions of OCC in all frames of a video of the emotion-based OCC-PAD-OCEAN federal cognitive modeling method provided in embodiment 1 of the present invention;
FIG. 8 is a timing chart of PAD mood space in all frames of video of the emotion-based OCC-PAD-OCEAN federal cognitive modeling method provided in embodiment 1 of the present invention;
FIG. 9 is a personality radar chart of the output of the emotion-based OCC-PAD-OCEAN federal cognitive modeling method provided in embodiment 1 of the present invention;
FIG. 10 shows the bias rates of the five personality dimensions over the tested population for the emotion-based OCC-PAD-OCEAN federal cognitive modeling method provided in embodiment 1 of the present invention.
Detailed Description
The invention will now be further described with reference to the drawings and specific examples, which are not intended to limit the invention.
Example 1:
Emotion reflects the short-term state of a person and can change greatly in a short time due to changes in, or stimulation by, external conditions. The OCC emotion model, as a standard model for emotion synthesis, specifies 22 emotion classes. Based on the Ekman theory that all non-basic emotions can be synthesized from basic emotions, the six basic emotions defined by Ekman are used to construct the OCC emotion space, namely anger, disgust, fear, happiness, sadness, and surprise. In vector form the OCC emotion space is expressed as E = [e_angry, e_disgust, e_fear, e_happy, e_sad, e_surprise]^T, where each element takes a value in [0, 1] representing the strength of the emotion.
Mood, as the intermediate quantity between emotion and personality, reflects a person's emotional state over a period of time. From a measurement perspective, mood can be obtained by averaging an individual's emotional states over a period of time; but since combinations of discrete emotional states (e.g. anger, disgust, fear, happiness, sadness) cannot be meaningfully averaged, a conceptual system is needed to establish the basic dimensions of mood. The invention therefore introduces the PAD mood space, consisting of three mutually independent dimensions, Pleasure (P), Arousal (A), and Dominance (D), expressed in vector form as M = [m_P, m_A, m_D]^T with elements ranging over [-1, 1]. Pleasure (P) describes the relative dominance of positive versus negative affective states; Arousal (A) measures how strongly a person is aroused by "high-information" (complex, changing, unexpected) stimuli and how quickly they return to baseline levels; Dominance (D) evaluates a person's sense of control over and influence on their living environment, as well as the sense of being controlled and influenced by others or by events.
Personality reflects individual differences in psychological characteristics, differences that do not change greatly over the long term. The invention therefore adopts the Big Five model to construct the OCEAN personality space, whose five factors are openness, conscientiousness, extraversion, agreeableness, and neuroticism, expressed in vector form as P = [p_O, p_C, p_E, p_A, p_N]^T with elements ranging over [-1, 1]. Openness describes a person's cognitive style, seeking understanding through experience and tolerating and exploring unfamiliar situations; conscientiousness refers to the way self-impulses are controlled, managed, and regulated, assessing an individual's organization, persistence, and motivation in goal-directed behavior; extraversion refers to the number and density of interpersonal interactions, the need for stimulation, and the capacity to obtain pleasure; agreeableness examines an individual's attitudes toward others; neuroticism reflects the individual's emotion-regulation process, i.e., the tendency to experience negative emotions and emotional instability.
Because VGG is a classical deep convolutional neural network with strong image feature extraction capability, a VGG-19 trained on the CK+ dataset is used as the video-to-emotion-space inference model to establish the VGG-FACS-OCC model. Referring to FIGS. 1 and 2, the natural conversation video V is divided in time into picture frames Image_t, and the picture frames are sampled at a fixed frequency to obtain sampled frames I_i (i = 1, 2, 3, …, n). Let the frame preprocessing function be Pre. Pre specifically performs face target detection on sampled frame I_i using the MTCNN face recognition algorithm to obtain a target frame set B = {b_1, b_2, …, b_m}, where b_i = (x_i, y_i, h_i, w_i, p_i); x_i is the upper-left abscissa of the target frame, y_i the upper-left ordinate, h_i the height, w_i the width, and p_i the confidence of the target frame. Height, width, and confidence thresholds h_t, w_t, and p_t are determined, and for any b_i ∈ B the set B' = {b_i : h_i > h_t and w_i > w_t and p_i > p_t} is retained. Finally, the target frame b* with the highest confidence p_i is obtained from B', I_i is cropped according to b* and resized to a specific size to obtain Pre(I_i), yielding the emotion space vector E_i = VGG(Pre(I_i)).
Specifically, the method for converting a single-frame image into an emotion space vector through VGG is as follows. Assume the picture frame to be inferred is a pixel matrix I. The VGG computation mainly involves three types of layers: convolution layers, fully connected layers, and pooling layers. The convolution operation is formalized first:

Z^{l+1}(i, j) = σ( Σ_{k=1}^{K} Σ_{x=1}^{F} Σ_{y=1}^{F} [ Z_k^l(s·i + x, s·j + y) · w_k^{l+1}(x, y) ] + b )

where s and p are the stride and the number of zero-padding layers respectively, Z^l is the input of layer l with Z^0 = I, K is the number of channels of the convolution layer, F is the height and width of the convolution kernel, and L_{l+1}, the spatial size of the input of the layer-(l+1) convolution, satisfies

L_{l+1} = (L_l + 2p - F) / s + 1.

σ(·) denotes a nonlinear activation function, typically the linear rectification (ReLU) function: ReLU(x) = max(0, x). For a transformation of the number of output channels to K', the two-dimensional convolution is typically performed K' times with different convolution kernels, and all results are then concatenated in the channel dimension.
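The size relation can be checked with a short PyTorch sketch; the kernel size, stride, and padding below are arbitrary example values:

```python
import torch
import torch.nn as nn

L_in, F, s, p = 224, 3, 1, 1                      # input size, kernel, stride, padding
L_out = (L_in + 2 * p - F) // s + 1               # L_{l+1} = (L_l + 2p - F)/s + 1

conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=F, stride=s, padding=p)
x = torch.randn(1, 3, L_in, L_in)
assert conv(x).shape[-1] == L_out                 # 224 for VGG's 3x3/s1/p1 convolutions
```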
For the fully connected layer, the formal expression is: Z^{l+1} = σ(W^{l+1} Z^l + b^{l+1}), where W^{l+1} and b^{l+1} are the weight matrix and bias vector of layer l+1.
formalized expression of the pooling layer is as follows: when p → infinity, the pooling operation becomes maximum pooling, denoted as maxpool_f, i.e., the pixel value with the largest gray value in the pooling area is taken. Namely: />
The model parameters of VGG are generally determined by a pre-training process, and its forward-propagation inference can be formalized as VGG(I) = (f_n ∘ f_{n-1} ∘ … ∘ f_1)(I), where f_1, f_2, …, f_n are the functions corresponding to the different levels of the neural network and ∘ denotes function composition; forward propagation is completed by composing the functions of the network's layers. For the VGG-19 realization of VGG, the function of each layer is shown in FIG. 3.
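The composition view translates directly into code; a tiny sketch (the layer names are illustrative, not defined by the patent):

```python
from functools import reduce

def compose(*layers):
    """Forward propagation as function composition f_n ∘ ... ∘ f_1."""
    return lambda x: reduce(lambda acc, f: f(acc), layers, x)

# e.g. vgg = compose(conv1, relu1, pool1, ..., fc_out), applied to a frame I
```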
To adjust the parameters of VGG-19 so that it has good FACS feature extraction and emotion classification capabilities, the VGG-19 model is trained here on the CK+ dataset. The CK+ dataset provides face pictures with FACS-based OCC emotion labeling. The FACS-OCC conversion pattern is shown in FIG. 4. Let the FACS-AU intensity vector be f, each element of which represents the intensity of a FACS AU associated with emotion recognition, with value range [0, 1], and denote the FACS-OCC conversion as a function F2O(f). The optimization objective of the VGG model for a picture I and its FACS-AU feature label f in the training set is then L(VGG(I), F2O(f)) = CrossEntropy(VGG(I), F2O(f)), where the cross-entropy loss is

CrossEntropy(y, ŷ) = - Σ_{i=1}^{n} y_i log ŷ_i,

with n the number of labels; for the OCC emotion classification problem here, n = 6. The VGG model is trained on the CK+ dataset by batch gradient descent, minimizing the objective function L and adjusting the model parameters so that the hidden layers of VGG extract FACS features, finally producing the desired OCC emotion vector.
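A hedged sketch of this training step, assuming a torchvision VGG-19 whose final classifier layer is replaced by a 6-way output; the CK+ loading, learning rate, and epoch count are placeholders, since the text fixes only the loss and the batch-gradient-descent regime:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models

model = models.vgg19(weights=None)          # or pretrained weights as a warm start
model.classifier[6] = nn.Linear(4096, 6)    # 6 OCC emotion classes

criterion = nn.CrossEntropyLoss()           # the cross-entropy objective L
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

def train(ckplus_dataset, epochs=10, batch_size=32):
    """Minimize L(VGG(I), F2O(f)) over CK+ by batch gradient descent.
    `ckplus_dataset` is assumed to yield (image_tensor, occ_label) pairs."""
    loader = DataLoader(ckplus_dataset, batch_size=batch_size, shuffle=True)
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```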
After obtaining the emotion space vector E_i = VGG(Pre(I_i)), the mapping from the emotion space vector to the PAD mood space is computed with reference to the quantized mapping relation between the PAD mood space and emotion shown in FIG. 5. The emotion surprise has no corresponding PAD value in the original PAD scale; after analyzing the PAD values of similar emotions in the original scale, we take its PAD values to be 0.20, 0.45, and -0.45 respectively. Assume the emotions in the table are mutually independent: when one emotion value is at its maximum of 1 and the remaining emotion values are all 0, it can be mapped to the corresponding PAD value on the right of the table. The table is then formalized as the mapping f_e(e = 1.0) = [m_Pe, m_Ae, m_De]^T, e ∈ {e_angry, e_disgust, e_fear, e_happy, e_sad, e_surprise}. For computer operation we write this as the matrix multiplication M_i = K × E_i, where K is the conversion matrix from emotion space to mood space, whose columns are the PAD values of the six emotions, and E_i is a one-hot vector. Through facial expression recognition we obtain the emotion vector E_i = [e_angry, e_disgust, e_fear, e_happy, e_sad, e_surprise]^T; since the table only gives the discrete correspondence, E_i could only take one-hot values such as [1, 0, 0, 0, 0, 0]^T or [0, 1, 0, 0, 0, 0]^T, so we extend it into a continuous mapping to obtain the mood vector M_i corresponding to any recognized emotion vector. Expanding the formula gives the mood space vector M_i = K × E'_i, where E'_i is the emotion vector obtained by computer recognition and may take any value in its range. The arithmetic mean of the mood space vectors then gives the average mood space vector M_v = (1/n) Σ_{i=1}^{n} M_i.
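The matrix form can be sketched as follows; only the surprise column (0.20, 0.45, -0.45) is stated in the text, so the remaining columns of K are placeholders to be filled in from the FIG. 5 table:

```python
import numpy as np

# K in R^{3x6}: rows are (P, A, D), columns are the six OCC emotions in the
# order (angry, disgust, fear, happy, sad, surprise).
K = np.zeros((3, 6))              # placeholder columns, to be taken from FIG. 5
K[:, 5] = [0.20, 0.45, -0.45]     # surprise, the only column stated in the text

def moods(emotion_vectors):
    """M_i = K x E'_i for each recognized emotion vector E'_i, then the
    arithmetic mean M_v = (1/n) * sum_i M_i."""
    E = np.asarray(emotion_vectors)   # shape (n, 6), values in [0, 1]
    M = E @ K.T                       # shape (n, 3): per-frame mood vectors
    return M, M.mean(axis=0)          # (all M_i, average mood M_v)
```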
Finally, the average mood space vector M_v is mapped to the OCEAN personality space. The conversion relation from mood space to personality space, established through linear regression, is as follows:
Sophistication = 0.16P + 0.24A + 0.46D
Conscientiousness = 0.25P + 0.19D
Extraversion = 0.29P + 0.59D
Agreeableness = 0.74P + 0.13A - 0.18D
Emotional stability = 0.43P - 0.49A
Since Sophistication and Openness are both derived from the Culture factor, Sophistication is assumed here to be approximately synonymous with Openness. Emotional stability describes the stability of an individual's emotions, while neuroticism describes their instability; emotional stability is therefore assumed to be the antonym of neuroticism. That is: Sophistication = Openness; Emotional stability = -Neuroticism. This yields the conversion relation from the PAD mood space to the OCEAN personality space:
Openness = 0.16P + 0.24A + 0.46D
Conscientiousness = 0.25P + 0.19D
Extraversion = 0.29P + 0.59D
Agreeableness = 0.74P + 0.13A - 0.18D
Neuroticism = -0.43P + 0.49A
That is, the personality space vector is P_e = Z × M_v, where Z is the conversion matrix from mood space vector to personality space vector, whose rows are the regression coefficients above:

Z =
[  0.16  0.24   0.46 ]
[  0.25  0      0.19 ]
[  0.29  0      0.59 ]
[  0.74  0.13  -0.18 ]
[ -0.43  0.49   0    ]
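Assembling Z row-wise from the regression coefficients above gives a direct sketch of S3; the row order (O, C, E, A, N) is a convention of this illustration:

```python
import numpy as np

# Z in R^{5x3}: rows follow the regression equations above, columns are (P, A, D).
Z = np.array([
    [ 0.16, 0.24,  0.46],   # Openness
    [ 0.25, 0.00,  0.19],   # Conscientiousness
    [ 0.29, 0.00,  0.59],   # Extraversion
    [ 0.74, 0.13, -0.18],   # Agreeableness
    [-0.43, 0.49,  0.00],   # Neuroticism
])

def personality(M_v):
    """P_e = Z x M_v: map the average mood vector to OCEAN space."""
    return Z @ np.asarray(M_v)

# Example with a hypothetical average mood (mildly pleasant, calm, dominant)
print(personality([0.3, -0.1, 0.2]))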
To demonstrate the feasibility of the proposed cognitive modeling and its computer implementation, the invention conducted related experiments with 31 college students as subjects, following the standard procedures and paradigms of psychological experiments. The experimental procedure is as follows: first, a 5-10 minute interview whose content mainly serves to stimulate recall and emotion, during which the subject's expressions are recorded by camera. After the interview, the subject fills in the traditional Big Five personality scale. Finally, the model's validity is demonstrated by separately analyzing the model's results and the results of the traditional Big Five scale. The relevant experimental materials, raw data, and analysis results are open-sourced on GitHub. The hardware environment for the video algorithms and result analysis is: memory, 8 GB; CPU, Intel(R) Core(TM) i5-7300HQ 2.50 GHz, 4 cores; operating system, Debian 10. The processing flow is illustrated in FIG. 6 with the video data of subject ID "lf" as an example.
The main flow of federal cognitive model (EFCM) data processing is as follows: first, an interview is conducted with the subject in the Big Five personality experiment to obtain the subject's video stream; VGG-19 then processes the video stream into OCC emotion feature data based on FACS-OCC emotion modeling; combined with OCC-OCEAN cognitive modeling, this yields the subject's time-series personality data and finally the weighted personality data.
The specific processing flow is as follows. First, six-dimensional OCC expression time-series data are obtained from the subject's video stream through FACS-OCC emotion modeling based on CK+ expression characteristics; the resulting six-dimensional OCC emotion time series is shown in FIG. 7. During the experiment the subject's OCC emotion activation fluctuated to different degrees, most frequently in the happy (Happy) and sad (Sad) emotions, while activation of the four expressions anger (Angry), disgust (Disgust), fear (Fear), and surprise (Surprise) was relatively limited.
From the output six-dimensional OCC emotion time-series data, the PAD time-series data are obtained through the emotion-space-to-mood-space mapping, as shown in FIG. 8.
Combining the Mehrabian mood-space-to-personality-space conversion theory finally yields the dynamic personality recognition data, shown as the small dots in FIG. 9; the results of the traditional Big Five personality scale from psychometrics are shown as the large dots in FIG. 9.
As FIG. 9 shows, the traditional Big Five data represented by the large dots fall within the algorithm's trusted recognition region. To quantify the deviation of each personality dimension, we assess the accuracy of EFCM cognitive modeling by computing the five personality bias rates over the whole tested population. The bias-rate calculation is illustrated with Openness:

Bias_Openness = | (1/n) Σ_{t=1}^{n} Openness_rec_t - Openness | / (Openness_recmax - Openness_recmin)

where Openness_rec_t is the Openness value obtained by the algorithm for frame t, n is the total number of frames, Openness is the value computed from the personality scale, Openness_recmax and Openness_recmin are the maximum and minimum Openness values obtained by the algorithm over all frames, and the absolute value of the numerator is taken to avoid negative results.
The formula above is the ratio of the difference between the arithmetic mean of the algorithmic Openness values and the scale's Openness value to the difference between the maximum and minimum algorithmic Openness values, i.e., the Openness bias rate. The bias rates for Conscientiousness, Extraversion, Agreeableness, and Neuroticism are obtained in the same way.
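A sketch of the bias-rate computation for one dimension; the function name and the example inputs are hypothetical:

```python
import numpy as np

def bias_rate(rec, scale_value):
    """|mean_t(rec_t) - scale| / (max_t(rec_t) - min_t(rec_t)),
    the per-dimension bias rate described above (assumes rec varies)."""
    rec = np.asarray(rec, dtype=float)
    return abs(rec.mean() - scale_value) / (rec.max() - rec.min())

# Hypothetical example for the Openness dimension
openness_rec = [0.42, 0.55, 0.48, 0.61, 0.50]   # algorithmic values per frame
print(bias_rate(openness_rec, scale_value=0.45))
```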
according to the above calculation method, five human deviation rates of the whole tested model of the EFCM cognitive model shown in fig. 10 can be obtained as follows.
The federal cognitive model provided by this method theoretically opens up the whole flow from visual information input to the final output of Big Five personality data, derived throughout in combination with formalization. Apart from the neuroticism dimension, which could not be objectively verified, the experimental results show that the average bias rate of the four valid personality dimensions is about 20.41%, i.e., an average accuracy of 79.56%.
For the neuroticism dimension, whose bias rate is larger, the result is affected by the fact that during the actual test the subject recalls negative emotions that are relatively difficult to capture truly in daily life, which influences the neuroticism result and its bias rate in the subsequent personality comparison. Basic psychological theory indicates that neuroticism is related to a large number of negative emotions; however, in a standard Big Five personality test, the subject's negative emotions do not surface and cannot be deliberately stimulated, so the subject's neuroticism characteristics cannot be accurately tested and recorded in the experiment, objectively causing the abnormal neuroticism results. When the model is applied in real scenes, recording and analysis occur without the subject's awareness, so in theory neuroticism can be tested effectively. The next step of research is therefore to observe and experiment on mass data in mass scenes, with the subjects' permission, to verify the effectiveness of the federal cognitive model under large-scale observation.
The foregoing describes preferred embodiments of the present invention. It should be understood that the invention is not limited to the specific embodiments described above, and devices and structures not described in detail should be understood as implemented in a manner common in the art. Any person skilled in the art may make many possible variations and modifications, or adapt equivalent embodiments, without departing from the technical solution of the present invention; these do not affect its essential content. Therefore, any simple modification, equivalent variation, and adaptation of the above embodiments according to the technical substance of the present invention still falls within the scope of the technical solution of the present invention.

Claims (1)

1. The emotion-based OCC-PAD-OCEAN federal cognitive modeling method is characterized by comprising the following steps of:
s1, constructing a VGG-FACS-OCC model: letting the FACS-AU intensity vector be f, with value range [0,1], and denoting the FACS-OCC conversion as a function F2O(f);
setting the optimization objective of the VGG model as follows:
L(VGG(I),F2O(f))=CrossEntropy(VGG(I),F2O(f))
the cross-entropy loss function is set as: CrossEntropy(y, ŷ) = - Σ_{i=1}^{n} y_i log ŷ_i, where n = 6 is the number of emotion labels;
training the VGG model on the CK+ dataset by batch gradient descent, minimizing the objective function L;
time-slicing the subject video into picture frames Image_t, and sampling the picture frames at a fixed frequency to obtain sampled frames I_i (i = 1, 2, 3, …, n);
performing face target detection on sampled frame I_i using the MTCNN face recognition algorithm to obtain a target frame set B = {b_1, b_2, …, b_m}, where b_i = (x_i, y_i, h_i, w_i, p_i), x_i is the upper-left abscissa of the target frame, y_i the upper-left ordinate, h_i the height, w_i the width, and p_i the confidence of the target frame;
determining a height threshold h_t, a width threshold w_t, and a confidence threshold p_t, and for any b_i ∈ B retaining B' = {b_i : h_i > h_t and w_i > w_t and p_i > p_t};
obtaining from B' the target frame b* with the highest confidence p_i, cropping I_i according to b*, and resizing the cropped I_i to a specific size to obtain Pre(I_i);
performing FACS feature extraction with the hidden layers of VGG, and mapping the preprocessed sampled frame I_i to the OCC emotion space to obtain the emotion space vector E_i = VGG(Pre(I_i));
S2, calculating the mood space vector M_i = K × E_i, where K is the conversion matrix of the parameter-quantized mapping relation between the OCC emotion space and the PAD mood space;
making the emotion-space-to-mood-space mapping continuous to obtain M_i = K × E'_i;
calculating the average mood space vector M_v = (1/n) Σ_{i=1}^{n} M_i;
S3, mapping the average mood space vector to the OCEAN personality space to obtain the personality space vector P_e = Z × M_v, where Z is the conversion matrix from the PAD mood space to the OCEAN personality space.
CN202110523544.6A 2021-05-13 2021-05-13 Emotion-based OCC-PAD-OCEAN federal cognitive modeling method Active CN113420591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110523544.6A CN113420591B (en) 2021-05-13 2021-05-13 Emotion-based OCC-PAD-OCEAN federal cognitive modeling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110523544.6A CN113420591B (en) 2021-05-13 2021-05-13 Emotion-based OCC-PAD-OCEAN federal cognitive modeling method

Publications (2)

Publication Number Publication Date
CN113420591A CN113420591A (en) 2021-09-21
CN113420591B (en) 2023-08-22

Family

ID=77712383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110523544.6A Active CN113420591B (en) 2021-05-13 2021-05-13 Emotion-based OCC-PAD-OCEAN federal cognitive modeling method

Country Status (1)

Country Link
CN (1) CN113420591B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117809354B * 2024-02-29 2024-06-21 South China University of Technology Emotion recognition method, medium and device based on head wearable device perception


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9177060B1 (en) * 2011-03-18 2015-11-03 Michele Bennett Method, system and apparatus for identifying and parsing social media information for providing business intelligence
CN105469065A (en) * 2015-12-07 2016-04-06 中国科学院自动化研究所 Recurrent neural network-based discrete emotion recognition method
CN106970703A (en) * 2017-02-10 2017-07-21 南京威卡尔软件有限公司 Multilayer affection computation method based on mood index
CN108376234A (en) * 2018-01-11 2018-08-07 中国科学院自动化研究所 emotion recognition system and method for video image
CN109730701A (en) * 2019-01-03 2019-05-10 中国电子科技集团公司电子科学研究院 A kind of acquisition methods and device of mood data
CN109815903A (en) * 2019-01-24 2019-05-28 同济大学 A kind of video feeling classification method based on adaptive converged network
CN110110671A (en) * 2019-05-09 2019-08-09 谷泽丰 A kind of character analysis method, apparatus and electronic equipment
KR20210027769A (en) * 2019-09-03 2021-03-11 한국항공대학교산학협력단 Neural network based sentiment analysis and therapy system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Su Wenchao (苏文超). Facial action unit detection and micro-expression analysis. China Master's Theses Full-text Database, Information Science and Technology. 2019, pp. 13-33. *

Also Published As

Publication number Publication date
CN113420591A (en) 2021-09-21

Similar Documents

Publication Publication Date Title
CN111134666B (en) Emotion recognition method of multi-channel electroencephalogram data and electronic device
Bishay et al. Schinet: Automatic estimation of symptoms of schizophrenia from facial behaviour analysis
Liu et al. Reinforcement online learning for emotion prediction by using physiological signals
Vairachilai et al. Body sensor 5 G networks utilising deep learning architectures for emotion detection based on EEG signal processing
Gavrilescu et al. Predicting the Sixteen Personality Factors (16PF) of an individual by analyzing facial features
CN112529054B (en) Multi-dimensional convolution neural network learner modeling method for multi-source heterogeneous data
US20220138583A1 (en) Human characteristic normalization with an autoencoder
Gomez et al. Exploring facial expressions and action unit domains for Parkinson detection
CN112989920A (en) Electroencephalogram emotion classification system based on frame-level feature distillation neural network
JP2022505836A (en) Empathic computing systems and methods for improved human interaction with digital content experiences
Goffinet et al. Inferring low-dimensional latent descriptions of animal vocalizations
CN113420591B (en) Emotion-based OCC-PAD-OCEAN federal cognitive modeling method
Jamal et al. Cloud-based human emotion classification model from EEG signals
Younis et al. Machine learning for human emotion recognition: a comprehensive review
Havugimana et al. Predicting cognitive load using parameter-optimized cnn from spatial-spectral representation of eeg recordings
S'adan et al. Deep learning techniques for depression assessment
Andreas et al. Optimisation of CNN through Transferable Online Knowledge for Stress and Sentiment Classification
Khorrami How deep learning can help emotion recognition
Ahmadieh et al. Visual image reconstruction based on EEG signals using a generative adversarial and deep fuzzy neural network
Guodong et al. Multi feature fusion EEG emotion recognition
Zhang et al. EEG data augmentation for personal identification using SF-GAN
Andreas et al. CNN-based emotional stress classification using smart learning dataset
Li et al. Calibration error prediction: ensuring high-quality mobile eye-tracking
Nikolaevna et al. Bioengineering system for research on human emotional response to external stimuli
Cowen et al. Facial movements have over twenty dimensions of perceived meaning that are only partially captured with traditional methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant