CN110321440A - Method and system for personality assessment based on emotional state and emotional change - Google Patents

Method and system for personality assessment based on emotional state and emotional change

Info

Publication number
CN110321440A
CN110321440A
Authority
CN
China
Prior art keywords
personality
emotion
testee
data
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910506596.5A
Other languages
Chinese (zh)
Inventor
包能胜 (Bao Nengsheng)
方海涛 (Fang Haitao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shantou University
Original Assignee
Shantou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shantou University
Priority to CN201910506596.5A
Publication of CN110321440A
Legal status: Pending

Classifications

    • G06F 16/41: Information retrieval of multimedia data (e.g. slideshows comprising image and additional audio data): indexing; data structures therefor; storage structures
    • G06F 16/483: Retrieval of multimedia data characterised by metadata automatically derived from the content
    • G06F 16/489: Retrieval of multimedia data characterised by metadata using time information
    • G06F 40/30: Handling natural language data: semantic analysis
    • G06V 20/46: Scenes or scene-specific elements in video content: extracting features or characteristics, e.g. video fingerprints, representative shots or key frames
    • G06V 40/168: Human faces: feature extraction; face representation
    • G06V 40/174: Facial expression recognition
    • G10L 25/63: Speech or voice analysis specially adapted for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Signal Processing (AREA)
  • Psychiatry (AREA)
  • Hospice & Palliative Care (AREA)
  • Acoustics & Sound (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention discloses a method and system for personality assessment based on emotional state and emotional change. The method comprises: capturing video while a subject is currently giving a speech or presentation; preprocessing the video to obtain preprocessed video data, voice data and semantic data; running the trained emotion recognition model E-R-Model on the preprocessed video data, voice data and semantic data to obtain the subject's emotion types and their distribution over time; and running the personality recognition model P-R-Model on the subject's emotion types and their distribution over time to obtain the subject's personality result. In embodiments of the invention, the subject's personality type is assessed from the subject's emotional state and emotional changes during a speech or presentation, which saves testing time and improves the objectivity of the test.

Description

Method and system for personality assessment based on emotional state and emotional change
Technical field
The present invention relates to the fields of machine learning and deep learning, and in particular to a method and system for personality assessment based on emotional state and emotional change.
Background art
In the new era of industrialization and informatization, individualized education is one of the effective ways to improve the quality of education. A necessary condition for effectively implementing individualized education is that each student knows himself or herself deeply and accurately. After the universalization of education, this is exactly the problem that education in the new era poses to the whole of society: neither students themselves nor their teachers can fully understand each student, so individualized education cannot be achieved. Implementing different types and modes of education according to each person's different personality is an essential measure for carrying out individualized education.
The most widely used personality assessment in the world today is the Big Five (five-factor) assessment, which divides personality into extraversion, indicating how outgoing the personality is; neuroticism, indicating the stability of mood; openness, indicating how open the personality is; agreeableness, indicating how affable the personality is; and conscientiousness, indicating how prudent the personality is.
Current personality assessment falls into three categories: questionnaire-based, observation-based, and machine-based. Questionnaire-based assessment, typified by personality scales, designs a reasonable questionnaire that the subject fills in, and the subject's personality is assessed from the statistical results. Observation-based assessment includes the natural experiment method, projective tests and work-sample systems: an assessor designs specific scenarios and tasks, the subject completes the tasks, and the assessor observes the subject to evaluate his or her personality. Machine-based assessment has the subject play various games on a web page (WEB) or application (APP) and evaluates the subject's personality from the game results; examples include Pymetrics, Cognisess and HireVue.
The above techniques have the following disadvantages. (1) Questionnaire-based: first, the accuracy of the test is largely determined by the design of the questionnaire; second, the questionnaire is filled in by the subjects themselves and is strongly affected by their subjectivity, so subjects may fill it in according to their own preconceptions, carelessly, or deliberately. Questionnaire-based methods therefore suffer from low authenticity, small test scope and lack of flexibility. (2) Observation-based: the professional level of the observer determines the accuracy of the test, and the observer's preferences lead to different observations; each subject must be observed individually, which consumes a great deal of manpower and time. Observation-based methods are time-consuming, labor-intensive, produce unstable results and are difficult to scale. (3) Machine-based: these merely move paper questionnaires to a web page or APP; in essence the subject still answers questions or plays games, and personality is assessed from the results, so the accuracy is not high. Current machine-based methods suffer from low accuracy and consume a large amount of the subject's time.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art. The present invention provides a method and system for personality assessment based on emotional state and emotional change, which assesses a subject's personality type from the subject's emotional state and emotional changes during a speech or presentation, saving testing time and improving the objectivity of the test.
To solve the above problems, the invention proposes a personality assessment method based on emotional state and emotional change, the method comprising:
capturing video while a subject is currently giving a speech or presentation;
preprocessing the video to obtain preprocessed video data, voice data and semantic data;
running the trained emotion recognition model E-R-Model on the preprocessed video data, voice data and semantic data to obtain the subject's emotion types and their distribution over time;
running the personality recognition model P-R-Model on the subject's emotion types and their distribution over time to obtain the subject's personality result.
Preferably, the method further comprises:
establishing the emotion recognition model using the interactive emotional dyadic motion capture database.
Preferably, the step of establishing the emotion recognition model using the interactive emotional dyadic motion capture database comprises:
establishing a three-modality (video, voice, semantics) emotion recognition deep learning model;
training the emotion recognition deep learning model with the interactive emotional dyadic motion capture database to obtain the emotion recognition model.
Preferably, the step of establishing the video, voice and semantics three-modality emotion recognition deep learning model comprises:
preprocessing the video data to obtain video feature vectors;
preprocessing the voice data to obtain voice feature vectors;
preprocessing the semantic data to obtain semantic feature vectors;
establishing the emotion recognition deep learning model from the video feature vectors, the voice feature vectors and the semantic feature vectors.
Preferably, the method further comprises:
establishing a personality database from the emotion types and their distribution over time when subjects with personality labels give speeches or presentations;
using the personality database to construct the personality recognition model P-R-Model, whose input is emotion types and emotional changes and whose output is personality type.
Correspondingly, the present invention also provides a personality assessment system based on emotional state and emotional change, the system comprising:
an acquisition module for capturing video while a subject is currently giving a speech or presentation;
a preprocessing module for preprocessing the video to obtain preprocessed video data, voice data and semantic data;
an emotion invocation module for running the trained emotion recognition model E-R-Model on the preprocessed video data, voice data and semantic data to obtain the subject's emotion types and their distribution over time;
a personality invocation module for running the personality recognition model P-R-Model on the subject's emotion types and their distribution over time to obtain the subject's personality result.
Preferably, the system further comprises:
an emotion recognition model building module for establishing the emotion recognition model using the interactive emotional dyadic motion capture database.
Preferably, the emotion recognition model building module comprises:
a model-building unit for establishing the video, voice and semantics three-modality emotion recognition deep learning model;
a training unit for training the emotion recognition deep learning model with the interactive emotional dyadic motion capture database to obtain the emotion recognition model.
Preferably, the model-building unit comprises:
a preprocessing subunit for preprocessing the video data to obtain video feature vectors, preprocessing the voice data to obtain voice feature vectors, and preprocessing the semantic data to obtain semantic feature vectors;
a model-building subunit for establishing the emotion recognition deep learning model from the video feature vectors, the voice feature vectors and the semantic feature vectors.
Preferably, the system further comprises:
a personality recognition model building module for establishing a personality database from the emotion types and their distribution over time when subjects with personality labels give speeches or presentations, and for using the personality database to construct the personality recognition model P-R-Model, whose input is emotion types and emotional changes and whose output is personality type.
In embodiments of the present invention, the subject's emotion types and emotional changes are analyzed from video data of the subject giving a speech or presentation, and an emotion recognition model and a personality recognition model are established to evaluate the subject's personality type. The emotion recognition method combines video, voice and semantics; compared with existing single-modality methods based on video (or images), voice or semantics alone, combining the three data types in parallel improves the accuracy of emotion recognition. By establishing a personality recognition model that relates a subject's emotion types and emotional changes to personality types, the subject's personality can be assessed objectively and without manual intervention. Because the assessment is carried out, whether or not the subject is aware of it, under emotions that are difficult to disguise, it reflects the subject's true personality better than questionnaire-based methods, where subjects may answer deliberately; and compared with the subjectivity of observation-based methods, the present invention is objective and saves a great deal of manpower and time.
The invention has the following advantages:
Advantage one: the invention reflects a subject's personality tendencies more truthfully. The invention assesses the subject's personality from the emotion types and emotional changes during a speech or presentation; emotions while speaking or presenting in public are extremely difficult to disguise, so assessing personality from these emotions is more authentic.
Advantage two: the assessment results are objective, and the process saves time and manpower. The personality assessment is completed entirely by the system, so the results are objective and are not made inaccurate by the level or bias of an assessor.
Advantage three: the invention can assess all the teachers and students of a school and obtain personality changes or tendencies over a period, allowing timely adjustment and improvement.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow diagram of the personality assessment method based on emotional state and emotional change according to an embodiment of the present invention;
Fig. 2 is a flow diagram of establishing the emotion recognition model in an embodiment of the present invention;
Fig. 3 is a structural diagram of a first embodiment of the personality assessment system based on emotional state and emotional change according to the present invention;
Fig. 4 is a structural diagram of a second embodiment of the personality assessment system based on emotional state and emotional change according to the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Fig. 1 is a flow diagram of the personality assessment method based on emotional state and emotional change according to an embodiment of the present invention. As shown in Fig. 1, the method comprises:
S101: capture video while a subject is currently giving a speech or presentation;
S102: preprocess the video to obtain preprocessed video data, voice data and semantic data;
S103: run the trained emotion recognition model E-R-Model on the preprocessed video data, voice data and semantic data to obtain the subject's emotion types and their distribution over time;
S104: run the personality recognition model P-R-Model on the subject's emotion types and their distribution over time to obtain the subject's personality result.
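As a minimal sketch, steps S101 to S104 could be driven by code along the following lines (every function and parameter name here is an illustrative placeholder rather than part of the patent, and the two models are assumed to expose a Keras-style predict method):

```python
def assess_personality(video_path, preprocess, e_r_model, p_r_model):
    """Hypothetical driver for steps S101-S104; all names are illustrative.

    preprocess : callable returning (video_feats, audio_feats, text_feats)
    e_r_model  : trained emotion recognition model (used in S103)
    p_r_model  : trained personality recognition model (used in S104)
    """
    # S101/S102: split the recorded speech video into the three modalities.
    video_feats, audio_feats, text_feats = preprocess(video_path)

    # S103: one emotion distribution per time segment, i.e. the subject's
    # emotion types and their distribution over time.
    emotion_timeline = e_r_model.predict([audio_feats, text_feats, video_feats])

    # S104: map the emotion timeline to the subject's personality result.
    return p_r_model.predict(emotion_timeline)
```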
In embodiments of the present invention, the subject's emotion types and emotional changes are analyzed from video data of the subject giving a speech or presentation, and an emotion recognition model and a personality recognition model are established. The method further comprises:
establishing the emotion recognition model using the interactive emotional dyadic motion capture database.
Specifically, as shown in Fig. 2, the step of establishing the emotion recognition model using the interactive emotional dyadic motion capture database comprises:
S201: establish a three-modality (video, voice, semantics) emotion recognition deep learning model;
S202: train the emotion recognition deep learning model with the interactive emotional dyadic motion capture database to obtain the emotion recognition model.
Further, S201 comprises:
preprocessing the video data to obtain video feature vectors;
preprocessing the voice data to obtain voice feature vectors;
preprocessing the semantic data to obtain semantic feature vectors;
establishing the emotion recognition deep learning model from the video feature vectors, the voice feature vectors and the semantic feature vectors.
In a specific implementation, an emotion recognition model is built with the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database. IEMOCAP is an emotion database comprising video, voice and text transcriptions (semantics), in which every dialogue turn in each video, voice and transcription has been given an emotion label. The label types cover ten emotion types: neutral, happiness, sadness, anger, surprise, fear, disgust, frustration, excited and other.
The video data is preprocessed as follows:
the video data in the IEMOCAP database comprises three capture types, namely face, hand and head rotation; the face features comprise 165 features, the hand features 18 features, and the head rotation 6 features, so each video segment carries 189 features per frame. Each video segment is divided evenly along time into 200 sections (the IEMOCAP scripts were written before recording, the actors performed according to the scripts, and then the recordings were made; the scripted and recorded videos comprise 10,039 sentences, i.e. 10,039 segments). After preprocessing, the video data is therefore a feature matrix of shape (200, 189).
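The (200, 189) construction can be illustrated with a short numpy sketch on synthetic data (the raw frame count and the random stand-in features are assumptions for illustration only):

```python
import numpy as np

n_frames = 3000                                  # assumed raw frame count
face = np.random.randn(n_frames, 165)            # stand-in for face capture features
hand = np.random.randn(n_frames, 18)             # stand-in for hand capture features
head = np.random.randn(n_frames, 6)              # stand-in for head-rotation features

frames = np.concatenate([face, hand, head], axis=1)       # (n_frames, 189)
sections = np.array_split(frames, 200, axis=0)            # 200 equal temporal sections
video_vec = np.stack([s.mean(axis=0) for s in sections])  # average each section
assert video_vec.shape == (200, 189)
```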
The voice data is preprocessed as follows:
the audio is divided into short clips of at most 100 frames, each frame being one element; if there are fewer than 100 frames, the missing part is filled with zeros (i.e. no data). Then 34 features are extracted from the audio, each feature being one element, so the preprocessed voice data is a feature matrix of shape (100, 34). The 34 features are listed in Table 1:
Table 1: the 34 extracted audio features

No.     Feature                                  English name
1       Short-time average zero-crossing rate    Zero Crossing Rate
2       Short-time energy                        Energy
3       Energy entropy                           Entropy of Energy
4       Spectral centroid                        Spectral Centroid
5       Spectral spread                          Spectral Spread
6       Spectral entropy                         Spectral Entropy
7       Spectral flux                            Spectral Flux
8       Spectral roll-off                        Spectral Rolloff
9-21    Mel-frequency cepstral coefficients      MFCCs
22-33   Chroma vector                            Chroma Vector
34      Chroma standard deviation                Chroma Deviation
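A hedged sketch of this audio preprocessing is shown below. librosa exposes most, but not all, of the features in Table 1 (the 34-feature list matches the short-term feature set of the pyAudioAnalysis library, which could be used instead), so this sketch computes 30 of the features and zero-pads the remaining four (energy entropy, spectral entropy, spectral flux and chroma deviation); the synthetic input signal is a stand-in for a real clip:

```python
import numpy as np
import librosa

sr = 16000
y = np.random.randn(sr * 3).astype(np.float32)   # stand-in for a real short clip

frame_feats = np.vstack([
    librosa.feature.zero_crossing_rate(y),            # 1: zero crossing rate
    librosa.feature.rms(y=y),                         # 2: short-time energy
    librosa.feature.spectral_centroid(y=y, sr=sr),    # 4: spectral centroid
    librosa.feature.spectral_bandwidth(y=y, sr=sr),   # 5: spectral spread
    librosa.feature.spectral_rolloff(y=y, sr=sr),     # 8: spectral rolloff
    librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13),      # 9-21: MFCCs
    librosa.feature.chroma_stft(y=y, sr=sr),          # 22-33: chroma vector
]).T                                                  # (n_frames, 30)

# Zero-pad the four features not computed here, then fix the length at
# 100 frames: shorter clips are padded with zeros, longer ones truncated.
frame_feats = np.pad(frame_feats, ((0, 0), (0, 34 - frame_feats.shape[1])))
audio_vec = np.zeros((100, 34), dtype=np.float32)
n = min(100, frame_feats.shape[0])
audio_vec[:n] = frame_feats[:n]
assert audio_vec.shape == (100, 34)
```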
The semantic data is preprocessed as follows:
the text transcriptions (semantics) are processed directly with GloVe (Global Vectors for Word Representation), an unsupervised learning algorithm for obtaining vector representations of words. GloVe provides four sets of pretrained word vectors; this embodiment selects the Common Crawl set with parameters (42B tokens, 1.9M vocab, uncased, 300d vectors, 1.75 GB), i.e. 300-dimensional word vectors. The number of words per sample is set to 500, each word being one element, and samples with fewer than 500 words are zero-padded; each piece of semantic data is therefore a feature matrix of shape (500, 300).
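A minimal sketch of this GloVe lookup, assuming the pretrained file glove.42B.300d.txt has been downloaded from the GloVe project page and using a plain whitespace tokenizer (the tokenizer choice is an assumption; the patent does not specify one):

```python
import numpy as np

def load_glove(path="glove.42B.300d.txt"):
    # Each line of the GloVe file is "<word> <300 space-separated floats>".
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vecs[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vecs

def text_to_matrix(text, vecs, max_words=500, dim=300):
    # One 300-d vector per word, zero-padded (or truncated) to 500 words.
    mat = np.zeros((max_words, dim), dtype=np.float32)
    for i, word in enumerate(text.lower().split()[:max_words]):
        if word in vecs:
            mat[i] = vecs[word]      # unknown words stay at zero
    return mat
```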
In establishing the emotion recognition deep learning model from the video, voice and semantic feature vectors, the preprocessed voice feature matrix (100, 34) is one input. Its hidden layers are: a first layer using a bidirectional long short-term memory network (Bidirectional LSTM) with 128 neurons (units) and random deactivation (dropout) of 0.2; a second layer using a bidirectional LSTM with an attention mechanism (Bidirectional LSTM with Attention), likewise with 128 neurons and dropout of 0.2; and a third layer that is a dense layer with 256 neurons, using the rectified linear unit (ReLU) as activation function. The preprocessed semantic (text) feature matrix (500, 300) is another input, with hidden layers: a first LSTM layer with 256 neurons and dropout of 0.2; a second LSTM layer, likewise with 256 neurons and dropout of 0.2; and a third layer that is a dense layer with 256 neurons and ReLU activation. The preprocessed video feature matrix (200, 189) is the third input; its hidden layers follow the same design, i.e. a first LSTM layer with 256 neurons and dropout of 0.2, a second LSTM layer with 256 neurons and dropout of 0.2, and a third dense layer with 256 neurons and ReLU activation.
Finally, the voice, semantic and video networks are connected in parallel into a single network. Each of the three independent networks is a five-layer network consisting of an input layer, first, second and third hidden layers and an output layer, and the third hidden layer of each has the same number of neurons, 256; the three third hidden layers can therefore jointly feed the same output layer (the third hidden layers of the voice, semantic and video networks each output 256 neurons with no essential difference between them, so together they can serve as the input to the ten-emotion output layer). The output layer has 10 neurons, corresponding to the ten emotion types neutral, happiness, sadness, anger, surprise, fear, disgust, frustration, excited and other. The emotion recognition deep learning model is trained with the interactive emotional dyadic motion capture database to obtain the emotion recognition model E-R-Model; the IEMOCAP database is used to train E-R-Model and to test its accuracy.
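The three-branch network described above can be sketched in Keras roughly as follows. This is a non-authoritative reconstruction: the concrete attention layer, the use of the LSTM dropout argument for the 0.2 random deactivation, and concatenating the three 256-unit branch outputs before the ten-way softmax are assumptions about details the text leaves open:

```python
from tensorflow.keras import layers, Model

# Audio branch: (100, 34) -> BiLSTM(128) -> BiLSTM(128) + attention -> Dense(256, ReLU)
audio_in = layers.Input(shape=(100, 34))
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True, dropout=0.2))(audio_in)
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True, dropout=0.2))(x)
x = layers.Attention()([x, x])            # dot-product self-attention (assumed form)
x = layers.GlobalAveragePooling1D()(x)    # collapse the time axis
audio_out = layers.Dense(256, activation="relu")(x)

# Text branch: (500, 300) -> LSTM(256) -> LSTM(256) -> Dense(256, ReLU)
text_in = layers.Input(shape=(500, 300))
y = layers.LSTM(256, return_sequences=True, dropout=0.2)(text_in)
y = layers.LSTM(256, dropout=0.2)(y)
text_out = layers.Dense(256, activation="relu")(y)

# Video branch: (200, 189) -> LSTM(256) -> LSTM(256) -> Dense(256, ReLU)
video_in = layers.Input(shape=(200, 189))
z = layers.LSTM(256, return_sequences=True, dropout=0.2)(video_in)
z = layers.LSTM(256, dropout=0.2)(z)
video_out = layers.Dense(256, activation="relu")(z)

# Fuse the three 256-dim branch outputs and classify into the 10 emotion types.
merged = layers.Concatenate()([audio_out, text_out, video_out])
emotion_out = layers.Dense(10, activation="softmax")(merged)

e_r_model = Model([audio_in, text_in, video_in], emotion_out)
e_r_model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
```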
After the emotion recognition model E-R-Model is established, subjects are first labeled by personality type using a traditional questionnaire survey; then videos of the subjects giving speeches or presentations are captured; the captured videos are processed in the preprocessing manner described above; and finally the subjects' emotion types and their distribution over time are obtained from the preprocessed data.
In a specific implementation, the method of the embodiment of the present invention further comprises:
establishing a personality database from the emotion types and their distribution over time when subjects with personality labels give speeches or presentations;
using the personality database to construct the personality recognition model P-R-Model, whose input is emotion types and emotional changes and whose output is personality type.
Here the emotion types are the ten emotion types above, and the personality types comprise extraversion, neuroticism, openness, agreeableness and conscientiousness.
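A sketch of what the personality recognition model P-R-Model could look like under these definitions. The sequence length of 200 segments, the single 64-unit LSTM and the sigmoid trait scores are assumptions, since the patent specifies only the input (emotion types and their distribution over time) and the output (personality type):

```python
from tensorflow.keras import layers, Model

emo_in = layers.Input(shape=(200, 10))            # emotion distribution per segment
h = layers.LSTM(64)(emo_in)                       # summarize the emotional trajectory
p_out = layers.Dense(5, activation="sigmoid")(h)  # one score per Big Five trait

p_r_model = Model(emo_in, p_out)
p_r_model.compile(optimizer="adam", loss="mse")
```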
Correspondingly, an embodiment of the present invention also provides a personality assessment system based on emotional state and emotional change. As shown in Fig. 3, the system comprises:
an acquisition module 1 for capturing video while a subject is currently giving a speech or presentation;
a preprocessing module 2 for preprocessing the video to obtain preprocessed video data, voice data and semantic data;
an emotion invocation module 3 for running the trained emotion recognition model E-R-Model on the preprocessed video data, voice data and semantic data to obtain the subject's emotion types and their distribution over time;
a personality invocation module 4 for running the personality recognition model P-R-Model on the subject's emotion types and their distribution over time to obtain the subject's personality result.
In embodiments of the present invention, the subject's emotion types and emotional changes are analyzed from video data of the subject giving a speech or presentation, and an emotion recognition model and a personality recognition model are established. As shown in Fig. 4, the system further comprises:
an emotion recognition model building module 5 for establishing the emotion recognition model using the interactive emotional dyadic motion capture database.
In a specific implementation, an emotion recognition model is built with the interactive emotional dyadic motion capture database. IEMOCAP is an emotion database comprising video, voice and text transcriptions (semantics), in which every dialogue turn in each video, voice and transcription has been given an emotion label; the label types cover the ten emotion types neutral, happiness, sadness, anger, surprise, fear, disgust, frustration, excited and other.
Further, the emotion recognition model building module 5 comprises:
a model-building unit for establishing the video, voice and semantics three-modality emotion recognition deep learning model;
a training unit for training the emotion recognition deep learning model with the interactive emotional dyadic motion capture database to obtain the emotion recognition model.
The model-building unit comprises:
a preprocessing subunit for preprocessing the video data to obtain video feature vectors, preprocessing the voice data to obtain voice feature vectors, and preprocessing the semantic data to obtain semantic feature vectors;
a model-building subunit for establishing the emotion recognition deep learning model from the video feature vectors, the voice feature vectors and the semantic feature vectors.
For the preprocessing of the video data, voice data and semantic data, see the corresponding description in the method embodiment.
In addition, the system further comprises:
a personality recognition model building module for establishing a personality database from the emotion types and their distribution over time when subjects with personality labels give speeches or presentations, and for using the personality database to construct the personality recognition model P-R-Model, whose input is emotion types and emotional changes and whose output is personality type.
Here the emotion types are the ten emotion types above, and the personality types comprise extraversion, neuroticism, openness, agreeableness and conscientiousness.
Specifically, for the working principles of the functional modules of the system of the present invention, see the corresponding description of the implementation of the method embodiment; they are not repeated here.
In embodiments of the present invention, the subject's emotion types and emotional changes are analyzed from video data of the subject giving a speech or presentation, and an emotion recognition model and a personality recognition model are established to evaluate the subject's personality type. The emotion recognition method combines video, voice and semantics; compared with existing single-modality methods based on video (or images), voice or semantics alone, combining the three data types in parallel improves the accuracy of emotion recognition. By establishing a personality recognition model that relates a subject's emotion types and emotional changes to personality types, the subject's personality can be assessed objectively and without manual intervention. Because the assessment is carried out, whether or not the subject is aware of it, under emotions that are difficult to disguise, it reflects the subject's true personality better than questionnaire-based methods, where subjects may answer deliberately; and compared with the subjectivity of observation-based methods, the present invention is objective and saves a great deal of manpower and time.
Those of ordinary skill in the art will understand that all or part of the steps in the methods of the above embodiments can be completed by hardware instructed by a program; the program can be stored in a computer-readable storage medium, and the storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The method and system for personality assessment based on emotional state and emotional change provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the invention, and the description of the above embodiments is only intended to help understand the method of the invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and the scope of application according to the idea of the invention. In summary, the contents of this specification should not be construed as limiting the invention.

Claims (10)

1. A personality assessment method based on emotional state and emotional change, characterized in that the method comprises:
capturing video while a subject is currently giving a speech or presentation;
preprocessing the video to obtain preprocessed video data, voice data and semantic data;
running the trained emotion recognition model E-R-Model on the preprocessed video data, voice data and semantic data to obtain the subject's emotion types and their distribution over time;
running the personality recognition model P-R-Model on the subject's emotion types and their distribution over time to obtain the subject's personality result.
2. The personality assessment method based on emotional state and emotional change of claim 1, characterized in that the method further comprises:
establishing the emotion recognition model using the interactive emotional dyadic motion capture database.
3. The personality assessment method based on emotional state and emotional change of claim 2, characterized in that the step of establishing the emotion recognition model using the interactive emotional dyadic motion capture database comprises:
establishing a three-modality (video, voice, semantics) emotion recognition deep learning model;
training the emotion recognition deep learning model with the interactive emotional dyadic motion capture database to obtain the emotion recognition model.
4. The personality assessment method based on emotional state and emotional change of claim 3, characterized in that the step of establishing the video, voice and semantics three-modality emotion recognition deep learning model comprises:
preprocessing the video data to obtain video feature vectors;
preprocessing the voice data to obtain voice feature vectors;
preprocessing the semantic data to obtain semantic feature vectors;
establishing the emotion recognition deep learning model from the video feature vectors, the voice feature vectors and the semantic feature vectors.
5. The personality assessment method based on emotional state and emotional change of claim 1, characterized in that the method further comprises:
establishing a personality database from the emotion types and their distribution over time when subjects with personality labels give speeches or presentations;
using the personality database to construct the personality recognition model P-R-Model, whose input is emotion types and emotional changes and whose output is personality type.
6. A personality assessment system based on emotional state and emotional change, characterized in that the system comprises:
an acquisition module for capturing video while a subject is currently giving a speech or presentation;
a preprocessing module for preprocessing the video to obtain preprocessed video data, voice data and semantic data;
an emotion invocation module for running the trained emotion recognition model E-R-Model on the preprocessed video data, voice data and semantic data to obtain the subject's emotion types and their distribution over time;
a personality invocation module for running the personality recognition model P-R-Model on the subject's emotion types and their distribution over time to obtain the subject's personality result.
7. The personality assessment system based on emotional state and emotional change of claim 6, characterized in that the system further comprises:
an emotion recognition model building module for establishing the emotion recognition model using the interactive emotional dyadic motion capture database.
8. The personality assessment system based on emotional state and emotional change of claim 7, characterized in that the emotion recognition model building module comprises:
a model-building unit for establishing the video, voice and semantics three-modality emotion recognition deep learning model;
a training unit for training the emotion recognition deep learning model with the interactive emotional dyadic motion capture database to obtain the emotion recognition model.
9. The personality assessment system based on emotional state and emotional change of claim 8, characterized in that the model-building unit comprises:
a preprocessing subunit for preprocessing the video data to obtain video feature vectors, preprocessing the voice data to obtain voice feature vectors, and preprocessing the semantic data to obtain semantic feature vectors;
a model-building subunit for establishing the emotion recognition deep learning model from the video feature vectors, the voice feature vectors and the semantic feature vectors.
10. The personality assessment system based on emotional state and emotional change of claim 6, characterized in that the system further comprises:
a personality recognition model building module for establishing a personality database from the emotion types and their distribution over time when subjects with personality labels give speeches or presentations, and for using the personality database to construct the personality recognition model P-R-Model, whose input is emotion types and emotional changes and whose output is personality type.
CN201910506596.5A 2019-06-12 2019-06-12 Method and system for personality assessment based on emotional state and emotional change Pending CN110321440A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910506596.5A CN110321440A (en) 2019-06-12 2019-06-12 Method and system for personality assessment based on emotional state and emotional change

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910506596.5A CN110321440A (en) 2019-06-12 2019-06-12 Method and system for personality assessment based on emotional state and emotional change

Publications (1)

Publication Number Publication Date
CN110321440A 2019-10-11

Family

ID=68120921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910506596.5A Pending CN110321440A (en) Method and system for personality assessment based on emotional state and emotional change

Country Status (1)

Country Link
CN (1) CN110321440A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108646914A (en) * 2018-04-27 2018-10-12 安徽斛兵信息科技有限公司 A kind of multi-modal affection data collection method and device
CN109409433A (en) * 2018-10-31 2019-03-01 北京邮电大学 A kind of the personality identifying system and method for social network user
CN109498039A (en) * 2018-12-25 2019-03-22 北京心法科技有限公司 Personality assessment's method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jin Qin et al.: "Speech emotion recognition based on acoustic features", Computer Science *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113113113A (en) * 2020-01-10 2021-07-13 焦艳巧 Personality model treatment technology and RG-1 personality model artificial intelligence system
CN111540440A (en) * 2020-04-23 2020-08-14 深圳市镜象科技有限公司 Psychological examination method, device, equipment and medium based on artificial intelligence
CN112561474A (en) * 2020-12-14 2021-03-26 华南理工大学 Intelligent personality characteristic evaluation method based on multi-source data fusion
CN112561474B (en) * 2020-12-14 2024-04-30 华南理工大学 Intelligent personality characteristic evaluation method based on multi-source data fusion
CN112507959A (en) * 2020-12-21 2021-03-16 中国科学院心理研究所 Method for establishing emotion perception model based on individual face analysis in video
CN118260407A (en) * 2024-05-30 2024-06-28 青岛网信信息科技有限公司 Emotion simulation method, medium and system of knowledge base question-answering robot
CN118260407B (en) * 2024-05-30 2024-08-06 青岛网信信息科技有限公司 Emotion simulation method, medium and system of knowledge base question-answering robot

Similar Documents

Publication Publication Date Title
CN110321440A (en) A kind of personality assessment's method and system based on emotional state and emotional change
Kory Westlund et al. Flat vs. expressive storytelling: Young children’s learning and retention of a social robot’s narrative
Gärdenfors et al. Using conceptual spaces to model actions and events
Ren Affective information processing and recognizing human emotion
Brickell Performativity or performance?: clarifications in the sociology of gender
US20150302866A1 (en) Speech affect analyzing and training
CN106663383A (en) Method and system for analyzing subjects
Krug Research methods in language variation and change
Li et al. Speech emotion recognition in e-learning system based on affective computing
KR102415102B1 (en) A device that analyzes the personality of the examinee using a picture theme that is meaningful for psychological understanding
Busso et al. Recording audio-visual emotional databases from actors: a closer look
Sonderegger et al. Chatbot-mediated Learning: Conceptual Framework for the Design of Chatbot Use Cases in Education.
Busso et al. Scripted dialogs versus improvisation: lessons learned about emotional elicitation techniques from the IEMOCAP database.
Chao et al. An affective learning interface with an interactive animated agent
Fuyuno et al. Multimodal analysis of public speaking performance by EFL learners: Applying deep learning to understanding how successful speakers use facial movement
Shelton Not an inspiration just for existing: How advertising uses physical disabilities as inspiration: A categorization and model
Henry et al. Cada Día Spanish: An Analysis of Confidence and Motivation in a Social Learning Language MOOC.
Messer-Davidow Knowers, knowing, knowledge: Feminist theory and education
Shukla et al. Entrepreneurial intention for social cause: role of moral obligation, contextual support and barriers
Chen [Retracted] Design of Piano Intelligent Teaching System Based on Neural Network Algorithm
Liu et al. Deep learning scoring model in the evaluation of oral English teaching
Kelemen et al. Creative processes of impact making: advancing an American Pragmatist Methodology
Li [Retracted] Emotional Interactive Simulation System of English Speech Recognition in Virtual Context
Karsudjono et al. The influence of leader self-mastery, leader personality and leader personal branding on achievement motivation and leader candidate performance: A study at PT Mangium Anugerah Lestari, Kotabaru Regency, South Kalimantan
Chen Entertainment social media based on deep learning and interactive experience application in English e-learning teaching system

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191011)