CN115658933B - Psychological state knowledge base construction method and device, computer equipment and storage medium - Google Patents

Psychological state knowledge base construction method and device, computer equipment and storage medium

Info

Publication number
CN115658933B
CN115658933B (application CN202211688048.7A)
Authority
CN
China
Prior art keywords
vocabulary
time period
modal
dimension
sample data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211688048.7A
Other languages
Chinese (zh)
Other versions
CN115658933A (en)
Inventor
张伟
姚佳
张思迈
何行知
李宏伟
文凤
刘斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Provincial Prison Administration
West China Hospital of Sichuan University
Original Assignee
Sichuan Provincial Prison Administration
West China Hospital of Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Provincial Prison Administration and West China Hospital of Sichuan University
Priority to CN202211688048.7A
Publication of CN115658933A
Application granted
Publication of CN115658933B

Abstract

The embodiment of the invention discloses a method and a device for constructing a psychological state knowledge base, computer equipment and a storage medium, wherein the method comprises the following steps: acquiring an initial multi-modal sample data set of prisoners; performing data preprocessing on each initial multi-modal sample data set to obtain a target multi-modal sample data set with vocabulary as the basic granularity; extracting features in the target multi-modal sample data set from the multi-modal time sequence dimensions and the global dimension, and recognizing the features with an attention weight recognition model to obtain a psychological state evaluation result for the prisoner; and mining high-frequency frequent items and low-frequency frequent items in the psychological state evaluation result according to a preset frequent item mining rule, and constructing a psychological state knowledge base based on these frequent items. The invention obtains aligned sample data at vocabulary granularity and then mines psychological state knowledge based on an attention mechanism, so that multi-modal data can be accurately expressed as interpretable psychological state knowledge.

Description

Psychological state knowledge base construction method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a mental state knowledge base construction method and device, computer equipment and a storage medium.
Background
At present, the psychological state knowledge of prisoners in prisons is mainly acquired through mature scales such as the Chinese Offender Psychological Assessment - Personality Inventory (COPA-PI). Scale evaluation, however, is inherently delayed, making it difficult to continuously track the psychological state of prisoners, which easily degrades the accuracy of the acquired psychological state knowledge.
At present, many leading-edge studies are moving toward constructing multi-modal emotion knowledge bases, and most current methods for constructing a psychological state knowledge base have the following two problems:
the psychological state recognition model has poor transferability: it adapts badly to new tasks and typically requires collecting a large amount of labeled data for each new task to retrain the model; and the psychological states the model recognizes for a prisoner are not interpretable.
Therefore, a psychological state multi-modal knowledge base construction method that is adaptable and can output interpretable psychological state knowledge is needed.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present application provide a method and an apparatus for constructing a mental state knowledge base, a computer device, and a storage medium, and the specific scheme is as follows:
in a first aspect, an embodiment of the present application provides a method for constructing a mental state knowledge base, including:
acquiring an initial multi-modal sample data set and psychological assessment personality evaluation data of prisoners;
performing data preprocessing on each initial multi-modal sample data set to obtain a target multi-modal sample data set, wherein the target multi-modal sample data set is a multi-modal aligned data set with vocabulary as the basic granularity;
extracting interpretable features in the target multi-modal sample data set from a text time sequence dimension, a voice time sequence dimension, an image time sequence dimension and a global dimension respectively to obtain multi-modal features of each vocabulary time period and multi-modal features of the global time period, wherein the multi-modal features comprise text features, voice features and image features;
inputting the psychological assessment personality evaluation data, the multi-modal characteristics of the global time period and the multi-modal characteristics of each vocabulary time period into an attention weight recognition model according to a time sequence to obtain a psychological state assessment result of the prisoner;
and mining high-frequency frequent items and low-frequency frequent items in the psychological state evaluation result according to a preset frequent item mining rule, and constructing a psychological state knowledge base based on the high-frequency frequent items and the low-frequency frequent items.
According to a specific implementation manner of an embodiment of the present application, the acquiring an initial multi-modal sample data set and psychological assessment personality assessment data of a prisoner includes:
all prisoners are hierarchically sampled through identity characteristic information to obtain a target prisoner queue, wherein the identity characteristic information comprises an offense type, an age and a sentence duration;
acquiring an initial multi-modal sample data set and psychological assessment personality evaluation data of each prisoner in the target prisoner queue, wherein the psychological assessment personality evaluation data comprises evaluation scores corresponding to a lying dimension, a sincerity dimension, an extroversion dimension, an intelligence dimension, an empathy dimension, a subordination dimension, a fluctuation dimension, an impulsivity dimension, a self-restraint dimension, a self-abasement dimension, an anxiety dimension, a violence tendency dimension, an abnormal psychology dimension and a criminal thinking dimension.
According to a specific implementation manner of the embodiment of the present application, the initial multi-modal sample data includes text sample data, audio sample data, and video sample data, and the performing data preprocessing on each initial multi-modal sample data set to obtain a target multi-modal sample data set includes:
performing text cutting on the text sample data to obtain all words in the text sample data;
acquiring vocabulary time periods corresponding to all vocabularies based on the starting time and the ending time of each vocabulary;
and performing data alignment on the text sample data, the audio sample data and the video sample data based on the vocabulary time period to obtain the target multi-modal sample data set.
According to a specific implementation manner of the embodiment of the present application, the extracting interpretable features in the target multimodal sample data set from a text time sequence dimension, a speech time sequence dimension, an image time sequence dimension, and a global dimension respectively to obtain multimodal features of each vocabulary time segment and multimodal features of a global time segment includes:
respectively acquiring text interpretable features, voice interpretable features and image interpretable features in the target multi-modal sample data set in each vocabulary time period;
acquiring text interpretable feature change conditions of all vocabulary time periods based on the text interpretable features of each current vocabulary time period and the next vocabulary time period;
acquiring voice interpretable feature change conditions of the vocabulary time periods based on the voice interpretable features of each current vocabulary time period and the next vocabulary time period;
acquiring image interpretable feature change conditions of all vocabulary time periods based on the image interpretable features of each current vocabulary time period and the next vocabulary time period;
respectively acquiring global text features, global voice features and global image features of the target multi-modal sample data set based on a global time period;
and obtaining the multi-modal characteristics of each vocabulary time period according to the text interpretable characteristics and the change conditions thereof, the voice interpretable characteristics and the change conditions thereof, and the image interpretable characteristics and the change conditions thereof, and obtaining the multi-modal characteristics of the global time period according to the global text characteristics, the global voice characteristics and the global image characteristics.
According to a specific implementation manner of the embodiment of the present application, the obtaining of the voice-interpretable feature change of each vocabulary time period based on the voice-interpretable feature of each current vocabulary time period and the next vocabulary time period comprises:
carrying out normalization and grade classification processing on the voice interpretable features of each vocabulary time period to obtain a voice grade of each vocabulary time period;
and acquiring the voice interpretable feature change condition of each vocabulary time period based on the voice grade corresponding to the voice interpretable feature of each current vocabulary time period and the next vocabulary time period.
According to a specific implementation manner of the embodiment of the present application, the obtaining of the image interpretable feature change of each vocabulary time period based on the image interpretable feature of each current vocabulary time period and the next vocabulary time period comprises:
normalizing and grade classifying the image interpretable features of each vocabulary time period to obtain an image grade of each vocabulary time period;
and acquiring the image interpretable feature change condition of each vocabulary time period based on the image grade corresponding to the image interpretable feature of each current vocabulary time period and the next vocabulary time period.
According to a specific implementation manner of the embodiment of the application, the psychological state evaluation result includes a psychological evaluation personality score corresponding to each modal dimension and a vocabulary time slot weight corresponding to each psychological evaluation personality score;
the mining of high-frequency frequent items and low-frequency frequent items in the psychological state evaluation result according to a preset frequent item mining rule comprises the following steps:
acquiring the vocabulary time period weights corresponding to each psychological assessment personality score that is smaller than a first score threshold or larger than a second score threshold;
dividing the vocabulary time periods whose weights are larger than a preset weight threshold into target vocabulary time periods;
and mining multi-modal feature frequent items in the target vocabulary time periods based on a preset Apriori algorithm, dividing the multi-modal feature frequent items whose psychological assessment personality scores are smaller than the first score threshold into low-frequency frequent items, and dividing those whose scores are larger than the second score threshold into high-frequency frequent items.
In a second aspect, an embodiment of the present application provides an apparatus for building a mental state knowledge base, including:
the acquisition module is used for acquiring an initial multi-modal sample data set and psychological assessment personality evaluation data of prisoners;
the preprocessing module is used for performing data preprocessing on each initial multi-modal sample data set to obtain a target multi-modal sample data set, wherein the target multi-modal sample data set is a multi-modal aligned data set with vocabulary as the basic granularity;
the characteristic extraction module is used for extracting interpretable characteristics in the target multi-modal sample data set from a text time sequence dimension, a voice time sequence dimension, an image time sequence dimension and a global dimension respectively to obtain multi-modal characteristics of each vocabulary time period and multi-modal characteristics of the global time period, wherein the multi-modal characteristics comprise text characteristics, voice characteristics and image characteristics;
the attention recognition module is used for inputting the psychological assessment personality evaluation data, the multi-modal characteristics of the global time period and the multi-modal characteristics of each vocabulary time period into an attention weight recognition model according to the time sequence, so as to obtain a psychological state assessment result for the prisoner;
and the knowledge base construction module is used for mining high-frequency frequent items and low-frequency frequent items in the psychological state evaluation result according to a preset frequent item mining rule, and constructing a psychological state knowledge base based on the high-frequency frequent items and the low-frequency frequent items.
In a third aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor and a memory, the memory stores a computer program, and the computer program, when executed on the processor, performs the psychological state knowledge base construction method according to the first aspect or any implementation manner of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program runs on a processor, it performs the psychological state knowledge base construction method according to the first aspect or any implementation manner of the first aspect.
The embodiments of the present application provide a method and an apparatus for constructing a psychological state knowledge base, computer equipment and a readable storage medium, wherein the method comprises the following steps: acquiring an initial multi-modal sample data set and psychological assessment personality evaluation data of prisoners; performing data preprocessing on each initial multi-modal sample data set to obtain a target multi-modal sample data set with vocabulary as the basic granularity; extracting features in the target multi-modal sample data set from the multi-modal time sequence dimensions and the global dimension, and recognizing the features with an attention weight recognition model to obtain a psychological state evaluation result for the prisoner; and mining high-frequency frequent items and low-frequency frequent items in the psychological state evaluation result according to a preset frequent item mining rule, and constructing a psychological state knowledge base based on those frequent items. The method preprocesses the multi-modal sample data to obtain aligned sample data at vocabulary granularity, and then mines psychological state knowledge from the aligned sample data based on an attention mechanism, so that multi-modal data can be accurately expressed as interpretable psychological state knowledge.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings required to be used in the embodiments will be briefly described below, and it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of the present invention. Like components are numbered similarly in the various figures.
FIG. 1 is a schematic flow chart of a method for constructing a mental state knowledge base according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating an application of a method for constructing a mental state knowledge base to perform a data preprocessing step on each initial multi-modal sample data set according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating an application of an attention weight recognition model in a mental state knowledge base construction method according to an embodiment of the present application;
fig. 4 is a second schematic view illustrating an application of an attention weight recognition model of a mental state knowledge base construction method according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating an application of frequent item mining steps of a mental state knowledge base construction method according to an embodiment of the present application;
fig. 6 shows a schematic device module diagram of a mental state knowledge base building device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Hereinafter, the terms "including", "having", and their derivatives, as used in various embodiments of the present invention, are only intended to indicate specific features, numbers, steps, operations, elements, components, or combinations of the foregoing, and should not be construed as excluding the existence of, or the possibility of adding, one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which various embodiments of the present invention belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in various embodiments of the present invention.
Referring to fig. 1, a method flow diagram of a method for constructing a mental state knowledge base provided in an embodiment of the present application is shown, and as shown in fig. 1, the method for constructing a mental state knowledge base provided in the embodiment of the present application includes:
Step S101, obtaining an initial multi-modal sample data set and psychological assessment personality evaluation data of prisoners;
in a particular embodiment, the initial multi-modal set of sample data comprises audio samples, video samples and text samples.
The initial sample data set can be collected by constructing a preset number of open questions in advance and recording the prisoners' open answers to these questions with devices such as a camera and an audio recorder.
During the collection of the initial sample data set, the open questions prompt the prisoner to express his or her recent emotional state; they can be questions that guide open expression, such as "How did your day go?", "What happened today?" and "How have you been feeling lately?". The open questions can be set adaptively according to the actual application scenario.
While the prisoner answers the open questions, his or her audio and video data are recorded synchronously by devices such as a camera and an audio recorder, and the audio data is converted into text data by a speech-to-text program, so that the audio data, video data and text data correspond one to one.
The psychological assessment personality evaluation data can be measured with a preset Chinese Offender Psychological Assessment - Personality Inventory (COPA-PI) scale to obtain the prisoner's evaluation scores in 14 dimensions: lying, sincerity, extroversion, intelligence, empathy, subordination, fluctuation, impulsivity, self-restraint, self-abasement, anxiety, violence tendency, abnormal psychology and criminal thinking.
Specifically, the method can also be applied to people other than prisoners to acquire psychological state knowledge; the psychological state knowledge base construction method provided by this embodiment can be applied selectively according to the actual application scenario.
According to a specific implementation manner of an embodiment of the present application, the acquiring an initial multi-modal sample data set and psychological assessment personality assessment data of a prisoner includes:
all prisoners are hierarchically sampled through identity characteristic information to obtain a target prisoner queue, wherein the identity characteristic information comprises an offense type, an age and a sentence duration;
acquiring an initial multi-modal sample data set and psychological assessment personality evaluation data of each prisoner in the target prisoner queue, wherein the psychological assessment personality evaluation data comprises evaluation scores corresponding to a lying dimension, a sincerity dimension, an extroversion dimension, an intelligence dimension, an empathy dimension, a subordination dimension, a fluctuation dimension, an impulsivity dimension, a self-restraint dimension, a self-abasement dimension, an anxiety dimension, a violence tendency dimension, an abnormal psychology dimension and a criminal thinking dimension.
In a specific embodiment, when obtaining the relevant data of the prisoners, the prisoners may first be grouped according to their identity characteristic information.
Stratified sampling from the prisoner database by preset offense information, age information and sentence duration information yields a target prisoner queue covering multiple offenses, multiple age groups and multiple sentence durations.
By acquiring the target prisoner queue, the psychological state knowledge acquired by the knowledge base can cover a wide range of prisoners while remaining specific to each kind of identity information, which facilitates subsequent monitoring and research of the psychological state knowledge of prisoners with a specified offense, age group or sentence duration, as shown in the sketch below.
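As a minimal illustration of this stratified-sampling step, the following sketch uses pandas; the column names (offense, age, sentence_years), the age and sentence bands, and the per-stratum sample size are assumptions for illustration, not part of the patent.

```python
import pandas as pd

def build_target_queue(prisoners: pd.DataFrame, per_stratum: int = 5,
                       seed: int = 42) -> pd.DataFrame:
    """Stratify by offense, age band and sentence-length band, then sample."""
    df = prisoners.copy()
    # Hypothetical banding of the continuous identity attributes.
    df["age_band"] = pd.cut(df["age"], bins=[0, 25, 35, 50, 120],
                            labels=["<=25", "26-35", "36-50", ">50"])
    df["term_band"] = pd.cut(df["sentence_years"], bins=[0, 3, 10, 100],
                             labels=["short", "medium", "long"])
    strata = df.groupby(["offense", "age_band", "term_band"], observed=True)
    # Draw up to `per_stratum` prisoners from every non-empty stratum.
    return strata.apply(
        lambda g: g.sample(n=min(per_stratum, len(g)), random_state=seed)
    ).reset_index(drop=True)
```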
Step S102, performing data preprocessing on each initial multi-modal sample data set to obtain a target multi-modal sample data set, wherein the target multi-modal sample data set is a multi-modal aligned data set with vocabulary as the basic granularity;
in a specific embodiment, after the initial multi-modal sample data set of each prisoner is obtained, since the initial multi-modal sample data set includes an audio sample, a video sample, and a text sample, the embodiment further performs data processing of vocabulary segmentation and sample alignment on each modal sample to obtain a multi-modal aligned data set based on vocabulary.
According to a specific implementation manner of the embodiment of the present application, the initial multi-modal sample data includes text sample data, audio sample data, and video sample data, and the data preprocessing is performed on each initial multi-modal sample data set to obtain a target multi-modal sample data set, including:
performing text cutting on the text sample data to obtain all vocabularies in the text sample data;
acquiring vocabulary time periods corresponding to all vocabularies based on the starting time and the ending time of each vocabulary;
and performing data alignment on the text sample data, the audio sample data and the video sample data based on the vocabulary time period to obtain the target multi-modal sample data set.
In a specific embodiment, any word segmentation tool may be used to perform word segmentation on the text sample in the initial multi-modal sample data set, so as to obtain each word of each sentence answered by the prisoner for each open question.
For example, as shown in fig. 2, for the openness question "What did you do today?", the prisoner answers something like "Well, I ate breakfast for a little while, and then hung out with friends in the morning". Text cutting of the text sample then yields the vocabularies "well", "ate", "breakfast", "a little while", "then", "in the morning", "with friends" and "hung out".
In this embodiment, after all vocabularies corresponding to a text sample are obtained, time alignment of the multi-modal data is performed with the vocabulary as the basic granularity, using automatic alignment supplemented by manual time segmentation and cutting.
Specifically, for each vocabulary, the audio data and the video data are combined to obtain the start time and the end time of that vocabulary, and these start and end times are then used to mark time points in the audio data and the video data so as to align the modalities. The start time is the time point at which expression of the vocabulary begins, and the end time is the time point at which expression of the vocabulary stops.
This embodiment may employ a speech aligner such as the open-source speech-aligner tool to perform the vocabulary time point alignment operation.
According to a specific implementation manner of the embodiment of the present application, after the multi-modal sample data is aligned by the speech aligner, the aligned multi-modal sample data can be returned to a user interface so that a user can manually fine-tune the start time and the end time of each vocabulary, ensuring that the vocabulary corresponding to each start and end time can be clearly heard.
Data preprocessing of the initial multi-modal sample data thus yields a target multi-modal sample data set with vocabulary as the basic granularity. In the target multi-modal sample data, the voice data and video data of the corresponding time slice can be obtained for each vocabulary of each sentence spoken by a prisoner.
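The alignment step described above can be illustrated with a short sketch. It assumes a forced aligner has already produced (word, start, end) triples and that the audio waveform and video frames are in memory; the function and variable names, the sample rate and the frame rate are all illustrative.

```python
import numpy as np

def align_modalities(words, audio: np.ndarray, sr: int, frames: np.ndarray, fps: float):
    """Slice the audio samples and video frames that fall inside each
    vocabulary time period, yielding one aligned record per word."""
    aligned = []
    for word, start_s, end_s in words:
        a0, a1 = int(start_s * sr), int(end_s * sr)      # audio sample span
        v0, v1 = int(start_s * fps), int(end_s * fps)    # video frame span
        aligned.append({
            "word": word,
            "span": (start_s, end_s),
            "audio": audio[a0:a1],                       # 1-D waveform slice
            "frames": frames[v0:max(v1, v0 + 1)],        # at least 1 frame per word
        })
    return aligned

# Example usage: words = [("breakfast", 1.20, 1.85), ...], sr = 16000, fps = 25
```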
Step S103, interpretable features in the target multi-modal sample data set are extracted from a text time sequence dimension, a voice time sequence dimension, an image time sequence dimension and a global dimension respectively to obtain multi-modal features of all vocabulary time periods and multi-modal features of the global time period, wherein the multi-modal features comprise text features, voice features and image features;
Specifically, interpretable features are features whose meaning is directly human-readable; they give the mined psychological state knowledge clear interpretability, which facilitates subsequent research on the acquired knowledge.
The embodiment extracts the interpretable features of each vocabulary time period and the global time period from multiple dimensions respectively to obtain various interpretable features so as to acquire the mental state knowledge.
According to a specific implementation manner of the embodiment of the present application, the extracting interpretable features in the target multimodal sample data set from a text time sequence dimension, a speech time sequence dimension, an image time sequence dimension, and a global dimension respectively to obtain multimodal features of each vocabulary time segment and multimodal features of a global time segment includes:
respectively acquiring text interpretable features, voice interpretable features and image interpretable features in the target multi-modal sample data set in each vocabulary time period;
acquiring text interpretable feature change conditions of the vocabulary time periods based on the text interpretable features of each current vocabulary time period and the next vocabulary time period;
acquiring voice interpretable feature change conditions of the vocabulary time periods based on the voice interpretable features of each current vocabulary time period and the next vocabulary time period;
acquiring image interpretable feature change conditions of all vocabulary time periods based on the image interpretable features of each current vocabulary time period and the next vocabulary time period;
respectively acquiring global text features, global voice features and global image features of the target multi-modal sample data set based on a global time period;
and obtaining the multi-modal characteristics of each vocabulary time period according to the text interpretable characteristics and the change conditions thereof, the voice interpretable characteristics and the change conditions thereof, and the image interpretable characteristics and the change conditions thereof, and obtaining the multi-modal characteristics of the global time period according to the global text characteristics, the global voice characteristics and the global image characteristics.
In a specific embodiment, obtaining the interpretable features from the respective dimensions can be split into the following steps:
Step one: obtain interpretable features in the target multi-modal sample data set from the text time sequence dimension.
In order to construct an interpretable knowledge base, corresponding text interpretable features are extracted for each vocabulary in the target multi-modal sample data set. The text interpretable features comprise vocabulary topic and vocabulary polarity: vocabulary topics are obtained by k-means clustering over open-source word vectors and span 100 categories, while vocabulary polarity has 3 categories (positive, negative and neutral), so there are 100 + 3 text interpretable features in total.
Meanwhile, text time sequence features formed over two consecutive vocabularies are further extracted. These features, also called text interpretable feature change conditions, comprise topic changes and polarity changes, totalling 1000 + 3 features. Specifically, the number of topic-change features can be chosen adaptively according to the actual application scenario; in this embodiment, the 1000 most frequent topic changes in the full data are selected.
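A minimal sketch of this text-feature step follows. The 100-topic k-means clustering mirrors the description above; the embedding lookup and the word-polarity lexicon are assumed inputs, and the helper names are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_topics(word_vectors: np.ndarray, n_topics: int = 100) -> KMeans:
    """Cluster open-source word vectors into 100 topic categories."""
    return KMeans(n_clusters=n_topics, n_init=10, random_state=0).fit(word_vectors)

def text_features(words, embed, topics: KMeans, sentiment: dict):
    """Per-word topic id and polarity, plus topic/polarity change features
    formed over every pair of consecutive vocabulary time periods."""
    feats = []
    for w in words:
        topic = int(topics.predict(embed(w).reshape(1, -1))[0])
        feats.append({"word": w, "topic": topic,
                      "polarity": sentiment.get(w, "neutral")})
    for prev, cur in zip(feats, feats[1:]):
        cur["topic_change"] = (prev["topic"], cur["topic"])
        cur["polarity_change"] = (prev["polarity"], cur["polarity"])
    return feats
```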
Step two: obtain interpretable features in the target multi-modal sample data set from the voice time sequence dimension.
In order to construct an interpretable knowledge base, corresponding speech interpretable features are extracted for the voice of each vocabulary time period in the target multi-modal sample data set. The speech interpretable features comprise 12 features: root mean square energy, attack time, zero-crossing rate, autocorrelation, spectral centroid, Mel-frequency cepstral coefficients (MFCC), spectral flatness, spectral flux, fundamental frequency f0, inharmonicity, loudness and sharpness.
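The patent does not name an extraction library; as one plausible sketch, a subset of these features can be computed per vocabulary time period with librosa. Loudness and sharpness are psychoacoustic measures outside librosa's scope and are omitted here, and onset strength stands in for spectral flux.

```python
import numpy as np
import librosa

def speech_features(y: np.ndarray, sr: int) -> dict:
    """Word-period speech descriptors; each array feature is mean-pooled."""
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C7"), sr=sr)
    return {
        "rms_energy": float(librosa.feature.rms(y=y).mean()),
        "zero_crossing_rate": float(librosa.feature.zero_crossing_rate(y).mean()),
        "autocorrelation": float(librosa.autocorrelate(y)[1]),  # lag-1 value
        "spectral_centroid": float(librosa.feature.spectral_centroid(y=y, sr=sr).mean()),
        "mfcc": librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1),
        "spectral_flatness": float(librosa.feature.spectral_flatness(y=y).mean()),
        # Onset strength is used here as a stand-in for spectral flux.
        "spectral_flux": float(librosa.onset.onset_strength(y=y, sr=sr).mean()),
        "f0": float(np.nanmean(f0)),  # pyin leaves NaN in unvoiced frames
    }
```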
According to a specific implementation manner of the embodiment of the present application, the obtaining of the voice interpretable feature change of each vocabulary time segment based on the voice interpretable feature of each current vocabulary time segment and the next vocabulary time segment thereof includes:
normalizing and classifying the voice interpretable characteristics of each vocabulary time period to obtain a voice grade of each vocabulary time period;
and acquiring the voice interpretable feature change condition of each vocabulary time period based on the voice grade corresponding to the voice interpretable feature of each current vocabulary time period and the next vocabulary time period.
In a specific embodiment, each speech interpretable feature in a single vocabulary time period undergoes normalization and level classification processing, which maps each feature to a predetermined number of levels. In this embodiment, 5 levels are used; the level classification can be adapted to the actual application scenario and is not limited here.
After normalization and level classification, 12 feature levels are obtained: root mean square energy level, attack time level, zero-crossing rate level, autocorrelation level, spectral centroid level, Mel-frequency cepstral coefficient (MFCC) level, spectral flatness level, spectral flux level, fundamental frequency f0 level, inharmonicity level, loudness level and sharpness level.
For the speech interpretable features of two consecutive vocabulary time periods, speech interpretable feature change conditions are obtained, comprising the change of each of the above 12 feature levels between the two periods.
It should be noted that in this embodiment, the obtained speech interpretable feature change conditions comprise 5 × 5 × 12 = 300 features, since each of the 12 features can transition between any pair of the 5 levels.
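The normalization, 5-level classification and level-change construction can be sketched as follows; min-max normalization is one plausible choice, since the patent does not specify the normalization method.

```python
import numpy as np

def to_levels(values: np.ndarray, n_levels: int = 5) -> np.ndarray:
    """Min-max normalize a per-word feature series, then quantize to levels 1..5."""
    lo, hi = values.min(), values.max()
    norm = (values - lo) / (hi - lo + 1e-9)
    return np.minimum((norm * n_levels).astype(int) + 1, n_levels)

def level_changes(levels: np.ndarray):
    """E.g. loudness levels [3, 3, 2] -> changes [(3, 3), (3, 2)]."""
    return list(zip(levels[:-1], levels[1:]))

# Usage: levels = to_levels(per_word_loudness); changes = level_changes(levels)
```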
Step three: obtain interpretable features in the target multi-modal sample data set from the image time sequence dimension.
In order to construct an interpretable knowledge base, corresponding image interpretable features are extracted from the video samples of each vocabulary time period in the target multi-modal sample data set. The image interpretable features comprise the relative positions of 201 face key points, 8 key area positions, 8 key area sizes and 9 emotion indexes, giving 226 features in total.
Among these, the 9 emotion indexes may include anger, disgust, fear, happiness, sadness, surprise, pouting, grimacing and no emotion.
The relative positions of the face key points comprise the farthest distance, the nearest distance and the average angle of all the points relative to the central axis of the face.
According to a specific implementation manner of the embodiment of the present application, the obtaining of the image interpretable feature change of each vocabulary time period based on the image interpretable feature of each current vocabulary time period and the next vocabulary time period comprises:
normalizing and grade classifying the image interpretable features of each vocabulary time period to obtain an image grade of each vocabulary time period;
and acquiring the image interpretable feature change condition of each vocabulary time period based on the image grade corresponding to the image interpretable feature of each current vocabulary time period and the next vocabulary time period.
In a specific embodiment, similarly to the speech interpretable features described above, normalization and level classification processing is also performed on the image interpretable features, and image change features are extracted over two consecutive vocabulary time periods.
For the level division of the image interpretable features and the acquisition of their change conditions, refer to the description of the speech interpretable features; the details are not repeated here.
It should be noted that in this embodiment, the image interpretable feature levels comprise 5 × 226 = 1,130 features.
In addition, the number of levels used in the level classification of the speech interpretable features and that used for the image interpretable features can be the same or different; the user can set them adaptively according to the actual application scenario.
Step four: obtain interpretable features in the target multi-modal sample data set from the global dimension.
In order to construct an interpretable knowledge base, for a target multi-modal sample data set in a global time period, the embodiment further performs global feature extraction from three feature dimensions of audio, video and text respectively.
Specifically, in the global feature extraction process, for continuous-value dimensions the mean value over the full time period is taken as the global feature; for discrete-value dimensions, the discrete value with the largest number of occurrences is taken as the global feature.
As shown in fig. 3, the global text feature is a character feature of a full time period, the global speech feature is a speech feature of a full time period, and the global image feature is an image feature of a full time period.
It should be noted that the feature length of the global feature of the same dimension is equal to the sum of the feature lengths of the features of all the vocabulary time periods of the same dimension.
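A minimal sketch of this global rule (mean for continuous dimensions, mode for discrete dimensions) follows.

```python
import numpy as np
from collections import Counter

def global_feature(per_word_values, discrete: bool):
    """Pool one feature over all vocabulary time periods into a global value."""
    if discrete:
        return Counter(per_word_values).most_common(1)[0][0]  # modal value
    return float(np.mean(per_word_values))

# e.g. global_feature([3, 3, 2, 3], discrete=True)       -> 3
#      global_feature([0.41, 0.37, 0.52], discrete=False) -> 0.4333...
```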
Step S104, inputting the psychological assessment personality evaluation data, the multi-modal features of the global time period and the multi-modal features of each vocabulary time period into an attention weight recognition model according to the time sequence, so as to obtain a psychological state assessment result for the prisoner;
in a specific embodiment, an Attention weight recognition model capable of analyzing the multi-modal features obtained in the above embodiment needs to be constructed in advance, and the Attention weight recognition model is shown in fig. 3 and 4.
As shown in fig. 3, the CLS slot, the multi-modal features of the global time period, the SEQ slot and the multi-modal features of each vocabulary time period are input to the attention weight recognition model, and a corresponding psychological assessment personality result is then obtained through a sigmoid function.
Combining the attention weight recognition model with the sigmoid function yields the psychological assessment personality results recognized from all the features, covering lying, sincerity, extroversion, intelligence, empathy, subordination, fluctuation, impulsivity, self-restraint, self-abasement, anxiety, violence tendency, abnormal psychology and criminal thinking.
Specifically, the psychological assessment personality evaluation data obtained in the above embodiments can be input at the CLS position to serve as a basic characterization of the corresponding prisoner.
In fig. 3 and 4, the multi-modal features of the global time period include full-period speech features, full-period image features and full-period text features. Each vocabulary time period is denoted T+n, where n is an integer.
The single/cross-period speech features are the speech interpretable features and their change conditions, the single/cross-period image features are the image interpretable features and their change conditions, and the single/cross-period text features are the text interpretable features and their change conditions.
According to a specific implementation manner of the embodiment of the application, the psychological state evaluation result includes a psychological evaluation personality score corresponding to each modal dimension and a vocabulary time slot weight corresponding to each psychological evaluation personality score;
in a particular embodiment, as shown in fig. 4, the attention weight recognition model may incorporate a softmax function to output a psychometric personality score for each modal dimension for each vocabulary time period, as well as a vocabulary time period weight.
For example, when the output result is lie 8, the time period weight corresponding to each vocabulary is 50% of the time weight at T +0, 30% of the global time period weight, and 20% of the time weight at T + 3.
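The following PyTorch sketch illustrates one plausible shape for such an attention weight recognition model; it is not the patented architecture. It assumes all inputs have already been projected to a common feature dimension, uses a single attention layer, and exposes both the sigmoid-scored 14 personality dimensions and the softmax attention weights over the [CLS, GLOBAL, T+0 ... T+n] sequence.

```python
import torch
import torch.nn as nn

class AttentionWeightRecognizer(nn.Module):
    def __init__(self, feat_dim: int, d_model: int = 128, n_dims: int = 14):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, n_dims)

    def forward(self, cls_vec, global_vec, word_feats):
        # Sequence order mirrors fig. 3: [CLS, GLOBAL, T+0, T+1, ..., T+n].
        seq = torch.cat([cls_vec, global_vec, word_feats], dim=1)  # (B, n+2, feat_dim)
        h = self.proj(seq)
        # Query with the CLS slot only; `weights` are the softmax attention
        # weights over all slots, read as the vocabulary time period weights.
        out, weights = self.attn(h[:, :1], h, h)
        # Sigmoid scores in (0, 1); rescaling to the COPA-PI score range
        # (e.g. x10) is an assumption about the target scale.
        scores = torch.sigmoid(self.head(out[:, 0]))
        return scores, weights[:, 0]

# Usage: model = AttentionWeightRecognizer(feat_dim=64)
# scores, w = model(cls, glob, words)  # cls,(B,1,64); glob,(B,1,64); words,(B,n,64)
```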
Step S105, mining high-frequency frequent items and low-frequency frequent items in the psychological state evaluation result according to a preset frequent item mining rule, and constructing a psychological state knowledge base based on the high-frequency frequent items and the low-frequency frequent items.
In particular embodiments, the attention weight recognition model may identify different psychological assessment personality scores for different dimensions of input.
Based on preset screening rules, the psychological assessment personality score in the corresponding range can be selected as the acquisition range of the psychological state knowledge, so that targeted frequent item mining is performed, and accurate psychological state knowledge is obtained.
The mining of high-frequency frequent items and low-frequency frequent items in the psychological state evaluation result according to the preset frequent item mining rule comprises the following steps:
acquiring the vocabulary time period weights corresponding to each psychological assessment personality score that is smaller than a first score threshold or larger than a second score threshold;
dividing the vocabulary time periods whose weights are larger than a preset weight threshold into target vocabulary time periods;
and mining multi-modal feature frequent items in the target vocabulary time periods based on a preset Apriori algorithm, dividing the multi-modal feature frequent items whose psychological assessment personality scores are smaller than the first score threshold into low-frequency frequent items, and dividing those whose scores are larger than the second score threshold into high-frequency frequent items.
In this embodiment, the first score threshold may be set to 3 and the second score threshold to 6; both thresholds can also be set adaptively according to the actual application scenario, with the constraint that the second score threshold must be greater than the first score threshold.
With the score thresholds of this embodiment, the vocabulary time period weights corresponding to psychological assessment personality scores below 3 and above 6 are screened out.
All obtained vocabulary time period weights are then sorted in descending order, and the vocabulary time periods whose weights exceed a preset weight threshold are taken as target vocabulary time periods, i.e., the highly correlated time periods.
Frequent item mining is then performed based on the Apriori algorithm, and a frequent item mining result as shown in fig. 5 can be obtained.
As shown in fig. 5, the speech frequent items mined in the target vocabulary time periods are loudness 3 and loudness 3-2, where loudness 3-2 indicates that the loudness changed from level 3 to level 2 across consecutive vocabulary time periods; the image frequent items are the upturned eye-corner angle and the change of the eye-corner angle from flat to upturned; the text frequent items are "breakfast" and breakfast-negative, where breakfast-negative means the polarity of the vocabulary "breakfast" is negative; and the multi-modal frequent item is the upturned eye-corner angle together with loudness 3.
Specifically, in the mining process, the frequent items mined from psychological assessment personality scores below 3 points and their vocabulary time period weights are low-frequency frequent items, and the frequent items mined from scores above 6 points and their vocabulary time period weights are high-frequency frequent items.
A low-frequency frequent item can serve as low-score psychological state knowledge for a given dimension, and a high-frequency frequent item can serve as high-score psychological state knowledge for that dimension; combining all frequent items from the mining result yields the full set of psychological state knowledge for constructing the psychological state knowledge base, as sketched below.
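A self-contained sketch of the Apriori-style mining over target vocabulary time periods follows; each period is encoded as a transaction of discrete items echoing the fig. 5 example, and the support threshold is an assumed parameter.

```python
from itertools import combinations

def apriori(transactions, min_support=0.3, max_len=3):
    """transactions: list of item sets, one per target vocabulary time period.
    Returns every itemset whose support clears min_support, with its support."""
    n = len(transactions)
    current = {frozenset([item]) for t in transactions for item in t}
    frequent = {}
    while current:
        counts = {s: sum(1 for t in transactions if s <= t) for s in current}
        kept = {s: c / n for s, c in counts.items() if c / n >= min_support}
        frequent.update(kept)
        # Join step: combine surviving k-itemsets into candidate (k+1)-itemsets.
        current = {a | b for a, b in combinations(kept, 2)
                   if len(a | b) == len(a) + 1 and len(a | b) <= max_len}
    return frequent

periods = [  # illustrative transactions in the style of fig. 5
    {"loudness=3", "eye_angle=up", "word=breakfast"},
    {"loudness=3", "eye_angle=up", "polarity=negative"},
    {"loudness=3", "word=breakfast", "polarity=negative"},
]
print(apriori(periods, min_support=0.6))
# -> supports for {loudness=3}, {eye_angle=up}, {loudness=3, word=breakfast}, ...
```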
In a specific embodiment, after obtaining the mental state knowledge shown in fig. 5, the mental state knowledge is stored in a preset database, so that the construction of the mental state knowledge base can be completed.
The construction steps of the mental state knowledge base are not limited in this embodiment, and the mental state knowledge base can be constructed in a self-adaptive manner according to an actual application scenario.
In summary, this embodiment provides a psychological state knowledge base construction method. Constructing a target multi-modal sample data set with the vocabulary as the basic unit ensures the completeness and fineness of the obtained multi-modal data and refines knowledge mining to vocabulary granularity. By combining attention-based screening of vocabulary time periods with frequent item mining, a multi-modal knowledge base construction scheme is obtained that has a wide application range and can provide accurate psychological state knowledge of prisoners, which is conducive to further criminological research based on the psychological state knowledge base.
Referring to fig. 6, a schematic diagram of device modules of an apparatus 600 for constructing a mental state knowledge base according to an embodiment of the present application is provided, and as shown in fig. 6, the apparatus 600 for constructing a mental state knowledge base according to the embodiment of the present application includes:
the acquisition module 601 is used for acquiring an initial multi-modal sample data set and psychological assessment personality evaluation data of prisoners;
a preprocessing module 602, configured to perform data preprocessing on each initial multi-modal sample data set to obtain a target multi-modal sample data set, where the target multi-modal sample data set is a multi-modal alignment data set with vocabulary as a basic granularity;
the feature extraction module 603 is configured to extract interpretable features in the target multimodal sample data set from a text time sequence dimension, a voice time sequence dimension, an image time sequence dimension, and a global dimension, respectively, to obtain multimodal features of each vocabulary time period and multimodal features of the global time period, where the multimodal features include text features, voice features, and image features;
an attention recognition module 604, configured to input the psychological assessment personality evaluation data, the multi-modal features of the global time period and the multi-modal features of each vocabulary time period into an attention weight recognition model according to the time sequence, so as to obtain a psychological state assessment result for the prisoner;
a knowledge base construction module 605, configured to mine high-frequency frequent items and low-frequency frequent items in the psychological state evaluation result according to a preset frequent item mining rule, and construct a psychological state knowledge base based on the high-frequency frequent items and the low-frequency frequent items.
In addition, an embodiment of the present application further provides a computer device, where the computer device includes a processor and a memory, where the memory stores a computer program, and the computer program, when executed on the processor, executes the mental state knowledge base construction method in the foregoing embodiment.
The embodiment of the present application provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a processor, the computer program performs the method for constructing a mental state knowledge base in the above embodiment.
In addition, for the specific implementation processes of the mental state knowledge base construction apparatus, the computer device, and the computer readable storage medium mentioned in the foregoing embodiments, reference may be made to the specific implementation processes of the foregoing method embodiments, which are not described in detail herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solution of the present invention or a part thereof which contributes to the prior art in essence can be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a smart phone, a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention.

Claims (9)

1. A mental state knowledge base construction method is characterized by comprising the following steps:
acquiring an initial multi-modal sample data set and psychological assessment personality evaluation data of prisoners;
performing data preprocessing on each initial multi-modal sample data set to obtain a target multi-modal sample data set, wherein the target multi-modal sample data set is a multi-modal aligned data set with vocabulary as the basic granularity;
extracting interpretable features in the target multi-modal sample data set from a text time sequence dimension, a voice time sequence dimension, an image time sequence dimension and a global dimension respectively to obtain multi-modal features of each vocabulary time period and multi-modal features of the global time period, wherein the multi-modal features comprise text features, voice features and image features;
inputting the psychological assessment personality assessment data, the multi-modal characteristics of the global time period and the multi-modal characteristics of each vocabulary time period into an attention weight recognition model according to a time sequence to obtain a psychological state assessment result of the prisoners;
mining high-frequency frequent items and low-frequency frequent items in the psychological state evaluation result according to a preset frequent item mining rule, and constructing a psychological state knowledge base based on the high-frequency frequent items and the low-frequency frequent items;
the psychological state evaluation result comprises a psychological evaluation personality score corresponding to each modal dimension and a vocabulary time period weight corresponding to each psychological evaluation personality score;
the mining of high-frequency frequent items and low-frequency frequent items in the psychological state evaluation result according to the preset frequent item mining rule comprises the following steps:
acquiring the vocabulary time period weights corresponding to each psychological assessment personality score that is smaller than a first score threshold or larger than a second score threshold;
dividing the vocabulary time periods whose weights are larger than a preset weight threshold into target vocabulary time periods;
and mining multi-modal feature frequent items in the target vocabulary time periods based on a preset Apriori algorithm, dividing the multi-modal feature frequent items whose psychological assessment personality scores are smaller than the first score threshold into low-frequency frequent items, and dividing those whose scores are larger than the second score threshold into high-frequency frequent items.
2. The method according to claim 1, wherein the acquiring an initial multi-modal sample data set and psychological assessment personality evaluation data of prisoners comprises:
performing stratified sampling on all prisoners according to identity characteristic information to obtain a target prisoner queue, wherein the identity characteristic information comprises an offense name, an age and a sentence duration;
acquiring an initial multi-modal sample data set and psychological assessment personality evaluation data of each prisoner in the target prisoner queue, wherein the psychological assessment personality comprises evaluation scores corresponding to a lying dimension, a truthfulness dimension, an extroversion dimension, an intelligence dimension, a sympathy dimension, a submissiveness dimension, a fluctuation dimension, an impulsivity dimension, a guardedness dimension, a self-abasement dimension, an anxiety dimension, a violence tendency dimension, an abnormal psychology dimension and a criminal thinking dimension.
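The stratified sampling of claim 2 can be sketched as grouping prisoners by their identity characteristics and sampling within each stratum. The field names, band widths and sampling fraction below are illustrative assumptions:

```python
import random
from collections import defaultdict

def build_target_queue(prisoners, fraction=0.1, seed=42):
    """Stratified sampling over (offense, age band, sentence band)."""
    strata = defaultdict(list)
    for p in prisoners:
        # hypothetical strata: offense name, 10-year age bands, 5-year sentence bands
        key = (p["offense"], p["age"] // 10, p["sentence_years"] // 5)
        strata[key].append(p)
    rng = random.Random(seed)
    queue = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))  # keep every stratum represented
        queue.extend(rng.sample(group, k))
    return queue
```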
3. The method of claim 1, wherein the initial multi-modal sample data comprises text sample data, audio sample data and video sample data, and the performing data preprocessing on each initial multi-modal sample data set to obtain a target multi-modal sample data set comprises:
performing text cutting on the text sample data to obtain all vocabularies in the text sample data;
acquiring vocabulary time periods corresponding to all vocabularies based on the starting time and the ending time of each vocabulary;
and performing data alignment on the text sample data, the audio sample data and the video sample data based on the vocabulary time period to obtain the target multi-modal sample data set.
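The vocabulary-granularity alignment of claim 3 amounts to slicing the audio waveform and the video frame sequence to each word's start and end times. A minimal sketch, assuming word-level timestamps are already available (for example from an ASR aligner); the field names are illustrative:

```python
def align_modalities(words, audio, audio_sr, frames, video_fps):
    """Align text, audio and video at vocabulary granularity.

    words: [{"word": "...", "start": 1.20, "end": 1.55}, ...] in time order
    audio: 1-D waveform sampled at audio_sr Hz
    frames: list of video frames captured at video_fps frames per second
    """
    aligned = []
    for w in words:
        a0, a1 = int(w["start"] * audio_sr), int(w["end"] * audio_sr)
        f0, f1 = int(w["start"] * video_fps), int(w["end"] * video_fps)
        aligned.append({
            "word": w["word"],
            "time_period": (w["start"], w["end"]),
            "audio": audio[a0:a1],    # waveform slice for this word
            "frames": frames[f0:f1],  # frames falling inside this word's span
        })
    return aligned
```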
4. The method of claim 3, wherein the extracting interpretable features in the target multi-modal sample data set from a text time sequence dimension, a voice time sequence dimension, an image time sequence dimension and a global dimension respectively to obtain multi-modal features of each vocabulary time period and multi-modal features of the global time period comprises:
respectively acquiring text interpretable features, voice interpretable features and image interpretable features in the target multi-modal sample data set in each vocabulary time period;
acquiring text interpretable feature change conditions of the vocabulary time periods based on the text interpretable features of each current vocabulary time period and its next vocabulary time period;
acquiring voice interpretable feature change conditions of the vocabulary time periods based on the voice interpretable features of each current vocabulary time period and its next vocabulary time period;
acquiring image interpretable feature change conditions of the vocabulary time periods based on the image interpretable features of each current vocabulary time period and its next vocabulary time period;
respectively acquiring global text features, global voice features and global image features of the target multi-modal sample data set based on a global time period;
and obtaining the multi-modal characteristics of each vocabulary time period according to the text interpretable characteristics and the change conditions thereof, the voice interpretable characteristics and the change conditions thereof, and the image interpretable characteristics and the change conditions thereof, and obtaining the multi-modal characteristics of the global time period according to the global text characteristics, the global voice characteristics and the global image characteristics.
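The change conditions of claim 4 compare each vocabulary time period's features with those of its successor. A minimal sketch over scalar per-period features (the feature names are illustrative):

```python
def change_conditions(period_features):
    """Feature deltas between each vocabulary time period and the next.

    period_features: [{"pitch": 182.0, "smile": 0.3, ...}, ...] in time order.
    The final period has no successor, so it gets an empty change record.
    """
    deltas = []
    for cur, nxt in zip(period_features, period_features[1:]):
        deltas.append({name: nxt[name] - cur[name] for name in cur})
    deltas.append({})  # last vocabulary time period: no next period
    return deltas
```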
5. The method of claim 4, wherein the obtaining voice interpretable feature change conditions of each vocabulary time period based on the voice interpretable features of each current vocabulary time period and its next vocabulary time period comprises:
carrying out normalization and grade classification processing on the voice interpretable features of each vocabulary time period to obtain a voice grade of each vocabulary time period;
and acquiring the voice interpretable feature change condition of each vocabulary time period based on the voice grade corresponding to the voice interpretable feature of each current vocabulary time period and the next vocabulary time period.
6. The method of claim 4, wherein the obtaining image interpretable feature change conditions of each vocabulary time period based on the image interpretable features of each current vocabulary time period and its next vocabulary time period comprises:
normalizing and grade classifying the image interpretable features of each vocabulary time period to obtain an image grade of each vocabulary time period;
and acquiring the image interpretable feature change condition of each vocabulary time period based on the image grade corresponding to the image interpretable feature of each current vocabulary time period and the next vocabulary time period.
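Claims 5 and 6 share one normalize-then-grade pattern, applied to voice and image features respectively. A minimal sketch; the number of grade levels is an assumption, as the patent does not fix it:

```python
def to_grades(values, n_levels=3):
    """Min-max normalize a per-period feature track, then bin into grades."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # guard against a constant track
    return [min(int((v - lo) / span * n_levels), n_levels - 1)  # clamp max into top level
            for v in values]

def grade_changes(grades):
    """Change condition per claims 5/6: grade difference between each
    vocabulary time period and its next one."""
    return [b - a for a, b in zip(grades, grades[1:])]
```

Working on grades rather than raw values makes the change conditions discrete and directly comparable across speakers, which is presumably why the claims normalize before differencing.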
7. A mental state knowledge base construction apparatus, comprising:
an acquisition module, wherein the acquisition module is used for acquiring an initial multi-modal sample data set and psychological assessment personality evaluation data of prisoners;
the preprocessing module is used for performing data preprocessing on each initial multi-modal sample data set to obtain a target multi-modal sample data set, wherein the target multi-modal sample data set is a multi-modal alignment data set with vocabulary as basic granularity;
the characteristic extraction module is used for extracting interpretable characteristics in the target multi-modal sample data set from a text time sequence dimension, a voice time sequence dimension, an image time sequence dimension and a global dimension respectively to obtain multi-modal characteristics of each vocabulary time period and multi-modal characteristics of the global time period, wherein the multi-modal characteristics comprise text characteristics, voice characteristics and image characteristics;
the attention recognition module is used for inputting the psychological assessment personality evaluation data, the multi-modal characteristics of the global time period and the multi-modal characteristics of each vocabulary time period into an attention weight recognition model according to a time sequence to obtain a psychological state evaluation result of the prisoner;
the knowledge base construction module is used for mining high-frequency and low-frequency frequent items in the psychological state evaluation result according to a preset frequent item mining rule and constructing a psychological state knowledge base based on the high-frequency and low-frequency frequent items;
the knowledge base construction module is also used for acquiring vocabulary time period weights corresponding to the psychological assessment personality scores smaller than a first score threshold value or larger than a second score threshold value;
dividing vocabulary time periods whose weights exceed a preset weight threshold into target vocabulary time periods;
and mining multi-modal characteristic frequent items in the target vocabulary time periods based on a preset Apriori algorithm, dividing the multi-modal characteristic frequent items whose psychological assessment personality scores are smaller than the first score threshold into low-frequency frequent items, and dividing the multi-modal characteristic frequent items whose psychological assessment personality scores are larger than the second score threshold into high-frequency frequent items.
8. A computer device, characterized in that the computer device comprises a processor and a memory, the memory storing a computer program which, when run on the processor, performs the mental state knowledge base construction method of any one of claims 1 to 6.
9. A computer-readable storage medium, in which a computer program is stored which, when run on a processor, performs the mental state knowledge base construction method of any one of claims 1 to 6.
CN202211688048.7A 2022-12-28 2022-12-28 Psychological state knowledge base construction method and device, computer equipment and storage medium Active CN115658933B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211688048.7A CN115658933B (en) 2022-12-28 2022-12-28 Psychological state knowledge base construction method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115658933A CN115658933A (en) 2023-01-31
CN115658933B true CN115658933B (en) 2023-04-07

Family

ID=85022367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211688048.7A Active CN115658933B (en) 2022-12-28 2022-12-28 Psychological state knowledge base construction method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115658933B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760852A (en) * 2016-03-14 2016-07-13 江苏大学 Driver emotion real time identification method fusing facial expressions and voices
CN107087431A (en) * 2014-05-09 2017-08-22 谷歌公司 System and method for distinguishing ocular signal and continuous bio-identification
CN111507592A (en) * 2020-04-08 2020-08-07 山东大学 Evaluation method for active modification behaviors of prisoners
CN114171198A (en) * 2021-11-26 2022-03-11 智恩陪心(北京)科技有限公司 Multi-mode mental health analysis method based on sand table images, texts and videos

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007147166A2 (en) * 2006-06-16 2007-12-21 Quantum Leap Research, Inc. Consilence of data-mining
US8898091B2 (en) * 2011-05-11 2014-11-25 Ari M. Frank Computing situation-dependent affective response baseline levels utilizing a database storing affective responses
US20180158165A1 (en) * 2016-12-01 2018-06-07 Global Tel*Link Corp. System and method for unified inmate information and provisioning
US10445668B2 (en) * 2017-01-04 2019-10-15 Richard Oehrle Analytical system for assessing certain characteristics of organizations
CN109614895A (en) * 2018-10-29 2019-04-12 山东大学 A method of the multi-modal emotion recognition based on attention Fusion Features
US20200245949A1 (en) * 2019-02-01 2020-08-06 Mindstrong Health Forecasting Mood Changes from Digital Biomarkers
CN113076770A (en) * 2019-12-18 2021-07-06 广州捷世高信息科技有限公司 Intelligent figure portrait terminal based on dialect recognition
US11487891B2 (en) * 2020-10-14 2022-11-01 Philip Chidi Njemanze Method and system for mental performance computing using artificial intelligence and blockchain
CN113505310A (en) * 2021-07-07 2021-10-15 辽宁工程技术大学 Campus user next position recommendation method based on space-time attention network
CN114496162A (en) * 2022-01-12 2022-05-13 北京数字众智科技有限公司 Diet behavior information acquisition system and method


Also Published As

Publication number Publication date
CN115658933A (en) 2023-01-31

Similar Documents

Publication Publication Date Title
Tubaiz et al. Glove-based continuous Arabic sign language recognition in user-dependent mode
US11538472B2 (en) Processing speech signals in voice-based profiling
CN112200016A (en) Electroencephalogram signal emotion recognition based on ensemble learning method AdaBoost
CN111401105B (en) Video expression recognition method, device and equipment
Pham et al. Multimodal detection of Parkinson disease based on vocal and improved spiral test
CN110473571A (en) Emotion identification method and device based on short video speech
Boishakhi et al. Multi-modal hate speech detection using machine learning
Chebbi et al. On the use of pitch-based features for fear emotion detection from speech
CN112364697A (en) Electroencephalogram emotion recognition method based on R-LSTM model
Parthasarathy et al. Predicting speaker recognition reliability by considering emotional content
CN116563829A (en) Driver emotion recognition method and device, electronic equipment and storage medium
CN110992988B (en) Speech emotion recognition method and device based on domain confrontation
CN117115581A (en) Intelligent misoperation early warning method and system based on multi-mode deep learning
CN106710588B (en) Speech data sentence recognition method, device and system
Chaparro et al. Sentiment analysis of social network content to characterize the perception of security
CN110348482A (en) A kind of speech emotion recognition system based on depth model integrated architecture
Vydana et al. Detection of emotionally significant regions of speech for emotion recognition
CN115658933B (en) Psychological state knowledge base construction method and device, computer equipment and storage medium
Birla A robust unsupervised pattern discovery and clustering of speech signals
Harimi et al. Anger or joy? Emotion recognition using nonlinear dynamics of speech
CN114595692A (en) Emotion recognition method, system and terminal equipment
Poorna et al. A weight based approach for emotion recognition from speech: An analysis using South Indian languages
Sharma et al. Speech Emotion Recognition System using SVD algorithm with HMM Model
CN114881668A (en) Multi-mode-based deception detection method
Pálfy et al. Pattern search in dysfluent speech

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant