CN111816213A - Emotion analysis method and system based on voice recognition

Info

Publication number
CN111816213A
Authority
CN
China
Prior art keywords
user
voice
emotion
database
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010662014.5A
Other languages
Chinese (zh)
Inventor
黄振斌
郝国栋
刘国锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xiaolajiao Technology Co ltd
Original Assignee
Shenzhen Xiaolajiao Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2020-07-10
Filing date
2020-07-10
Publication date
2020-10-23
Application filed by Shenzhen Xiaolajiao Technology Co ltd
Priority to CN202010662014.5A
Publication of CN111816213A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L25/48: Speech or voice analysis techniques specially adapted for particular use
    • G10L25/51: Speech or voice analysis techniques for comparison or discrimination
    • G10L25/63: Speech or voice analysis techniques for estimating an emotional state
    • G10L25/90: Pitch determination of speech signals

Abstract

The invention provides an emotion analysis method and system based on voice recognition. The emotion analysis method comprises the following steps. Step 1: the system initializes and judges whether a user voice database exists; if not, the step of establishing a user voice feature database is entered; if so, step 2 is entered. Step 2: user voice data is acquired in real time; while the system is on, the user's voice features are automatically identified, and when the acquired voice belongs to the user, the emotion judgment mechanism of step 3 is entered. Step 3: the emotion judgment mechanism establishes a calm-state feature model and detects and records the user's real-time emotional state. The invention has the following beneficial effect: the emotion analysis method focuses on the speech rate, intonation, and language-organization habits of the user's everyday speech and analyzes the user's emotional fluctuations against those habitual patterns, so it is targeted and more accurate, and the longer the system is used, the more accurate the analysis becomes.

Description

Emotion analysis method and system based on voice recognition
Technical Field
The invention relates to the field of artificial intelligence, and in particular to an emotion analysis method and system based on voice recognition.
Background
According to the China National Mental Health Development Report (2017-2018), jointly published by the Institute of Psychology of the Chinese Academy of Sciences and the Social Sciences Academic Press:
1. 88% of respondents considered mental health important;
2. 74% of respondents considered it inconvenient to obtain psychological counseling;
3. mental health varies with age: adolescents' mental health shows a declining trend as they grow older, the mental health index rises slowly with age through adulthood, and it falls markedly in old age.
The data show that conventional online emotion analysis in China is mainly carried out through questionnaire surveys, which have large errors and obvious subjectivity, so their accuracy is poor; an individual's self-assessment is likewise influenced by many factors and is not highly accurate.
Existing speech emotion recognition methods mainly extract audio feature vectors from speech segments and match them against a number of emotion feature models, taking the emotion class of the best-matching model as the emotion class of the segment. Such recognition is broad, lacks per-user specificity, and carries large errors.
In short, the prior art recognizes emotion mainly by acquiring audio feature vectors from the audio stream and matching them against emotion feature models. This approach has clear limitations: it ignores characteristics of the speech segment such as speech rate, intonation, and personal language habits, so its emotion recognition accuracy is not high.
Disclosure of Invention
The invention provides an emotion analysis method based on voice recognition, comprising the following steps:
Step 1: the system initializes and judges whether a user voice database exists; if not, the step of establishing a user voice feature database is entered; if so, step 2 is entered;
Step 2: user voice data is acquired in real time; while the system is on, the user's voice features are automatically identified, and when the acquired voice belongs to the user, the emotion judgment mechanism of step 3 is entered;
Step 3: the emotion judgment mechanism: a calm-state feature model is established and the user's real-time emotional state is detected and recorded.
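The control flow of these three steps can be sketched as follows. This is a minimal illustrative Python sketch, not part of the patent; the file path, the stub bodies, and all function names are assumptions introduced here:

```python
import json
import os

DB_PATH = "user_voice_db.json"  # hypothetical storage location


def database_exists() -> bool:
    """Step 1: does the user voice feature database already exist?"""
    return os.path.exists(DB_PATH)


def build_user_voice_feature_database() -> None:
    """Enrollment step (stubbed): store a baseline feature profile."""
    with open(DB_PATH, "w") as f:
        json.dump({"calm": {"speech_rate": 4.2, "pitch_hz": 180.0}}, f)


def capture_audio() -> list:
    """Step 2 (stubbed): acquire a segment of speech in real time."""
    return [0.0] * 16000  # one second of "audio" at 16 kHz, for illustration


def is_target_user(audio: list) -> bool:
    """Speaker check (stubbed): only the enrolled user's voice is analyzed."""
    return True


def judge_emotion(audio: list) -> None:
    """Step 3 (stubbed): the emotion judgment mechanism, detailed below."""
    print(f"judging emotion on {len(audio)} samples")


if __name__ == "__main__":
    if not database_exists():                  # step 1
        build_user_voice_feature_database()
    audio = capture_audio()                    # step 2 (a loop in a real system)
    if is_target_user(audio):
        judge_emotion(audio)                   # step 3
```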
As a further improvement of the present invention, the step of establishing the user voice feature database further comprises the following steps:
First step: initialize the user's voice features;
Second step: the user records voice samples;
Third step: judge whether the user voice feature database was established successfully; if so, enter step 2; if not, return to the first step.
As a further improvement of the present invention, step 3 further comprises the following steps:
Step S1: decompose the acquired voice information and analyze the user's speech rate and intonation;
Step S2: judge whether the user's emotion voice database exists; if not, execute the step of newly building a user voice database; if so, enter step S3;
Step S3: when the user voice database already exists, judge whether the current voice fluctuates strongly; if so, consider the user's current emotion abnormal and enter step S4; otherwise, enter the calm-state feature model judgment step;
Step S4: match against the current user database models: compare the currently acquired voice data with the voice data in the database; once a model matches successfully, execute the recording step; if no model matches, execute the step of newly building a user voice database;
Recording step: store the current voice difference data in the user database and store the current voice data in the current user emotion state table, thereby recording the user's current emotional state.
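To make step S1 concrete, the two features the method relies on, speech rate and intonation, could be approximated as in the rough numpy sketch below; the autocorrelation pitch estimate, the energy-peak syllable count, and every numeric constant are assumptions, since the patent does not prescribe a particular signal-processing technique:

```python
import numpy as np


def estimate_pitch_hz(frame: np.ndarray, sr: int = 16000) -> float:
    """Crude autocorrelation pitch (intonation) estimate for one voiced frame."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = sr // 400, sr // 60  # search lags covering 60-400 Hz speech
    if hi >= len(corr):
        return 0.0
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag if corr[lag] > 0 else 0.0


def estimate_speech_rate(signal: np.ndarray, sr: int = 16000) -> float:
    """Approximate syllables per second by counting short-time energy peaks."""
    hop = sr // 100  # 10 ms analysis hop
    energy = np.array([float(np.sum(signal[i:i + hop] ** 2))
                       for i in range(0, len(signal) - hop, hop)])
    if energy.size < 3 or energy.max() == 0.0:
        return 0.0
    thresh = 0.5 * energy.max()
    peaks = np.sum((energy[1:-1] > energy[:-2]) &
                   (energy[1:-1] > energy[2:]) &
                   (energy[1:-1] > thresh))
    return float(peaks) / (len(signal) / sr)
```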
As a further improvement of the invention, the step of newly building a user voice database comprises newly creating a user characteristic emotion and then adding it to the user emotion database;
after the step of newly building the user voice database is executed, the recording step is executed.
As a further improvement of the present invention, the calm-state feature model judgment step specifically comprises: if the system finds no strong fluctuation in the user's emotion, compare the emotion features of the current state with those of the calm state and judge whether the user is in a calm state; if the match succeeds, consider the user currently calm and add the data to the calm-state model, then execute the recording step; if the match fails, enter step S4 and match against all of the user's emotion data models.
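As an illustration only, this calm-state comparison might reduce to a tolerance check on the stored features. The feature names and both tolerances below are assumptions; the patent specifies no numeric thresholds:

```python
def matches_calm_state(current: dict, calm: dict,
                       rate_tol: float = 0.8, pitch_tol: float = 25.0) -> bool:
    """Compare the current emotion features with the calm-state model."""
    return (abs(current["speech_rate"] - calm["speech_rate"]) <= rate_tol
            and abs(current["pitch_hz"] - calm["pitch_hz"]) <= pitch_tol)
```

If the check succeeds, the sample would be added to the calm-state model and the recording step would run; otherwise step S4 matches against all stored emotion models.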
The invention also discloses an emotion analysis system based on voice recognition, comprising the following units. User voice database judging unit: used for system initialization and for judging whether a user voice database exists; if not, the user voice feature database establishing unit is entered; if so, the real-time user voice data acquisition unit is entered;
Real-time user voice data acquisition unit: while the system is on, the user's voice features are automatically identified, and when the acquired voice belongs to the user, the emotion judgment unit is entered;
Emotion judgment unit: used for establishing a calm-state feature model and for detecting and recording the user's real-time emotional state.
As a further improvement of the present invention, the user voice feature database establishing unit further comprises the following modules:
Initialization module: used for initializing the user's voice features;
Voice recording module: used for the user to record voice samples;
Voice feature database module: used for judging whether the user voice feature database was established successfully; if so, the real-time user voice data acquisition unit is entered; if not, control returns to the initialization module.
As a further improvement of the present invention, the emotion judgment unit further comprises the following modules:
Analysis module: used for decomposing the acquired voice information and analyzing the user's speech rate and intonation. Voice database judging module: used for judging whether the user's emotion voice database exists; if not, the newly-built user voice database module is executed; if so, the voice fluctuation judging module is entered. Voice fluctuation judging module: used for judging, when the user voice database already exists, whether the current voice fluctuates strongly; if so, the user's current emotion is considered abnormal and the module for matching the current user database models is entered; if not, the calm-state feature model judging module is entered;
Module for matching the current user database models: used for comparing the currently acquired voice data with the voice data in the database; once a model matches successfully, the recording module is executed; if no model matches, the newly-built user voice database module is executed;
Recording module: used for storing the current voice difference data in the user database and storing the current voice data in the current user emotion state table, thereby recording the user's current emotional state.
As a further improvement of the invention, the newly-built user voice database module is used for newly creating a user characteristic emotion and then adding it to the user emotion database.
As a further improvement of the present invention, the calm-state feature model judging module is specifically configured as follows:
if the system finds no strong fluctuation in the user's emotion, the emotion features of the current state are compared with those of the calm state to judge whether the user is in a calm state; if the match succeeds, the user is considered currently calm, the data is added to the calm-state model, and the recording module is then executed; if the match fails, the module for matching the current user database models is entered and all of the user's emotion data models are matched.
The invention has the following beneficial effects: 1. the emotion analysis method based on voice recognition focuses on the speech rate, intonation, and language-organization habits of the user's everyday speech and analyzes the user's emotional fluctuations against those habitual patterns, so it is targeted and more accurate, and the longer it is used, the more accurate the analysis becomes; 2. emotion is an important index of mental health, and the emotion analysis method based on voice recognition helps the general public obtain accurate psychological counseling services and treat psychological disorders promptly and effectively.
Drawings
Fig. 1 is a flow chart of the emotion analysis method of the present invention.
Detailed Description
As shown in fig. 1, the invention discloses an emotion analysis method based on speech recognition, comprising the following steps:
Step 1: the system initializes and judges whether a user voice database exists; if not, the step of establishing a user voice feature database is entered, which is mainly used for voice recognition and monitoring after the user logs in; if so, step 2 is entered;
Step 2: user voice data is acquired in real time; while the system is on, the user's voice features are automatically identified, and when the acquired voice belongs to the user, the emotion judgment mechanism of step 3 is entered;
Step 3: the emotion judgment mechanism: a calm-state feature model is established and the user's real-time emotional state is detected and recorded.
In the step of establishing the user voice feature database, the following steps are further executed:
First step: initialize the user's voice features; this is mainly used for voice recognition and monitoring after the user logs in. Second step: the user records voice samples;
Third step: judge whether the user voice feature database was established successfully; if so, enter step 2; if not, return to the first step.
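The three enrollment steps might be sketched as follows; the `samples` layout, the minimum-sample check, and the simple averaging are assumptions made for illustration, not requirements stated in the patent:

```python
import json


def enroll_user(samples, db_path="user_voice_db.json", min_samples=3) -> bool:
    """First step: initialize; second step: the user records voice samples;
    third step: verify that the feature database was built successfully."""
    if len(samples) < min_samples:
        return False                          # failed: return to the first step
    rates = [rate for rate, _ in samples]     # samples: (speech_rate, pitch_hz)
    pitches = [pitch for _, pitch in samples]
    profile = {"speech_rate": sum(rates) / len(rates),
               "pitch_hz": sum(pitches) / len(pitches)}
    with open(db_path, "w") as f:
        json.dump({"calm": profile}, f)       # the user voice feature database
    return True                               # success: proceed to step 2
```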
Step 3 further comprises the following steps:
Step S1: decompose the acquired voice information and analyze the user's speech rate and intonation. Step S2: judge whether the user's emotion voice database exists; if not, execute the step of newly building a user voice database; if so, enter step S3;
Step S3: when the user voice database already exists, judge whether the current voice fluctuates strongly; if so, consider the user's current emotion abnormal and enter step S4; otherwise, enter the calm-state judgment step;
Step S4: match against the current user database models: compare the currently acquired voice data with the voice data in the database; once a model matches successfully, execute the recording step; if no model matches, execute the step of newly building a user voice database;
Recording step: store the current voice difference data in the user database and store the current voice data in the current user emotion state table, thereby recording the user's current emotional state.
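Steps S3, S4, and the recording step can be put together in one minimal sketch. The feature dictionaries, the relative-fluctuation measure, and both thresholds are assumptions introduced here for illustration:

```python
def match_emotion_model(current: dict, user_db: dict,
                        pitch_tol: float = 25.0):
    """Step S4: return the closest stored model within tolerance, else None."""
    best, best_dist = None, pitch_tol
    for label, feats in user_db.items():
        dist = abs(current["pitch_hz"] - feats["pitch_hz"])
        if dist <= best_dist:
            best, best_dist = label, dist
    return best


def judge_and_record(current: dict, user_db: dict, emotion_log: list,
                     fluctuation_thresh: float = 0.3) -> str:
    """Steps S3/S4 plus the recording step, as one illustrative function."""
    calm = user_db["calm"]
    fluct = abs(current["pitch_hz"] - calm["pitch_hz"]) / max(calm["pitch_hz"], 1.0)
    if fluct > fluctuation_thresh:                      # S3: strong fluctuation
        label = match_emotion_model(current, user_db)   # S4: match stored models
        if label is None:                               # no match: new entry
            label = "new_emotion"                       # placeholder label
            user_db[label] = dict(current)              # newly-built database step
    else:
        label = "calm"                                  # calm-state branch
    emotion_log.append({"label": label, **current})     # recording step
    return label
```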
The step of newly building a user voice database is used for newly building a user emotion state database: it newly creates a user characteristic emotion and adds it to the user emotion feature database;
after the step of newly building the user voice database is executed, the recording step is executed.
The calm-state judgment step specifically comprises the following:
if the system finds no strong fluctuation in the user's emotion, the emotion features of the current state are compared with those of the calm state to judge whether the user is in a calm state; if the match succeeds, the user is considered currently calm and the data is added to the calm-state model, after which the recording step is executed; if the match fails, step S4 is entered and all of the user's emotion data models are matched.
The key to this emotion analysis method based on voice recognition lies in establishing the calm-state feature model and detecting the user's real-time emotional state.
The calm-state emotion database is built on the following principle: as a normal psychological characteristic, a person's emotion is stable most of the time, so the calm-state emotion database takes the emotional state the user is in most of the time as the calm-state baseline. The user calm-state model built from this database is one important basis for judging whether the user is psychologically healthy.
The significance of monitoring the user's real-time emotional state is that everyone's everyday emotions fluctuate normally, yet over a longer analysis period they present a stable state; the psychological characteristics presented in that stable state are the user's current psychological characteristics, which are another important basis for judging whether the user's psychological state is healthy.
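That longer-period view can be approximated by taking the dominant label over a rolling window of recorded states; the window size below is an arbitrary illustrative choice:

```python
from collections import Counter


def long_period_state(emotion_log: list, window: int = 200):
    """Dominant emotional state over the most recent `window` recordings."""
    recent = emotion_log[-window:]
    if not recent:
        return None
    return Counter(e["label"] for e in recent).most_common(1)[0][0]
```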
The invention also discloses an emotion analysis system based on voice recognition, comprising the following units. User voice database judging unit: used for system initialization and for judging whether a user voice database exists; if not, the user voice feature database establishing unit is entered; if so, the real-time user voice data acquisition unit is entered;
Real-time user voice data acquisition unit: while the system is on, the user's voice features are automatically identified, and when the acquired voice belongs to the user, the emotion judgment unit is entered;
Emotion judgment unit: used for establishing a calm-state feature model and for detecting and recording the user's real-time emotional state.
The user voice feature database establishing unit further comprises the following modules:
Initialization module: used for initializing the user's voice features; this is mainly used for voice recognition and monitoring after the user logs in;
Voice recording module: used for the user to record voice samples;
Voice feature database module: used for judging whether the user voice feature database was established successfully; if so, the real-time user voice data acquisition unit is entered; if not, control returns to the initialization module.
The emotion judgment unit further comprises the following modules:
Analysis module: used for decomposing the acquired voice information and analyzing the user's speech rate and intonation. Voice database judging module: used for judging whether the user's emotion voice database exists; if not, the newly-built user voice database module is executed; if so, the voice fluctuation judging module is entered. Voice fluctuation judging module: used for judging, when the user voice database already exists, whether the current voice fluctuates strongly; if so, the user's current emotion is considered abnormal and the module for matching the current user database models is entered; if not, the calm-state judgment module is entered;
Module for matching the current user database models: used for comparing the currently acquired voice data with the voice data in the database; once a model matches successfully, the recording module is executed; if no model matches, the newly-built user voice database module is executed;
Recording module: used for storing the current voice difference data in the user database and storing the current voice data in the current user emotion state table, thereby recording the user's current emotional state.
The newly-built user voice database module is used for newly creating a user characteristic emotion and then adding it to the user emotion database;
the recording module is executed after the newly-built user voice database module is executed.
The calm-state judgment module is specifically configured as follows:
if the system finds no strong fluctuation in the user's emotion, the emotion features of the current state are compared with those of the calm state to judge whether the user is in a calm state; if the match succeeds, the user is considered currently calm, the data is added to the calm-state model, and the recording module is then executed; if the match fails, the module for matching the current user database models is entered and all of the user's emotion data models are matched.
The invention has the following beneficial effects: 1. the emotion analysis method based on voice recognition focuses on the speech rate, intonation, and language-organization habits of the user's everyday speech and analyzes the user's emotional fluctuations against those habitual patterns, so it is targeted and more accurate, and the longer it is used, the more accurate the analysis becomes; 2. emotion is an important index of mental health, and the emotion analysis method based on voice recognition helps the general public obtain accurate psychological counseling services and treat psychological disorders promptly and effectively.
The foregoing is a further detailed description of the invention in connection with specific preferred embodiments, and the specific implementation of the invention is not to be considered limited to these descriptions. For those of ordinary skill in the art to which the invention belongs, several simple deductions or substitutions may be made without departing from the concept of the invention, and all of them shall be considered as falling within the protection scope of the invention.

Claims (10)

1. An emotion analysis method based on speech recognition, characterized by comprising the following steps:
step 1: the system initializes and judges whether a user voice database exists; if not, the step of establishing a user voice feature database is entered; if so, step 2 is entered;
step 2: user voice data is acquired in real time; while the system is on, the user's voice features are automatically identified, and when the acquired voice belongs to the user, the emotion judgment mechanism of step 3 is entered;
step 3: the emotion judgment mechanism: a calm-state feature model is established and the user's real-time emotional state is detected and recorded.
2. The emotion analysis method of claim 1, characterized in that the step of establishing the user voice feature database further comprises the following steps:
first step: initialize the user's voice features;
second step: the user records voice samples;
third step: judge whether the user voice feature database was established successfully; if so, enter step 2; if not, return to the first step.
3. The emotion analysis method of claim 1, characterized in that step 3 further comprises the following steps:
step S1: decompose the acquired voice information and analyze the user's speech rate and intonation;
step S2: judge whether the user's emotion voice database exists; if not, execute the step of newly building a user voice database; if so, enter step S3;
step S3: when the user voice database already exists, judge whether the current voice fluctuates strongly; if so, consider the user's current emotion abnormal and enter step S4; otherwise, enter the calm-state feature model judgment step;
step S4: match against the current user database models: compare the currently acquired voice data with the voice data in the database; once a model matches successfully, execute the recording step; if no model matches, execute the step of newly building a user voice database;
recording step: store the current voice difference data in the user database and store the current voice data in the current user emotion state table, thereby recording the user's current emotional state.
4. The emotion analysis method of claim 3, characterized in that the step of newly building a user voice database comprises newly creating a user characteristic emotion and then adding it to the user emotion database;
after the step of newly building the user voice database is executed, the recording step is executed.
5. The emotion analysis method of claim 3, characterized in that the calm-state feature model judgment step specifically comprises:
if the system finds no strong fluctuation in the user's emotion, comparing the emotion features of the current state with those of the calm state and judging whether the user is in a calm state; if the match succeeds, considering the user currently calm and adding the data to the calm-state model, then executing the recording step; if the match fails, entering step S4 and matching against all of the user's emotion data models.
6. An emotion analysis system based on speech recognition, characterized by comprising the following units:
a user voice database judging unit: used for system initialization and for judging whether a user voice database exists; if not, the user voice feature database establishing unit is entered; if so, the real-time user voice data acquisition unit is entered;
a real-time user voice data acquisition unit: while the system is on, the user's voice features are automatically identified, and when the acquired voice belongs to the user, the emotion judgment unit is entered;
an emotion judgment unit: used for establishing a calm-state feature model and for detecting and recording the user's real-time emotional state.
7. The emotion analysis system of claim 6, characterized in that the user voice feature database establishing unit further comprises the following modules:
an initialization module: used for initializing the user's voice features;
a voice recording module: used for the user to record voice samples;
a voice feature database module: used for judging whether the user voice feature database was established successfully; if so, the real-time user voice data acquisition unit is entered; if not, control returns to the initialization module.
8. The emotion analysis system of claim 6, characterized in that the emotion judgment unit further comprises the following modules:
an analysis module: used for decomposing the acquired voice information and analyzing the user's speech rate and intonation;
a voice database judging module: used for judging whether the user's emotion voice database exists; if not, the newly-built user voice database module is executed; if so, the voice fluctuation judging module is entered;
a voice fluctuation judging module: used for judging, when the user voice database already exists, whether the current voice fluctuates strongly; if so, the user's current emotion is considered abnormal and the module for matching the current user database models is entered; if not, the calm-state feature model judging module is entered;
a module for matching the current user database models: used for comparing the currently acquired voice data with the voice data in the database; once a model matches successfully, the recording module is executed; if no model matches, the newly-built user voice database module is executed;
a recording module: used for storing the current voice difference data in the user database and storing the current voice data in the current user emotion state table, thereby recording the user's current emotional state.
9. The emotion analysis system of claim 8, characterized in that the newly-built user voice database module is used for newly creating a user characteristic emotion and then adding it to the user emotion database; the recording module is executed after the newly-built user voice database module is executed.
10. The emotion analysis system of claim 8, characterized in that the calm-state feature model judging module is specifically configured to:
if the system finds no strong fluctuation in the user's emotion, compare the emotion features of the current state with those of the calm state and judge whether the user is in a calm state; if the match succeeds, consider the user currently calm, add the data to the calm-state model, and then execute the recording module; if the match fails, enter the module for matching the current user database models and match against all of the user's emotion data models.
Application CN202010662014.5A, filed 2020-07-10 (priority date 2020-07-10): Emotion analysis method and system based on voice recognition. Status: Pending.

Priority Applications (1)

CN202010662014.5A (priority date 2020-07-10; filing date 2020-07-10): Emotion analysis method and system based on voice recognition

Applications Claiming Priority (1)

CN202010662014.5A (priority date 2020-07-10; filing date 2020-07-10): Emotion analysis method and system based on voice recognition

Publications (1)

CN111816213A, published 2020-10-23

Family

ID=72841737

Family Applications (1)

CN202010662014.5A (priority date 2020-07-10; filing date 2020-07-10): Emotion analysis method and system based on voice recognition. Status: Pending

Country Status (1)

CN: CN111816213A

Patent Citations (4)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
WO2007148493A1 * | 2006-06-23 | 2007-12-27 | Panasonic Corporation | Emotion recognizer
CN107919138A * | 2017-11-30 | 2018-04-17 | 维沃移动通信有限公司 | Mood processing method in voice and mobile terminal
CN108877840A * | 2018-06-29 | 2018-11-23 | 重庆柚瓣家科技有限公司 | Emotion identification method and system based on nonlinear features
CN110246519A * | 2019-07-25 | 2019-09-17 | 深圳智慧林网络科技有限公司 | Emotion identification method, device and computer-readable storage medium


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2020-10-23)