CN111938674A - Emotion recognition control system for conversation - Google Patents

Emotion recognition control system for conversation

Info

Publication number
CN111938674A
CN111938674A (application number CN202010927530.6A)
Authority
CN
China
Prior art keywords
module
classification
audio
receiving
storage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010927530.6A
Other languages
Chinese (zh)
Inventor
陈天翼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Yuyi Technology Co ltd
Original Assignee
Nanjing Yuyi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Yuyi Technology Co ltd filed Critical Nanjing Yuyi Technology Co ltd
Priority to CN202010927530.6A priority Critical patent/CN111938674A/en
Publication of CN111938674A publication Critical patent/CN111938674A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/48 Other medical applications
    • A61B5/4803 Speech analysis specially adapted for diagnostic purposes
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7246 Details of waveform analysis using correlation, e.g. template matching or determination of similarity
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/742 Details of notification to user or communication with user or patient; user input means using visual displays
    • A61B5/7465 Arrangements for interactive communication between patient and care services, e.g. by using a telephone network

Abstract

The invention discloses a dialogue emotion recognition control system comprising a login module, a receiving module, a recognition analysis module, a comparison module, a prompt module and a storage module. The login module is connected with the receiving module and the storage module; the recognition analysis module is connected with the receiving module and the comparison module; and the comparison module is connected with the storage module, the recognition analysis module and the prompt module. The receiving module comprises an audio receiving module, a video receiving module, a noise filtering module and an audio extraction module, the audio receiving module and the video receiving module both being connected with the noise filtering module. The system receives user dialogue in two modes, real-time reception and file reception, so that users can conveniently work with both live and recorded conversations; dialogue noise is filtered out, making recognition clearer and the system more convenient to use.

Description

Emotion recognition control system for conversation
Technical Field
The invention relates to the technical field of emotion recognition, in particular to a dialogue emotion recognition control system.
Background
Emotion recognition originally referred to an individual's recognition of other people's emotions. In affective computing, of which it is an important component, an AI automatically determines a person's emotional state from acquired physiological or non-physiological signals; research in emotion recognition covers facial expression, voice, heart rate, behavior, text and physiological-signal recognition, and the user's emotional state is judged from such content;
however, the currently used conversation emotion recognition control system can only compare and analyze the current conversation emotion of the user, and the repeated conversations of the same user cannot comprehensively compare and analyze the emotion, so that the user cannot well know the long-term emotion condition of the user.
Disclosure of Invention
The invention provides a dialogue emotion recognition control system that effectively solves the problem that currently used dialogue emotion recognition control systems can only compare and analyze the emotion of the user's current conversation and cannot comprehensively compare emotion across multiple conversations by the same user, leaving users unable to understand their long-term emotional condition.
In order to achieve the purpose, the invention provides the following technical scheme: a dialogue emotion recognition control system comprises a login module, a receiving module, a recognition analysis module, a comparison module, a prompt module and a storage module;
the login module is respectively connected with the receiving module and the storage module, the recognition analysis module is respectively connected with the receiving module and the comparison module, and the comparison module is respectively connected with the storage module, the recognition analysis module and the prompt module.
Preferably, the receiving module comprises an audio receiving module, a video receiving module, a noise filtering module and an audio extracting module;
the audio receiving module and the video receiving module are both connected with the noise filtering module, and the audio extracting module is connected with the noise filtering module.
Preferably, the identification and analysis module comprises an identification module, a rough classification module, a fine classification module, a result output module and a classification comparison module;
the rough classification module is respectively connected with the identification module and the fine classification module, the rough classification module and the fine classification module are both connected with the classification comparison module, and the result output module is connected with the fine classification module.
Preferably, the comparison module comprises a result input module, a record comparison module, an audio output module and a prompt output module;
the result input module, the audio output module and the prompt output module are all connected with the record comparison module.
Preferably, the display module comprises an audio playing module, a character conversion module, a prompt matching module and a display screen;
the audio playing module is connected with the character conversion module, and the audio playing module, the character conversion module and the display screen are all connected with the prompt matching module.
Preferably, the storage module comprises a login storage module, a storage classification module, a key module and a classification identification module;
the login storage module, the storage classification module and the classification identification module are all connected with the key module.
Preferably, the emotion recognition control system comprises the following operation steps:
s1, firstly, inputting a user name and a password to log in the system through a login module;
s2, receiving input audio and video through a receiving module and filtering out noise;
s3, classifying and identifying the emotion of the user conversation through an identification and analysis module;
s4, comparing, analyzing and storing the multiple dialogue records of the user through a comparison module;
and S5, finally, displaying the emotion analysis result in real time through the display module.
Preferably, in S2, the audio receiving module and the video receiving module receive the audio and video files, and filter out the noise through the noise filtering module, and then extract the audio in the video file through the audio extracting module, and finally transmit the received audio to the recognition and analysis module;
in the step S3, the recognition module receives the audio file processed by the receiving module for recognition, performs coarse comparison and classification by the coarse classification module and the classification comparison module, performs fine classification by the fine classification module after the coarse classification, and transmits the classification result to the comparison module by the result output module.
Preferably, in S4, the result input module receives the result output by the recognition analysis module, identifies the user's audio file through the classification identification module in the storage module to retrieve the user's usage record, analyzes and compares that record with the current audio file to give prompts and corrective suggestions, and then transmits the audio file and the prompts to the storage module and the display module respectively;
and the audio files, the classification results and the prompt correction opinions received by the storage module enter the storage classification module for classified storage.
Preferably, in S5, the audio playing module plays the received audio file, the audio is converted into text in real time, the finely classified emotion types are added, and the text, the emotion types and the corrective prompts are finally displayed on the display screen.
Compared with the prior art, the invention has the beneficial effects that:
1. through setting up video receiving module and audio receiving module, utilize real-time reception and record file and receive two kinds of modes and receive the user's dialogue, convenience of customers real-time dialogue and record the dialogue and use, and will talk the noise filtering, discern more clearly, it is more convenient that the user uses.
2. Through setting up rough classification module, subdivision module and categorised contrast module, compare audio and video file and categorised contrast module, advance the emotion rough classification, carry out the fine classification of emotion again, make the classification of emotion more orderly meticulous, make system operation more smooth and easy, the structure is more accurate.
3. Through setting up the record contrast module, the user can carry out contrastive analysis with the audio and video file before oneself with this time of input to carry out solitary classification and save, can carry out the analysis to the long-term conversation mood change condition of oneself, and provide reasonable suggestion, be convenient for analyze oneself mood fluctuation condition.
4. Through setting up audio playback module, word conversion module and suggestion cooperation module, carry out the word conversion to this conversation in real time to real-time analysis shows the mood, and convenience of customers looks over the discernment detection condition of conversation mood.
5. Through setting up key module and storage classification module, carry out the individual storage to different users, guarantee mutual noninterference between the different users, and the load that data processing can be reduced in the analysis of the calling of single user data, avoid the system to operate the difficulty, provide good operational environment for the system.
In conclusion, through recognition analysis and classified storage of both the current conversation and all previous conversations, conversational emotion is analyzed more comprehensively: the emotion of the user's current conversation can be compared and analyzed together with the emotion of the same user's multiple conversations, so that users can understand their long-term emotional condition.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
In the drawings:
FIG. 1 is a schematic diagram of the system architecture of the present invention;
FIG. 2 is a schematic diagram of a receiving module according to the present invention;
FIG. 3 is a schematic diagram of a recognition analysis module according to the present invention;
FIG. 4 is a schematic diagram of a comparative module configuration of the present invention;
FIG. 5 is a schematic diagram of a display module according to the present invention;
FIG. 6 is a schematic diagram of the memory module structure of the present invention;
FIG. 7 is a schematic diagram of the operation steps of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings; it will be understood that they are described here for the purpose of illustration and explanation, not limitation.
Embodiment: as shown in FIGS. 1 to 7, the present invention provides a technical solution, a dialogue emotion recognition control system, which includes a login module, a receiving module, a recognition analysis module, a comparison module, a prompt module and a storage module;
the login module is respectively connected with the receiving module and the storage module, the recognition analysis module is respectively connected with the receiving module and the comparison module, and the comparison module is respectively connected with the storage module, the recognition analysis module and the prompt module.
The receiving module comprises an audio receiving module, a video receiving module, a noise filtering module and an audio extracting module;
the audio receiving module and the video receiving module are both connected with the noise filtering module, and the audio extracting module is connected with the noise filtering module.
The identification and analysis module comprises an identification module, a rough classification module, a fine classification module, a result output module and a classification comparison module;
the rough classification module is respectively connected with the identification module and the fine classification module, the rough classification module and the fine classification module are both connected with the classification comparison module, and the result output module is connected with the fine classification module;
the classification comparison module comprises a coarse classification item and a fine classification item, wherein the coarse classification item is as follows: happiness, anger, fear and sadness;
the fine classification items are:
and (3) happy: pleasure, excitement, and enthusiasm;
anger: dysphoria, anger and anger;
fear: fear, panic, thriller;
sadness: feelings of injury, sadness, and sadness.
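The two-level taxonomy above can be sketched as a small lookup table. The following Python sketch is illustrative only: the English fine-grained labels are approximate translations, and the names `COARSE_TO_FINE` and `coarse_of` are assumptions, not part of the patent.

```python
# Hypothetical encoding of the patent's coarse/fine emotion taxonomy.
COARSE_TO_FINE = {
    "happiness": ["pleasure", "excitement", "enthusiasm"],
    "anger": ["irritation", "anger", "rage"],
    "fear": ["fear", "panic", "terror"],
    "sadness": ["hurt", "sorrow", "grief"],
}

def coarse_of(fine_label: str) -> str:
    """Return the coarse category that contains a fine-grained label."""
    for coarse, fines in COARSE_TO_FINE.items():
        if fine_label in fines:
            return coarse
    raise KeyError(f"unknown fine label: {fine_label}")
```

A structure like this lets the classification comparison module map any fine-grained result back to its coarse category in one lookup.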
The comparison module comprises a result input module, a record comparison module, an audio output module and a prompt output module;
the result input module, the audio output module and the prompt output module are all connected with the record comparison module.
The display module comprises an audio playing module, a character conversion module, a prompt matching module and a display screen;
the audio playing module is connected with the character conversion module, and the audio playing module, the character conversion module and the display screen are all connected with the prompt matching module.
The storage module comprises a login storage module, a storage classification module, a key module and a classification identification module;
the login storage module, the storage classification module and the classification identification module are all connected with the key module.
The operation steps of the emotion recognition control system are as follows:
s1, firstly, inputting a user name and a password to log in the system through a login module;
s2, receiving input audio and video through a receiving module and filtering out noise;
the receiving module is provided with two receiving modes of real-time receiving and file recording receiving, the receiving mode is selected after logging in the system, the audio receiving module and the video receiving module receive audio and video files, noise filtering is carried out through the noise filtering module, the audio in the video files is extracted through the audio extracting module, and finally the received and processed audio is transmitted to the recognition and analysis module;
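The receiving chain described above — accept an audio or video input, extract the audio track, filter noise, and pass the result on — can be sketched as follows. The patent does not specify a filtering algorithm, so a simple moving-average smoother stands in for the noise filtering module; all function names and the payload format are illustrative assumptions.

```python
# Sketch of the receiving module's processing chain (illustrative only).

def extract_audio(payload: dict) -> list:
    """Return audio samples, pulling the track out of a video payload."""
    if payload["kind"] == "video":
        return payload["audio_track"]
    return payload["samples"]

def filter_noise(samples: list, window: int = 3) -> list:
    """Moving-average smoothing as a stand-in for the noise filter."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

def receive(payload: dict) -> list:
    """Extract the audio, filter it, and return it for recognition."""
    return filter_noise(extract_audio(payload))
```

The same `receive` entry point serves both reception modes, since a real-time stream and a recorded file both reduce to a sequence of samples once buffered.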
s3, classifying and identifying the emotion of the user conversation through an identification and analysis module;
the recognition module receives the audio files processed by the receiving module for recognition, performs contrast coarse classification through the coarse classification module and the classification contrast module, performs fine classification through the fine classification module after coarse classification, and transmits a classification result to the comparison module through the result output module;
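The coarse-then-fine flow above can be illustrated with a toy two-stage classifier. A real implementation would score acoustic features with a trained model; here a hypothetical dictionary of per-label scores stands in for the model output, and all names are assumptions.

```python
# Illustrative two-stage (coarse, then fine) emotion classification.
COARSE_TO_FINE = {
    "happiness": ["pleasure", "excitement", "enthusiasm"],
    "anger": ["irritation", "anger", "rage"],
    "fear": ["fear", "panic", "terror"],
    "sadness": ["hurt", "sorrow", "grief"],
}

def classify(scores: dict) -> tuple:
    """Pick the coarse class with the highest total score, then the
    best-scoring fine label within that class."""
    coarse = max(
        COARSE_TO_FINE,
        key=lambda c: sum(scores.get(f, 0.0) for f in COARSE_TO_FINE[c]),
    )
    fine = max(COARSE_TO_FINE[coarse], key=lambda f: scores.get(f, 0.0))
    return coarse, fine
```

Restricting the fine-grained decision to one coarse category is what keeps the classification "orderly and precise", as the effects section puts it: the second stage only ever chooses among a few related labels.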
s4, comparing, analyzing and storing the multiple dialogue records of the user through a comparison module;
the result input module receives the result output by the recognition analysis module, identifies the user's audio file through the classification identification module in the storage module to retrieve the user's usage record, analyzes and compares that record with the current audio file to give prompts and corrective suggestions, and then transmits the audio file and the prompts to the storage module and the display module respectively;
the audio files, the classification results and the prompt correction opinions received by the storage module enter the storage classification module for classified storage;
s5, finally, displaying the emotion analysis result in real time through a display module;
the audio playing module plays the received audio file, converts the audio file into characters in real time, adds the finely classified emotion types, and finally displays the characters, the emotion types and the prompt correction opinions on a display screen;
the emotion result comprises emotion analysis conditions of the current conversation and emotion analysis conditions of all conversations.
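The display payload combining the current conversation's analysis with an aggregate over all conversations can be sketched as below. The function name, payload shape, and the use of relative frequencies are illustrative assumptions, not the patent's specification.

```python
# Sketch of the display step's combined "current + all conversations" result.
from collections import Counter

def display_payload(transcript: str, current_emotion: str,
                    all_emotions: list) -> dict:
    """Pair the transcribed text with its emotion label and aggregate
    emotion frequencies over every stored session plus the current one."""
    counts = Counter(all_emotions + [current_emotion])
    total = sum(counts.values())
    return {
        "text": transcript,
        "current": current_emotion,
        "overall": {e: round(n / total, 2) for e, n in counts.items()},
    }
```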
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A system for emotion recognition control of a conversation, characterized by: the system comprises a login module, a receiving module, an identification and analysis module, a comparison module, a prompt module and a storage module;
the login module is respectively connected with the receiving module and the storage module, the recognition analysis module is respectively connected with the receiving module and the comparison module, and the comparison module is respectively connected with the storage module, the recognition analysis module and the prompt module.
2. The emotion recognition control system of a conversation, according to claim 1, wherein the receiving module includes an audio receiving module, a video receiving module, a noise filtering module, and an audio extracting module;
the audio receiving module and the video receiving module are both connected with the noise filtering module, and the audio extracting module is connected with the noise filtering module.
3. The emotion recognition control system of a conversation, according to claim 1, wherein the recognition analysis module includes a recognition module, a rough classification module, a fine classification module, a result output module, and a classification comparison module;
the rough classification module is respectively connected with the identification module and the fine classification module, the rough classification module and the fine classification module are both connected with the classification comparison module, and the result output module is connected with the fine classification module.
4. The emotion recognition control system of a conversation, according to claim 1, wherein the comparison module includes a result input module, a record comparison module, an audio output module, and a prompt output module;
the result input module, the audio output module and the prompt output module are all connected with the record comparison module.
5. The emotion recognition control system of a conversation, according to claim 1, wherein the display module includes an audio playing module, a text conversion module, a prompt matching module, and a display screen;
the audio playing module is connected with the character conversion module, and the audio playing module, the character conversion module and the display screen are all connected with the prompt matching module.
6. The emotion recognition control system of a conversation, according to claim 1, wherein the storage module includes a login storage module, a storage classification module, a key module, and a classification recognition module;
the login storage module, the storage classification module and the classification identification module are all connected with the key module.
7. A dialogue emotion recognition control system according to claim 1, wherein the emotion recognition control system operates as follows:
s1, firstly, inputting a user name and a password to log in the system through a login module;
s2, receiving input audio and video through a receiving module and filtering out noise;
s3, classifying and identifying the emotion of the user conversation through an identification and analysis module;
s4, comparing, analyzing and storing the multiple dialogue records of the user through a comparison module;
and S5, finally, displaying the emotion analysis result in real time through the display module.
8. The system of claim 7, wherein in S2, the audio receiving module and the video receiving module receive audio and video files, and perform noise filtering through the noise filtering module, and then extract the audio in the video file through the audio extracting module, and finally transmit the received audio to the recognition and analysis module;
in the step S3, the recognition module receives the audio file processed by the receiving module for recognition, performs coarse comparison and classification by the coarse classification module and the classification comparison module, performs fine classification by the fine classification module after the coarse classification, and transmits the classification result to the comparison module by the result output module.
9. The emotion recognition control system of claim 7, wherein in S4, the result input module receives the result output by the recognition analysis module, identifies the user's audio file through the classification identification module in the storage module to retrieve the user's usage record, analyzes and compares that record with the current audio file to give prompts and corrective suggestions, and then transmits the audio file and the prompts to the storage module and the display module respectively;
and the audio files, the classification results and the prompt correction opinions received by the storage module enter the storage classification module for classified storage.
10. The emotion recognition control system of claim 7, wherein in S5, the audio playing module plays the received audio file, converts the audio file into text in real time, adds the subdivided emotion classifications, and finally displays the text, the emotion classifications, and the prompt modification opinions on the display screen.
CN202010927530.6A 2020-09-07 2020-09-07 Emotion recognition control system for conversation Pending CN111938674A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010927530.6A CN111938674A (en) 2020-09-07 2020-09-07 Emotion recognition control system for conversation


Publications (1)

Publication Number Publication Date
CN111938674A true CN111938674A (en) 2020-11-17

Family

ID=73356143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010927530.6A Pending CN111938674A (en) 2020-09-07 2020-09-07 Emotion recognition control system for conversation

Country Status (1)

Country Link
CN (1) CN111938674A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102222227A (en) * 2011-04-25 2011-10-19 中国华录集团有限公司 Video identification based system for extracting film images
CN103258532A (en) * 2012-11-28 2013-08-21 河海大学常州校区 Method for recognizing Chinese speech emotions based on fuzzy support vector machine
CN106295568A (en) * 2016-08-11 2017-01-04 上海电力学院 The mankind's naturalness emotion identification method combined based on expression and behavior bimodal
CN106782545A (en) * 2016-12-16 2017-05-31 广州视源电子科技股份有限公司 A kind of system and method that audio, video data is changed into writing record
CN108764010A (en) * 2018-03-23 2018-11-06 姜涵予 Emotional state determines method and device
US20180366143A1 (en) * 2017-06-19 2018-12-20 International Business Machines Corporation Sentiment analysis of mental health disorder symptoms
CN109659009A (en) * 2018-12-26 2019-04-19 杭州行为科技有限公司 Motion management method, apparatus and electronic equipment
CN110377733A (en) * 2019-06-28 2019-10-25 平安科技(深圳)有限公司 A kind of text based Emotion identification method, terminal device and medium
CN110916688A (en) * 2019-11-25 2020-03-27 西安戴森电子技术有限公司 Method for monitoring emotion based on artificial intelligence technology
CN111081279A (en) * 2019-12-24 2020-04-28 深圳壹账通智能科技有限公司 Voice emotion fluctuation analysis method and device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112579744A (en) * 2020-12-28 2021-03-30 北京智能工场科技有限公司 Method for controlling risk in online psychological consultation
CN112579744B (en) * 2020-12-28 2024-03-26 北京智能工场科技有限公司 Risk control method in online psychological consultation


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201117