CN113870902B - Emotion recognition system, device and method for voice interaction plush toy - Google Patents
Emotion recognition system, device and method for voice interaction plush toy
- Publication number: CN113870902B (application number CN202111256887.7A)
- Authority
- CN
- China
- Prior art keywords
- emotion
- emotion recognition
- score
- voice
- recognition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/225—Feedback of the input speech
Abstract
The invention relates to an emotion recognition system, device and method for a voice interaction plush toy. The system comprises a voice input module, a voice recognition module, a main control module and a voice output module. The voice input module is used for acquiring voice information. The voice recognition module is used for recognizing the voice information and judging whether it meets the trigger condition of the main control module; if so, the main control module is started. The main control module is used for acquiring an initial emotion score and an emotion recognition coefficient, acquiring emotion recognition topics according to the emotion recognition coefficient, obtaining an emotion comprehensive score from the emotion recognition topics and the initial emotion score, and acquiring the corresponding entertainment resource according to the emotion comprehensive score. The voice output module is used for outputting the entertainment resource. The emotion recognition system of the voice interaction plush toy provided by the invention reduces the cost of emotion recognition for the plush toy and has a simple structure.
Description
Technical Field
The invention relates to the technical field of intelligent plush toys, in particular to a system, a device and a method for recognizing emotion of a voice interaction plush toy.
Background
Voice interaction plush toys realize a voice interaction function by recognizing the user's voice. Most of them merely push corresponding audio mechanically according to the user's instructions and cannot recognize the user's emotion, so the emotion soothing function of the plush toy cannot be fully exploited.
Emotion recognition technology in the prior art requires high-precision measurement of multi-dimensional user characteristics, involves a large amount of computation and high cost, and is not suited to the simple device structure of a plush toy. Existing voice interaction plush toys therefore suffer from high cost and a complex structure when implementing an emotion recognition function.
Disclosure of Invention
In view of this, it is necessary to provide a system, a device and a method for emotion recognition of a voice interaction plush toy, so as to solve the problems of high emotion recognition cost and complex structure of the plush toy in the prior art.
In order to solve the problems, the invention provides an emotion recognition system of a voice interaction plush toy, which comprises a voice input module, a voice recognition module, a main control module and a voice output module;
the voice input module is used for acquiring voice information;
the voice recognition module is used for recognizing the voice information, judging whether the voice information meets the triggering condition of the main control module or not, and if the voice information meets the triggering condition, enabling the main control module to operate;
the main control module is used for acquiring an initial emotion score and an emotion recognition coefficient, acquiring an emotion recognition topic according to the emotion recognition coefficient, acquiring an emotion comprehensive score according to the emotion recognition topic and the initial emotion score, and acquiring corresponding entertainment resources according to the emotion comprehensive score;
the voice output module is used for outputting the entertainment resources.
Furthermore, the main control module comprises a topic acquisition unit, and the topic acquisition unit is used for obtaining an emotion recognition coefficient according to the running time of the main control module and obtaining an emotion recognition topic according to the emotion recognition coefficient.
Further, the topic obtaining unit is configured to obtain an emotion recognition coefficient according to the running time of the main control module, and obtain an emotion recognition topic according to the emotion recognition coefficient, including:
the topic acquisition unit is used for calculating an emotion recognition coefficient from the running time of the main control module and a coefficient calculation formula, obtaining the number of emotion recognition topics from the emotion recognition coefficient and a topic number calculation formula, and obtaining the emotion recognition topics according to the number of emotion recognition topics and a set proportion;
in the coefficient calculation formula, λ is the emotion recognition coefficient, T0 is the set emotion retention time, T1 is the set emotion forgetting time, and t is the running time of the main control module;
the topic number calculation formula is k = [λk0], where k is the number of emotion recognition topics, k0 is the set number of initial emotion recognition topics, and λ is the emotion recognition coefficient.
Further, the main control module comprises an emotion scoring unit, and the emotion scoring unit is used for calculating an emotion recognition topic score according to the emotion recognition topic and obtaining an emotion comprehensive score by using the emotion recognition topic score and the initial emotion score.
Further, the emotion scoring unit is configured to calculate an emotion recognition topic score according to the emotion recognition topic, and obtain an emotion comprehensive score by using the emotion recognition topic score and the initial emotion score, and includes:
the emotion scoring unit is used for obtaining an emotion recognition topic score from the emotion recognition topics, and obtaining an emotion comprehensive score according to the emotion recognition topic score, the initial emotion score and the comprehensive score calculation formula
Rcur = (1 - λ)Rpre + λQ,
where Rcur is the emotion comprehensive score, Rpre is the initial emotion score, Q is the emotion recognition topic score, and λ is the emotion recognition coefficient.
Further, the main control module comprises an entertainment matching unit, and the entertainment matching unit is used for obtaining an entertainment resource matching coefficient according to the comprehensive emotion score and obtaining corresponding entertainment resources by using the entertainment resource matching coefficient.
Further, the entertainment matching unit is configured to obtain an entertainment resource matching coefficient according to the comprehensive emotion score, and obtain a corresponding entertainment resource by using the entertainment resource matching coefficient, and includes:
the entertainment matching unit is used for calculating a formula according to the emotion comprehensive score and the entertainment resource matching coefficientCalculating to obtain an entertainment resource matching coefficient, and obtaining the corresponding entertainment resource according to the entertainment resource matching coefficient and a matching formula, wherein the entertainment resource matching coefficient calculation formula isWherein R is cur For mood Complex scoring, R 0 Identifying the topic fullness for the set initial emotion, wherein r is an entertainment resource matching coefficient;
the matching formula is d cur =[rd 0 ]Where r is the entertainment resource matching coefficient, d 0 For the total number of entertainment sub-resource pools set, d cur Is the corresponding entertainment resource.
Furthermore, the main control module also comprises an emotion scoring library unit, an emotion recognition question library unit and an entertainment resource library unit;
the emotion scoring library unit is used for storing emotion comprehensive scores, and the previous emotion comprehensive score is used as the next initial emotion score;
the emotion recognition question bank unit comprises emotion recognition sub-question banks of different types of emotions and is used for storing emotion recognition question audios of the different types of emotions, corresponding answer option texts and corresponding scores;
the entertainment resource library unit comprises entertainment sub-resource libraries with different types of emotions and is used for storing entertainment audios corresponding to the different types of emotions.
The invention also provides an emotion recognition device of the voice interaction plush toy, which comprises a voice input module, a voice recognition module, a main control module, a voice output module, a power supply module and a base;
the voice input module is used for acquiring voice information;
the voice recognition module is used for recognizing the voice information, judging whether the voice information meets the triggering condition of the main control module or not, and enabling the main control module to operate if the voice information meets the triggering condition;
the main control module is used for acquiring an initial emotion score and an emotion recognition coefficient, acquiring an emotion recognition topic according to the emotion recognition coefficient, acquiring an emotion comprehensive score according to the emotion recognition topic and the initial emotion score, and acquiring corresponding entertainment resources according to the emotion comprehensive score;
the voice output module is used for outputting the entertainment resources;
the power supply module is used for supplying power;
the base is used for fixing the voice input module, the voice recognition module, the main control module, the voice output module and the power supply module.
The invention also provides an emotion recognition method for the emotion recognition system of the voice interaction plush toy according to any one of the above technical solutions, comprising the following steps:
acquiring voice information, identifying the voice information, and judging whether the voice information meets a trigger condition;
if the trigger condition is met, acquiring an initial emotion score and an emotion recognition coefficient, acquiring emotion recognition topics according to the emotion recognition coefficient, and obtaining an emotion comprehensive score from the emotion recognition topics and the initial emotion score;
and acquiring corresponding entertainment resources according to the comprehensive emotion scores.
The beneficial effects of adopting the above embodiments are as follows. In the emotion recognition system of the voice interaction plush toy, the voice input module acquires the voice information, and the voice recognition module recognizes it and judges whether the trigger condition of the main control module is met. If the trigger condition is met, the main control module acquires the initial emotion score and the emotion recognition coefficient, acquires emotion recognition topics according to the emotion recognition coefficient, obtains the emotion comprehensive score from the emotion recognition topics and the initial emotion score, and acquires the corresponding entertainment resource according to the emotion comprehensive score; the voice output module then outputs the entertainment resource. The system thus realizes emotion recognition for the plush toy and selects entertainment resources according to the emotion recognition result, namely the emotion comprehensive score, so as to soothe the user, while reducing the cost of emotion recognition for the plush toy and keeping the structure simple.
Drawings
FIG. 1 is a block diagram showing the construction of an emotion recognition system for a voice interactive plush toy according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an emotion recognition device for a voice interaction plush toy according to an embodiment of the invention;
FIG. 3 is a flow chart of an emotion recognition method for a voice interaction plush toy according to an embodiment of the invention.
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate preferred embodiments of the invention and together with the description, serve to explain the principles of the invention and not to limit the scope of the invention.
The embodiment of the invention provides an emotion recognition system of a voice interaction plush toy, the structural block diagram of which is shown in FIG. 1; the system comprises a voice input module 110, a voice recognition module 120, a main control module 130 and a voice output module 140;
the voice input module 110 is configured to obtain voice information;
the voice recognition module 120 is configured to recognize the voice information, determine whether the voice information satisfies a trigger condition of the main control module, and if the voice information satisfies the trigger condition, operate the main control module;
the main control module 130 is configured to obtain an initial emotion score and an emotion recognition coefficient, obtain an emotion recognition topic according to the emotion recognition coefficient, obtain an emotion comprehensive score according to the emotion recognition topic and the initial emotion score, and obtain a corresponding entertainment resource according to the emotion comprehensive score;
the voice output module 140 is configured to output the entertainment resource.
It should be noted that the system carries out voice interaction with the user and recognizes the user's emotional state from the emotion comprehensive score produced by the voice recognition module and the main control module. Matching the corresponding entertainment resource to the emotion comprehensive score reduces development cost, improves emotion recognition efficiency, enhances the user experience, and achieves the emotion soothing effect of the voice interaction plush toy.
As a preferred embodiment, the main control module includes a topic acquisition unit, and the topic acquisition unit is configured to obtain an emotion recognition coefficient according to the running time of the main control module, and obtain an emotion recognition topic according to the emotion recognition coefficient.
In a specific embodiment, the emotion recognition topics are obtained from the emotion recognition question bank unit of the main control module.
As a preferred embodiment, the topic obtaining unit is configured to obtain an emotion recognition coefficient according to a running time of a main control module, and obtain an emotion recognition topic according to the emotion recognition coefficient, and includes:
the topic acquisition unit is used for obtaining an emotion recognition coefficient through calculation according to the running time of the main control module and a coefficient calculation formula, obtaining the number of emotion recognition topics according to the emotion recognition coefficient and a topic number calculation formula, and obtaining emotion recognition topics according to the number of emotion recognition topics and a set proportion, wherein the coefficient calculation formula is
Wherein, lambda is emotion recognition coefficient, T 0 For a set emotional retention time, T 1 The set emotional forgetting time is t, and the t is the running time of the main control module;
the calculation formula of the number of questions is k = [ lambda k = 0 ]Wherein k is the number of emotion recognition questions, k 0 And lambda is an emotion recognition coefficient for the set number of initial emotion recognition topics.
In a specific embodiment, T0 = 2 hours, T1 = 10 hours and t = 6 hours, so λ = 0.5 is obtained from the coefficient calculation formula. The set number of initial emotion recognition topics is 10, that is, k0 = 10; there are 3 emotion recognition sub-question banks with a set proportion of 1:1:3, that is, k01 = 2, k02 = 2 and k03 = 6. The topic number calculation formula then gives k = 5, so 5 topics in total are obtained from the emotion recognition question bank, drawn from the 3 sub-question banks according to the set proportion as k1 = 1, k2 = 1 and k3 = 3. A minimal sketch of this topic-selection step is given below.
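The Python sketch below walks through this topic-selection step. The linear form assumed for the coefficient formula, along with the function and variable names, is illustrative only: the patent's own coefficient formula is not reproduced in the text above, and the linear form is simply one expression that yields λ = 0.5 for the example values.

```python
import math

def emotion_recognition_coefficient(t, t_retain, t_forget):
    """Assumed linear form of the coefficient formula: 0 up to the emotion
    retention time T0, 1 after the emotion forgetting time T1, linear in
    between. It reproduces lambda = 0.5 for t = 6, T0 = 2, T1 = 10."""
    if t <= t_retain:
        return 0.0
    if t >= t_forget:
        return 1.0
    return (t - t_retain) / (t_forget - t_retain)

def select_topic_counts(lam, k0, proportions):
    """Topic number formula k = [lambda * k0]; the k topics are then split
    over the sub-question banks according to the set proportion."""
    k = math.floor(lam * k0)
    total = sum(proportions)
    return k, [k * p // total for p in proportions]

# Worked example from the embodiment above
lam = emotion_recognition_coefficient(t=6, t_retain=2, t_forget=10)
k, per_bank = select_topic_counts(lam, k0=10, proportions=[1, 1, 3])
print(lam, k, per_bank)  # 0.5 5 [1, 1, 3]
```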
As a preferred embodiment, the main control module includes an emotion scoring unit, and the emotion scoring unit is configured to calculate an emotion recognition topic score according to the emotion recognition topics, and obtain an emotion comprehensive score by using the emotion recognition topic score and the initial emotion score.
In a specific embodiment, the emotion recognition topic scores are obtained from the emotion recognition question bank unit of the main control module according to the 5 emotion recognition topics.
As a preferred embodiment, the emotion scoring unit is configured to calculate an emotion recognition topic score according to the emotion recognition topic, and obtain an emotion comprehensive score by using the emotion recognition topic score and the initial emotion score, and includes:
the emotion scoring unit is used for obtaining emotion recognition topic scores by utilizing the emotion recognition topics and obtaining emotion comprehensive scores according to the emotion recognition topic scores, the initial emotion scores and a comprehensive score calculation formula
R cur =(1-λ)R pre +λQ,
Wherein R is cur For mood Complex scoring, R pre And Q is the score of the initial emotion, Q is the score of the emotion recognition topic, and lambda is the emotion recognition coefficient.
In a specific embodiment, Rpre = 16 and Q = 12, yielding Rcur = (1 - 0.5) × 16 + 0.5 × 12 = 14, as computed in the sketch below.
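A minimal sketch of the comprehensive score calculation with these example values (the function name is illustrative, not taken from the patent):

```python
def composite_emotion_score(r_pre, q, lam):
    """Comprehensive score formula R_cur = (1 - lambda) * R_pre + lambda * Q."""
    return (1 - lam) * r_pre + lam * q

print(composite_emotion_score(r_pre=16, q=12, lam=0.5))  # 14.0
```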
As a preferred embodiment, the main control module includes an entertainment matching unit, and the entertainment matching unit is configured to obtain an entertainment resource matching coefficient according to the emotion comprehensive score, and obtain a corresponding entertainment resource by using the entertainment resource matching coefficient.
In one embodiment, the entertainment resource is obtained from the entertainment resource library unit of the main control module.
As a preferred embodiment, the entertainment matching unit is configured to obtain an entertainment resource matching coefficient according to the comprehensive emotion score, and obtain a corresponding entertainment resource by using the entertainment resource matching coefficient, and includes:
the entertainment matching unit is used for obtaining an entertainment resource matching coefficient according to the calculation formula of the emotion comprehensive score and the entertainment resource matching coefficient, and obtaining the corresponding entertainment resource according to the entertainment resource matching coefficient and the matching formula, wherein the entertainment resource matching coefficient calculation formula isWherein R is cur For mood Complex scoring, R 0 Identifying the topic fullness for the set initial emotion, wherein r is an entertainment resource matching coefficient;
the matching formula is d cur =[rd 0 ]Where r is the entertainment resource matching coefficient, d 0 For the total number of entertainment sub-resource pools set, d cur Is the corresponding entertainment resource.
In a specific embodiment, the full score of the initial emotion recognition topics is R0 = k0 × 3 = 30 points and d0 = 5, which gives dcur = 2, so the entertainment resource is obtained from the corresponding entertainment sub-resource library No. 2. A sketch of this matching step follows.
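The sketch below reproduces this matching step. The matching coefficient is assumed to be the ratio r = Rcur / R0, which fits the worked numbers (14 / 30 ≈ 0.47, selecting sub-resource library No. 2) but is not spelled out in the text above; the names are illustrative.

```python
import math

def match_entertainment_library(r_cur, r_full, d_total):
    """Assumed matching coefficient r = R_cur / R_0, then the matching
    formula d_cur = [r * d_0] picks an entertainment sub-resource library."""
    r = r_cur / r_full
    return math.floor(r * d_total)

print(match_entertainment_library(r_cur=14, r_full=30, d_total=5))  # 2
```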
As a preferred embodiment, the main control module further comprises an emotion scoring library unit, an emotion recognition question library unit and an entertainment resource library unit;
the emotion scoring library unit is used for storing emotion comprehensive scores, and the previous emotion comprehensive score is used as the next initial emotion score;
the emotion recognition question bank unit comprises emotion recognition sub-question banks with different types of emotions and is used for storing emotion recognition question audios, corresponding answer option texts and corresponding scores of the different types of emotions;
the entertainment resource library unit comprises entertainment sub-resource libraries with different types of emotions and is used for storing entertainment audios corresponding to the different types of emotions.
In a specific embodiment, the number of emotion recognition sub-question banks is 3. One of the emotion recognition topic audios is "Master, did your work go smoothly today? Please choose: A. Very smoothly; B. So-so; C. Not smoothly." The corresponding answer option texts and scores are A = 3 points, B = 2 points and C = 1 point, and the number of entertainment sub-resource libraries is 5. A sketch of such a question bank entry is given below.
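Purely as an illustration (the field and file names are hypothetical, not taken from the patent), such a question bank entry could be represented as:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class EmotionQuestion:
    """One emotion recognition topic: question audio, answer option texts,
    and the score attached to each option."""
    audio_file: str
    options: Dict[str, str]
    scores: Dict[str, int]

work_question = EmotionQuestion(
    audio_file="work_today.wav",  # hypothetical file name
    options={"A": "Very smoothly", "B": "So-so", "C": "Not smoothly"},
    scores={"A": 3, "B": 2, "C": 1},
)
print(work_question.scores["B"])  # 2
```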
The embodiment of the invention provides an emotion recognition device of a voice interaction plush toy, which comprises a voice input module 201, a voice recognition module 202, a main control module 203, a voice output module 204, a power supply module 205 and a base 206;
the voice input module 201 is configured to obtain voice information;
the voice recognition module 202 is configured to recognize the voice information, determine whether the voice information meets a trigger condition of the main control module, and if the voice information meets the trigger condition, enable the main control module to operate;
the main control module 203 is configured to obtain an initial emotion score and an emotion recognition coefficient, obtain an emotion recognition topic according to the emotion recognition coefficient, obtain an emotion comprehensive score according to the emotion recognition topic and the initial emotion score, and obtain a corresponding entertainment resource according to the emotion comprehensive score;
the voice output module 204 is configured to output the entertainment resource;
the power supply module 205 is used for supplying power;
the base 206 is used for fixing the voice input module, the voice recognition module, the main control module, the voice output module and the power supply module.
In a specific embodiment, a schematic diagram of the emotion recognition device of the voice interaction plush toy is shown in FIG. 2. The voice input module 201 includes a microphone that converts the acoustic signal into an electrical signal and feeds the acquired voice information to the voice recognition module 202, which can recognize voice commands such as "A", "B", "C", "good" and "hayahou". The voice recognition module 202 is connected to the main control module 203, and the main control module 203 is connected to the voice output module 204, which includes a speaker for playing the corresponding entertainment audio. The power supply module 205 supplies power, and the base 206 fixes the voice input module 201, the voice recognition module 202, the main control module 203, the voice output module 204 and the power supply module 205.
The embodiment of the invention provides an emotion recognition method for the emotion recognition system of the voice interaction plush toy according to any one of the above technical solutions. The flow of the method is shown in FIG. 3 and comprises the following steps:
S1, acquiring voice information, identifying the voice information, and judging whether the voice information meets a trigger condition;
S2, if the trigger condition is met, acquiring an initial emotion score and an emotion recognition coefficient, acquiring emotion recognition topics according to the emotion recognition coefficient, and obtaining an emotion comprehensive score from the emotion recognition topics and the initial emotion score;
S3, acquiring the corresponding entertainment resource according to the emotion comprehensive score.
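Putting the steps together, the sketch below ties S1-S3 into one flow. The trigger word, the dialogue and playback callables, and the formula forms for the coefficient and the matching coefficient are assumptions carried over from the earlier sketches, not the patent's literal implementation.

```python
def coefficient(t, t0=2, t1=10):
    """Assumed linear coefficient formula (see the earlier sketch)."""
    return min(max((t - t0) / (t1 - t0), 0.0), 1.0)

def run_emotion_recognition(voice_text, state, ask, play, t_hours,
                            k0=10, r_full=30, d_total=5):
    """S1-S3: trigger check, emotion scoring, entertainment resource playback."""
    if voice_text != "hello":                  # S1: placeholder trigger condition
        return state

    lam = coefficient(t_hours)                 # S2: coefficient, topic count, scores
    k = int(lam * k0)
    q_score = ask(k)                           # score of the k emotion recognition topics
    r_cur = (1 - lam) * state["r_pre"] + lam * q_score
    state["r_pre"] = r_cur                     # previous composite score becomes next initial score

    d_cur = int(r_cur / r_full * d_total)      # S3: assumed matching r = R_cur / R_0
    play(d_cur)
    return state

# Toy run with stubbed dialogue and playback
state = run_emotion_recognition(
    "hello", {"r_pre": 16},
    ask=lambda k: 12,                          # pretend the k answers scored 12 points
    play=lambda d: print("play audio from entertainment sub-resource library No.", d),
    t_hours=6,
)
print(state)  # {'r_pre': 14.0}
```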
In summary, the invention discloses an emotion recognition system, device and method for a voice interaction plush toy. Voice information is acquired through the voice input module and recognized and judged by the voice recognition module; if the trigger condition of the main control module is met, the main control module acquires an initial emotion score and an emotion recognition coefficient, acquires emotion recognition topics according to the emotion recognition coefficient, obtains an emotion comprehensive score from the emotion recognition topics and the initial emotion score, and acquires the corresponding entertainment resource according to the emotion comprehensive score, which the voice output module then outputs. The emotion recognition function of the plush toy is thereby realized, and entertainment resources are selected according to the emotion recognition result, namely the emotion comprehensive score, so as to soothe the user, while the cost of emotion recognition for the plush toy is reduced and the structure remains simple.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.
Claims (10)
1. An emotion recognition system of a voice interaction plush toy is characterized by comprising a voice input module, a voice recognition module, a main control module and a voice output module;
the voice input module is used for acquiring voice information;
the voice recognition module is used for recognizing the voice information, judging whether the voice information meets the triggering condition of the main control module or not, and enabling the main control module to operate if the voice information meets the triggering condition;
the main control module is used for acquiring an initial emotion score and an emotion recognition coefficient, acquiring an emotion recognition topic according to the emotion recognition coefficient, acquiring an emotion comprehensive score according to the emotion recognition topic and the initial emotion score, and acquiring corresponding entertainment resources according to the emotion comprehensive score;
the voice output module is used for outputting the entertainment resources.
2. The system of claim 1, wherein the main control module comprises a topic acquisition unit, and the topic acquisition unit is configured to obtain an emotion recognition coefficient according to the running time of the main control module, and obtain emotion recognition topics according to the emotion recognition coefficient.
3. The emotion recognition system of the voice interaction plush toy as claimed in claim 2, wherein the topic acquisition unit is configured to obtain an emotion recognition coefficient according to the running time of the main control module, and to acquire emotion recognition topics according to the emotion recognition coefficient, and comprises:
the topic acquisition unit is used for calculating an emotion recognition coefficient from the running time of the main control module and a coefficient calculation formula, obtaining the number of emotion recognition topics from the emotion recognition coefficient and a topic number calculation formula, and obtaining the emotion recognition topics according to the number of emotion recognition topics and a set proportion;
in the coefficient calculation formula, λ is the emotion recognition coefficient, T0 is the set emotion retention time, T1 is the set emotion forgetting time, and t is the running time of the main control module;
4. The system of claim 1, wherein the main control module comprises an emotion scoring unit, and the emotion scoring unit is configured to calculate an emotion recognition topic score according to the emotion recognition topics, and obtain an emotion comprehensive score by using the emotion recognition topic score and the initial emotion score.
5. The emotion recognition system of claim 4, wherein the emotion scoring unit is configured to calculate an emotion recognition topic score based on the emotion recognition topics, and obtain an emotion comprehensive score using the emotion recognition topic score and the initial emotion score, and comprises:
the emotion scoring unit is used for obtaining an emotion recognition topic score from the emotion recognition topics, and obtaining an emotion comprehensive score according to the emotion recognition topic score, the initial emotion score and a comprehensive score calculation formula.
6. The emotion recognition system of a voice interaction plush toy as claimed in claim 1, wherein the main control module comprises an entertainment matching unit, the entertainment matching unit is configured to obtain an entertainment resource matching coefficient according to the comprehensive emotion score, and obtain a corresponding entertainment resource by using the entertainment resource matching coefficient.
7. The emotion recognition system of a voice interaction plush toy according to claim 6, characterized in that the entertainment matching unit is used for obtaining an entertainment resource matching coefficient according to the comprehensive emotion score, and obtaining a corresponding entertainment resource by using the entertainment resource matching coefficient, and comprises:
the entertainment matching unit is used for obtaining an entertainment resource matching coefficient according to the calculation formula of the emotion comprehensive score and the entertainment resource matching coefficient, and obtaining the corresponding entertainment resource according to the entertainment resource matching coefficient and the matching formula, wherein the entertainment resource matching coefficient calculation formula isWhereinin order to score the overall score of the mood,a topic fullness is identified for the set initial mood,matching coefficients for entertainment resources;
8. The emotion recognition system of a voice-interactive plush toy, as claimed in claim 1, wherein said main control module further comprises an emotion scoring library unit, an emotion recognition question library unit and an entertainment resource library unit;
the emotion scoring library unit is used for storing emotion comprehensive scores, and the previous emotion comprehensive score is used as the next initial emotion score;
the emotion recognition question bank unit comprises emotion recognition sub-question banks of different types of emotions and is used for storing emotion recognition question audios of the different types of emotions, corresponding answer option texts and corresponding scores;
the entertainment resource library unit comprises entertainment sub-resource libraries with different types of emotions and is used for storing entertainment audios corresponding to the different types of emotions.
9. An emotion recognition device of a voice interaction plush toy is characterized by comprising a voice input module, a voice recognition module, a main control module, a voice output module, a power supply module and a base;
the voice input module is used for acquiring voice information;
the voice recognition module is used for recognizing the voice information, judging whether the voice information meets the triggering condition of the main control module or not, and enabling the main control module to operate if the voice information meets the triggering condition;
the main control module is used for acquiring an initial emotion score and an emotion recognition coefficient, acquiring an emotion recognition topic according to the emotion recognition coefficient, acquiring an emotion comprehensive score according to the emotion recognition topic and the initial emotion score, and acquiring corresponding entertainment resources according to the emotion comprehensive score;
the voice output module is used for outputting the entertainment resources;
the power supply module is used for supplying power;
the base is used for fixing the voice input module, the voice recognition module, the main control module, the voice output module and the power supply module.
10. An emotion recognition method for use in the emotion recognition system for the voice-interactive plush toy as claimed in any one of claims 1 to 8, comprising:
acquiring voice information, identifying the voice information, and judging whether the voice information meets a trigger condition;
if the trigger condition is met, acquiring an initial emotion score and an emotion recognition coefficient, acquiring emotion recognition topics according to the emotion recognition coefficient, and obtaining an emotion comprehensive score from the emotion recognition topics and the initial emotion score;
and acquiring corresponding entertainment resources according to the comprehensive emotion scores.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111256887.7A (CN113870902B) | 2021-10-27 | 2021-10-27 | Emotion recognition system, device and method for voice interaction plush toy |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113870902A (en) | 2021-12-31 |
| CN113870902B (en) | 2023-03-14 |
Family
- ID=78997981
Family Applications (1)
| Application Number | Status | Publication | Priority Date | Filing Date |
|---|---|---|---|---|
| CN202111256887.7A | Active | CN113870902B (en) | 2021-10-27 | 2021-10-27 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN113870902B (en) |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2017084197A1 (en) * | 2015-11-18 | 2017-05-26 | 深圳创维-Rgb电子有限公司 | Smart home control method and system based on emotion recognition |
| CN106855879A (en) * | 2016-12-14 | 2017-06-16 | 竹间智能科技(上海)有限公司 | The robot that artificial intelligence psychology is seeked advice from music |
| CN108115695A (en) * | 2016-11-28 | 2018-06-05 | 沈阳新松机器人自动化股份有限公司 | A kind of emotional color expression system and robot |
| CN111739559A (en) * | 2020-05-07 | 2020-10-02 | 北京捷通华声科技股份有限公司 | Speech early warning method, device, equipment and storage medium |
| CN111739516A (en) * | 2020-06-19 | 2020-10-02 | 中国—东盟信息港股份有限公司 | Speech recognition system for intelligent customer service call |
| WO2021086589A1 (en) * | 2019-10-29 | 2021-05-06 | Microsoft Technology Licensing, Llc | Providing a response in automated chatting |
| CN112951233A (en) * | 2021-03-30 | 2021-06-11 | 平安科技(深圳)有限公司 | Voice question and answer method and device, electronic equipment and readable storage medium |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110085220A (en) * | 2018-01-26 | 2019-08-02 | 上海智臻智能网络科技股份有限公司 | Intelligent interaction device |
Also Published As
| Publication number | Publication date |
|---|---|
| CN113870902A (en) | 2021-12-31 |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant
- TR01: Transfer of patent right (effective date of registration: 2023-07-19)
  - Patentee after: Ankang Qinba Manchuang Toys Industry Operation Management Co.,Ltd., 725021 Yanhe Commercial Building 2, Fenghuang Community, Yuebinnan Avenue, Hengkou Demonstration Area (Experimental Area), Ankang City, Shaanxi Province
  - Patentee before: Ankang huizhiqu toy Technology Co.,Ltd., 725021 Zone 301, Building D, Yanhe Commercial Building 2, Fenghuang Community, Yuebinnan Avenue, Hengkou Demonstration Area (Experimental Area), Ankang City, Shaanxi Province