CN113870902A - Emotion recognition system, device and method for voice interaction plush toy - Google Patents

Emotion recognition system, device and method for voice interaction plush toy

Info

Publication number
CN113870902A
CN113870902A (application CN202111256887.7A)
Authority
CN
China
Prior art keywords
emotion
emotion recognition
recognition
score
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111256887.7A
Other languages
Chinese (zh)
Other versions
CN113870902B (en)
Inventor
刘愚
李昕
唐若笠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ankang Qinba Manchuang Toys Industry Operation Management Co.,Ltd.
Original Assignee
Ankang Huizhiqu Toy Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ankang Huizhiqu Toy Technology Co ltd filed Critical Ankang Huizhiqu Toy Technology Co ltd
Priority to CN202111256887.7A
Publication of CN113870902A
Application granted
Publication of CN113870902B
Active legal status
Anticipated expiration legal status


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51: Speech or voice analysis techniques specially adapted for comparison or discrimination
    • G10L25/63: Speech or voice analysis techniques for estimating an emotional state
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/225: Feedback of the input speech

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Child & Adolescent Psychology (AREA)
  • Toys (AREA)

Abstract

The invention relates to an emotion recognition system, device and method for a voice interaction plush toy. The system comprises a voice input module, a voice recognition module, a main control module and a voice output module. The voice input module is used for acquiring voice information; the voice recognition module is used for recognizing the voice information and judging whether it meets the triggering condition of the main control module, and if so, enabling the main control module to operate; the main control module is used for acquiring an initial emotion score and an emotion recognition coefficient, acquiring emotion recognition topics according to the emotion recognition coefficient, acquiring an emotion comprehensive score according to the emotion recognition topics and the initial emotion score, and acquiring corresponding entertainment resources according to the emotion comprehensive score; the voice output module is used for outputting the entertainment resources. The emotion recognition system of the voice interaction plush toy provided by the invention reduces the cost of emotion recognition of the plush toy and has a simple structure.

Description

Emotion recognition system, device and method for voice interaction plush toy
Technical Field
The invention relates to the technical field of intelligent plush toys, in particular to a system, a device and a method for recognizing emotion of a voice interaction plush toy.
Background
Voice interaction plush toys realize a voice interaction function by recognizing the user's voice. However, most of them merely push corresponding voice audio mechanically according to the user's instructions and cannot recognize the user's emotion, so the emotion-soothing function of the plush toy cannot be fully exerted.
Emotion recognition technology in the prior art requires high-precision measurement of multi-dimensional user characteristics, involves a large amount of computation and high cost, and is therefore unsuited to the simple device structure of a plush toy. As a result, existing voice interaction plush toys suffer from high cost and a complex structure when implementing an emotion recognition function.
Disclosure of Invention
In view of the above, it is necessary to provide a system, a device and a method for emotion recognition of a voice-interactive plush toy, so as to solve the problems of high emotion recognition cost and complex structure of the plush toy in the prior art.
In order to solve the problems, the invention provides an emotion recognition system of a voice interaction plush toy, which comprises a voice input module, a voice recognition module, a main control module and a voice output module;
the voice input module is used for acquiring voice information;
the voice recognition module is used for recognizing the voice information, judging whether the voice information meets the triggering condition of the main control module or not, and enabling the main control module to operate if the voice information meets the triggering condition;
the main control module is used for acquiring an initial emotion score and an emotion recognition coefficient, acquiring an emotion recognition topic according to the emotion recognition coefficient, acquiring an emotion comprehensive score according to the emotion recognition topic and the initial emotion score, and acquiring corresponding entertainment resources according to the emotion comprehensive score;
the voice output module is used for outputting the entertainment resources.
Furthermore, the main control module comprises a topic acquisition unit, and the topic acquisition unit is used for obtaining an emotion recognition coefficient according to the running time of the main control module and obtaining an emotion recognition topic according to the emotion recognition coefficient.
Further, the topic acquisition unit is configured to obtain an emotion recognition coefficient according to the running time of the main control module, and acquire an emotion recognition topic according to the emotion recognition coefficient, including:
the topic acquisition unit is used for obtaining an emotion recognition coefficient through calculation according to the running time of the main control module and a coefficient calculation formula, obtaining the number of emotion recognition topics according to the emotion recognition coefficient and a topic number calculation formula, and obtaining emotion recognition topics according to the number of emotion recognition topics and a set proportion, wherein the coefficient calculation formula is
[coefficient calculation formula, presented as an image in the original publication]
wherein λ is the emotion recognition coefficient, T0 is the set emotion retention time, T1 is the set emotion forgetting time, and t is the running time of the main control module;
the question number calculation formula is k = [λ·k0], wherein k is the number of emotion recognition topics, k0 is the set number of initial emotion recognition topics, and λ is the emotion recognition coefficient.
Further, the main control module comprises an emotion scoring unit, and the emotion scoring unit is used for calculating an emotion recognition topic score according to the emotion recognition topic and obtaining an emotion comprehensive score by using the emotion recognition topic score and the initial emotion score.
Further, the emotion scoring unit is configured to calculate an emotion recognition topic score according to the emotion recognition topic, and obtain an emotion comprehensive score by using the emotion recognition topic score and the initial emotion score, and includes:
the emotion scoring unit is used for obtaining emotion recognition topic scores by utilizing the emotion recognition topics and obtaining emotion comprehensive scores according to the emotion recognition topic scores, the initial emotion scores and a comprehensive score calculation formula
Rcur = (1 - λ)Rpre + λQ,
wherein Rcur is the emotion comprehensive score, Rpre is the initial emotion score, Q is the emotion recognition topic score, and λ is the emotion recognition coefficient.
Further, the main control module comprises an entertainment matching unit, and the entertainment matching unit is used for obtaining an entertainment resource matching coefficient according to the comprehensive emotion score and obtaining corresponding entertainment resources by using the entertainment resource matching coefficient.
Further, the entertainment matching unit is configured to obtain an entertainment resource matching coefficient according to the comprehensive emotion score, and obtain a corresponding entertainment resource by using the entertainment resource matching coefficient, and includes:
the entertainment matching unit is used for obtaining an entertainment resource matching coefficient according to the calculation formula of the emotion comprehensive score and the entertainment resource matching coefficient, and obtaining the corresponding entertainment resource according to the entertainment resource matching coefficient and the matching formula, wherein the entertainment resource matching coefficient calculation formula is
[entertainment resource matching coefficient calculation formula, presented as an image in the original publication]
wherein Rcur is the emotion comprehensive score, R0 is the set full score of the initial emotion recognition topics, and r is the entertainment resource matching coefficient;
the matching formula is dcur = [r·d0], wherein r is the entertainment resource matching coefficient, d0 is the set total number of entertainment sub-resource libraries, and dcur is the corresponding entertainment resource.
Furthermore, the main control module also comprises an emotion scoring library unit, an emotion recognition question library unit and an entertainment resource library unit;
the emotion scoring library unit is used for storing emotion comprehensive scores, and the previous emotion comprehensive score is used as the next initial emotion score;
the emotion recognition question bank unit comprises emotion recognition sub-question banks of different types of emotions and is used for storing emotion recognition question audios of the different types of emotions, corresponding answer option texts and corresponding scores;
the entertainment resource library unit comprises entertainment sub-resource libraries with different types of emotions and is used for storing entertainment audios corresponding to the different types of emotions.
The invention also provides an emotion recognition device of the voice interaction plush toy, which comprises a voice input module, a voice recognition module, a main control module, a voice output module, a power supply module and a base;
the voice input module is used for acquiring voice information;
the voice recognition module is used for recognizing the voice information, judging whether the voice information meets the triggering condition of the main control module or not, and enabling the main control module to operate if the voice information meets the triggering condition;
the main control module is used for acquiring an initial emotion score and an emotion recognition coefficient, acquiring an emotion recognition topic according to the emotion recognition coefficient, acquiring an emotion comprehensive score according to the emotion recognition topic and the initial emotion score, and acquiring corresponding entertainment resources according to the emotion comprehensive score;
the voice output module is used for outputting the entertainment resources;
the power supply module is used for supplying power;
the base is used for fixing the voice input module, the voice recognition module, the main control module, the voice output module and the power supply module.
The invention also provides a method for the emotion recognition system of the voice interaction plush toy according to any one of the technical schemes, which comprises the following steps:
acquiring voice information, identifying the voice information, and judging whether the voice information meets a trigger condition;
if the trigger condition is met, acquiring an initial emotion score and an emotion recognition coefficient, acquiring an emotion recognition topic according to the emotion recognition coefficient, and acquiring an emotion comprehensive score according to the emotion recognition topic and the initial emotion score;
and acquiring corresponding entertainment resources according to the comprehensive emotion scores.
The beneficial effects of the above embodiments are as follows. In the emotion recognition system of the voice interaction plush toy, the voice input module obtains voice information, and the voice recognition module recognizes the voice information and judges whether it meets the trigger condition of the main control module. If the trigger condition is met, the main control module obtains an initial emotion score and an emotion recognition coefficient, obtains emotion recognition topics according to the emotion recognition coefficient, obtains an emotion comprehensive score according to the emotion recognition topics and the initial emotion score, and obtains corresponding entertainment resources according to the emotion comprehensive score; the voice output module then outputs the entertainment resources. The system thus realizes an emotion recognition function for the plush toy and matches entertainment resources to the different emotion recognition results, i.e. the emotion comprehensive scores, to achieve a soothing effect, while reducing the cost of emotion recognition of the plush toy and keeping the structure simple.
Drawings
FIG. 1 is a block diagram showing the construction of an emotion recognition system for a voice interactive plush toy according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an emotion recognition device for a voice interactive plush toy according to an embodiment of the invention;
FIG. 3 is a flow chart of a method for recognizing emotion of a voice interaction plush toy according to an embodiment of the invention.
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate preferred embodiments of the invention and together with the description, serve to explain the principles of the invention and not to limit the scope of the invention.
An embodiment of the invention provides an emotion recognition system of a voice interaction plush toy; a structural block diagram of the system is shown in FIG. 1, and the system comprises a voice input module 110, a voice recognition module 120, a main control module 130 and a voice output module 140;
the voice input module 110 is configured to obtain voice information;
the voice recognition module 120 is configured to recognize the voice information, determine whether the voice information meets a trigger condition of the main control module, and if the voice information meets the trigger condition, enable the main control module to operate;
the main control module 130 is configured to obtain an initial emotion score and an emotion recognition coefficient, obtain an emotion recognition topic according to the emotion recognition coefficient, obtain an emotion comprehensive score according to the emotion recognition topic and the initial emotion score, and obtain a corresponding entertainment resource according to the emotion comprehensive score;
the voice output module 140 is configured to output the entertainment resource.
It should be noted that the system carries out voice interaction with the user: the user's emotional state is recognized by means of the voice recognition module and the main control module in the form of an emotion comprehensive score, and corresponding entertainment resources are matched according to that score. This reduces development cost, improves emotion recognition efficiency, enhances the user's experience, and achieves the emotion-soothing effect of the voice interaction plush toy.
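To make the data flow between the four modules concrete, the following Python sketch chains them in a single interaction: input, trigger check, emotion assessment, matching and output. It is a minimal illustration only; every function body is a placeholder assumed for this example (the patent does not specify a trigger phrase, file names or any code), and the detailed score calculations are sketched in the later embodiments.

# Minimal, illustrative sketch of the four-module pipeline (not the patent's code).
# All function bodies are placeholder stubs assumed for this example.

def get_voice_info() -> str:
    """Voice input module: capture the user's voice and return it as recognized text."""
    return input("user> ")  # stand-in for a microphone plus a speech recognizer

def meets_trigger(text: str) -> bool:
    """Voice recognition module: judge whether the voice information meets the trigger condition."""
    return text.strip() != ""  # the actual trigger condition is not specified; this is a placeholder

def run_main_control() -> float:
    """Main control module: initial score + coefficient -> topics -> emotion comprehensive score."""
    return 14.0  # placeholder composite score (the calculation is sketched in later embodiments)

def match_entertainment(score: float) -> str:
    """Main control module: map the emotion comprehensive score to an entertainment resource."""
    return "entertainment_audio_for_score_%d.mp3" % round(score)  # hypothetical file name

def output_entertainment(resource: str) -> None:
    """Voice output module: play the selected entertainment audio."""
    print("[speaker] playing", resource)

if __name__ == "__main__":
    text = get_voice_info()
    if meets_trigger(text):
        output_entertainment(match_entertainment(run_main_control()))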
As a preferred embodiment, the main control module includes a topic acquisition unit, and the topic acquisition unit is configured to obtain an emotion recognition coefficient according to the running time of the main control module, and obtain an emotion recognition topic according to the emotion recognition coefficient.
In a specific embodiment, the emotion recognition topic is obtained from an emotion recognition topic library unit of the main control module.
As a preferred embodiment, the topic obtaining unit is configured to obtain an emotion recognition coefficient according to a running time of a main control module, and obtain an emotion recognition topic according to the emotion recognition coefficient, and includes:
the topic acquisition unit is used for obtaining an emotion recognition coefficient through calculation according to the running time of the main control module and a coefficient calculation formula, obtaining the number of emotion recognition topics according to the emotion recognition coefficient and a topic number calculation formula, and obtaining emotion recognition topics according to the number of emotion recognition topics and a set proportion, wherein the coefficient calculation formula is
[coefficient calculation formula, presented as an image in the original publication]
wherein λ is the emotion recognition coefficient, T0 is the set emotion retention time, T1 is the set emotion forgetting time, and t is the running time of the main control module;
the question number calculation formula is k = [λ·k0], wherein k is the number of emotion recognition topics, k0 is the set number of initial emotion recognition topics, and λ is the emotion recognition coefficient.
In a specific embodiment, with T0 = 2 hours, T1 = 10 hours and t = 6 hours, the coefficient calculation formula gives λ = 0.5. The set number of initial emotion recognition topics is 10, i.e. k0 = 10, and there are 3 emotion recognition sub-question banks with a set ratio of 1:1:3, from which 2, 2 and 6 topics are respectively allocated, i.e. k01 = 2, k02 = 2, k03 = 6. According to the emotion recognition coefficient and the question number calculation formula, k = 5 topics are to be obtained in total from the emotion recognition sub-question banks; applying the coefficient to each sub-question bank [per-bank formula presented as an image in the original publication] gives k1 = 1, k2 = 1 and k3 = 3, i.e. 1, 1 and 3 emotion recognition topics are obtained from the 3 emotion recognition sub-question banks respectively.
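The coefficient calculation formula itself appears only as an image in the original publication, so the sketch below assumes a clamped linear decay λ = (T1 - t) / (T1 - T0), which reproduces λ = 0.5 for T0 = 2 h, T1 = 10 h, t = 6 h, and reads the bracket operator [·] as rounding to the nearest integer. Both readings are assumptions for illustration, not the patent's stated definitions.

# Sketch of the topic acquisition unit's calculations under two assumptions:
#  - the coefficient formula is assumed to be a clamped linear decay (T1 - t) / (T1 - T0)
#  - the bracket operator [.] is assumed to mean "round to the nearest integer"

def emotion_recognition_coefficient(t: float, t_retain: float = 2.0, t_forget: float = 10.0) -> float:
    """Assumed coefficient formula: lambda in [0, 1], 1 at t <= T0, 0 at t >= T1."""
    lam = (t_forget - t) / (t_forget - t_retain)
    return max(0.0, min(1.0, lam))

def topics_per_bank(lam: float, bank_sizes: list[int]) -> list[int]:
    """Apply k_i = [lambda * k_0i] to each emotion recognition sub-question bank."""
    return [round(lam * k0i) for k0i in bank_sizes]

lam = emotion_recognition_coefficient(t=6.0)    # -> 0.5, as in the embodiment
k0 = 10                                         # set number of initial emotion recognition topics
k = round(lam * k0)                             # total topics: [0.5 * 10] = 5
per_bank = topics_per_bank(lam, [2, 2, 6])      # ratio 1:1:3 of 10 topics -> [1, 1, 3]
print(lam, k, per_bank)                         # 0.5 5 [1, 1, 3]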
As a preferred embodiment, the main control module includes a sentiment scoring unit, and the sentiment scoring unit is configured to calculate a sentiment identification topic score according to the sentiment identification topic, and obtain a sentiment composite score by using the sentiment identification topic score and the initial sentiment score.
In a specific embodiment, the emotion recognition topic score is obtained from the emotion recognition question bank unit of the main control module according to the 5 emotion recognition topics.
As a preferred embodiment, the emotion scoring unit is configured to calculate an emotion recognition topic score according to the emotion recognition topic, and obtain an emotion comprehensive score by using the emotion recognition topic score and the initial emotion score, and includes:
the emotion scoring unit is used for obtaining emotion recognition topic scores by utilizing the emotion recognition topics and obtaining emotion comprehensive scores according to the emotion recognition topic scores, the initial emotion scores and a comprehensive score calculation formula
Rcur = (1 - λ)Rpre + λQ,
wherein Rcur is the emotion comprehensive score, Rpre is the initial emotion score, Q is the emotion recognition topic score, and λ is the emotion recognition coefficient.
In a specific embodiment, when Rpre = 12 and Q = 16 (with λ = 0.5), Rcur = 14 is obtained.
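The comprehensive score calculation formula is given explicitly above; a minimal sketch with this embodiment's numbers plugged in:

def composite_emotion_score(r_pre: float, q: float, lam: float) -> float:
    """Rcur = (1 - lambda) * Rpre + lambda * Q: blend the previous score with the new topic score."""
    return (1.0 - lam) * r_pre + lam * q

print(composite_emotion_score(r_pre=12, q=16, lam=0.5))   # -> 14.0, matching the embodiment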
As a preferred embodiment, the main control module includes an entertainment matching unit, and the entertainment matching unit is configured to obtain an entertainment resource matching coefficient according to the emotion comprehensive score, and obtain a corresponding entertainment resource by using the entertainment resource matching coefficient.
In one embodiment, the entertainment resource is obtained from an entertainment resource library unit of the master control module.
As a preferred embodiment, the entertainment matching unit is configured to obtain an entertainment resource matching coefficient according to the comprehensive emotion score, and obtain a corresponding entertainment resource by using the entertainment resource matching coefficient, and includes:
the entertainment matching unit is used for obtaining an entertainment resource matching coefficient according to the calculation formula of the emotion comprehensive score and the entertainment resource matching coefficient, and obtaining the corresponding entertainment resource according to the entertainment resource matching coefficient and the matching formula, wherein the entertainment resource matching coefficient calculation formula is
[entertainment resource matching coefficient calculation formula, presented as an image in the original publication]
wherein Rcur is the emotion comprehensive score, R0 is the set full score of the initial emotion recognition topics, and r is the entertainment resource matching coefficient;
the matching formula is dcur = [r·d0], wherein r is the entertainment resource matching coefficient, d0 is the set total number of entertainment sub-resource libraries, and dcur is the corresponding entertainment resource.
In a specific embodiment, the set full score of the initial emotion recognition topics is R0 = k0 × 3 = 30 points; with d0 = 5, the matching formula gives dcur = 2, and the entertainment resource is obtained from the corresponding No. 2 entertainment sub-resource library.
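The entertainment resource matching coefficient formula is likewise only an image in the original publication. The sketch below assumes the simple ratio r = Rcur / R0 and reads the bracket in dcur = [r·d0] as rounding; with Rcur = 14, R0 = 30 and d0 = 5 this reproduces the No. 2 sub-resource library of this embodiment (14/30 × 5 ≈ 2.3 → 2). The ratio form and the rounding are assumptions, not the patent's stated definition.

def entertainment_sub_library(r_cur: float, r_full: float, d0: int) -> int:
    """Assumed matching: r = Rcur / R0, then d_cur = [r * d0] rounded to the nearest integer."""
    r = r_cur / r_full                 # entertainment resource matching coefficient (assumed ratio form)
    d_cur = round(r * d0)              # matching formula d_cur = [r * d0]
    return max(1, min(d0, d_cur))      # keep the index inside the available sub-libraries

print(entertainment_sub_library(r_cur=14, r_full=30, d0=5))   # -> 2, as in the embodiment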
As a preferred embodiment, the main control module further comprises an emotion scoring library unit, an emotion recognition question library unit and an entertainment resource library unit;
the emotion scoring library unit is used for storing emotion comprehensive scores, and the previous emotion comprehensive score is used as the next initial emotion score;
the emotion recognition question bank unit comprises emotion recognition sub-question banks of different types of emotions and is used for storing emotion recognition question audios of the different types of emotions, corresponding answer option texts and corresponding scores;
the entertainment resource library unit comprises entertainment sub-resource libraries with different types of emotions and is used for storing entertainment audios corresponding to the different types of emotions.
In a specific embodiment, the number of emotion recognition sub-question banks is 3, and one of the emotion recognition question audios is: "Master, did your work go smoothly today? Please choose: A. very smoothly; B. so-so; C. not smoothly." The corresponding answer option texts carry scores of 3 points for A, 2 points for B and 1 point for C, and the number of entertainment sub-resource libraries is 5.
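For illustration, one way to lay out such an emotion recognition sub-question bank as plain data is sketched below. The field names, the audio file name and the scoring helper are assumptions; only the example question, its options and the 3/2/1 scoring come from this embodiment.

# Illustrative layout of one emotion recognition sub-question bank (field names assumed).
work_bank = [
    {
        "audio": "q_work_smooth_today.wav",   # emotion recognition question audio (hypothetical file name)
        "options": {                          # answer option texts and their scores
            "A": ("very smoothly", 3),
            "B": ("so-so", 2),
            "C": ("not smoothly", 1),
        },
    },
]

def score_answer(question: dict, choice: str) -> int:
    """Return the score attached to the recognized answer option ('A', 'B' or 'C')."""
    return question["options"][choice][1]

print(score_answer(work_bank[0], "B"))   # -> 2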
The embodiment of the invention provides an emotion recognition device of a voice interaction plush toy, which comprises a voice input module 201, a voice recognition module 202, a main control module 203, a voice output module 204, a power supply module 205 and a base 206;
the voice input module 201 is configured to obtain voice information;
the voice recognition module 202 is configured to recognize the voice information, determine whether the voice information meets a trigger condition of the main control module, and if the voice information meets the trigger condition, enable the main control module to operate;
the main control module 203 is configured to obtain an initial emotion score and an emotion recognition coefficient, obtain an emotion recognition topic according to the emotion recognition coefficient, obtain an emotion comprehensive score according to the emotion recognition topic and the initial emotion score, and obtain a corresponding entertainment resource according to the emotion comprehensive score;
the voice output module 204 is configured to output the entertainment resource;
the power supply module 205 is used for supplying power;
the base 206 is used for fixing the voice input module, the voice recognition module, the main control module, the voice output module and the power supply module.
In a specific embodiment, a schematic diagram of the emotion recognition device of the voice interaction plush toy is shown in FIG. 2. The voice input module 201 includes a microphone that converts the acoustic signal into an electrical signal and feeds the obtained voice information into the voice recognition module 202, which can recognize voice commands such as "A", "B", "C", "good", "hayahou" and the like. The voice recognition module 202 is connected to the main control module 203, the main control module 203 is connected to the voice output module 204, and the voice output module 204 includes a speaker for playing the corresponding entertainment audio. The power supply module 205 supplies power, and the base 206 fixes the voice input module 201, the voice recognition module 202, the main control module 203, the voice output module 204 and the power supply module 205.
An embodiment of the present invention provides a method for the emotion recognition system of a voice interaction plush toy according to any one of the above technical solutions; a flow chart of the method is shown in FIG. 3, and the method includes:
step S1, acquiring voice information, recognizing the voice information, and judging whether the voice information meets the triggering condition;
step S2, if the trigger condition is met, acquiring an initial emotion score and an emotion recognition coefficient, acquiring an emotion recognition topic according to the emotion recognition coefficient, and acquiring an emotion comprehensive score according to the emotion recognition topic and the initial emotion score;
and step S3, obtaining corresponding entertainment resources according to the comprehensive emotion scores.
In summary, the invention discloses an emotion recognition system, device and method for a voice interaction plush toy. Voice information is obtained through the voice input module and is recognized and judged by the voice recognition module; if the trigger condition of the main control module is met, the main control module obtains an initial emotion score and an emotion recognition coefficient, obtains emotion recognition topics according to the emotion recognition coefficient, obtains an emotion comprehensive score according to the emotion recognition topics and the initial emotion score, and obtains corresponding entertainment resources according to the emotion comprehensive score, which are then output through the voice output module. The emotion recognition function of the plush toy is thereby realized, and entertainment resources are matched to the different emotion recognition results, i.e. the emotion comprehensive scores, to achieve an emotion-soothing effect, while the cost of emotion recognition of the plush toy is reduced and the structure remains simple.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (10)

1. An emotion recognition system of a voice interaction plush toy is characterized by comprising a voice input module, a voice recognition module, a main control module and a voice output module;
the voice input module is used for acquiring voice information;
the voice recognition module is used for recognizing the voice information, judging whether the voice information meets the triggering condition of the main control module or not, and enabling the main control module to operate if the voice information meets the triggering condition;
the main control module is used for acquiring an initial emotion score and an emotion recognition coefficient, acquiring an emotion recognition topic according to the emotion recognition coefficient, acquiring an emotion comprehensive score according to the emotion recognition topic and the initial emotion score, and acquiring corresponding entertainment resources according to the emotion comprehensive score;
the voice output module is used for outputting the entertainment resources.
2. The system of claim 1, wherein the main control module comprises a topic acquisition unit, and the topic acquisition unit is configured to obtain an emotion recognition coefficient according to a running time of the main control module, and obtain an emotion recognition topic according to the emotion recognition coefficient.
3. The emotion recognition system of a voice interaction plush toy as claimed in claim 2, wherein the topic acquisition unit is configured to obtain an emotion recognition coefficient according to the running time of the main control module, and to acquire an emotion recognition topic according to the emotion recognition coefficient, and comprises:
the topic acquisition unit is used for obtaining an emotion recognition coefficient through calculation according to the running time of the main control module and a coefficient calculation formula, obtaining the number of emotion recognition topics according to the emotion recognition coefficient and a topic number calculation formula, and obtaining emotion recognition topics according to the number of emotion recognition topics and a set proportion, wherein the coefficient calculation formula is
[coefficient calculation formula, presented as an image in the original publication]
wherein λ is the emotion recognition coefficient, T0 is the set emotion retention time, T1 is the set emotion forgetting time, and t is the running time of the main control module;
the question number calculation formula is k = [λ·k0], wherein k is the number of emotion recognition topics, k0 is the set number of initial emotion recognition topics, and λ is the emotion recognition coefficient.
4. The system of claim 1, wherein the main control module comprises an emotion scoring unit, and the emotion scoring unit is configured to calculate an emotion recognition topic score according to the emotion recognition topic, and obtain an emotion comprehensive score by using the emotion recognition topic score and the initial emotion score.
5. The emotion recognition system of claim 4, wherein the emotion scoring unit is configured to calculate an emotion recognition topic score based on the emotion recognition topic, and obtain an emotion composite score using the emotion recognition topic score and the initial emotion score, and comprises:
the emotion scoring unit is used for obtaining emotion recognition topic scores by utilizing the emotion recognition topics and obtaining emotion comprehensive scores according to the emotion recognition topic scores, the initial emotion scores and a comprehensive score calculation formula
Rcur = (1 - λ)Rpre + λQ,
wherein Rcur is the emotion comprehensive score, Rpre is the initial emotion score, Q is the emotion recognition topic score, and λ is the emotion recognition coefficient.
6. The emotion recognition system of a voice interaction plush toy as claimed in claim 1, wherein the main control module comprises an entertainment matching unit, the entertainment matching unit is configured to obtain an entertainment resource matching coefficient according to the comprehensive emotion score, and obtain a corresponding entertainment resource by using the entertainment resource matching coefficient.
7. The emotion recognition system of claim 6, wherein the entertainment matching unit is configured to obtain an entertainment resource matching coefficient according to the comprehensive emotion score, and obtain a corresponding entertainment resource by using the entertainment resource matching coefficient, and the system comprises:
the entertainment matching unit is used for obtaining an entertainment resource matching coefficient according to the calculation formula of the emotion comprehensive score and the entertainment resource matching coefficient, and obtaining the corresponding entertainment resource according to the entertainment resource matching coefficient and the matching formula, wherein the entertainment resource matching coefficient calculation formula is
[entertainment resource matching coefficient calculation formula, presented as an image in the original publication]
wherein Rcur is the emotion comprehensive score, R0 is the set full score of the initial emotion recognition topics, and r is the entertainment resource matching coefficient;
the matching formula is dcur = [r·d0], wherein r is the entertainment resource matching coefficient, d0 is the set total number of entertainment sub-resource libraries, and dcur is the corresponding entertainment resource.
8. The emotion recognition system of a voice-interactive plush toy, as claimed in claim 1, wherein said main control module further comprises an emotion scoring library unit, an emotion recognition question library unit and an entertainment resource library unit;
the emotion scoring library unit is used for storing emotion comprehensive scores, and the previous emotion comprehensive score is used as the next initial emotion score;
the emotion recognition question bank unit comprises emotion recognition sub-question banks of different types of emotions and is used for storing emotion recognition question audios of the different types of emotions, corresponding answer option texts and corresponding scores;
the entertainment resource library unit comprises entertainment sub-resource libraries with different types of emotions and is used for storing entertainment audios corresponding to the different types of emotions.
9. An emotion recognition device of a voice interaction plush toy is characterized by comprising a voice input module, a voice recognition module, a main control module, a voice output module, a power supply module and a base;
the voice input module is used for acquiring voice information;
the voice recognition module is used for recognizing the voice information, judging whether the voice information meets the triggering condition of the main control module or not, and enabling the main control module to operate if the voice information meets the triggering condition;
the main control module is used for acquiring an initial emotion score and an emotion recognition coefficient, acquiring an emotion recognition topic according to the emotion recognition coefficient, acquiring an emotion comprehensive score according to the emotion recognition topic and the initial emotion score, and acquiring corresponding entertainment resources according to the emotion comprehensive score;
the voice output module is used for outputting the entertainment resources;
the power supply module is used for supplying power;
the base is used for fixing the voice input module, the voice recognition module, the main control module, the voice output module and the power supply module.
10. A method for the emotion recognition system of a voice interaction plush toy according to any one of claims 1 to 8, characterized by comprising:
acquiring voice information, identifying the voice information, and judging whether the voice information meets a trigger condition;
if the trigger condition is met, acquiring an initial emotion score and an emotion recognition coefficient, acquiring an emotion recognition topic according to the emotion recognition coefficient, and acquiring an emotion comprehensive score according to the emotion recognition topic and the initial emotion score;
and acquiring corresponding entertainment resources according to the comprehensive emotion scores.
CN202111256887.7A 2021-10-27 2021-10-27 Emotion recognition system, device and method for voice interaction plush toy Active CN113870902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111256887.7A CN113870902B (en) 2021-10-27 2021-10-27 Emotion recognition system, device and method for voice interaction plush toy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111256887.7A CN113870902B (en) 2021-10-27 2021-10-27 Emotion recognition system, device and method for voice interaction plush toy

Publications (2)

Publication Number Publication Date
CN113870902A true CN113870902A (en) 2021-12-31
CN113870902B CN113870902B (en) 2023-03-14

Family

ID=78997981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111256887.7A Active CN113870902B (en) 2021-10-27 2021-10-27 Emotion recognition system, device and method for voice interaction plush toy

Country Status (1)

Country Link
CN (1) CN113870902B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017084197A1 (en) * 2015-11-18 2017-05-26 深圳创维-Rgb电子有限公司 Smart home control method and system based on emotion recognition
CN106855879A (en) * 2016-12-14 2017-06-16 竹间智能科技(上海)有限公司 The robot that artificial intelligence psychology is seeked advice from music
CN108115695A (en) * 2016-11-28 2018-06-05 沈阳新松机器人自动化股份有限公司 A kind of emotional color expression system and robot
CN110085220A (en) * 2018-01-26 2019-08-02 上海智臻智能网络科技股份有限公司 Intelligent interaction device
CN111739516A (en) * 2020-06-19 2020-10-02 中国—东盟信息港股份有限公司 Speech recognition system for intelligent customer service call
CN111739559A (en) * 2020-05-07 2020-10-02 北京捷通华声科技股份有限公司 Speech early warning method, device, equipment and storage medium
WO2021086589A1 (en) * 2019-10-29 2021-05-06 Microsoft Technology Licensing, Llc Providing a response in automated chatting
CN112951233A (en) * 2021-03-30 2021-06-11 平安科技(深圳)有限公司 Voice question and answer method and device, electronic equipment and readable storage medium


Also Published As

Publication number Publication date
CN113870902B (en) 2023-03-14

Similar Documents

Publication Publication Date Title
Barker et al. The fifth 'CHiME' speech separation and recognition challenge: dataset, task and baselines
US10381016B2 (en) Methods and apparatus for altering audio output signals
US10068573B1 (en) Approaches for voice-activated audio commands
Barker et al. The PASCAL CHiME speech separation and recognition challenge
CN107464555B (en) Method, computing device and medium for enhancing audio data including speech
US7603273B2 (en) Simultaneous multi-user real-time voice recognition system
EP3721605A1 (en) Streaming radio with personalized content integration
CN106796787A (en) The linguistic context carried out using preceding dialog behavior in natural language processing is explained
CN110602624B (en) Audio testing method and device, storage medium and electronic equipment
JP2016036500A (en) Voice output device, network system, voice output method, and voice output program
KR20200113105A (en) Electronic device providing a response and method of operating the same
CN106774845B (en) intelligent interaction method, device and terminal equipment
CN108885869A (en) The playback of audio data of the control comprising voice
WO2019242414A1 (en) Voice processing method and apparatus, storage medium, and electronic device
CN111261195A (en) Audio testing method and device, storage medium and electronic equipment
WO2020211006A1 (en) Speech recognition method and apparatus, storage medium and electronic device
Barker et al. The CHiME challenges: Robust speech recognition in everyday environments
US10424292B1 (en) System for recognizing and responding to environmental noises
CN109460548B (en) Intelligent robot-oriented story data processing method and system
Hamsa et al. An enhanced emotion recognition algorithm using pitch correlogram, deep sparse matrix representation and random forest classifier
CN101013571A (en) Interaction method and system for using voice order
CN113870902B (en) Emotion recognition system, device and method for voice interaction plush toy
CN112906369A (en) Lyric file generation method and device
Liu et al. Emotional feature selection of speaker-independent speech based on correlation analysis and fisher
Liu et al. An intelligent personal assistant robot: BoBi secretary

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230719

Address after: 725021 Yanhe Commercial Building 2, Fenghuang Community, Yuebinnan Avenue, Hengkou Demonstration Area (Experimental Area), Ankang City, Shaanxi Province

Patentee after: Ankang Qinba Manchuang Toys Industry Operation Management Co.,Ltd.

Address before: 725021 zone 301, building D, Yanhe commercial building 2, Fenghuang community, yuebinnan Avenue, hengkou demonstration area (experimental area), Ankang City, Shaanxi Province

Patentee before: Ankang huizhiqu toy Technology Co.,Ltd.