JPWO2023286224A5 - - Google Patents
- Publication number
- JPWO2023286224A5 (application JP2022507774A)
- Authority
- JP
- Japan
- Prior art keywords
- response
- conversation
- evaluation value
- song
- evaluation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Description
To solve this problem, the first invention provides a conversation processing program that causes a computer to execute the following steps. In the first step, the program analyzes the conversation partner's response, captured by a microphone, to a question output from a speaker. In the second step, it evaluates the response to each question and assigns an evaluation value according to a predetermined criterion indicating whether the response is negative. In the third step, when a cumulative evaluation value, obtained by accumulating the evaluation values in time series, reaches a predetermined threshold, it instructs that a song be played from the speaker in the middle of the conversation. In the fourth step, it adjusts the presentation frequency of the question corresponding to a given response according to the sign of that response's evaluation value.
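For illustration only (this sketch is not part of the patent disclosure), the four steps could be combined into a single loop as below. All names and values — `evaluate_response`, `NEGATIVE_WORDS`, the threshold of 3, and the frequency multipliers — are hypothetical assumptions:

```python
# Hypothetical sketch of the four claimed steps; the keyword-based evaluator,
# the sign convention (negative response -> -1), and all constants are
# illustrative assumptions, not taken from the patent.
THRESHOLD = 3  # predetermined threshold for the cumulative evaluation value
NEGATIVE_WORDS = {"no", "tired", "sad", "stop"}

def evaluate_response(text: str) -> int:
    """Step 2: assign -1 if the response reads as negative, +1 otherwise."""
    return -1 if set(text.lower().split()) & NEGATIVE_WORDS else 1

def run_conversation(questions, get_response, play_song):
    cumulative = 0                           # time-series accumulation (step 3)
    frequency = {q: 1.0 for q in questions}  # presentation frequency (step 4)
    for question in questions:
        response = get_response(question)    # step 1: response via microphone
        value = evaluate_response(response)  # step 2: evaluation value
        cumulative += value
        if cumulative <= -THRESHOLD:         # step 3: insert a song mid-conversation
            play_song()
            cumulative = 0
        # Step 4: the sign of the evaluation value adjusts how often the
        # question that provoked it is presented again.
        frequency[question] *= 0.5 if value < 0 else 1.2
    return frequency
```

Under this assumed convention, repeated negative responses drive the cumulative value down until a song is inserted, and the questions that provoked them are presented less often afterwards.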
In the first invention, a fifth step may be provided that captures audio from the microphone while the song is played through the speaker and identifies the conversation partner's reaction during playback by computing the difference between the captured audio waveform and the song's waveform.
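One way to read this fifth step is that subtracting the known song waveform from the microphone signal leaves a residual containing whatever the partner added (singing or speaking along). The sketch below is an assumption-laden illustration; the `gain` and `threshold` parameters and the RMS criterion are hypothetical, not from the patent:

```python
import numpy as np

def partner_reacted(mic: np.ndarray, song: np.ndarray,
                    gain: float = 1.0, threshold: float = 0.1) -> bool:
    """Sketch of the fifth step: subtract the known song waveform from the
    microphone signal; if significant residual energy remains, treat it as
    a reaction from the conversation partner. `gain` models the
    speaker-to-microphone path and `threshold` is an assumed tuning value."""
    n = min(len(mic), len(song))
    residual = mic[:n] - gain * song[:n]      # remove the song's contribution
    rms = float(np.sqrt(np.mean(residual ** 2)))  # energy of what is left
    return rms > threshold
```

In practice the microphone signal would be delayed and filtered relative to the song, so a real implementation would need alignment and echo cancellation; this sketch only shows the difference-based idea.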
In the first invention, the third step may change the length or type of the song to be played from the speaker according to the evaluation value. A sixth step may also be provided that directs the actions of a character conversing with the human according to the evaluation value.
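The patent does not say how length or type would be selected; one hypothetical reading is that a more negative cumulative value calls for a longer, more cheerful song. The `library` structure and both selection rules below are assumptions for illustration:

```python
def choose_song(cumulative_value: int, library: dict):
    """Hypothetical selection rule: when the accumulated mood is low,
    pick the longest cheerful song; otherwise pick a short calm one.
    `library` maps a mood label to a list of (title, seconds) tuples —
    an assumed structure, not from the patent."""
    mood = "cheerful" if cumulative_value <= -3 else "calm"
    songs = sorted(library[mood], key=lambda s: s[1])  # sort by duration
    return songs[-1] if mood == "cheerful" else songs[0]
```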
The second invention provides a conversation processing system having a question generation unit, a response analysis unit, a response evaluation unit, and a song instruction unit. The question generation unit generates questions to be output from a speaker. The response analysis unit analyzes the conversation partner's response, captured by a microphone, to a question output from the speaker. The response evaluation unit evaluates the response to each question and assigns an evaluation value according to a predetermined criterion indicating whether the response is negative. The song instruction unit instructs that a song be played from the speaker in the middle of the conversation when a cumulative evaluation value, obtained by accumulating the evaluation values in time series, reaches a predetermined threshold. Here, the question generation unit adjusts the presentation frequency of the question corresponding to a given response according to the sign of that response's evaluation value.
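The same pipeline can be pictured as cooperating components. The sketch below (all class names, the keyword evaluator, and the weight multipliers are hypothetical) focuses on how the question generation unit could realize frequency adjustment through per-question sampling weights:

```python
import random

class QuestionGenerator:
    """Presentation frequency modeled as a per-question sampling weight."""
    def __init__(self, questions):
        self.weights = {q: 1.0 for q in questions}

    def next_question(self):
        qs = list(self.weights)
        return random.choices(qs, weights=[self.weights[q] for q in qs])[0]

    def adjust(self, question, value):
        # The sign of the evaluation value drives the frequency adjustment.
        self.weights[question] *= 0.5 if value < 0 else 1.2

class ResponseEvaluator:
    NEGATIVE = {"no", "tired", "sad", "stop"}  # assumed criterion
    def evaluate(self, response):
        return -1 if set(response.lower().split()) & self.NEGATIVE else 1

class SongInstructor:
    def __init__(self, threshold=3):
        self.cumulative = 0
        self.threshold = threshold
    def update(self, value):
        """Accumulate in time series; return True when a song should be inserted."""
        self.cumulative += value
        if self.cumulative <= -self.threshold:
            self.cumulative = 0
            return True
        return False
```

Weighted sampling is only one plausible mechanism for "adjusting presentation frequency"; the patent itself does not specify one.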
In the second invention, the song instruction unit may change the length or type of the song to be played from the speaker according to the evaluation value. An action instruction unit may also be provided that directs the actions of a character conversing with the human according to the evaluation value.
The third invention provides a conversational robot having a speaker, a microphone, and a song playback unit. The speaker outputs questions and songs to the conversation partner. The microphone captures the conversation partner's response to a question output from the speaker. The song playback unit inserts a song into the middle of the conversation and plays it from the speaker when the cumulative evaluation value reaches a predetermined threshold. Here, the cumulative evaluation value is obtained by accumulating the evaluation values in time series, and each evaluation value is obtained by evaluating the response to a question according to a predetermined criterion indicating whether the response captured by the microphone is negative. Furthermore, the presentation frequency of the question corresponding to a given response is adjusted according to the sign of that response's evaluation value.
Claims (22)
1. A conversation processing program causing a computer to execute processing comprising:
a first step of analyzing a conversation partner's response, captured by a microphone, to a question output from a speaker;
a second step of evaluating the response to each question and assigning an evaluation value according to a predetermined criterion indicating whether the response is negative;
a third step of instructing that a song be played from the speaker in the middle of the conversation when a cumulative evaluation value, obtained by accumulating the evaluation values in time series, reaches a predetermined threshold; and
a fourth step of adjusting the presentation frequency of the question corresponding to a given response according to the sign of that response's evaluation value.
3. The conversation processing program according to claim 1, wherein the third step changes the length or type of the song to be played from the speaker according to the evaluation value.
A conversation processing system comprising:
a question generation unit that generates a question to be output from a speaker;
a response analysis unit that analyzes a conversation partner's response, captured by a microphone, to the question output from the speaker;
a response evaluation unit that evaluates the response to each question and assigns an evaluation value according to a predetermined criterion indicating whether the response is negative; and
a song instruction unit that instructs that a song be played from the speaker in the middle of the conversation when a cumulative evaluation value, obtained by accumulating the evaluation values in time series, reaches a predetermined threshold,
wherein the question generation unit adjusts the presentation frequency of the question corresponding to a given response according to the sign of that response's evaluation value.
A conversational robot comprising:
a speaker that outputs questions and songs to a conversation partner;
a microphone that captures the conversation partner's response to a question output from the speaker; and
a song playback unit that inserts a song into the middle of the conversation and plays it from the speaker when a cumulative evaluation value reaches a predetermined threshold,
wherein the cumulative evaluation value is obtained by accumulating evaluation values in time series,
each evaluation value is obtained by evaluating the response to each question according to a predetermined criterion indicating whether the response captured by the microphone is negative, and
the presentation frequency of the question corresponding to a given response is adjusted according to the sign of that response's evaluation value.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/026535 WO2023286224A1 (en) | 2021-07-14 | 2021-07-14 | Conversation processing program, conversation processing system, and conversational robot |
Publications (3)
Publication Number | Publication Date |
---|---|
JP7142403B1 (en) | 2022-09-27 |
JPWO2023286224A1 (en) | 2023-01-19 |
JPWO2023286224A5 (en) | 2023-06-20 |
Family
ID=83436666
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2022507774A (granted as JP7142403B1, active) | Speech processing program, speech processing system and conversational robot | 2021-07-14 | 2021-07-14 |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP7142403B1 (en) |
WO (1) | WO2023286224A1 (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002351305A (en) * | 2001-05-23 | 2002-12-06 | Apollo Seiko Ltd | Robot for language training |
JP4270911B2 (en) * | 2003-03-10 | 2009-06-03 | 富士通株式会社 | Patient monitoring device |
JP2015184563A (en) * | 2014-03-25 | 2015-10-22 | シャープ株式会社 | Interactive household electrical system, server device, interactive household electrical appliance, method for household electrical system to interact, and program for realizing the same by computer |
JP6876295B2 (en) * | 2017-04-14 | 2021-05-26 | 株式会社Nttドコモ | Server device |
- 2021-07-14: JP application JP2022507774A filed; granted as JP7142403B1 (active)
- 2021-07-14: WO application PCT/JP2021/026535 filed as WO2023286224A1 (status unknown)