WO2013175665A1 - Communication apparatus - Google Patents

Communication apparatus

Info

Publication number
WO2013175665A1
WO2013175665A1 (application PCT/JP2012/082453)
Authority
WO
WIPO (PCT)
Prior art keywords
voice
importance
emotion
unit
calculation unit
Prior art date
Application number
PCT/JP2012/082453
Other languages
French (fr)
Japanese (ja)
Inventor
有香 三好
Original Assignee
NEC Casio Mobile Communications, Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Casio Mobile Communications, Ltd.
Publication of WO2013175665A1 publication Critical patent/WO2013175665A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72427 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting games or graphical animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2250/00 Details of telephonic subscriber devices
    • H04M2250/74 Details of telephonic subscriber devices with voice recognition means

Definitions

  • The present invention relates to a communication device, a server, an information providing method, and a program that provide information related to voice.
  • An object of the present invention is to provide a communication device, a server, an information providing method, and a program that solve the above-described problems.
  • The communication device of the present invention includes: an emotion determination unit that determines the emotion of the speaker of the voice based on the input voice; a usage frequency calculation unit that divides the voice into words and calculates a usage frequency for each word; an importance calculation unit that calculates the importance of each word based on the emotion determined by the emotion determination unit and the usage frequency calculated by the usage frequency calculation unit; and a notification unit that notifies information according to the importance calculated by the importance calculation unit.
  • The program of the present invention causes a computer to execute: a procedure for determining the emotion of the speaker of the voice based on the input voice; a procedure for dividing the voice into words; a procedure for calculating a usage frequency for each of the divided words; a procedure for calculating the importance of each word based on the emotion and the usage frequency; and a procedure for notifying information according to the calculated importance.
  • FIG. 1 is a diagram showing an embodiment of a communication system having a communication device of the present invention.
  • The emotion determination unit 101 determines the emotion of the speaker of the voice based on the input voice.
  • This determination method uses a voice recognition function to judge the emotion of the speaker from the strength of the voice, its pitch, the intervals between sounds (how rushed the speech is), and so on. For example, if the voice is strong (the volume is high), the speaker's emotion is considered to be running high; likewise when the pitch of the voice is high, or when the intervals between sounds are short. Each determination element may be associated in advance with an emotion level as emotion-association data, with the emotion determination unit 101 judging the emotion based on this association (determination criterion); the determination method itself is not particularly limited here.
  • FIG. 3 is a diagram showing an example of determination criteria stored in advance in the emotion determination unit 101 shown in FIG. 2.
  • The emotion determination unit 101 shown in FIG. 2 stores emotion levels in association with voice strength, voice pitch, and sound interval.
  • The emotion level is expressed as "high", "medium", or "low"; the voice strength as "strong", "medium", or "weak"; the voice pitch as "high", "medium", or "low"; and the sound interval as "short", "medium", or "long".
  • These labels are used for convenience of explanation; appropriate numerical values representing each level may be used instead.
  • The number of levels is not limited to three. Such determination criteria may be set in advance by the user, or may be calculated statistically from actual data acquired in the past.
  • The voice input here may be voice during a call, or surrounding voice input to a microphone or the like (for example, voice spoken during a meeting, or a lecturer's voice during a lecture).
  • The notification unit 104 notifies information based on the importance calculated by the importance calculation unit 103.
  • The notification method may be display of information on a display or the like, transmission of information by sound through a speaker, or notification by vibration.
  • The information notified by the notification unit 104 may be, for example, information outlining the call content, or a summary characterizing it; in either case it includes words whose calculated importance is high.
  • The information notified by the notification unit 104 may simply be the word with the highest importance calculated by the importance calculation unit 103, or a predetermined number of words in descending order of importance.
  • When a score has been calculated, the notification unit 104 notifies the score.
  • The communication device 100-1 shown in FIG. 1 includes an emotion determination unit 101, a usage frequency calculation unit 102, an importance calculation unit 103, a notification unit 104, and a recording unit 105.
  • The internal configuration of the communication device 100-2 may be the same as that of the communication device 100-1, or it may have only a general call function.
  • The score calculated for the call (conversation) and the high-importance words calculated by the importance calculation unit 103 are displayed, for example, as "words that pleased the speaker".
  • Because the communication device 100-1 judges the emotion of the voice uttered by the speaker, calculates the usage frequency, and presents the importance calculated from both, the user can recognize what the speaker wanted to convey.
  • Emotion judgment, usage frequency calculation, and importance calculation may be performed by a device other than the communication device 100-1.
  • The recording unit 115 records the input voice.
  • The voice recorded by the recording unit 115 may be voice during a call, or surrounding voice input to a microphone or the like (for example, voice spoken during a meeting, or a lecturer's voice during a lecture).
  • The notification unit 114 notifies information transmitted from the server 300.
  • The notification method may be display of information on a display or the like, transmission of information by sound through a speaker, or notification by vibration.
  • The server 300 shown in FIG. 8 includes an emotion determination unit 301, a usage frequency calculation unit 302, an importance calculation unit 303, and a communication unit 304.
  • The importance calculation unit 303 calculates the importance of words based on the emotion determined by the emotion determination unit 301 and the usage frequency calculated by the usage frequency calculation unit 302.
  • The importance calculation method is the same as that of the importance calculation unit 103 shown in FIG. 2.
  • FIG. 11 is a sequence diagram for explaining the information providing method in the configuration shown in FIG. 8.
  • Here, a case is described as an example in which the communication device 110-1 records voice during a call and transmits voice data representing the recorded voice to the server 300.
  • When the communication unit 304 of the server 300 receives the voice data transmitted from the communication device 110-1, the emotion determination unit 301 determines, in step 15, the emotion of the speaker of the voice represented by the received voice data. If the voice data transmitted from the communication device 110-1 is compressed, the emotion determination unit 301 judges the emotion based on the voice data after it has been decompressed.
  • The usage frequency calculation unit 302 divides the voice data transmitted from the communication device 110-1 into words. Subsequently, in step 17, the usage frequency calculation unit 302 calculates the usage frequency of each divided word.
  • The importance calculation unit 303 calculates the importance of each word based on the emotion determined by the emotion determination unit 301 and the usage frequency calculated by the usage frequency calculation unit 302.
  • Because a device external to the communication device 110-1 performs the emotion judgment, usage frequency calculation, and importance calculation, even a communication device that does not itself have these processing functions can use the present invention.
  • A conversation conducted using a communication terminal (communication device) having a voice communication function is automatically judged and recorded using voice recognition.
  • For example, voice recognition technology can be used to count the number of times each word is used in a conversation and rank the words, so that after the call the user can be informed of the topics of the conversation and what the speaker most wanted to convey. The most frequently spoken word is regarded as the word the speaker most wanted to convey.
  • With an emotion judgment function based on the pitch and strength of the voice and the intervals between sounds (how rushed the speech is), the speaker's words and emotions can be linked, and the user can automatically check later what made the other party angry, or what the other party was most eager to convey (what is important).
  • Because the content of the conversation is judged automatically, the user can easily check, without extra work after the call, matters the user was not aware of, important matters the other party wanted to convey, and the other party's feelings during the conversation.
  • Each component provided in the communication device 100-1 and the server 300 described above may be implemented by a logic circuit produced according to the purpose.
  • Alternatively, a computer program (hereinafter, a program) describing the processing contents as procedures may be recorded on a recording medium readable by each of the communication device 100-1 and the server 300, and the program recorded on the recording medium may be read and executed by the communication device 100-1 and the server 300.
  • The recording medium readable by each of the communication device 100-1 and the server 300 includes transferable recording media such as a floppy (registered trademark) disk, a magneto-optical disk, a DVD, and a CD, as well as recording media built into the communication device 100-1 and the server 300.
  • The functions of the communication device 100-1 and the server 300 described above can be applied to devices such as an IC (Integrated Circuit) recorder, a mobile phone, a mobile terminal, a tablet or notebook PC (Personal Computer), a smartphone, a PDA (Personal Digital Assistant), and a game machine.
  • An emotion determination unit that determines the emotion of the speaker of the voice based on the input voice.
  • A usage frequency calculation unit that divides the voice into words and calculates a usage frequency for each word.
  • An importance calculation unit that calculates the importance of each word based on the emotion determined by the emotion determination unit and the usage frequency calculated by the usage frequency calculation unit.
  • A communication apparatus comprising a notification unit that notifies information according to the importance calculated by the importance calculation unit.
  • The notification unit reports the word with the highest importance calculated by the importance calculation unit.
  • An emotion determination unit that determines the emotion of the speaker of the voice represented by the voice data.
  • A usage frequency calculation unit that divides the voice data into words and calculates a usage frequency for each word.
  • An importance calculation unit that calculates the importance of each word based on the emotion determined by the emotion determination unit and the usage frequency calculated by the usage frequency calculation unit.
  • A communication unit that transmits information corresponding to the importance calculated by the importance calculation unit to the communication device.
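The server-side sequence described above (receive voice data from the communication device, decompress it if compressed, judge the speaker's emotion, divide the data into words, count usage, compute importance, and send the result back) can be sketched as below. This is a hedged illustration only: `recognize_words` and `estimate_emotion` are stand-ins for real speech-recognition and emotion-judgment components (here the "voice data" is plain UTF-8 text), and the dict result format is an assumption, not the patent's wire format.

```python
import zlib
from collections import Counter

def recognize_words(voice_data: bytes) -> list[str]:
    # Placeholder for a real speech recognizer: for illustration the
    # "voice data" is just UTF-8 text split on whitespace.
    return voice_data.decode("utf-8").split()

def estimate_emotion(voice_data: bytes) -> int:
    # Placeholder for the emotion determination unit (301);
    # a real implementation would analyze strength, pitch, and intervals.
    return 2  # assume "medium" emotion level

def handle_voice_data(payload: bytes, compressed: bool = False) -> dict:
    """Server 300 sketch: process voice data received from the
    communication device and return per-word importance to send back."""
    if compressed:  # decompress before judging emotion (as in step 15)
        payload = zlib.decompress(payload)
    emotion = estimate_emotion(payload)             # step 15
    frequency = Counter(recognize_words(payload))   # steps 16-17
    importance = {w: emotion * f for w, f in frequency.items()}
    return {"importance": importance}

reply = handle_voice_data(zlib.compress(b"budget budget schedule"), compressed=True)
print(reply["importance"])  # {'budget': 4, 'schedule': 2}
```

A real deployment would replace the two placeholder functions with the recognizer and emotion-judgment components the patent leaves unspecified.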

Abstract

A communication apparatus (100-1) determines, on the basis of input voice, the emotion of the speaker of the voice; divides the voice into words; calculates the use frequency for each of the divided words; calculates the importance level of each word on the basis of both the determined emotion and the calculated use frequency; and notifies information in accordance with the calculated importance level.

Description

Communication device
 The present invention relates to a communication device, a server, an information providing method, and a program that provide information related to voice.
 In recent years, e-mail exchange has been increasing as a method of communication. However, text-only communication such as e-mail sometimes fails to get the message across.
 Voice communication (a call), on the other hand, can convey through the speaker's tone of voice things that are hard to convey with text alone.
 For example, a technique has been considered for judging a user's emotion based on an image of the user and the user's voice (see, for example, Patent Document 1). Applying this technique to calls allows a user to convey information that cannot be conveyed with text alone.
Patent Document 1: JP 2009-151766 A
 When the content of a conversation held by voice communication needs to be confirmed later, it has been common practice to take notes during the conversation, focusing on what seems important. In recent years, however, to save the trouble of taking notes and to avoid omissions, methods such as recording the whole conversation with a recorder or the like have come into frequent use.
 However, when the user records the entire conversation and has to review all of the recorded content in order to extract (recognize) the important matters, the work load on the user is high. Moreover, even if the user performs such a review, importance is judged by the user's own subjectivity, so the user may fail to notice that something the speaker treated as important actually is important.
 In addition, judgment based only on the user's emotion, as described in Patent Document 1, can hardly be said to suffice for recognizing the important matters in a conversation.
 An object of the present invention is to provide a communication device, a server, an information providing method, and a program that solve the above-described problems.
 The communication device of the present invention includes:
 an emotion determination unit that determines the emotion of the speaker of the voice based on the input voice;
 a usage frequency calculation unit that divides the voice into words and calculates a usage frequency for each word;
 an importance calculation unit that calculates the importance of each word based on the emotion determined by the emotion determination unit and the usage frequency calculated by the usage frequency calculation unit; and
 a notification unit that notifies information according to the importance calculated by the importance calculation unit.
 The server of the present invention includes:
 an emotion determination unit that determines, based on voice data transmitted from a communication device, the emotion of the speaker of the voice represented by the voice data;
 a usage frequency calculation unit that divides the voice data into words and calculates a usage frequency for each word;
 an importance calculation unit that calculates the importance of each word based on the emotion determined by the emotion determination unit and the usage frequency calculated by the usage frequency calculation unit; and
 a communication unit that transmits information corresponding to the importance calculated by the importance calculation unit to the communication device.
 The information providing method of the present invention performs:
 a process of determining the emotion of the speaker of the voice based on the input voice;
 a process of dividing the voice into words;
 a process of calculating a usage frequency for each of the divided words;
 a process of calculating the importance of each word based on the emotion and the usage frequency; and
 a process of notifying information according to the calculated importance.
 The program of the present invention is a program for causing a computer to execute:
 a procedure for determining the emotion of the speaker of the voice based on the input voice;
 a procedure for dividing the voice into words;
 a procedure for calculating a usage frequency for each of the divided words;
 a procedure for calculating the importance of each word based on the emotion and the usage frequency; and
 a procedure for notifying information according to the calculated importance.
 As described above, according to the present invention, the important matters of a conversation can be recognized easily.
FIG. 1 is a diagram showing an embodiment of a communication system having a communication device of the present invention.
FIG. 2 is a diagram showing an example of the internal configuration of the communication device shown in FIG. 1.
FIG. 3 is a diagram showing an example of determination criteria stored in advance in the emotion determination unit shown in FIG. 2.
FIG. 4 is a diagram showing another example of the internal configuration of the communication device shown in FIG. 1.
FIG. 5 is a flowchart for explaining the information providing method in the communication device shown in FIG. 1.
FIG. 6 is a diagram showing an example of information notified by the notification unit shown in FIG. 2.
FIG. 7 is a diagram showing another example of information notified by the notification unit shown in FIG. 2.
FIG. 8 is a diagram showing another embodiment of a communication system having a communication device of the present invention.
FIG. 9 is a diagram showing an example of the internal configuration of the communication device shown in FIG. 8.
FIG. 10 is a diagram showing an example of the internal configuration of the server shown in FIG. 8.
FIG. 11 is a sequence diagram for explaining the information providing method in the embodiment shown in FIG. 8.
 Embodiments of the present invention are described below with reference to the drawings.
 FIG. 1 is a diagram showing an embodiment of a communication system having a communication device of the present invention.
 In this embodiment, as shown in FIG. 1, a communication device 100-1 and a communication device 100-2 can communicate with each other via a communication network 200.
 The communication devices 100-1 and 100-2 are communication terminals used by users that are capable of calling each other.
 FIG. 2 is a diagram showing an example of the internal configuration of the communication device 100-1 shown in FIG. 1.
 As shown in FIG. 2, the communication device 100-1 shown in FIG. 1 includes an emotion determination unit 101, a usage frequency calculation unit 102, an importance calculation unit 103, and a notification unit 104.
 The emotion determination unit 101 determines the emotion of the speaker of the voice based on the input voice. This determination method uses a voice recognition function to judge the emotion of the speaker from the strength of the voice, its pitch, the intervals between sounds (how rushed the speech is), and so on. For example, if the voice is strong (the volume is high), the speaker's emotion is considered to be running high; likewise when the pitch of the voice is high, or when the intervals between sounds are short. Each determination element may be associated in advance with an emotion level as emotion-association data, with the emotion determination unit 101 judging the emotion based on this association (determination criterion); the determination method itself is not particularly limited here.
 FIG. 3 is a diagram showing an example of determination criteria stored in advance in the emotion determination unit 101 shown in FIG. 2.
 As shown in FIG. 3, the emotion determination unit 101 shown in FIG. 2 stores emotion levels in association with voice strength, voice pitch, and sound interval. In FIG. 3, the emotion level is expressed as "high", "medium", or "low"; the voice strength as "strong", "medium", or "weak"; the voice pitch as "high", "medium", or "low"; and the sound interval as "short", "medium", or "long". These labels are used for convenience of explanation, and appropriate numerical values representing each level may be used instead. The number of levels is also not limited to three. Such determination criteria may be set in advance by the user, or may be calculated statistically from actual data acquired in the past.
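A determination criterion like the FIG. 3 table can be held as a simple lookup from measured voice features to an emotion level. The following is a minimal sketch under assumed thresholds: the numeric cut-offs for "strong", "high", and "short" are invented for illustration, since the patent leaves the concrete values to user settings or statistics.

```python
def classify(value, low_cut, high_cut, reverse=False):
    """Map a measured value to a 3-step level: 0=low, 1=medium, 2=high.
    With reverse=True, smaller values mean a higher level (sound interval)."""
    level = 0 if value < low_cut else (1 if value < high_cut else 2)
    return 2 - level if reverse else level

def emotion_level(volume_db, pitch_hz, interval_s):
    # Assumed thresholds standing in for the FIG. 3 criteria, which the
    # patent says may be user-set or statistically derived.
    strength = classify(volume_db, 55, 70)                    # weak / medium / strong
    pitch = classify(pitch_hz, 120, 220)                      # low / medium / high
    interval = classify(interval_s, 0.2, 0.6, reverse=True)   # short => high emotion
    # Combine the three determination elements; a plain average here.
    combined = round((strength + pitch + interval) / 3)
    return ["low", "medium", "high"][combined]

print(emotion_level(volume_db=75, pitch_hz=250, interval_s=0.1))  # high
```

The averaging step is one arbitrary way of combining the three elements; the patent only requires that each element be associated in advance with an emotion level.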
 The voice input here may be voice during a call, or surrounding voice input to a microphone or the like (for example, voice spoken during a meeting, or a lecturer's voice during a lecture).
 The usage frequency calculation unit 102 divides the input voice into words; to do this, it needs to have a voice recognition function. The usage frequency calculation unit 102 then calculates the usage frequency for each divided word. For example, when the voice is voice during a call, it may calculate the number of times each divided word is used from the start of the call to the end of the call, or it may calculate a usage rate per unit time. Note that the usage frequency calculation unit 102 may divide the voice not into words but into preset units such as phrases or keywords, and perform the processing for each divided unit.
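The two frequency notions mentioned above (a raw use count over the whole call, or a usage rate per unit time) differ only in normalization. A small sketch, with a recognized word list standing in for real speech-recognizer output:

```python
from collections import Counter

def usage_counts(words):
    """Use count per word over the whole call."""
    return Counter(words)

def usage_rates(words, call_minutes):
    """Usage rate per unit time (here: uses per minute)."""
    counts = usage_counts(words)
    return {w: c / call_minutes for w, c in counts.items()}

words = ["meeting", "monday", "meeting"]
print(usage_counts(words))                  # Counter({'meeting': 2, 'monday': 1})
print(usage_rates(words, call_minutes=4))   # {'meeting': 0.5, 'monday': 0.25}
```

Dividing into phrases or keywords instead of words only changes what the list elements are; the counting logic stays the same.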
 The importance calculation unit 103 calculates the importance of each word based on the emotion determined by the emotion determination unit 101 and the usage frequency calculated by the usage frequency calculation unit 102. The higher the determined emotion, the higher the importance value; likewise, the higher the calculated usage frequency, the higher the importance value. The importance calculation unit 103 takes the product of these two factors as the importance of the word. When multiplying, the importance calculation unit 103 may apply preset weights to each factor. The importance calculated in this way thus indicates the degree to which the speaker showed interest in the word, or the degree of enthusiasm with which the speaker tried to convey it.
 The importance calculation unit 103 may also calculate a score (points) corresponding to the importance calculated for each word: for example, a high score such as "80 points" for a high importance value, or a low score such as "20 points" for a low importance value.
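The importance computation (a product of an emotion factor and a frequency factor, optionally weighted) and the derived point score could look like the following. The weight defaults and the linear 0-100 scaling are assumptions for illustration; the patent does not fix them.

```python
def importance(emotion_factor, frequency, w_emotion=1.0, w_frequency=1.0):
    """Importance of a word: product of the (weighted) emotion level and
    the (weighted) usage frequency, as described for unit 103."""
    return (w_emotion * emotion_factor) * (w_frequency * frequency)

def score(value, max_value):
    """Map an importance value onto a 0-100 point score (assumed scaling)."""
    return round(100 * value / max_value)

imp = importance(emotion_factor=3, frequency=8)   # 24.0
print(score(imp, max_value=30))                   # 80 (points)
```

Raising `w_emotion` above 1.0 would make the speaker's emotion dominate the ranking; raising `w_frequency` would favor often-repeated words.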
 The notification unit 104 notifies information based on the importance calculated by the importance calculation unit 103. The notification method may be display of information on a display or the like, transmission of information by sound through a speaker, or notification by vibration. The information notified by the notification unit 104 may be, for example, information outlining the call content, or a summary characterizing it; in either case it includes words whose calculated importance is high. It may also simply be the word with the highest calculated importance, or a predetermined number of words in descending order of importance. When the importance calculation unit 103 has calculated a score as described above, the notification unit 104 notifies the score.
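Selecting what to notify (the single highest-importance word, or a predetermined number of words in descending order of importance) is a ranking step. A sketch with hypothetical importance values:

```python
def words_to_notify(importance_by_word, top_n=1):
    """Return the top-N words by importance, highest first, as unit 104
    might select them for display, speech output, or vibration notice."""
    ranked = sorted(importance_by_word.items(), key=lambda kv: kv[1], reverse=True)
    return [word for word, _ in ranked[:top_n]]

scores = {"deadline": 24, "friday": 12, "maybe": 3}
print(words_to_notify(scores))            # ['deadline']
print(words_to_notify(scores, top_n=2))   # ['deadline', 'friday']
```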
 FIG. 4 is a diagram showing another example of the internal configuration of the communication device 100-1 shown in FIG. 1.
 As shown in FIG. 4, the communication device 100-1 shown in FIG. 1 includes an emotion determination unit 101, a usage frequency calculation unit 102, an importance calculation unit 103, a notification unit 104, and a recording unit 105.
 The recording unit 105 records the input voice. The voice recorded by the recording unit 105 may be voice during a call, or surrounding voice input to a microphone or the like (for example, voice spoken during a meeting, or a lecturer's voice during a lecture).
 The emotion determination unit 101 determines the speaker's emotion based on the voice recorded by the recording unit 105. The determination method is the same as described above.
 The usage frequency calculation unit 102 divides the voice recorded by the recording unit 105 into words and calculates the usage frequency for each divided word. The calculation methods are the same as described above.
 The internal configuration of the communication device 100-2 may be the same as that of the communication device 100-1, or it may have only a general call function.
 以下に、図1に示した通信装置100-1における情報提供方法について説明する。 Hereinafter, an information providing method in the communication apparatus 100-1 shown in FIG. 1 will be described.
 図5は、図1に示した通信装置100-1における情報提供方法を説明するためのフローチャートである。 FIG. 5 is a flowchart for explaining an information providing method in communication apparatus 100-1 shown in FIG.
 First, when voice is input to the communication device 100-1 in step 1, the emotion determination unit 101 determines, in step 2, the emotion of the speaker who uttered the voice. As described above, this determination may be based on predetermined criteria or on statistical data.
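The patent leaves the determination criteria open. As a rough illustration only, a rule-based classifier over simple acoustic features might stand in for such "predetermined criteria"; the feature names and thresholds below are hypothetical, not taken from the specification.

```python
def judge_emotion(mean_pitch_hz: float, mean_intensity_db: float, pause_ratio: float) -> str:
    """Map coarse acoustic features of an utterance to an emotion label.

    Thresholds are placeholder "predetermined criteria"; a real system
    would calibrate them per speaker or derive them from statistical data,
    as the description suggests.
    """
    if mean_pitch_hz > 220 and mean_intensity_db > 70:
        return "joy"          # raised pitch and volume
    if mean_pitch_hz < 140 and pause_ratio > 0.4:
        return "sadness"      # low pitch, long pauses
    if mean_intensity_db > 75:
        return "anger"        # loud regardless of pitch
    return "neutral"

label = judge_emotion(240.0, 72.0, 0.1)  # high pitch + volume -> "joy"
```

A statistics-based variant would replace the fixed thresholds with a model trained on labeled utterances.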
 In step 3, the usage frequency calculation unit 102 divides the input voice into words, and in step 4 it calculates the usage frequency of each word. The calculation method is as described above.
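The word division and per-word counting of steps 3 and 4 can be sketched as follows, assuming speech recognition has already produced a transcript. Whitespace splitting stands in for a real tokenizer here; Japanese input would require morphological analysis, which the patent does not detail.

```python
from collections import Counter

def word_frequencies(transcript: str) -> Counter:
    """Split a recognized transcript into words and count each word's uses.

    Lowercasing and whitespace splitting are simplifications standing in
    for the (unspecified) word-division step of the usage frequency
    calculation unit.
    """
    words = transcript.lower().split()
    return Counter(words)

# Hypothetical transcript of a short call
freqs = word_frequencies("meeting Friday budget budget meeting budget")
```

`freqs.most_common()` then yields the ranking used later when reporting topics.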
 In step 5, the importance calculation unit 103 calculates the importance of each word based on the emotion determined by the emotion determination unit 101 and the usage frequency calculated by the usage frequency calculation unit 102. The calculation method is as described above.
 In step 6, the notification unit 104 reports information based on the importance calculated by the importance calculation unit 103.
 FIG. 6 is a diagram showing an example of information reported by the notification unit 104 shown in FIG. 2.
 As shown in FIG. 6, the notification unit 104 shown in FIG. 2 displays the call record together with the high-importance words calculated by the importance calculation unit 103 as "important matters". Related information (for example, the words ranked second and lower in importance) is displayed as "topics".
 FIG. 7 is a diagram showing another example of information reported by the notification unit 104 shown in FIG. 2.
 As shown in FIG. 7, the notification unit 104 shown in FIG. 2 displays a score calculated for the call (conversation) together with the high-importance word calculated by the importance calculation unit 103, presented as "a phrase that pleased the speaker".
 In this way, because the communication device 100-1 determines the emotion of the speaker's voice, calculates usage frequencies, and presents the importance derived from both, the user can recognize what the speaker wanted to convey.
 As noted above, the emotion determination, usage frequency calculation, and importance calculation may instead be performed by a device other than the communication device 100-1.
 FIG. 8 is a diagram showing another embodiment of a communication system including the communication device of the present invention.
 In this embodiment, as shown in FIG. 8, communication devices 110-1 and 110-2 and a server 300 can communicate via a communication network 200.
 The communication devices 110-1 and 110-2 are user devices capable of calling each other.
 The server 300 performs the emotion determination, usage frequency calculation, and importance calculation for voice input to the communication devices 110-1 and 110-2.
 FIG. 9 is a diagram showing an example of the internal configuration of the communication device 110-1 shown in FIG. 8.
 As shown in FIG. 9, the communication device 110-1 shown in FIG. 8 includes a communication unit 111, a notification unit 114, and a recording unit 115. The internal configuration of the communication device 110-2 shown in FIG. 8 is the same.
 The recording unit 115 records input voice. The voice recorded by the recording unit 115 may be voice from a call in progress, or ambient voice picked up by a microphone or the like (for example, remarks made during a meeting or a lecturer's voice during a lecture).
 The communication unit 111 conducts calls with the communication device 110-2 and detects the start and end of each call. It also transmits voice data representing the input voice to the server 300; this voice may be call voice or ambient voice picked up by a microphone or the like (for example, remarks made during a meeting or a lecturer's voice during a lecture). When the recording unit 115 has recorded voice, the communication unit 111 transmits voice data representing that recorded voice to the server 300.
 The notification unit 114 reports information transmitted from the server 300. The report may take the form of information displayed on a screen, information conveyed by sound through a speaker, or a notification by vibration.
 FIG. 10 is a diagram showing an example of the internal configuration of the server 300 shown in FIG. 8.
 As shown in FIG. 10, the server 300 shown in FIG. 8 includes an emotion determination unit 301, a usage frequency calculation unit 302, an importance calculation unit 303, and a communication unit 304.
 The emotion determination unit 301 determines, based on voice data transmitted from the communication device 110-1 or 110-2, the emotion of the speaker of the voice represented by that data. The determination method is the same as in the emotion determination unit 101 shown in FIG. 2.
 The usage frequency calculation unit 302 divides the voice data transmitted from the communication device 110-1 or 110-2 into words and calculates the usage frequency of each word, in the same way as the usage frequency calculation unit 102 shown in FIG. 2.
 The importance calculation unit 303 calculates the importance of each word based on the emotion determined by the emotion determination unit 301 and the usage frequency calculated by the usage frequency calculation unit 302, in the same way as the importance calculation unit 103 shown in FIG. 2.
 The communication unit 304 transmits predetermined information to the communication devices 110-1 and 110-2 based on the importance calculated by the importance calculation unit 303. This information may be, for example, an outline of the call or a summary characterizing its content, and in any case includes the high-importance words calculated by the importance calculation unit 303. The transmitted information may also simply indicate the single word with the highest importance, or a predetermined number of words in descending order of importance. When the importance calculation unit 303 has calculated a score as described above, the communication unit 304 transmits that score as well.
 An information providing method in the embodiment shown in FIG. 8 is described below.
 FIG. 11 is a sequence diagram explaining the information providing method in the embodiment shown in FIG. 8. Here, the case where the communication device 110-1 records call voice and transmits voice data representing the recorded voice to the server 300 is described as an example.
 First, when the communication unit 111 of the communication device 110-1 starts a call in step 11, the recording unit 115 starts recording the call voice in step 12.
 When the communication unit 111 ends the call in step 13, it transmits voice data representing the voice recorded by the recording unit 115 to the server 300 in step 14. At this point, the communication unit 111 may compress the voice data before transmission.
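The compression step is left unspecified in the description. As an illustration only, a lossless round trip with `zlib` (standing in for whatever audio codec the device actually uses) would look like this:

```python
import zlib

def pack_for_upload(pcm_bytes: bytes) -> bytes:
    """Compress recorded audio bytes before sending them to the server."""
    return zlib.compress(pcm_bytes, level=6)

def unpack_on_server(payload: bytes) -> bytes:
    """Decompress the received payload so emotion determination can
    operate on the original audio data."""
    return zlib.decompress(payload)

raw = b"\x00\x01\x02\x03" * 500   # stand-in for recorded PCM samples
payload = pack_for_upload(raw)
restored = unpack_on_server(payload)
```

Lossless compression matters here because the server's emotion determination depends on fine acoustic detail; a real deployment would more likely use a speech codec tuned for that trade-off.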
 When the communication unit 304 of the server 300 receives the voice data transmitted from the communication device 110-1, the emotion determination unit 301 determines, in step 15, the emotion of the speaker of the voice represented by that data. If the voice data transmitted from the communication device 110-1 has been compressed, it is first decompressed, and the emotion determination unit 301 then makes its determination based on the decompressed voice data.
 In step 16, the usage frequency calculation unit 302 divides the voice data transmitted from the communication device 110-1 into words, and in step 17 it calculates the usage frequency of each word.
 In step 18, the importance calculation unit 303 calculates the importance of each word based on the emotion determined by the emotion determination unit 301 and the usage frequency calculated by the usage frequency calculation unit 302.
 Once the importance calculation unit 303 has calculated the importance of each word, the communication unit 304 transmits, in step 19, predetermined information based on that importance. This information is as described above.
 When the communication unit 111 of the communication device 110-1 receives the information transmitted from the server 300, the notification unit 114 reports it in step 20. The content of the report is the same as that reported by the notification unit 104 shown in FIG. 2.
 In this way, because a device external to the communication device 110-1 performs the emotion determination, usage frequency calculation, and importance calculation, the present invention can be used even by a communication device that lacks these processing functions.
 As described above, the present invention uses voice recognition to automatically analyze and record a conversation conducted on a communication terminal (communication device) equipped with a voice communication function. For example, voice recognition can count how many times each word was used in the conversation and report a ranking, so that after the call the user can review the topics of the conversation and the matters the other party most wanted to convey; the most frequently spoken words can be taken as what the speaker most wanted to say. In addition, by using a function that judges emotion from pitch, intensity, and the intervals between sounds, the speaker's words can be linked to emotions, so that which remarks pleased the other party, which remarks angered them, and what the other party was trying hardest to convey (the important matters) are organized automatically and can easily be reviewed later.
 By using voice recognition in this way, the content of a conversation is judged automatically, and after the call the user can easily confirm, with no extra effort, matters the user had not noticed, important matters the other party wanted to convey, and the other party's emotions during the conversation.
 The obtained keyword ranking can also be used to examine a speaker's verbal habits, and the emotion determination data can be used to score a conversation by what proportion of it was occupied by positive emotions such as "joy", realizing entertainment-oriented features. This adds new enjoyment and value to calls and increases opportunities for voice communication.
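The conversation-scoring idea — grading a call by the share of positive emotion — can be sketched as follows. The choice of which labels count as "positive" and the 0-100 scale are illustrative; the patent only describes the proportion-based grading in general terms.

```python
def conversation_score(emotion_timeline: list) -> int:
    """Score a conversation 0-100 by the fraction of utterance segments
    labeled with a positive emotion (here, only "joy" counts as positive).
    """
    positive = {"joy"}
    if not emotion_timeline:
        return 0
    hits = sum(1 for label in emotion_timeline if label in positive)
    return round(100 * hits / len(emotion_timeline))

# Hypothetical per-segment labels produced by the emotion determination unit
score = conversation_score(["joy", "neutral", "joy", "anger"])
```

Such a score is what the notification unit of FIG. 7 would display alongside the high-importance word.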
 The processing performed by the components of the communication device 100-1 and the server 300 described above may be carried out by logic circuits built for the purpose. Alternatively, a computer program (hereinafter, a program) describing the processing as a procedure may be recorded on a recording medium readable by the communication device 100-1 and the server 300, read from that medium, and executed by each device. Readable recording media include removable media such as floppy (registered trademark) disks, magneto-optical disks, DVDs, and CDs, as well as memory such as ROM and RAM and HDDs built into the communication device 100-1 and the server 300. The program recorded on the medium is read by a CPU (not shown) provided in each device, and under the CPU's control the same processing as described above is performed; here, the CPU operates as a computer executing the program read from the recording medium.
 The functions of the communication device 100-1 and the server 300 described above can be applied to devices such as IC (Integrated Circuit) recorders, mobile phones, portable terminals, tablet and notebook PCs (Personal Computers), smartphones, PDAs (Personal Digital Assistants), and game machines.
 A part or all of the above embodiments can also be described as in the following supplementary notes, but are not limited thereto.
(Supplementary note 1) A communication device comprising: an emotion determination unit that determines the emotion of the speaker of input voice based on that voice; a usage frequency calculation unit that divides the voice into words and calculates the usage frequency of each word; an importance calculation unit that calculates the importance of each word based on the emotion determined by the emotion determination unit and the usage frequency calculated by the usage frequency calculation unit; and a notification unit that reports information according to the importance calculated by the importance calculation unit.
(Supplementary note 2) The communication device according to supplementary note 1, further comprising a recording unit that records the input voice, wherein the emotion determination unit determines the speaker's emotion based on the voice recorded by the recording unit, and the usage frequency calculation unit divides the voice recorded by the recording unit into words.
(Supplementary note 3) The communication device according to supplementary note 1 or 2, wherein the voice is voice from a call in progress.
(Supplementary note 4) The communication device according to any one of supplementary notes 1 to 3, wherein the importance calculation unit calculates a score for each word according to its importance, and the notification unit reports the score.
(Supplementary note 5) The communication device according to any one of supplementary notes 1 to 4, wherein the notification unit reports the word with the highest importance calculated by the importance calculation unit.
(Supplementary note 6) A server comprising: an emotion determination unit that determines, based on voice data transmitted from a communication device, the emotion of the speaker of the voice represented by that data; a usage frequency calculation unit that divides the voice data into words and calculates the usage frequency of each word; an importance calculation unit that calculates the importance of each word based on the emotion determined by the emotion determination unit and the usage frequency calculated by the usage frequency calculation unit; and a communication unit that transmits information according to the importance calculated by the importance calculation unit to the communication device.
(Supplementary note 7) An information providing method comprising: determining the emotion of the speaker of input voice based on that voice; dividing the voice into words; calculating the usage frequency of each divided word; calculating the importance of each word based on the emotion and the usage frequency; and reporting information according to the calculated importance.
(Supplementary note 8) A program for causing a computer to execute: a procedure of determining the emotion of the speaker of input voice based on that voice; a procedure of dividing the voice into words; a procedure of calculating the usage frequency of each divided word; a procedure of calculating the importance of each word based on the emotion and the usage frequency; and a procedure of reporting information according to the calculated importance.
 While the present invention has been described above with reference to embodiments, the invention is not limited to those embodiments. Various changes understandable to those skilled in the art may be made to the configuration and details of the invention within its scope.
 This application claims priority based on Japanese Patent Application No. 2012-118699 filed on May 24, 2012, the entire disclosure of which is incorporated herein.

Claims (8)

  1.  A communication device comprising:
     an emotion determination unit that determines the emotion of the speaker of input voice based on that voice;
     a usage frequency calculation unit that divides the voice into words and calculates the usage frequency of each word;
     an importance calculation unit that calculates the importance of each word based on the emotion determined by the emotion determination unit and the usage frequency calculated by the usage frequency calculation unit; and
     a notification unit that reports information according to the importance calculated by the importance calculation unit.
  2.  The communication device according to claim 1, further comprising a recording unit that records the input voice, wherein
     the emotion determination unit determines the speaker's emotion based on the voice recorded by the recording unit, and
     the usage frequency calculation unit divides the voice recorded by the recording unit into words.
  3.  The communication device according to claim 1 or 2, wherein the voice is voice from a call in progress.
  4.  The communication device according to any one of claims 1 to 3, wherein
     the importance calculation unit calculates a score for each word according to its importance, and
     the notification unit reports the score.
  5.  The communication device according to any one of claims 1 to 4, wherein the notification unit reports the word with the highest importance calculated by the importance calculation unit.
  6.  A server comprising:
     an emotion determination unit that determines, based on voice data transmitted from a communication device, the emotion of the speaker of the voice represented by that data;
     a usage frequency calculation unit that divides the voice data into words and calculates the usage frequency of each word;
     an importance calculation unit that calculates the importance of each word based on the emotion determined by the emotion determination unit and the usage frequency calculated by the usage frequency calculation unit; and
     a communication unit that transmits information according to the importance calculated by the importance calculation unit to the communication device.
  7.  An information providing method comprising:
     determining the emotion of the speaker of input voice based on that voice;
     dividing the voice into words;
     calculating the usage frequency of each divided word;
     calculating the importance of each word based on the emotion and the usage frequency; and
     reporting information according to the calculated importance.
  8.  A program for causing a computer to execute:
     a procedure of determining the emotion of the speaker of input voice based on that voice;
     a procedure of dividing the voice into words;
     a procedure of calculating the usage frequency of each divided word;
     a procedure of calculating the importance of each word based on the emotion and the usage frequency; and
     a procedure of reporting information according to the calculated importance.
PCT/JP2012/082453 2012-05-24 2012-12-14 Communication apparatus WO2013175665A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012118699 2012-05-24
JP2012-118699 2012-05-24

Publications (1)

Publication Number Publication Date
WO2013175665A1 true WO2013175665A1 (en) 2013-11-28

Family

ID=49623383

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/082453 WO2013175665A1 (en) 2012-05-24 2012-12-14 Communication apparatus

Country Status (1)

Country Link
WO (1) WO2013175665A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019204420A (en) * 2018-05-25 2019-11-28 京セラドキュメントソリューションズ株式会社 Voice processing system and voice processing program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08286693A (en) * 1995-04-13 1996-11-01 Toshiba Corp Information processing device
WO2006085565A1 (en) * 2005-02-08 2006-08-17 Nippon Telegraph And Telephone Corporation Information communication terminal, information communication system, information communication method, information communication program, and recording medium on which program is recorded
JP2007049657A (en) * 2005-08-05 2007-02-22 Seiya Takada Automatic answering telephone apparatus
JP2011170109A (en) * 2010-02-18 2011-09-01 Nikon Corp Information processing apparatus


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019204420A (en) * 2018-05-25 2019-11-28 京セラドキュメントソリューションズ株式会社 Voice processing system and voice processing program
JP7177383B2 (en) 2018-05-25 2022-11-24 京セラドキュメントソリューションズ株式会社 Audio processing system and audio processing program

Similar Documents

Publication Publication Date Title
US10079014B2 (en) Name recognition system
US9571638B1 (en) Segment-based queueing for audio captioning
US9525767B2 (en) System and method for answering a communication notification
US7706510B2 (en) System and method for personalized text-to-voice synthesis
JP6604836B2 (en) Dialog text summarization apparatus and method
KR101626438B1 (en) Method, device, and system for audio data processing
US9666209B2 (en) Prevention of unintended distribution of audio information
US9560316B1 (en) Indicating sound quality during a conference
CN107919138B (en) Emotion processing method in voice and mobile terminal
US20080201142A1 (en) Method and apparatus for automication creation of an interactive log based on real-time content
TW201926079A (en) Bidirectional speech translation system, bidirectional speech translation method and computer program product
CA2539649C (en) System and method for personalized text-to-voice synthesis
JP5864285B2 (en) Telephone reception service support system and method
US20150149171A1 (en) Contextual Audio Recording
JP7463469B2 (en) Automated Call System
JP2010034695A (en) Voice response device and method
US10789954B2 (en) Transcription presentation
US9881611B2 (en) System and method for providing voice communication from textual and pre-recorded responses
CN111684411A (en) Concurrent receipt of multiple user speech inputs for translation
US10535360B1 (en) Phone stand using a plurality of directional speakers
WO2013175665A1 (en) Communication apparatus
JP7052335B2 (en) Information processing system, information processing method and program
US10070283B2 (en) Method and apparatus for automatically identifying and annotating auditory signals from one or more parties
JP2020119043A (en) Voice translation system and voice translation method
JP6434799B2 (en) Message processing device, terminal device, and computer program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12877405

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12877405

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP