WO2017130474A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program Download PDF

Info

Publication number
WO2017130474A1
Authority
WO
WIPO (PCT)
Prior art keywords: information processing, information, processing apparatus, present, content
Prior art date
Application number
PCT/JP2016/080485
Other languages
French (fr)
Japanese (ja)
Inventor
真一 河野
東山 恵祐
伸樹 古江
圭祐 齊藤
佐藤 大輔
亮介 三谷
美和 市川
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Priority to EP16888059.9A priority Critical patent/EP3410432A4/en
Priority to JP2017563679A priority patent/JP6841239B2/en
Priority to US16/068,987 priority patent/US11120063B2/en
Publication of WO2017130474A1 publication Critical patent/WO2017130474A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/34 - Browsing; Visualisation therefor
    • G06F16/345 - Summarisation for human users
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/40 - Processing or translation of natural language
    • G06F40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/08 - Speech classification or search
    • G10L15/10 - Speech classification or search using distance or distortion measures between unknown speech and reference templates
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/08 - Speech classification or search
    • G10L15/18 - Speech classification or search using natural language modelling
    • G10L15/1815 - Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 - Execution procedure of a spoken command

Definitions

  • the present disclosure relates to an information processing apparatus, an information processing method, and a program.
  • When a person who speaks (hereinafter referred to as a "speaker") speaks, it is difficult for the speaker to utter only what he or she wants to convey.
  • This disclosure proposes a new and improved information processing apparatus, information processing method, and program capable of summarizing the content of an utterance.
  • According to the present disclosure, there is provided an information processing apparatus including a processing unit that performs a summarization process for summarizing the content of an utterance indicated by voice information based on a user's utterance, on the basis of acquired information indicating a weight related to the summary.
  • According to the present disclosure, there is also provided an information processing method executed by an information processing apparatus, the method including a step of performing a summarization process for summarizing the content of an utterance indicated by voice information based on a user's utterance, on the basis of acquired information indicating a weight related to the summary.
  • According to the present disclosure, there is also provided a program for causing a computer to implement a function of performing a summarization process for summarizing the content of an utterance indicated by voice information based on a user's utterance, on the basis of acquired information indicating a weight related to the summary.
  • the information processing method according to the present embodiment will be described by dividing it into a first information processing method and a second information processing method.
  • a case where the same information processing apparatus performs both the processing related to the first information processing method and the processing related to the second information processing method will be mainly described.
  • the information processing apparatus that performs the process according to the first information processing method may be different from the information processing apparatus that performs the process according to the second information processing method.
  • a person who is a target of processing related to the information processing method according to the present embodiment is indicated as “user”.
  • examples of the user include a "speaker (or a person who can be a speaker)" (when the first information processing method described later is performed) and an "operator of an operation device related to notification" (when the second information processing method described later is performed).
  • the information processing apparatus performs processing for summarizing the content of the utterance (hereinafter referred to as “summarization processing”) as processing related to the first information processing method.
  • the information processing apparatus summarizes the content of the utterance indicated by the voice information based on the user's utterance based on the information indicating the weight related to the acquired summary.
  • in the summary, for example, the content of an utterance is selected based on the weight related to the summary, or a part is extracted from the content of the utterance based on the weight related to the summary.
  • the information indicating the weight related to the summary includes, for example, data indicating the weight related to the summary, which is stored in a table (or database, hereinafter the same applies) for setting the weight related to the summary described later. Further, the information indicating the weight regarding the summary may be data indicating that the weight regarding the summary is relatively large or small. The information indicating the weight related to the summary is acquired by referring to a table for setting the weight related to the summary described later, for example.
  • the voice information according to the present embodiment is voice data including voice based on the utterance of the speaker.
  • the voice information according to the present embodiment is generated when a voice input device such as a microphone picks up voice based on the utterance of the speaker.
  • the audio information according to the present embodiment may be information obtained by converting an analog signal generated according to the audio picked up by the audio input device into a digital signal by an AD (Analog-to-Digital) converter.
  • the voice input device (or the voice input device and the AD converter) may be included in the information processing apparatus according to the present embodiment, or may be a device external to the information processing apparatus according to the present embodiment.
  • the content of the utterance indicated by the voice information includes, for example, a character string indicated by text data obtained as a result of arbitrary voice recognition processing performed on the voice information (hereinafter referred to as "voice text information").
  • the information processing apparatus recognizes the character string indicated by the voice text information as the content of the utterance indicated by the voice information, and summarizes the character string indicated by the voice text information.
  • the voice recognition processing for the voice information may be performed by the information processing apparatus according to the present embodiment, or may be performed by an external device of the information processing apparatus according to the present embodiment.
  • when the information processing apparatus according to the present embodiment performs the speech recognition process, it summarizes the character string indicated by the speech text information obtained as a result of performing the speech recognition process on the acquired speech information.
  • when an external device of the information processing apparatus according to the present embodiment performs the speech recognition process, the information processing apparatus according to the present embodiment summarizes the character string indicated by the speech text information acquired from the external device.
  • the voice recognition process may be performed repeatedly, for example, periodically or non-periodically, or may be performed in response to a predetermined trigger, such as the timing at which the voice information is acquired.
  • the voice recognition process may be performed when a predetermined operation such as a voice recognition start operation related to the summary is performed, for example.
  • the weight related to the summary according to the present embodiment is an index for extracting more important words (in other words, words that the speaker will want to convey) from the content of the utterance indicated by the voice information. Since the content of the utterance indicated by the voice information is summarized based on the weight related to the summary according to the present embodiment, more important words corresponding to the weight related to the summary are included in the content of the summarized utterance.
  • the weight related to the summary according to the present embodiment is set based on at least one of (one or two or more of) the audio information, the information about the user, the information about the application, the information about the environment, and the information about the device, as shown below, for example.
  • the information on the user includes, for example, at least one of user status information indicating the user status and user operation information based on the user operation.
  • examples of the user state include an action taken by the user (including an operation such as a gesture) and the emotional state of the user.
  • the user state is estimated by an arbitrary action estimation process or an arbitrary emotion estimation process that uses one or more of the user's biometric information obtained from an arbitrary biosensor, a detection result of a motion sensor such as an acceleration sensor or an angular velocity sensor, and a captured image captured by an imaging device.
  • the processing related to the estimation of the user state may be performed by the information processing apparatus according to the present embodiment, or may be performed by an external device of the information processing apparatus according to the present embodiment.
  • examples of user operations include various operations such as a speech recognition start operation related to summarization and an operation for starting a predetermined application.
  • the information about the application indicates, for example, the execution state of the application.
  • the information regarding the environment indicates, for example, a situation around the user (or a situation where the user is placed).
  • examples of the information regarding the environment include data indicating the level of noise around the user.
  • the level of noise around the user is specified, for example, by extracting non-speech portions from the voice information generated by a microphone and performing threshold processing using one or two or more thresholds for level classification (a sketch follows below).
  • the processing related to the acquisition of information related to the environment as described above may be performed by the information processing apparatus according to the present embodiment, or may be performed by an external apparatus of the information processing apparatus according to the present embodiment.
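As a non-authoritative illustration of the threshold processing just described, the following is a minimal sketch in Python, assuming a simple energy-based split between speech and non-speech frames; the frame size, the speech threshold, and the level thresholds are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

def noise_level(samples: np.ndarray, rate: int,
                speech_db: float = -30.0,
                level_thresholds: tuple = (-50.0, -40.0)) -> str:
    """Classify ambient noise around the user into levels via thresholds."""
    frame = max(1, rate // 100)  # 10 ms frames
    dbs = []
    for i in range(0, len(samples) - frame + 1, frame):
        rms = float(np.sqrt(np.mean(samples[i:i + frame] ** 2))) + 1e-12
        dbs.append(20.0 * np.log10(rms))
    # Frames below the speech threshold are treated as non-speech (noise).
    noise = [d for d in dbs if d < speech_db]
    if not noise:
        return "unknown"
    average = sum(noise) / len(noise)
    low, mid = level_thresholds
    return "low" if average < low else ("medium" if average < mid else "high")
```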
  • the information about the device indicates, for example, one or both of the device type and the device state.
  • Examples of the state of the device include a processing load of a processor included in the device.
  • the content of the utterance indicated by the voice information is summarized by performing the summarization process according to the first information processing method. Therefore, the content of the utterance of the speaker indicated by the voice information can be simplified.
  • the content of the utterance is summarized based on the weights related to the summary set as described above, so that more important words corresponding to the weights related to the summary are summarized. Included in the content of the utterance.
  • the information processing apparatus performs processing for controlling notification of notification content (hereinafter referred to as “notification control processing”) based on summary information as processing related to the second information processing method.
  • the summary information indicates the content of the summarized utterance corresponding to the voice information based on the utterance of the first user.
  • the summary information is obtained, for example, by performing summary processing according to the first information processing method.
  • the content of the summarized utterance indicated by the summary information is not limited to the above, and may be summarized by any method capable of summarizing the utterance content indicated by the speech information based on the user's utterance. Good.
  • the summary information indicates the content of the summarized utterance obtained by performing the summary processing according to the first information processing method.
  • the information processing apparatus controls the notification of the notification content to the second user.
  • the notification content for the second user may be, for example, the content of the summarized utterance indicated by the summary information, or may be notification content that is different from the content of the summarized utterance indicated by the summary information, such as the translated content of the summarized utterance.
  • the first user according to the present embodiment and the second user according to the present embodiment may be different or the same. As an example of the case where the first user and the second user are different, there is a case where the first user is a speaker and the second user is a communication partner. As an example of the case where the first user and the second user are the same, there is a case where the first user and the second user are the same speaker.
  • the information processing apparatus causes notification contents to be notified by one or both of notification by a visual method and notification by an auditory method, for example.
  • the information processing apparatus causes the notification content to be displayed on the display screen of the display device.
  • the information processing apparatus according to the present embodiment transmits, for example, a display control signal including display data corresponding to the notification content and a display command to the display device, thereby causing the notification content to be displayed on the display screen of the display device.
  • examples of the display device having the display screen on which the notification content is displayed include a display device constituting a display unit (described later) included in the information processing apparatus according to the present embodiment, and a display device external to the information processing apparatus according to the present embodiment.
  • when the display screen on which the notification content is displayed belongs to an external display device, the information processing apparatus according to the present embodiment transmits the display control signal to the external display device via, for example, a communication unit (described later) included in the information processing apparatus according to the present embodiment or a communication device external to the information processing apparatus according to the present embodiment.
  • for auditory notification, the information processing apparatus according to the present embodiment causes the notification content to be output as sound (which may include music) from an audio output device such as a speaker.
  • the information processing apparatus according to the present embodiment transmits an audio output control signal including audio data indicating the audio corresponding to the notification content and an audio output command to the audio output device, thereby causing the audio output device to output the notification content as voice.
  • the audio output device that outputs the notification content by audio may be, for example, an audio output device included in the information processing apparatus according to the present embodiment, or may be an audio output device external to the information processing apparatus according to the present embodiment.
  • when the audio output device that outputs the notification content by voice is an external audio output device, the information processing apparatus according to the present embodiment transmits the audio output control signal to the external audio output device via, for example, a communication unit (described later) included in the information processing apparatus according to the present embodiment or a communication device external to the information processing apparatus according to the present embodiment.
  • the notification method of the notification content in the information processing apparatus according to the present embodiment is not limited to one or both of the notification method using the visual method and the notification method using the auditory method.
  • the information processing apparatus according to the present embodiment can also notify a break in the notification content by a tactile notification method, for example, by vibrating a vibration device.
  • the notification content based on the summarized utterance content obtained by the summarization process according to the first information processing method is notified.
  • the content of the summarized utterance obtained by the summarization process according to the first information processing method is a summary result that can further reduce the possibility of occurrence of an "event caused by the difficulty of uttering only the content that the speaker wants to convey".
  • therefore, by notifying the notification content, it is possible to further reduce the possibility of occurrence of events caused by the difficulty of uttering only the content that the speaker wants to convey, such as "the communication partner needs time to understand the content that the speaker wants to convey" and "translation takes time".
  • the processes related to the information processing method according to the present embodiment are not limited to the summarization process according to the first information processing method and the notification control process according to the second information processing method described above.
  • for example, the processes related to the information processing method according to the present embodiment may further include a process of translating the content of the utterance summarized by the summarization process according to the first information processing method into another language (hereinafter referred to as "translation process").
  • in the translation process, the content of the summarized utterance is translated from a first language corresponding to the speech information based on the utterance into a second language different from the first language.
  • hereinafter, the content of the translated summarized utterance obtained by performing the translation process is referred to as a "translation result".
  • the translation processing according to the present embodiment may be performed as part of the processing according to the first information processing method, or may be performed as part of the processing according to the second information processing method.
  • the processes related to the information processing method according to the present embodiment may further include a recording control process for recording, on an arbitrary recording medium, one or both of the result of the summarization process according to the first information processing method and the result of the translation process according to the present embodiment.
  • in the recording control process, for example, "one or both of the result of the summarization process according to the first information processing method and the result of the translation process according to the present embodiment" may be associated with information related to the user, such as position information corresponding to the user or the user's biometric information obtained from an arbitrary biosensor (described later), and recorded as a log.
  • the use case to which the information processing method according to this embodiment is applied is not limited to “conversation support”.
  • the information processing method according to the present embodiment can be applied to any use case in which the content of the utterance indicated by the audio information can be summarized, as described below, such as "creation of meeting minutes" realized by summarizing the utterances indicated by voice information representing the voice of a meeting, generated by an IC (Integrated Circuit) recorder or the like.
  • FIGS. 1 to 5 are explanatory diagrams for explaining an example of a use case to which the information processing method according to the present embodiment is applied.
  • in FIGS. 1, 2, and 5, the person indicated by "U1" corresponds to the user according to the present embodiment, and in FIGS. 2 and 5, the person indicated by "U2" corresponds to the partner with whom the user U1 communicates.
  • hereinafter, the person indicated by "U1" in FIGS. 1, 2, and 5 is referred to as "user U1", and the person indicated by "U2" in FIGS. 2 and 5 is referred to as "communication partner U2".
  • a case where the native language of the communication partner U2 is Japanese is taken as an example.
  • FIG. 1, FIG. 2, and FIG. 5 show an example in which the user U1 is wearing an eyewear type device having a display screen.
  • an audio input device such as a microphone, an audio output device such as a speaker, and an imaging device are connected to the eyewear-type apparatus worn by the user U1 shown in FIGS. 1, 2, and 5.
  • examples of the information processing apparatus according to the present embodiment include a wearable apparatus used by being worn on the body of the user U1, such as the eyewear-type apparatus shown in FIG. 1, a communication device such as a smartphone, and a computer such as a server.
  • the information processing apparatus sets a weight related to a summary, for example, by using a table for setting a weight related to the summary.
  • the table for setting the weight related to the summary may be stored in a storage unit (described later) included in the information processing apparatus according to the present embodiment, or may be stored in a recording medium external to the information processing apparatus according to the present embodiment.
  • the information processing apparatus according to the present embodiment uses, for example, a table for setting a weight related to summarization by appropriately referring to a storage unit (described later) or an external recording medium.
  • the information processing apparatus can set the weight related to the summary by determining the weight related to the summary using an arbitrary algorithm for determining the weight related to the summary, for example.
  • 6 to 8 are explanatory diagrams showing examples of tables for setting the weights related to the summary according to the present embodiment.
  • FIG. 6 shows an example of a table for specifying the weight related to the summary, and shows an example of the table weighted for each type of weight related to the summary for each registered vocabulary.
  • the combination indicated by the value “1” corresponds to the weighted combination.
  • the combination indicated by the value “0” corresponds to the combination that is not weighted.
  • FIGS. 7 and 8 show examples of tables for specifying the types of weights related to the summary.
  • FIG. 7 shows an example of a table in which schedule contents specified from the state of the schedule application (or schedule contents estimated from the state of the schedule application) and weight types related to the summary are associated with each other.
  • FIG. 8 shows an example of a table in which user behavior (an example of a user state) is associated with a summary weight type.
  • the information processing apparatus according to the present embodiment sets the weight related to the summary by using, for example, both a table for specifying the type of weight related to the summary as shown in FIGS. 7 and 8 and a table for specifying the weight related to the summary as shown in FIG. 6 as the tables for setting the weight related to the summary.
  • needless to say, the example of the table for specifying the type of weight related to the summary according to the present embodiment is not limited to the examples shown in FIGS. 7 and 8, and the example of the table for specifying the weight related to the summary is not limited to the example shown in FIG. 6. Moreover, the table for setting the weight related to the summary according to the present embodiment may be provided for each language, such as Japanese, English, and Chinese.
  • the information processing apparatus according to the present embodiment determines the type of weight related to the summary based on at least one of, for example, the audio information, the information about the user, the information about the application, the information about the environment, and the information about the device. In this case, it is possible to set the weight related to the summary using only the table for specifying the weight related to the summary as shown in FIG. 6.
  • for example, based on a recognition result based on at least one of the audio information, the information about the user, the information about the application, the information about the environment, and the information about the device, the information processing apparatus according to the present embodiment determines the type of weight related to the summary by selecting, from the table for specifying the weight related to the summary illustrated in FIG. 6, the type of weight related to the recognition result. Then, the information processing apparatus according to the present embodiment refers to the table for specifying the weight related to the summary illustrated in FIG. 6 and sets a weight for the vocabulary corresponding to the combinations of the determined weight type and vocabulary whose value is indicated by "1" (a sketch follows below).
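A minimal sketch of the two-table lookup just described, using illustrative stand-ins for the tables of FIGS. 6 to 8 (the actual table contents appear only in the drawings, which are not reproduced here):

```python
# cf. FIG. 8: user action -> type of weight related to the summary
WEIGHT_TYPE_BY_STATE = {
    "moving": "time",
    "in game": "game term",
    "meal": "cooking",
}

# cf. FIG. 6: (weight type, vocabulary) -> 1 (weighted) or 0 (not weighted)
WEIGHT_TABLE = {
    ("time", "AM"): 1,
    ("time", "when"): 1,
    ("game term", "item"): 0,
    ("cooking", "recipe"): 1,
}

def set_summary_weights(recognized_state: str) -> dict:
    """Select the weight type for a recognition result, then weight the
    vocabulary whose combination with that type is indicated by "1"."""
    weight_type = WEIGHT_TYPE_BY_STATE.get(recognized_state)
    return {vocab: 1.0
            for (wtype, vocab), flag in WEIGHT_TABLE.items()
            if wtype == weight_type and flag == 1}

print(set_summary_weights("moving"))  # {'AM': 1.0, 'when': 1.0}
```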
  • the information processing apparatus sets a weight related to the summary by performing, for example, any one of the following processes (a-1) to (a-5).
  • examples relating to the setting of weights related to summarization are not limited to the examples shown in (a-1) to (a-5) below.
  • the information processing apparatus can set a weight related to summarization according to a language recognized based on voice information.
  • examples of weight settings for summarization according to language include "if the language recognized based on the speech information is Japanese, increase the weight of verbs" and "if the language recognized based on the speech information is English, increase the weight of nouns".
  • the information processing apparatus according to the present embodiment may also set a weight related to the summary according to the situation around the user indicated by the information related to the environment, and a weight related to the summary according to the content (for example, the device type) indicated by the information related to the device.
  • (A-1) First example of setting of weight related to summary: an example of setting of weight related to summary based on user status indicated by user status information included in information related to user
  • for example, based on an operation of a device such as a smartphone by the user U1, the information processing apparatus according to the present embodiment recognizes that the user U1 is moving toward a destination. Then, the information processing apparatus according to the present embodiment sets the weight related to the summary corresponding to the recognition result by referring to the table for setting the weight related to the summary.
  • specifically, based on the recognition result that the user U1 is moving toward the destination, obtained as described above, the information processing apparatus according to the present embodiment specifies "time", which corresponds to the action "moving", as the type of weight related to the summary from the table for specifying the type of weight related to the summary illustrated in FIG. 8. Then, the information processing apparatus according to the present embodiment refers to the table for specifying the weight related to the summary illustrated in FIG. 6 and sets a weight for the vocabulary corresponding to the combinations of the specified weight type and vocabulary whose value is indicated by "1". When the table for specifying the weight related to the summary shown in FIG. 6 is used, weights are set for vocabularies such as "AM" and "when".
  • for example, when the user U1 operates a device such as a smartphone and starts a game application, the information processing apparatus according to the present embodiment recognizes that the user U1 is playing a game. Then, the information processing apparatus according to the present embodiment sets the weight related to the summary corresponding to the recognition result by referring to the table for setting the weight related to the summary.
  • specifically, based on the recognition result that the user U1 is playing a game, obtained as described above, the information processing apparatus according to the present embodiment specifies "game term", which corresponds to the action "in game", as the type of weight related to the summary from the table for specifying the type of weight related to the summary illustrated in FIG. 8.
  • then, the information processing apparatus according to the present embodiment refers to the table for specifying the weight related to the summary illustrated in FIG. 6 and sets a weight for the vocabulary corresponding to the combinations of the determined weight type and vocabulary whose value is indicated by "1".
  • alternatively, based on the recognition result that the user U1 is playing a game, obtained as described above, the information processing apparatus according to the present embodiment can also determine a weight type related to the recognition result, such as "game term", included in the table for specifying the weight related to the summary illustrated in FIG. 6, as the type of weight related to the summary.
  • then, the information processing apparatus according to the present embodiment refers to the table for specifying the weight related to the summary illustrated in FIG. 6 and sets a weight for the vocabulary corresponding to the combinations of the determined weight type and vocabulary whose value is indicated by "1".
  • the information processing apparatus according to the present embodiment can also set the weight related to the summary based on a recognition result of the state of the user U1 estimated from the detection result of a motion sensor, such as an acceleration sensor or an angular velocity sensor, provided in an apparatus such as a smartphone used by the user U1.
  • for example, when it is estimated that the user U1 is eating, "cooking", which corresponds to the action "meal", is specified as the type of weight related to the summary from the table for specifying the type of weight related to the summary illustrated in FIG. 8.
  • then, the information processing apparatus according to the present embodiment refers to the table for specifying the weight related to the summary illustrated in FIG. 6 and sets a weight for the vocabulary corresponding to the combinations of the specified weight type and vocabulary whose value is indicated by "1".
  • (A-2) Second example of weight setting for summarization: an example of setting weight for summarization based on voice information
  • the information processing apparatus sets weights for summarization based on voice information.
  • the information processing apparatus determines the type of weight related to the summary, for example, as follows based on the audio information.
  • when the average frequency band of the voice indicated by the voice information is, for example, 300 to 550 [Hz]: "male" is determined as the type of weight related to the summary.
  • when the average frequency band of the voice indicated by the voice information is, for example, 400 to 700 [Hz]: "female" is determined as the type of weight related to the summary.
  • when the sound pressure or volume of the sound indicated by the voice information is equal to or higher than a set first threshold value, or larger than the first threshold value: one or both of "anger" and "joy" are determined as the type of weight related to the summary.
  • examples of the first threshold value include a fixed value such as 72 [dB].
  • examples of the second threshold value include a fixed value such as 54 [dB].
  • note that the first threshold value and the second threshold value may change dynamically depending on the distance between a user such as the user U1 and a communication partner such as the communication partner U2.
  • for example, the threshold values may be raised by 6 [dB] each time the distance becomes 0.5 [m] shorter and lowered by 6 [dB] each time the distance becomes 0.5 [m] longer (a sketch follows below).
  • the distance may be estimated by, for example, arbitrary image processing on a captured image captured by an imaging device, or may be acquired by a distance sensor. When the distance is estimated, the process related to the distance estimation may be performed by the information processing apparatus according to the present embodiment or may be performed by an external apparatus of the information processing apparatus according to the present embodiment.
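A minimal sketch under one reading of the adjustment above: the threshold rises 6 [dB] per 0.5 [m] the partner is closer than a reference distance and falls 6 [dB] per 0.5 [m] farther; the reference distance and the base value are illustrative assumptions.

```python
def dynamic_threshold(base_db: float, distance_m: float,
                      reference_m: float = 1.0) -> float:
    """Adjust a sound-pressure threshold by 6 dB per 0.5 m of distance change."""
    steps = (reference_m - distance_m) / 0.5
    return base_db + 6.0 * steps

print(dynamic_threshold(72.0, 0.5))  # 0.5 m closer than reference -> 78.0 dB
print(dynamic_threshold(72.0, 1.5))  # 0.5 m farther than reference -> 66.0 dB
```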
  • the third threshold value and the fourth threshold value may be fixed values set in advance, or may be variable values that can be changed based on a user operation or the like.
  • further, the information processing apparatus according to the present embodiment can estimate the user's emotion (e.g., anger, joy, or sadness) by an arbitrary emotion estimation process based on the voice information, and determine the type of weight related to the summary corresponding to the estimated emotion.
  • the information processing apparatus according to the present embodiment may also change the strength of the weight related to emotion based on, for example, the rate of change of the fundamental frequency obtained from the speech information, the rate of change of the sound volume, or the like (a sketch of the voice-based determinations above follows below).
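A minimal sketch of the voice-based determinations above; the 300 to 550 [Hz] and 400 to 700 [Hz] bands and the 72 [dB] first threshold come from this description, while the extraction of the features themselves from the voice information is left out.

```python
def weight_types_from_voice(avg_freq_hz: float,
                            sound_pressure_db: float,
                            first_threshold_db: float = 72.0) -> list:
    """Determine types of weights related to the summary from voice features."""
    types = []
    if 300.0 <= avg_freq_hz <= 550.0:
        types.append("male")
    if 400.0 <= avg_freq_hz <= 700.0:  # the two bands overlap; both may apply
        types.append("female")
    if sound_pressure_db >= first_threshold_db:
        types.extend(["anger", "joy"])  # one or both may be used
    return types

print(weight_types_from_voice(480.0, 75.0))  # ['male', 'female', 'anger', 'joy']
```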
  • the information processing apparatus according to the present embodiment may determine the type of weight related to the summary using the tables for specifying the types of weights related to the summary as shown in FIGS. 7 and 8, or may determine the weight related to the summary using only the table for specifying the weight related to the summary as shown in FIG. 6.
  • then, as in the first example shown in (a-1), the information processing apparatus according to the present embodiment refers to the table for specifying the weight related to the summary illustrated in FIG. 6 and sets a weight for the vocabulary corresponding to the combinations of the specified weight type and vocabulary whose value is indicated by "1".
  • (A-3) Third example of setting of weight related to summary: an example of setting of weight related to summary based on the execution state of the application indicated by the information regarding the application. The information processing apparatus according to the present embodiment sets the weight related to the summary based on the execution state of the application.
  • for example, when the user U1 operates a device such as a smartphone to start a schedule application and confirms a destination, the information processing apparatus according to the present embodiment specifies, based on the execution state of the schedule application, "time" and "location", which correspond to the schedule content "place move (biz)", as the types of weight related to the summary from the table for specifying the type of weight related to the summary shown in FIG. 7. Then, the information processing apparatus according to the present embodiment refers to the table for specifying the weight related to the summary illustrated in FIG. 6 and sets a weight for the vocabulary corresponding to the combinations of the specified weight type and vocabulary whose value is indicated by "1". When the table for specifying the weight related to the summary shown in FIG. 6 is used, weights are set for vocabularies such as "AM", "Shibuya", "when", and "where".
  • the information processing apparatus can determine the type of weight related to the summary based on the properties of the application being executed, for example, and set the weight related to the summary as described below.
  • the map application is executed: “Time”, “Location”, “Person name”, etc. are determined as the types of weights related to the summary.
  • the transfer guidance application is executed: “Time”, “Place”, “Train”, etc. are determined as the types of weights related to the summary.
  • when an application for smoothly advancing questions for a hearing about Japan is being executed: "question", "Japan", and the like are determined as the types of weights related to the summary (see the sketch below).
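A minimal sketch of choosing weight types from the properties of the running application, per the three examples above; the application identifiers are assumptions.

```python
WEIGHT_TYPES_BY_APP = {
    "map": ["time", "location", "person name"],
    "transfer guidance": ["time", "place", "train"],
    "hearing support": ["question", "Japan"],
}

def weight_types_for_app(running_app: str) -> list:
    """Determine types of weights related to the summary from the running app."""
    return WEIGHT_TYPES_BY_APP.get(running_app, [])

print(weight_types_for_app("map"))  # ['time', 'location', 'person name']
```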
  • (A-4) Fourth example of setting of weight related to summary: an example of setting of weight related to summary based on user operation indicated by user operation information included in information related to user
  • the information processing apparatus according to the present embodiment sets the weight related to the summary based on the user's operation.
  • for example, the information processing apparatus according to the present embodiment determines the type of weight related to the summary selected by an operation of selecting the type of weight related to the summary (an example of the user's operation) as the type of weight to use.
  • further, when a predetermined operation such as a speech recognition start operation related to the summary is performed, the information processing apparatus according to the present embodiment may automatically set the type of weight related to the summary that is associated in advance with the predetermined operation. As an example, when a speech recognition start operation related to the summary is performed, "question" or the like is determined as the type of weight related to the summary.
  • then, as in the first example shown in (a-1), the information processing apparatus according to the present embodiment refers to the table for specifying the weight related to the summary illustrated in FIG. 6 and sets a weight for the vocabulary corresponding to the combinations of the specified weight type and vocabulary whose value is indicated by "1".
  • the information processing apparatus according to the present embodiment can also set the weight related to the summary by combining two or more of the above (a-1) to (a-4).
  • the information processing apparatus according to the present embodiment performs the summarization process according to the first information processing method and summarizes, for example, the content of the utterance indicated by the voice information generated by a microphone connected to the eyewear-type apparatus illustrated in FIG. 1.
  • the information processing apparatus according to the present embodiment summarizes a character string indicated by the voice text information based on the voice information, for example.
  • for example, the information processing apparatus according to the present embodiment summarizes the content of the utterance by an objective function that uses the weight related to the summary set by the processing shown in (a) above, as in Equation 1 below.
  • in Equation 1, W is a weight related to the summary.
  • a_i shown in Equation 1 is a parameter for adjusting the contribution ratio of each weight related to the summary and takes, for example, a real number from 0 to 1.
  • z_{y_i} shown in Equation 1 is a binary variable that indicates "1" if the phrase y_i is included in the summary and "0" if the phrase y_i is not included.
  • note that the information processing apparatus according to the present embodiment is not limited to the method using the objective function with the weight related to the summary shown in Equation 1, and can use any method capable of summarizing the content of the utterance using the set weight related to the summary (a sketch in this spirit follows below).
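Equation 1 itself is not reproduced in this text; the following is a minimal sketch in its spirit, assuming an objective of the form sum_i a_i * W(y_i) * z_{y_i} maximized under a length budget, with z_{y_i} the 0/1 inclusion variable described above. The greedy solver and the word budget are illustrative assumptions, not the disclosed method.

```python
def summarize(phrases: list, weights: dict, contribution: dict,
              max_words: int) -> list:
    """Greedily pick phrases with positive weighted score under a word budget."""
    def score(phrase: str) -> float:
        # Sum a_i * W(word) over the vocabulary the phrase contains.
        return sum(contribution.get(w, 1.0) * weights.get(w, 0.0)
                   for w in phrase.split())

    chosen, used = set(), 0
    for phrase in sorted(phrases, key=score, reverse=True):
        length = len(phrase.split())
        if score(phrase) > 0 and used + length <= max_words:
            chosen.add(phrase)  # z_{y_i} = 1
            used += length
    return [p for p in phrases if p in chosen]  # keep original order

phrases = ["I want to go", "to the AM meeting", "um you know"]
print(summarize(phrases, {"AM": 1.0, "meeting": 1.0}, {}, max_words=5))
# ['to the AM meeting']
```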
  • FIG. 3 shows an example of the result of the summary process according to the first information processing method.
  • FIG. 3A shows an example of the content of an utterance before it is summarized.
  • B of FIG. 3 shows an example of the content of the summarized utterance, and
  • C of FIG. 3 shows another example of the content of the summarized utterance.
  • when the content of the utterance is summarized as shown in B of FIG. 3, the content of the utterance is simpler than before summarization. Therefore, even if the communication partner U2 cannot fully understand English, summarizing the content of the utterance as shown in B of FIG. 3 can increase the possibility that the communication partner U2 understands the content that the user U1 is asking.
  • C of FIG. 3 shows an example in which the information processing apparatus according to the present embodiment further performs morphological analysis on the summary result illustrated in B of FIG. 3 and generates divided text by combining morphemes based on the result of the morphological analysis.
  • for example, when the language of the character string indicated by the speech text information corresponding to the content of the utterance is Japanese, the information processing apparatus according to the present embodiment generates divided text in units in which each main part of speech (noun, verb, adjective, adverb) is combined with the other morphemes (a sketch follows below). For example, when the language of the character string indicated by the speech text information corresponding to the content of the utterance is English, the information processing apparatus according to the present embodiment further sets 5W1H as the divided text.
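A minimal sketch of generating divided text in units that combine each main part of speech with the morphemes that follow it; the part-of-speech-tagged input is assumed to come from an arbitrary morphological analyzer, and the chunking rule is an illustrative reading of the description above.

```python
MAIN_POS = {"noun", "verb", "adjective", "adverb"}

def divide(morphemes: list) -> list:
    """morphemes: list of (surface, pos) pairs from a morphological analyzer."""
    chunks, current = [], ""
    for surface, pos in morphemes:
        if pos in MAIN_POS and current:
            chunks.append(current)  # a main part of speech opens a new unit
            current = surface
        else:
            current += surface      # attach other morphemes to the unit
    if current:
        chunks.append(current)
    return chunks

print(divide([("渋谷", "noun"), ("に", "particle"), ("行く", "verb")]))
# ['渋谷に', '行く']
```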
  • when the content of the utterance is summarized as shown in C of FIG. 3, the content of the utterance is simplified more than the summary result shown in B of FIG. 3. Therefore, even if the communication partner U2 cannot fully understand English, summarizing the content of the utterance as shown in C of FIG. 3 can further increase the possibility that the communication partner U2 understands the content that the user U1 is asking, compared with the case of obtaining the summary result shown in B of FIG. 3.
  • the information processing apparatus according to the present embodiment may further translate the content of the utterance summarized by the summarization process shown in (b) above into another language, for example. As described above, the information processing apparatus according to the present embodiment translates the content of the summarized utterance from the first language corresponding to the utterance into a second language different from the first language.
  • for example, the information processing apparatus according to the present embodiment specifies the position where the user U1 exists, and when the language of the character string indicated by the speech text information corresponding to the content of the utterance is different from the official language at the specified position, translates the content of the summarized utterance into the official language.
  • the position where the user U1 exists is specified using, for example, position information acquired from a wearable device worn by the user U1, such as the eyewear-type device shown in FIG. 1, or a communication device such as a smartphone possessed by the user U1.
  • examples of the position information include data indicating the detection result of a device capable of specifying a position, such as a GNSS (Global Navigation Satellite System) device (or data indicating the estimation result of a device capable of estimating the position by an arbitrary method).
  • further, for example, when the language of the character string indicated by the speech text information corresponding to the content of the utterance is different from a set language, the information processing apparatus according to the present embodiment may translate the content of the summarized utterance into the set language.
  • the information processing apparatus according to the present embodiment translates the content of the summarized utterance into another language by processing of an arbitrary algorithm capable of translation into another language.
  • FIG. 4 shows an example of the result of the translation processing according to this embodiment.
  • A of FIG. 4 shows the summary result shown in C of FIG. 3 as an example of the content of the summarized utterance before translation.
  • B of FIG. 4 shows an example in which the summary result shown in C of FIG. 3 is translated into another language by the translation process; here, an example in which the summary result is translated into Japanese is shown.
  • hereinafter, a translation result obtained by translating divided text, such as the summary result shown in C of FIG. 3, may be referred to as "divided translation text".
  • as shown in B of FIG. 4, the content of the summarized utterance is translated into Japanese, which is the native language of the communication partner U2, so that the possibility that the communication partner U2 can understand the content that the user U1 is asking can be further increased compared with the case where the content of the summarized utterance is not translated.
  • (D) An example of the notification control process according to the second information processing method. The information processing apparatus according to the present embodiment causes the content of the utterance indicated by the voice information, summarized by the summarization process shown in (b) above, to be notified. Further, when the content of the summarized utterance is translated into another language by further performing the translation process shown in (c) above, the information processing apparatus according to the present embodiment causes the translation result to be notified.
  • for example, the information processing apparatus according to the present embodiment causes the summarized content of the utterance (or the translation result) to be notified as the notification content by one or both of a visual notification and an auditory notification.
  • FIG. 5 shows an example of the result of the notification control process according to the present embodiment.
  • FIG. 5 shows an example in which the translation result is audibly notified by outputting a voice indicating the translation result from the voice output device connected to the eyewear-type device worn by the user U1.
  • FIG. 5 shows an example in which the translation result shown in B of FIG. 4 is notified.
  • further, FIG. 5 shows an example in which, based on the voice information, the sound pressure at the portion corresponding to an utterance portion with strong sound pressure (the "why" portion shown in FIG. 5) is made stronger than at other portions.
  • in addition, FIG. 5 shows an example in which, when the voice indicating the translation result is output, the breaks between the divided texts are notified by inserting sound feedback, as indicated by the symbol "S" in FIG. 5.
  • the notification realized by the notification control process according to the second information processing method is not limited to the example shown in FIG. 5. Another example of the notification realized by the notification control process according to the second information processing method will be described later.
  • in this way, the content (translation result) of the summarized utterance translated into Japanese, which is the native language of the communication partner U2, is notified.
  • as described above, the information processing apparatus according to the present embodiment summarizes the utterance content indicated by the voice information based on the user's utterance, on the basis of the information indicating the weight related to the summary.
  • the weight related to the summary is set based on, for example, one or more of the voice information, the user state, the application execution state, and the user operation. Further, as described above, the information processing apparatus according to the present embodiment summarizes the content of the utterance by the objective function using the set weight related to the summary, as shown in Equation 1 above, for example.
  • the information processing apparatus can perform one or more of the following processes (1) to (3), for example, as the summary process.
  • examples of the start condition for the summarization process according to the present embodiment include the following: a condition related to the non-speech period during which no utterance continues; a condition related to the state of speech recognition for acquiring the utterance content from the speech information; a condition related to the content of the utterance; and a condition related to the elapsed time since the speech information was obtained.
  • FIGS. 9A to 9C are explanatory diagrams for explaining an example of the summary processing according to the first information processing method, and show an outline of the start timing of the summary processing.
  • FIGS. 9A to 9C show an outline of processing in each start condition.
  • (1-1) First example of the start condition: an example in which the start condition is a condition related to the non-speech period.
  • examples of the condition related to the non-speech period include a condition related to the length of the non-speech period.
  • when the start condition is a condition related to the non-speech period, the information processing apparatus according to the present embodiment determines that the start condition is satisfied when the non-speech period exceeds a set predetermined period, or when the non-speech period is equal to or longer than the predetermined period.
  • the period according to the first example of the start condition may be a fixed period that is set in advance, or may be a variable period that can be changed based on a user operation or the like.
  • the "silent period" shown in A of FIG. 9A corresponds to the non-speech period.
  • the information processing apparatus according to the present embodiment detects, for example, a voice section in which voice is present based on the voice information. Then, when the information processing apparatus according to the present embodiment detects a silent period exceeding the set period after the voice section is detected, or detects a silent period equal to or longer than the set period, it starts the summarization process, treating the detection as a trigger for starting the summarization process (hereinafter referred to as "summary trigger"); a sketch follows below.
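A minimal sketch of this first start condition, assuming frame-level sound levels as input; the energy-based voice detection and the 200-frame period (about 2 s at 10 ms frames) are illustrative assumptions.

```python
def silence_triggers_summary(frame_dbs: list,
                             speech_db: float = -30.0,
                             silence_frames: int = 200) -> bool:
    """Return True when a silent run after a voice section reaches the set length."""
    heard_speech = False
    silent_run = 0
    for db in frame_dbs:
        if db >= speech_db:
            heard_speech = True  # a voice section has been detected
            silent_run = 0
        else:
            silent_run += 1
            if heard_speech and silent_run >= silence_frames:
                return True      # summary trigger
    return False
```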
  • (1-2) Second example of the start condition: an example in which the start condition is a first condition related to the state of speech recognition.
  • examples of the first condition related to the state of speech recognition include a condition related to detection of a speech recognition stop request.
  • when the start condition is the first condition related to the state of speech recognition, the information processing apparatus according to the present embodiment determines that the start condition is satisfied based on detection of a speech recognition stop request; for example, it determines that the start condition is satisfied when a speech recognition stop request is detected.
  • for example, after speech recognition is started based on the "speech recognition start operation" illustrated in B of FIG. 9A, when a speech recognition stop request including a speech recognition stop command based on the "speech recognition stop operation" is detected, the information processing apparatus according to the present embodiment starts the summarization process, treating the detection as a summary trigger.
  • examples of the voice recognition start operation and the voice recognition stop operation include operations on an arbitrary UI (User Interface) related to voice recognition.
  • note that a speech recognition stop request is not limited to one obtained based on the speech recognition stop operation.
  • for example, the speech recognition stop request may be generated by a device that performs the speech recognition process when an error occurs during the speech recognition process or when interrupt processing occurs during the speech recognition process.
  • (1-3) Third example of the start condition: an example in which the start condition is a second condition related to the state of speech recognition.
  • examples of the second condition related to the state of speech recognition include a condition related to completion of speech recognition.
  • when the start condition is the second condition related to the state of speech recognition, the information processing apparatus according to the present embodiment determines that the start condition is satisfied based on the completion of the speech recognition.
  • the information processing apparatus according to the present embodiment determines that the start condition is satisfied, for example, when the completion of speech recognition is detected.
  • for example, when the result of the speech recognition process is obtained, as indicated by "acquisition of speech recognition result" in A of FIG. 9B, the information processing apparatus according to the present embodiment starts the summarization process, treating the acquisition as a summary trigger.
  • start condition an example in which the start condition is the first condition related to the content of the utterance
  • the first condition related to the content of the utterance is based on the content of the utterance indicated by the voice information. Examples include conditions relating to detection of a predetermined word.
  • the predetermined start condition is the first condition regarding the content of the utterance
  • the information processing apparatus starts the condition based on the detection of the predetermined word from the content of the utterance indicated by the voice information. Is determined to be satisfied.
  • the information processing apparatus according to the present embodiment determines that the start condition is satisfied, for example, when a predetermined word is detected from the utterance content indicated by the audio information.
  • Examples of the predetermined word relating to the first condition regarding the content of the utterance include a word called a filler word.
  • the predetermined words related to the first condition relating to the content of the utterance may be preset fixed words that cannot be added, deleted, or changed, or may be added, deleted, or changed based on a user operation or the like. May be possible.
• "Et" shown in B of FIG. 9B corresponds to an example of a filler word (an example of the predetermined word).
• The information processing apparatus according to the present embodiment starts the summarization process, with, for example, the detection of a filler word from the character string indicated by the voice text information obtained based on the voice information serving as the summarization trigger.
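As a concrete illustration, the following is a minimal sketch of such a filler-word start-condition check; the filler-word list and the function name are illustrative assumptions, not part of the present embodiment.

```python
# Hypothetical filler-word list; as described above, it may be a
# fixed preset or editable based on a user operation.
FILLER_WORDS = {"et", "uh", "um", "well"}

def start_condition_satisfied(voice_text: str) -> bool:
    """Return True when a predetermined word (here, a filler word) is
    detected in the character string indicated by the voice text
    information obtained based on the voice information."""
    tokens = voice_text.lower().split()
    return any(token.strip(",.!?") in FILLER_WORDS for token in tokens)

# Detection of a filler word serves as the summarization trigger.
if start_condition_satisfied("um what is a recommended sightseeing spot"):
    print("summarization trigger: start the summarization process")
```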
• Next, an example in which the start condition is the second condition related to the content of the utterance will be described.
• Examples of the second condition related to the content of the utterance include a condition related to detection of stagnation of the utterance based on the utterance content indicated by the voice information.
• When the predetermined start condition is the second condition related to the content of the utterance, the information processing apparatus according to the present embodiment determines that the start condition is satisfied based on the detection of stagnation based on the voice information.
• The information processing apparatus according to the present embodiment determines that the start condition is satisfied, for example, when stagnation is detected based on the voice information.
• The information processing apparatus according to the present embodiment detects stagnation based on the voice information by an arbitrary method capable of detecting stagnation based on the voice information or estimating stagnation based on the voice information, such as "a method of detecting a voiced pause (including syllable extension) from the voice information" or "a method of detecting words associated with stagnation from the character string indicated by the voice text information obtained based on the voice information".
• The information processing apparatus according to the present embodiment starts the summarization process, with, for example, the detection or estimation of stagnation serving as the summarization trigger.
• Next, an example in which the start condition is a condition related to the elapsed time since the voice information was obtained will be described.
• Examples of the condition related to the elapsed time since the voice information was obtained include a condition related to the length of the elapsed time.
• When the predetermined start condition is the condition related to the elapsed time since the voice information was obtained, the information processing apparatus according to the present embodiment determines that the start condition is satisfied when the elapsed time exceeds a set period, or when the elapsed time is equal to or longer than the set period.
• The period according to the sixth example of the start condition may be a fixed period set in advance, or a variable period that can be changed based on a user operation or the like.
• The information processing apparatus according to the present embodiment starts the summarization process, with, for example, the elapse of the set period after it is detected that the voice information was obtained serving as the summarization trigger.
• The start condition may also be a condition combining two or more of the start condition according to the first example shown in (1-1) above to the start condition according to the sixth example shown in (1-6) above.
• In that case, the information processing apparatus according to the present embodiment starts the summarization process, with the satisfaction of any one of the combined start conditions serving as the summarization trigger.
• Note that the information processing apparatus according to the present embodiment does not perform the summarization process when it is determined that a set summary processing exclusion condition (hereinafter referred to as "summary exclusion condition") is satisfied.
• Examples of the summary exclusion condition include a condition related to gesture detection.
• The information processing apparatus according to the present embodiment determines that the summary exclusion condition is satisfied when a set predetermined gesture is detected.
• The predetermined gesture related to the summary exclusion condition may be a fixed gesture set in advance, or may be a gesture that can be added, deleted, or changed based on a user operation or the like.
• The information processing apparatus according to the present embodiment determines whether or not the predetermined gesture related to the summary exclusion condition has been performed by, for example, performing image processing on a captured image obtained by imaging with an imaging device, or estimating a motion based on a detection result of a motion sensor such as an acceleration sensor or an angular velocity sensor.
• The summary exclusion condition according to the present embodiment is not limited to the above-described condition related to gesture detection.
• For example, the summary exclusion condition according to the present embodiment may be an arbitrary condition set as the summary exclusion condition, such as "an operation for invalidating the function of performing the summarization process, such as pressing a button for invalidating the function, is detected" or "the processing load of the information processing apparatus according to the present embodiment has become larger than a set threshold".
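Putting the start conditions and the summary exclusion condition together, the overall gating of the summarization process can be pictured as in the following minimal sketch; the concrete condition callbacks are illustrative assumptions, not APIs of the present embodiment.

```python
from typing import Callable, Iterable

def should_start_summarization(
    start_conditions: Iterable[Callable[[], bool]],
    exclusion_conditions: Iterable[Callable[[], bool]],
) -> bool:
    """The summarization process starts when any one of the (possibly
    combined) start conditions is satisfied, unless a summary
    exclusion condition is satisfied."""
    if any(condition() for condition in exclusion_conditions):
        return False  # e.g. a set gesture was detected, or the load is high
    return any(condition() for condition in start_conditions)

# Illustrative condition callbacks (assumptions for this sketch):
stop_request_detected = lambda: True    # first example of the start condition
elapsed_time_exceeded = lambda: False   # sixth example of the start condition
disable_button_pressed = lambda: False  # one possible summary exclusion condition

print(should_start_summarization(
    [stop_request_detected, elapsed_time_exceeded],
    [disable_button_pressed],
))  # -> True
```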
• Further, the information processing apparatus according to the present embodiment changes the level of the summary of the utterance content based on one or both of the utterance period specified based on the voice information and the number of characters specified based on the voice information.
• The information processing apparatus according to the present embodiment changes the level of the summary of the utterance content by, for example, limiting the number of characters indicated by the summarized utterance content.
• The information processing apparatus according to the present embodiment limits the number of characters indicated by the summarized utterance content, for example, by preventing the number of characters indicated by the summarized utterance content from exceeding a set upper limit value.
• By limiting the number of characters indicated by the summarized utterance content, the number of characters indicated by the summarized utterance content, that is, the summary amount, can be reduced automatically.
• The utterance period is specified, for example, by detecting a voice section in which voice is present based on the voice information. The number of characters corresponding to the utterance is specified by counting the number of characters in the character string indicated by the voice text information based on the voice information.
• When changing the level of the summary of the utterance content based on the utterance period, the information processing apparatus according to the present embodiment changes the level, for example, when the utterance period exceeds a set period, or when the utterance period is equal to or longer than the set period.
• The period used when changing the level of the summary of the utterance content based on the utterance period may be a fixed period set in advance, or a variable period that can be changed based on a user operation or the like.
• When changing the level of the summary of the utterance content based on the number of characters specified based on the voice information, the information processing apparatus according to the present embodiment changes the level, for example, when the number of characters is larger than a set threshold, or when the number of characters is equal to or larger than the set threshold.
• The threshold used when changing the level of the summary of the utterance content based on the number of characters specified based on the voice information may be a fixed threshold set in advance, or a variable threshold that can be changed based on a user operation or the like.
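As an illustration of how such a character-count limit could be applied, the following is a minimal sketch; the mapping from utterance period to upper limit is an assumption made for this example only.

```python
def summary_char_limit(utterance_period_s: float) -> int:
    """Assumed mapping from the utterance period to an upper limit on
    the number of characters of the summarized utterance content."""
    if utterance_period_s >= 30.0:
        return 40   # long utterance: summarize aggressively
    if utterance_period_s >= 10.0:
        return 80
    return 120      # short utterance: a looser limit

def limit_summary(summary: str, utterance_period_s: float) -> str:
    """Prevent the number of characters indicated by the summarized
    utterance content from exceeding the set upper limit value."""
    return summary[:summary_char_limit(utterance_period_s)]
```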
• The information processing apparatus according to the present embodiment can further perform a translation process for translating the content of the utterance summarized by the summarization process according to the first information processing method into another language. That is, the information processing apparatus according to the present embodiment translates content in the first language corresponding to the utterance into a second language different from the first language.
• In the translation process, the reliability of the translation result may be set for each translation unit.
• A translation unit is a unit in which translation is performed in the translation process.
• Examples of the translation unit include a set fixed unit, such as each word or each group of one or two or more phrases.
• The translation unit may be dynamically set according to, for example, the language (first language) corresponding to the utterance.
• The translation unit may also be changeable based on, for example, a user setting operation.
• The reliability of the translation result is, for example, an index indicating the certainty of the translation result, expressed, for example, as a value from 0 [%] (indicating the lowest reliability) to 100 [%] (indicating the highest reliability).
• The reliability of the translation result is obtained, for example, by using an arbitrary machine learning result, such as a machine learning result using feedback on translation results. Note that the reliability of the translation result is not limited to one obtained using machine learning, and may be obtained by any method capable of obtaining the certainty of the translation result.
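As a concrete illustration, a per-translation-unit result carrying such a reliability value could be represented as follows; this is a sketch under assumed names, not the actual data format of the present embodiment.

```python
from dataclasses import dataclass

@dataclass
class TranslationUnit:
    source: str         # text in the first language corresponding to the utterance
    translated: str     # text in the second language
    reliability: float  # certainty of the translation result, 0-100 [%]

units = [
    TranslationUnit("osusume", "Recommendation", 92.0),
    TranslationUnit("kankou", "Sightseeing", 85.0),
    TranslationUnit("asakusa", "Asakusa", 41.0),
]
```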
  • the information processing apparatus can perform, for example, one or both of the following (i) and (ii) as the translation processing.
• The information processing apparatus according to the present embodiment does not perform the translation process when it is determined that a set translation processing exclusion condition is satisfied.
• Examples of the exclusion condition for the translation process according to the present embodiment include a condition related to gesture detection.
• The information processing apparatus according to the present embodiment determines that the translation processing exclusion condition is satisfied when a set predetermined gesture is detected.
• The predetermined gesture related to the translation process may be a fixed gesture set in advance, or may be a gesture that can be added, deleted, or changed based on a user operation or the like.
• Examples of the fixed gesture set in advance include gestures related to non-verbal communication, such as hand signs.
• The information processing apparatus according to the present embodiment determines whether or not the predetermined gesture related to the translation process has been performed by, for example, performing image processing on a captured image obtained by imaging with an imaging device, or estimating a motion based on a detection result of a motion sensor such as an acceleration sensor or an angular velocity sensor.
• The exclusion condition for the translation process according to the present embodiment is not limited to the condition related to gesture detection as described above.
• For example, the exclusion condition for the translation process according to the present embodiment may be an arbitrary condition set as the exclusion condition, such as "an operation for invalidating the function of performing the translation process, such as pressing a button for invalidating the function, is detected" or "the processing load of the information processing apparatus according to the present embodiment has become larger than a set threshold".
• The exclusion condition for the translation process according to the present embodiment may be the same condition as the summary exclusion condition described above, or may be a different condition.
• The information processing apparatus according to the present embodiment can also retranslate content translated into another language back into the language before translation.
• The information processing apparatus according to the present embodiment performs such retranslation, for example, when an operation for performing the retranslation process is detected, such as when a button for performing retranslation is pressed.
• The retranslation trigger is not limited to the detection of the operation for performing the retranslation process as described above.
• For example, the information processing apparatus according to the present embodiment can automatically perform retranslation based on the reliability of the translation result set for each translation unit.
• The information processing apparatus according to the present embodiment performs retranslation, with the case where the reliability of the translation result set for each translation unit is equal to or less than a set threshold, or less than the set threshold, serving as the retranslation trigger.
• Furthermore, the information processing apparatus according to the present embodiment may perform the summarization process using the result of the retranslation.
• For example, when words included in the retranslated content are present in the content of the utterance indicated by the voice information acquired after the retranslation, the information processing apparatus according to the present embodiment includes those words in the summarized utterance content.
• The summarization process using the result of the retranslation as described above makes it possible, for example, to adjust the summary so that, when the same words as before the retranslation appear in the content uttered by the user, those words are not deleted from the summary corresponding to the utterance.
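A hedged sketch of the automatic retranslation trigger described above, reusing the TranslationUnit sketch from earlier; the `retranslate` callback and the threshold value stand in for whatever translation back end is used and are assumptions.

```python
RELIABILITY_THRESHOLD = 50.0  # assumed threshold, 0-100 [%]

def maybe_retranslate(unit, retranslate):
    """Automatically retranslate a translation unit back into the
    language before translation when the reliability set for the
    unit is at or below the set threshold."""
    if unit.reliability <= RELIABILITY_THRESHOLD:
        return retranslate(unit.translated)  # retranslation trigger
    return None  # reliability is high enough; keep the translation
```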
• The information processing apparatus according to the present embodiment notifies, as the notification content, the utterance content indicated by the voice information that has been summarized by the summarization process according to the first information processing method.
• When the translation process is performed, the information processing apparatus according to the present embodiment notifies the translation result.
  • the information processing apparatus notifies the notification content by one or both of notification by a visual method and notification by an auditory method, for example.
  • FIG. 10 is an explanatory diagram showing an example of notification by a visual method realized by the notification control process according to the second information processing method.
  • FIG. 10 shows an example when the information processing apparatus according to the present embodiment displays the translation result on the display screen of the smartphone.
  • the information processing apparatus can perform one or more of the following processes (I) to (VII) as the notification control process, for example.
• In the following, the case where the information processing apparatus according to the present embodiment notifies the translation result will be described as an example.
  • the information processing apparatus according to the present embodiment can also notify the content of the summarized utterance before translation in the same manner as when the translation result is notified.
  • FIG. 11 to FIG. 21 are explanatory diagrams for explaining an example of the notification control processing according to the second information processing method.
  • an example of the notification control process according to the second information processing method will be described with reference to FIGS. 11 to 21 as appropriate.
• Notification in the word order of the translated language
• The information processing apparatus according to the present embodiment notifies the translation result in a word order corresponding to the other language into which the content has been translated.
• The word order corresponding to the other language may be a fixed word order set in advance, or may be changeable based on a user operation or the like.
• The information processing apparatus according to the present embodiment notifies the translation result based on the reliability for each translation unit by performing, for example, one or both of the following processes (II-1) and (II-2).
• When displaying the translation result visually on the display screen of a display device, the information processing apparatus according to the present embodiment realizes preferential notification of translation results with high reliability by the way the translation results are displayed.
• When the translation result is audibly notified by voice from a voice output device, the information processing apparatus according to the present embodiment may realize preferential notification of translation results with high reliability by, for example, the order of notification.
• An example of the notification realized by the notification control process based on the reliability for each translation unit according to the first example will be described, taking as an example the case where the translation result is displayed visually on the display screen of the display device.
  • FIG. 11 shows a first example in the case where the translation result is displayed on the display screen of the display device, and shows an example in which the translation result with high reliability is notified preferentially.
  • “Recommendation”, “Sightseeing”, “Directions”, “Tell me”, and “Asakusa” correspond to the translation results for each translation unit.
  • FIG. 11 shows an example in which lower reliability is set in the order of “recommended”, “sightseeing”, “direction”, “tell me”, and “Asakusa”.
• The information processing apparatus according to the present embodiment displays the translation results for each translation unit on the display screen so that they are displayed hierarchically in descending order of reliability.
• The hierarchical display is realized by threshold processing using, for example, the reliability for each translation unit and one or more thresholds related to determination of the hierarchy on which each translation result is displayed.
  • the threshold for hierarchical display may be a fixed value set in advance, or may be a variable value that can be changed based on a user operation or the like.
• When displaying the translation results for a plurality of translation units on the same hierarchy, the information processing apparatus according to the present embodiment arranges the translation results for the plurality of translation units in a predetermined order, for example from left to right in descending order of reliability, in the display screen area corresponding to the hierarchy.
• When, as a result of the threshold processing, there are a plurality of translation results whose reliability is greater than a predetermined threshold, or a plurality of translation results whose reliability is equal to or greater than the predetermined threshold, the information processing apparatus according to the present embodiment may display the plurality of translation results together in a predetermined area of the display screen, as shown in B of FIG. 11, for example.
• Examples of the predetermined threshold include one or more of the thresholds used for the threshold processing.
• Examples of the predetermined area include the display screen area corresponding to the hierarchy associated with the threshold processing based on the predetermined threshold.
• By the display as shown in FIG. 11, "the translation result for each translation unit for which a high reliability (corresponding to a score) in the translation process is set is displayed at the top" and "translation results for each translation unit whose reliability exceeds the predetermined threshold are displayed together" are realized.
• Note that the display example in the case where a translation result with high reliability is preferentially notified is not limited to the example shown in FIG. 11.
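A minimal sketch of the threshold processing behind such a hierarchical display, grouping per-unit translation results into tiers by reliability; the threshold values are assumptions.

```python
TIER_THRESHOLDS = [80.0, 50.0]  # assumed thresholds for the hierarchy

def tier_of(reliability: float) -> int:
    """Return the hierarchy (0 = topmost) on which a translation
    unit is displayed, determined by threshold processing."""
    for tier, threshold in enumerate(TIER_THRESHOLDS):
        if reliability >= threshold:
            return tier
    return len(TIER_THRESHOLDS)  # lowest hierarchy

def hierarchical_layout(units):
    """Group units by hierarchy; within a hierarchy, arrange them
    left to right in descending order of reliability."""
    tiers = {}
    for unit in units:
        tiers.setdefault(tier_of(unit.reliability), []).append(unit)
    for members in tiers.values():
        members.sort(key=lambda u: u.reliability, reverse=True)
    return tiers
```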
• When displaying the translation result visually on the display screen of the display device, the information processing apparatus according to the present embodiment realizes notification emphasized according to the reliability by the way the translation result is displayed.
• When the translation result is audibly notified by voice from the voice output device, the information processing apparatus according to the present embodiment may realize notification emphasized according to the reliability by, for example, changing the sound pressure, the volume, and the like of the voice based on the reliability.
• An example of the notification realized by the notification control process based on the reliability for each translation unit according to the second example will be described, taking as an example the case where the translation result is displayed visually on the display screen of the display device.
• The information processing apparatus according to the present embodiment emphasizes and displays the translation result according to the reliability by, for example, displaying each translation result for each translation unit in a size corresponding to the reliability.
• FIG. 12 shows a second example in the case where the translation result is displayed on the display screen of the display device, and shows a first example in which the translation result is displayed in an emphasized manner according to the reliability.
  • “Recommendation”, “Sightseeing”, “Directions”, “Tell me”, and “Asakusa” correspond to the translation results for each translation unit.
  • FIG. 12 shows an example in which lower reliability is set in the order of “recommended”, “sightseeing”, “direction”, “tell me”, and “Asakusa”.
• FIG. 12 shows an example in which the information processing apparatus according to the present embodiment displays the translation results for each translation unit in a size corresponding to the reliability, in addition to performing the notification control process based on the reliability for each translation unit according to the first example. Note that, when performing the notification control process based on the reliability for each translation unit according to the second example, the information processing apparatus according to the present embodiment does not necessarily have to preferentially notify a translation result with high reliability as in the hierarchical display shown in FIG. 11.
• The information processing apparatus according to the present embodiment displays each translation result for each translation unit in a size corresponding to the reliability, for example, as shown in FIG. 12.
• For example, the information processing apparatus according to the present embodiment refers to "a table (or database) in which the reliability is associated with a display size used when displaying the translation result for each translation unit on the display screen", and thereby displays the translation result for each translation unit in a size corresponding to the reliability.
• By the display as shown in FIG. 12, "the translation result for each translation unit for which a high reliability (corresponding to a score) in the translation process is set is displayed in a conspicuously large size" is realized.
• Note that the display example in the case of displaying each translation result for each translation unit in a size corresponding to the reliability is not limited to the example shown in FIG. 12.
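The table-based size lookup mentioned above might look like the following sketch; the reliability bands and font sizes are illustrative assumptions.

```python
# Assumed table associating reliability bands with display sizes.
SIZE_TABLE = [
    (80.0, 32),  # reliability of 80 [%] or more -> 32 pt
    (50.0, 24),
    (0.0, 16),
]

def display_size(reliability: float) -> int:
    """Look up the display size corresponding to the reliability set
    for a translation unit."""
    for lower_bound, size in SIZE_TABLE:
        if reliability >= lower_bound:
            return size
    return SIZE_TABLE[-1][1]  # fallback for out-of-range values
```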
• The information processing apparatus according to the present embodiment may also emphasize and display the translation result according to the reliability by displaying each translation result for each translation unit so that a translation result with high reliability is displayed in the foreground.
• FIG. 13 shows a third example in the case where the translation result is displayed on the display screen of the display device, and shows a second example in which the translation result is displayed in an emphasized manner according to the reliability.
  • “Recommendation”, “Sightseeing”, “Direction”, “Tell me”, “Asakusa”, etc. correspond to the translation results for each translation unit.
  • FIG. 13 shows an example in which lower reliability is set in the order of “recommendation”, “sightseeing”, “direction”, “tell me”, “Asakusa”,.
• FIG. 13 shows an example in which the information processing apparatus according to the present embodiment displays translation results with higher reliability on the nearer side of the display screen, in addition to performing the notification control process based on the reliability for each translation unit according to the first example.
• Note that, in this case as well, the information processing apparatus according to the present embodiment does not necessarily have to preferentially notify a translation result with high reliability as in the hierarchical display shown in FIG. 11.
• The information processing apparatus according to the present embodiment displays each translation result for each translation unit so that a translation result with high reliability is displayed on the near side of the display screen, for example, by referring to "a table (or database) in which the reliability is associated with a coordinate value in the depth direction used when displaying the translation result for each translation unit on the display screen".
• By the display as shown in FIG. 13, "the translation result for each translation unit for which a high reliability (corresponding to a score) in the translation process is set is displayed on the near side in the depth direction of the display screen", and the translation result for each translation unit for which a high reliability is set becomes more conspicuous.
• Needless to say, the display example in the case of displaying each translation result for each translation unit so that a translation result with high reliability is displayed in the foreground of the display screen is not limited to the example shown in FIG. 13.
• The information processing apparatus according to the present embodiment may also emphasize and display the translation result according to the reliability by, for example, displaying each translation result for each translation unit in one or both of a color corresponding to the reliability and a transparency corresponding to the reliability.
  • FIG. 14 shows a fourth example when the translation result is displayed on the display screen of the display device, and shows a third example when the translation result is displayed in an emphasized manner according to the reliability.
  • “recommendation”, “tourism”, “direction”, “tell me”, and “Asakusa” correspond to the translation results for each translation unit.
  • FIG. 14 shows an example in which lower reliability is set in the order of “recommendation”, “tourism”, “direction”, “tell me”, and “Asakusa”.
• FIG. 14 shows an example in which the information processing apparatus according to the present embodiment displays the translation results for each translation unit in one or both of a color corresponding to the reliability and a transparency corresponding to the reliability, in addition to performing the notification control process based on the reliability for each translation unit according to the first example.
• Note that, in this case as well, the information processing apparatus according to the present embodiment does not necessarily have to preferentially notify a translation result with high reliability as in the hierarchical display shown in FIG. 11.
• The information processing apparatus according to the present embodiment displays, for example, each translation result for each translation unit in a color corresponding to the reliability.
• The information processing apparatus according to the present embodiment may also display each translation result for each translation unit with a transparency corresponding to the reliability.
• The information processing apparatus according to the present embodiment can also display each translation result for each translation unit with both a color corresponding to the reliability and a transparency corresponding to the reliability.
• For example, the information processing apparatus according to the present embodiment refers to "a table (or database) in which the reliability is associated with a color used when displaying the translation result for each translation unit on the display screen and with a transparency used when displaying the translation result for each translation unit on the display screen", and thereby displays each translation result for each translation unit in one or both of a color corresponding to the reliability and a transparency corresponding to the reliability.
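A companion sketch for the color and transparency mapping; representing the color as an (R, G, B, A) tuple and the particular values are assumptions for illustration.

```python
def display_color(reliability: float) -> tuple:
    """Map the reliability set for a translation unit to an assumed
    (R, G, B, A) display color: a highly reliable result is opaque,
    and a low-reliability result fades out."""
    alpha = max(0.2, reliability / 100.0)  # transparency by reliability
    return (0, 0, 0, alpha)

print(display_color(92.0))  # -> (0, 0, 0, 0.92)
```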
• Notification control process based on the voice information
• The information processing apparatus according to the present embodiment controls how the notification content is displayed based on the voice information.
• The information processing apparatus according to the present embodiment controls how the notification content is displayed, for example, by displaying the notification content in a size corresponding to the sound pressure or the volume specified from the voice information.
• For example, the information processing apparatus according to the present embodiment refers to "a table (or database) in which the sound pressure or the volume is associated with a display size and a font size used when displaying the divided text", and thereby displays the notification content in a size corresponding to the sound pressure or the volume specified from the voice information.
• When the translation process is performed, the information processing apparatus according to the present embodiment can likewise control how the translation result is displayed based on the voice information.
  • FIG. 15 shows a fifth example in which the translation result is displayed on the display screen of the display device, and shows an example in which the translation result is displayed with emphasis based on the audio information.
  • “recommendation”, “tourism”, “direction”, “tell me”, and “Asakusa” correspond to the translation results for each translation unit.
• FIG. 15 shows an example in which the sound pressure or the volume is lower in the order of "Tell me", "Direction", "Recommendation", "Sightseeing", and "Asakusa".
• In the example shown in FIG. 15, the information processing apparatus according to the present embodiment displays the translation result for each translation unit (the translated summarized utterance content) in a size corresponding to the sound pressure or the volume specified from the voice information.
• For example, the information processing apparatus according to the present embodiment refers to "a table (or database) in which the sound pressure or the volume is associated with a display size and a font size used when displaying the translation result for each translation unit", and thereby displays the translation result in a size corresponding to the sound pressure or the volume specified from the voice information.
• When the display as shown in FIG. 15 is performed, "the font and the display size are made larger so that a part with a higher sound pressure (or volume) is more conspicuous" is realized.
• Note that the display example in the case of controlling the display method based on the voice information is not limited to the example shown in FIG. 15.
• Examples of the operation performed on the display screen include an operation using an operation input device such as a button, a direction key, a mouse, or a keyboard, an operation on the display screen itself (when the display device is a touch panel), and any other operation that can be performed on the display screen.
• The information processing apparatus according to the present embodiment changes the content displayed on the display screen based on an operation performed on the display screen by performing, for example, one or both of the following processes (IV-1) and (IV-2).
  • (IV-1) First Example of Notification Control Processing Based on Operation Performed on Display Screen
• The information processing apparatus according to the present embodiment changes the content displayed on the display screen based on the operation performed on the display screen. Examples of changing the content displayed on the display screen according to the present embodiment include one or more of the following:
• Changing the display position of the notification content on the display screen (or changing the display position of the translation result on the display screen)
• Deleting part of the notification content displayed on the display screen (or deleting part of the translation result displayed on the display screen)
• By changing the display position of the notification content on the display screen (or the display position of the translation result on the display screen) based on an operation performed on the display screen, for example, the content to be presented to the communication partner can be changed manually.
• By deleting part of the notification content displayed on the display screen (or part of the translation result displayed on the display screen) based on an operation performed on the display screen, for example, a translation result in which a mistranslation has occurred can be deleted manually.
  • FIGS. 16A to 16C show examples of display screens in the case where contents displayed on the display screen are changed based on operations performed on the display screen.
  • FIG. 16A shows an example of a display when the translation result for each translation unit by the translation process is re-translated.
  • FIG. 16B shows an example of display in the case where a part of the translation result (translated summarized utterance content) for each translation unit displayed on the display screen is deleted.
  • FIG. 16C shows an example of display when the display position of the translation result (translated summarized utterance content) for each translation unit displayed on the display screen is changed.
• A case where the user desires to delete "Recommendation", which is a part of the translation results for each translation unit displayed on the display screen, is taken as an example.
• When the user performs an operation of selecting "Recommendation", a window W for selecting whether or not to delete it is displayed, as shown in A of FIG. 16B.
• When deletion is selected, "Recommendation", which is a part of the translation result, is deleted, as shown in B of FIG. 16B.
  • the example of deleting a part of the translation result for each translation unit displayed on the display screen is not limited to the example shown in FIG. 16B.
• Next, a case where the user desires to change the display positions of "Recommendation" and "Tell me" in the translation results for each translation unit displayed on the display screen will be described as an example.
• When the user selects "Tell me" as indicated by reference numeral O1 in A of FIG. 16C and then designates the position indicated by reference numeral O2 in B of FIG. 16C by a drag operation, the display positions of "Recommendation" and "Tell me" are switched, as shown in B of FIG. 16C.
  • the example of changing the display position of the translation result for each translation unit displayed on the display screen is not limited to the example shown in FIG. 16C.
• When one part of the notification content is displayed on the display screen, the information processing apparatus according to the present embodiment changes the content displayed on the display screen based on an operation performed on the display screen.
• The information processing apparatus according to the present embodiment changes the content displayed on the display screen, for example, by changing the notification content displayed on the display screen from the one part to another part.
• FIGS. 17 and 18 show examples of display screens in the case of changing the translation result (translated summarized utterance content) for each translation unit displayed on the display screen based on an operation performed on the display screen.
• FIG. 17 shows an example of a display screen in which the content displayed on the display screen can be changed by a slider-type UI.
  • FIG. 18 shows an example of a display screen in which the content displayed on the display screen can be changed by a revolver type UI whose display changes by rotating in the depth direction of the display screen.
• A case where the user desires to change the content displayed on the display screen is taken as an example.
• The user changes the translation result displayed on the display screen from one part to another part by operating the slider-type UI, for example by touching an arbitrary part of the slider shown in A of FIG. 17.
• Similarly, a case where the user desires to change the content displayed on the display screen is taken as an example.
• The user changes the translation result displayed on the display screen from one part to another part by operating the revolver-type UI, for example by performing a flick operation as indicated by reference numeral O1 in FIG. 18.
  • the example of changing the translation result displayed on the display screen is not limited to the examples shown in FIGS.
• The information processing apparatus according to the present embodiment may also audibly notify the translation result from a voice output device based on an operation by voice.
• FIG. 19 shows an example of the case where the translation result is audibly notified based on an operation by voice.
• FIG. 19 shows an example in which the content to be notified to the communication partner is selected from the translation results for each translation unit by the translation process, based on an operation by voice.
• When the translation results for each translation unit by the translation process are "Recommendation", "Sightseeing", "Direction", and "Tell me", the information processing apparatus according to the present embodiment notifies the retranslated result by voice, as indicated by reference numeral "I1" in A of FIG. 19. At this time, the information processing apparatus according to the present embodiment may insert sound feedback, as indicated by the symbol "S" in A of FIG. 19, at the divisions of the divided text.
• When a selection operation by voice as indicated by the symbol "O" in B of FIG. 19 is detected after the retranslated result is notified by voice, the information processing apparatus according to the present embodiment causes the voice output device to output a voice indicating the translation result corresponding to the selection operation by voice, as indicated by reference numeral "I2".
• B of FIG. 19 shows an example of a selection operation by voice that designates a number to be notified to the communication partner.
  • the example of the selection operation by voice according to the present embodiment is not limited to the example described above.
  • FIG. 20 shows another example in the case where the translation result is audibly notified based on the voice operation.
  • FIG. 20 shows an example in which the content to be notified to the communication partner is excluded from the translation results for each translation unit by the translation processing based on the voice operation.
• When the translation results for each translation unit by the translation process are "Recommendation", "Sightseeing", "Direction", and "Tell me", the information processing apparatus according to the present embodiment notifies the retranslated result by voice, as indicated by reference numeral "I1" in A of FIG. 20. Note that, as in A of FIG. 19, the information processing apparatus according to the present embodiment may insert sound feedback at the divisions of the divided text.
• When an excluding operation by voice as indicated by the symbol "O" in B of FIG. 20 is detected after the retranslated result is notified by voice, the information processing apparatus according to the present embodiment causes the voice output device to output a voice indicating the translation result corresponding to the excluding operation by voice, as indicated by reference numeral "I2".
• B of FIG. 20 shows an example of an excluding operation by voice that designates a number that does not require notification to the communication partner.
• The example of the excluding operation by voice according to the present embodiment is not limited to the example described above.
• The information processing apparatus according to the present embodiment can also dynamically control the notification order of the notification content.
• The information processing apparatus according to the present embodiment controls the notification order of the notification content based on, for example, at least one of information corresponding to the first user and information corresponding to the second user.
  • the information corresponding to the first user includes, for example, at least one of information regarding the first user, information regarding the application, and information regarding the device.
  • the information corresponding to the second user includes at least one of information on the second user, information on the application, and information on the device.
• The information regarding the first user indicates, for example, one or both of the situation where the first user is placed and the state of the first user.
• The information regarding the second user indicates, for example, one or both of the situation where the second user is placed and the state of the second user.
  • the information related to the application indicates, for example, the execution state of the application.
  • the information about the device indicates one or both of the device type and the device state, for example.
  • the process of estimating the situation where the user is placed may be performed by the information processing apparatus according to the present embodiment, or may be performed by an external device of the information processing apparatus according to the present embodiment.
• The state of the user is estimated, for example, by an arbitrary behavior estimation process or an arbitrary emotion estimation process using one or more of the user's biological information, the detection result of a motion sensor, a captured image captured by an imaging device, and the like.
  • FIG. 21 shows an example of display when the notification order is dynamically controlled.
  • FIG. 21A shows an example of the case where the translation result (translated summarized utterance content) for each translation unit is displayed based on the state of the user.
• B of FIG. 21 shows an example in which the translation result for each translation unit by the translation process is displayed based on the execution state of an application.
  • FIG. 21C shows an example of a case where the translation result for each translation unit by the translation process is displayed based on the situation where the user is placed.
  • FIG. 21A shows an example of display based on the state of the user when the translation results for each translation unit are “recommended”, “tourist”, “direction”, and “tell me”.
• In the example shown in A of FIG. 21, the information processing apparatus according to the present embodiment preferentially displays the verb according to the estimated state of the user, for example by displaying the verb on the leftmost side of the display screen.
  • the information processing apparatus specifies the notification order by referring to, for example, “a table (or database) in which the user status and information indicating the display order are associated with each other”.
  • FIG. 21B shows an example of display based on the execution state of the application when the translation results for each translation unit are “Hokkaido”, “Origin”, “Delicious”, and “Fish”.
• When the type of the application being executed is recognized as "meal browser", the information processing apparatus according to the present embodiment preferentially displays the adjective by displaying the adjective on the leftmost side of the display screen, as shown in B of FIG. 21.
  • the information processing apparatus according to the present embodiment specifies the notification order by referring to, for example, “a table (or database) in which an application type and information indicating a display order are associated with each other”.
• C of FIG. 21 shows an example of display based on the situation where the user is placed when the translation results for each translation unit are "Hurry", "Shibuya", "Collecting", and "No time".
• When noise detected from the voice information (for example, sound other than the voice based on the utterance) is larger than a set threshold, the information processing apparatus according to the present embodiment recognizes that the user is placed in a noisy situation. Then, the information processing apparatus according to the present embodiment preferentially displays the noun (or proper noun) by displaying the noun (or proper noun) on the leftmost side of the display screen, as shown in C of FIG. 21.
• The information processing apparatus according to the present embodiment specifies the notification order by referring to, for example, "a table (or database) in which the environment where the user is placed and information indicating the display order are associated with each other".
• When the notification order is dynamically controlled based on two or more of the situation where the user is placed, the state of the user, and the execution state of the application (an example of dynamically controlling the notification order based on a plurality of pieces of information), the information processing apparatus according to the present embodiment specifies the notification order based on the priority (or degree of priority) set for each of the situation where the user is placed, the state of the user, and the execution state of the application.
• The information processing apparatus according to the present embodiment causes the notification content corresponding to an index with a high priority (or degree of priority) to be notified preferentially.
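An illustrative sketch of the table-driven ordering described above, assuming that each translation unit is tagged with a part of speech and that a recognized context maps to a display-order table; all table contents here are assumptions, not the tables of the present embodiment.

```python
# Assumed tables associating a recognized context with the parts of
# speech to display first (leftmost) on the display screen.
DISPLAY_ORDER_TABLE = {
    "user_state_active": ["verb", "noun", "adjective"],
    "meal_browser":      ["adjective", "noun", "verb"],
    "noisy_situation":   ["noun", "verb", "adjective"],
}

def order_for_display(units, context: str):
    """Sort (text, part_of_speech) pairs so that the part of speech
    prioritized for the given context comes first."""
    order = DISPLAY_ORDER_TABLE.get(context, [])
    rank = {pos: i for i, pos in enumerate(order)}
    return sorted(units, key=lambda u: rank.get(u[1], len(order)))

print(order_for_display([("Hurry", "verb"), ("Shibuya", "noun")],
                        "noisy_situation"))
# -> [('Shibuya', 'noun'), ('Hurry', 'verb')]
```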
  • FIG. 21 shows an example of the notification by the visual method.
  • the information processing apparatus according to the present embodiment can also perform the notification by the auditory method.
• Furthermore, the information processing apparatus according to the present embodiment can also dynamically control the notification order based on the information about the device.
• Examples of dynamically controlling the notification order based on the information about the device include dynamically controlling the notification order according to the processing load of a processor.
• The information processing apparatus according to the present embodiment can also dynamically control the amount of information of the notification content.
• The information processing apparatus according to the present embodiment dynamically controls the amount of information of the notification content based on, for example, one or more of the summary information, the information corresponding to the first user, the information corresponding to the second user, and the voice information.
• Examples of the dynamic change of the amount of information include the following (VII-1) to (VII-5). Needless to say, examples of dynamically changing the amount of information are not limited to the examples shown in the following (VII-1) to (VII-5).
• (VII-1) Example of dynamic change of notification content based on summary information
• The information processing apparatus according to the present embodiment does not notify an instruction word (or the translation result of the instruction word) when, for example, the content of the summarized utterance indicated by the summary information includes an instruction word such as "that" or "it".
• The information processing apparatus according to the present embodiment also does not notify a word corresponding to a greeting (or the translation result of such a word) when, for example, the content of the summarized utterance indicated by the summary information includes a word corresponding to a greeting.
• (VII-2) Example of dynamic change of notification content based on information corresponding to the first user
• The information processing apparatus according to the present embodiment reduces the amount of information when notifying the notification content, for example, when the facial expression of the first user is determined to be laughing.
• The information processing apparatus according to the present embodiment does not notify the notification content, for example, when it is determined that the first user's line of sight is facing upward (an example of a case where the utterance is determined to be close to a monologue).
• The information processing apparatus according to the present embodiment does not notify the notification content when a gesture corresponding to an instruction word such as "that", "it", or "this" (for example, a pointing gesture) is detected.
• The information processing apparatus according to the present embodiment notifies all of the notification content, for example, when it is determined that the first user is placed in a noisy situation.
• (VII-3) Example of dynamic change of notification content based on information corresponding to the second user
• The information processing apparatus according to the present embodiment reduces the amount of information when notifying the notification content, for example, when the facial expression of the second user is determined to be laughing.
• The information processing apparatus according to the present embodiment increases the amount of information when notifying the notification content, for example, when it is determined that the second user may not understand the utterance content (for example, when it is determined that the second user's line of sight is not directed toward the first user).
• The information processing apparatus according to the present embodiment reduces the amount of information when notifying the notification content, for example, when it is determined that the second user is yawning (for example, when it is determined that the second user is bored).
• The information processing apparatus according to the present embodiment increases the amount of information when notifying the notification content, for example, when it is determined that the second user has nodded or given a responsive reaction.
• The information processing apparatus according to the present embodiment increases the amount of information when notifying the notification content, for example, when it is determined that the size of the pupil of the second user is larger than a predetermined size, or equal to or larger than the predetermined size (an example of a case where the second user is determined to be interested).
• The information processing apparatus according to the present embodiment increases the amount of information when notifying the notification content, for example, when it is determined that the second user may not understand the utterance content (for example, when it is determined that the second user's hand is not moving).
• The information processing apparatus according to the present embodiment increases the amount of information when notifying the notification content, for example, when it is determined that the body of the second user is tilted forward (an example of a case where the second user is determined to be interested).
• The information processing apparatus according to the present embodiment notifies all of the notification content, for example, when it is determined that the second user is placed in a noisy situation.
• (VII-4) Example of dynamic change of notification content based on voice information
• The information processing apparatus according to the present embodiment does not notify the notification content, for example, when the volume of the utterance detected from the voice information is larger than a predetermined threshold, or when the volume of the utterance is equal to or higher than the predetermined threshold.
• The information processing apparatus according to the present embodiment notifies some or all of the notification content, for example, when the volume of the utterance detected from the voice information is equal to or lower than the predetermined threshold, or lower than the predetermined threshold.
• (VII-5) Example of dynamic change of notification content based on a combination of a plurality of pieces of information
• For example, when the first user and the second user are different persons, the information processing apparatus according to the present embodiment increases the amount of information to be notified when it is determined that the line of sight of the first user matches the line of sight of the second user (an example of dynamic change of notification content based on the information corresponding to the first user and the information corresponding to the second user).
  • FIGS. 22 to 33 are flowcharts showing an example of processing related to the information processing method according to the present embodiment.
  • an example of processing according to the information processing method according to the present embodiment will be described with reference to FIGS. 22 to 33 as appropriate.
• First, the information processing apparatus according to the present embodiment sets a weight related to the summary (hereinafter sometimes referred to as "weight for the summarization function" or simply "weight") (S100, presetting).
• The information processing apparatus according to the present embodiment sets the weight related to the summary by determining the weight related to the summary and holding it in a recording medium such as a storage unit (described later).
• An example of the process in step S100 is the process shown in FIG. 23.
  • the information processing apparatus acquires data indicating schedule contents from a schedule application (S200).
• The information processing apparatus according to the present embodiment determines the type of weight related to the summary by using the behavior recognized from the acquired data indicating the schedule contents and the table for specifying the type of weight related to the summary illustrated in FIG. (hereinafter referred to as "behavior information summary weight table") (S202).
• The information processing apparatus according to the present embodiment then determines the weight related to the summary by using the type of weight related to the summary determined in step S202 and the table for specifying the weight related to the summary illustrated in FIG. 6 (hereinafter referred to as "summary table") (S204).
• The information processing apparatus according to the present embodiment performs, for example, the process illustrated in FIG. 23 as the process of step S100 in FIG. 22. Needless to say, the process of step S100 in FIG. 22 is not limited to the process shown in FIG. 23.
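As an illustration of this two-step table lookup (behavior, then weight type, then weight), a minimal sketch follows; the table contents are invented placeholders, not the actual behavior information summary weight table and summary table of the embodiment.

```python
# Placeholder contents; the actual behavior information summary
# weight table and summary table are defined by the embodiment.
BEHAVIOR_TO_WEIGHT_TYPE = {"commuting": "hurry", "meeting": "formal"}
WEIGHT_TYPE_TO_WEIGHT = {
    "hurry":  {"verb": 2.0, "noun": 1.5},
    "formal": {"noun": 2.0, "verb": 1.0},
}

def weight_for_schedule(behavior: str) -> dict:
    """Step S202: determine the type of weight related to the summary
    from the recognized behavior; step S204: determine the weight
    from that type."""
    weight_type = BEHAVIOR_TO_WEIGHT_TYPE.get(behavior, "default")
    return WEIGHT_TYPE_TO_WEIGHT.get(weight_type, {})
```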
  • the information processing apparatus validates voice input by, for example, starting an application related to voice input (S102).
• The information processing apparatus according to the present embodiment determines whether the voice information has been acquired (S104). If it is not determined in step S104 that the voice information has been acquired, the information processing apparatus according to the present embodiment does not proceed to the processes in and after step S106 until it is determined that the voice information has been acquired, for example.
  • the information processing apparatus analyzes the voice information (S106).
  • the information processing apparatus according to the present embodiment obtains, for example, sound pressure, pitch, average frequency band, and the like by analyzing audio information.
  • the information processing apparatus according to the present embodiment holds the audio information in a recording medium such as a storage unit (described later) (S108).
  • the information processing apparatus sets a weight related to summarization based on voice information or the like (S110).
• An example of the process in step S110 is the process shown in FIG. 24.
• First, the information processing apparatus according to the present embodiment sets the weight related to the summary based on, for example, the average frequency of the voice indicated by the voice information (hereinafter sometimes referred to as "input voice") (S300).
• An example of the process in step S300 is the process shown in FIG. 25.
• Although FIG. 24 shows an example in which the process of step S302 is performed after the process of step S300, the process of step S110 in FIG. 22 is not limited to the process shown in FIG. 24.
• For example, the information processing apparatus according to the present embodiment can perform the process of step S304 after the process of step S302, and can also perform the process of step S300 and the process of step S302 in parallel.
• The information processing apparatus according to the present embodiment determines whether or not the average frequency band of the voice is 300 [Hz] to 550 [Hz] (S400).
• If it is determined in step S400 that the average frequency band of the voice is 300 [Hz] to 550 [Hz], the information processing apparatus according to the present embodiment determines "male" as the type of weight related to the summary (S402).
• If it is not determined in step S400 that the average frequency band of the voice is 300 [Hz] to 550 [Hz], the information processing apparatus according to the present embodiment determines whether or not the average frequency band of the voice is 400 [Hz] to 700 [Hz] (S404).
• If it is determined in step S404 that the average frequency band of the voice is 400 [Hz] to 700 [Hz], the information processing apparatus according to the present embodiment determines "female" as the type of weight related to the summary (S406).
• If it is not determined in step S404 that the average frequency band of the voice is 400 [Hz] to 700 [Hz], the information processing apparatus according to the present embodiment does not determine the type of weight related to the summary.
  • the information processing apparatus performs, for example, the process shown in FIG. 25 as the process of step S300 of FIG. 24. Needless to say, the process of step S300 in FIG. 24 is not limited to the process shown in FIG. 25.
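A minimal sketch of the branch in steps S400 to S406 follows. The band limits are the ones given above; because the two bands overlap between 400 [Hz] and 550 [Hz], the sketch checks the “male” band first, mirroring the order of the determinations in FIG. 25.

```python
def weight_type_from_frequency(avg_freq_hz: float):
    """S400-S406: determine the type of weight related to the summary
    from the average frequency band of the input voice."""
    if 300.0 <= avg_freq_hz <= 550.0:
        return "male"    # S402
    if 400.0 <= avg_freq_hz <= 700.0:
        return "female"  # S406
    return None          # no weight type is determined
```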
  • in step S110 of FIG. 22, the information processing apparatus also sets a weight related to the summary based on the sound pressure of the sound indicated by the sound information (S302).
  • An example of the processing in step S302 is the processing shown in FIG. 26.
  • the information processing apparatus determines a threshold value related to sound pressure based on the distance between the user who is the speaker and the communication partner (S500).
  • An example of the process in step S500 is the process shown in FIG. 27.
  • the information processing apparatus acquires the distance D to the current communication partner by, for example, image recognition based on a captured image captured by an imaging device (S600).
  • the information processing apparatus performs, for example, the calculation of Equation 2 (S602).
  • the information processing apparatus then performs the calculation of Equation 3, and determines the thresholds related to sound pressure by adjusting the threshold VPWR_thresh_upper related to sound pressure and the threshold VPWR_thresh_lower related to sound pressure (S604).
  • the information processing apparatus performs, for example, the process shown in FIG. 27 as the process of step S500 of FIG. 26. Needless to say, the process of step S500 in FIG. 26 is not limited to the process shown in FIG. 27.
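Equations 2 and 3 are not reproduced in this text, so the following sketch substitutes a simple linear scaling in the distance D purely for illustration; the base thresholds and the scaling factor alpha are assumptions, not values from the patent.

```python
def sound_pressure_thresholds(distance_d: float,
                              base_upper: float = 0.8,
                              base_lower: float = 0.2,
                              alpha: float = 0.1):
    """S600-S604: adjust the thresholds VPWR_thresh_upper and
    VPWR_thresh_lower according to the distance D to the communication
    partner. The linear scaling below is an assumed stand-in for
    Equations 2 and 3, which are not shown in this excerpt."""
    scale = 1.0 + alpha * distance_d
    return base_upper * scale, base_lower * scale  # (upper, lower)
```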
  • the information processing apparatus determines whether or not the sound pressure of the sound indicated by the sound information is greater than or equal to a threshold VPWR_thresh_upper related to the sound pressure (S502).
  • if it is determined in step S502 that the sound pressure of the voice indicated by the voice information is equal to or higher than the threshold VPWR_thresh_upper related to sound pressure, the information processing apparatus according to the present embodiment determines “anger” and “joy” as the types of weight related to the summary (S504).
  • if it is not determined in step S502 that the sound pressure of the voice indicated by the voice information is equal to or higher than the threshold VPWR_thresh_upper related to sound pressure, the information processing apparatus according to the present embodiment determines whether or not the sound pressure of the voice indicated by the voice information is equal to or lower than the threshold VPWR_thresh_lower related to sound pressure (S506).
  • if it is determined in step S506 that the sound pressure of the voice indicated by the voice information is equal to or lower than the threshold VPWR_thresh_lower related to sound pressure, the information processing apparatus according to the present embodiment determines “sadness”, “discomfort”, “pain”, and “anxiety” as the types of weight related to the summary (S508).
  • if it is not determined in step S506 that the sound pressure of the voice indicated by the voice information is equal to or lower than the threshold VPWR_thresh_lower related to sound pressure, the information processing apparatus according to the present embodiment does not determine the weight related to the summary.
  • the information processing apparatus performs, for example, the process shown in FIG. 26 as the process of step S302 of FIG. 24. Needless to say, the process of step S302 in FIG. 24 is not limited to the process shown in FIG. 26.
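The branch in steps S502 to S508 maps sound pressure to emotion weight types and can be sketched as follows; the emotion labels are the ones listed above, and vpwr stands for the measured sound pressure.

```python
def weight_types_from_sound_pressure(vpwr: float, upper: float, lower: float) -> list:
    """S502-S508: determine emotion weight types from the sound pressure
    of the voice, using the thresholds determined in step S500."""
    if vpwr >= upper:
        return ["anger", "joy"]                              # S504
    if vpwr <= lower:
        return ["sadness", "discomfort", "pain", "anxiety"]  # S508
    return []  # between the thresholds, no weight is determined
```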
  • returning to FIG. 24, the description of the example of the process of step S110 in FIG. 22 is continued.
  • the information processing apparatus analyzes voice information and holds the number of mora and the location of the accent (S304). Note that the process of step S304 may be performed in the process of step S106 of FIG.
  • the information processing apparatus performs, for example, the process shown in FIG. 24 as the process of step S110 of FIG. 22. Needless to say, the process of step S110 in FIG. 22 is not limited to the process shown in FIG. 24.
  • the information processing apparatus performs voice recognition on voice information (S112).
  • the voice text information is acquired by performing the process of step S112.
  • when the process of step S112 is performed, the information processing apparatus according to the present embodiment sets a weight related to the summary based on the speech recognition result and the like (S114).
  • An example of the process in step S114 is the process shown in FIG. 28.
  • the information processing apparatus sets a weight for summarization based on the language of the character string indicated by the speech text information (S700).
  • An example of the process in step S700 is the process shown in FIG. 29.
  • FIG. 28 shows an example in which the processing of steps S704 to S710 is performed after the processing of steps S700 and S702, but the processing of step S114 of FIG. 22 is not limited to the processing shown in FIG.
  • since the processes of steps S700 and S702 and the processes of steps S704 to S710 are independent of each other, the information processing apparatus according to the present embodiment can perform the processes of steps S700 and S702 after the processes of steps S704 to S710, or can perform the processes of steps S700 and S702 and the processes of steps S704 to S710 in parallel.
  • the information processing apparatus estimates the language of the character string indicated by the voice text information (S800).
  • the information processing apparatus estimates the language by an arbitrary method capable of estimating a language from a character string, such as estimation based on matching against a language dictionary.
  • the information processing apparatus determines whether or not the estimated language is Japanese (S802).
  • if it is determined in step S802 that the estimated language is Japanese, the information processing apparatus according to the present embodiment determines the weight related to the summary so that the weight of Japanese verbs becomes high (S804).
  • if it is not determined in step S802 that the estimated language is Japanese, the information processing apparatus according to the present embodiment determines whether or not the estimated language is English (S806).
  • if it is determined in step S806 that the estimated language is English, the information processing apparatus according to the present embodiment determines the weight related to the summary so that the weight of English nouns and verbs becomes high (S808).
  • if it is not determined in step S806 that the estimated language is English, the information processing apparatus according to the present embodiment does not determine the weight related to the summary.
  • the information processing apparatus performs, for example, the process shown in FIG. 29 as the process of step S700 of FIG. 28. Needless to say, the process of step S700 of FIG. 28 is not limited to the process shown in FIG. 29.
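A minimal sketch of steps S800 to S808 follows. The estimate_language() heuristic (kana or kanji means Japanese, ASCII letters mean English) is a crude stand-in for the arbitrary estimation method the text allows, such as matching against a language dictionary; the returned weight structure is likewise an assumption.

```python
def estimate_language(text: str):
    """S800: estimate the language of the character string. Any method
    capable of estimating a language from a string may be used; this
    kana/kanji check is only an illustrative stand-in."""
    if any("\u3040" <= ch <= "\u30ff" or "\u4e00" <= ch <= "\u9fff" for ch in text):
        return "ja"
    if any(ch.isascii() and ch.isalpha() for ch in text):
        return "en"
    return None

def weight_by_language(text: str) -> dict:
    """S802-S808: raise the weight of Japanese verbs, or of English
    nouns and verbs; otherwise determine no weight."""
    language = estimate_language(text)
    if language == "ja":
        return {"verb": "high"}                  # S804
    if language == "en":
        return {"noun": "high", "verb": "high"}  # S808
    return {}
```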
  • returning to FIG. 28, the description of the example of the process of step S114 in FIG. 22 is continued.
  • the information processing apparatus analyzes voice information and holds the number of mora and the location of the accent (S702). Note that the process of step S702 may be performed in the process of step S106 of FIG.
  • the information processing apparatus divides the character string indicated by the speech text information (hereinafter sometimes referred to as the “speech text result”) into morpheme units by natural language processing, and links each morpheme with the corresponding analysis result of the voice information (S704).
  • the information processing apparatus estimates an emotion based on the analysis result of the voice information linked in units of morphemes in step S704 (S706).
  • the information processing apparatus estimates the emotion by an arbitrary method that can estimate an emotion by using the analysis result of the voice information, such as a method using a table in which analysis results of voice information are associated with emotions.
  • the information processing apparatus determines the strength of the weight related to the summary (the strength of the weight related to the emotion) based on the analysis results of the voice information linked in units of morphemes in step S704 (S708).
  • for example, the information processing apparatus determines the strength of the weight related to the summary based on the rate of change of the fundamental frequency, the rate of change of the sound, and the rate of change of the utterance time in the analysis results of the voice information.
  • the information processing apparatus according to the present embodiment determines the strength of the weight related to the summary by an arbitrary method that can determine it by using the analysis result of the voice information, such as a method using a table in which analysis results of voice information are associated with strengths of the weight related to the summary.
  • the information processing apparatus determines a summary weight based on the emotion estimated in step S706 (S710). Further, the information processing apparatus according to the present embodiment may adjust the weight related to the summary determined based on the estimated emotion by the strength of the weight related to the summary determined in step S708.
  • the information processing apparatus performs, for example, the process shown in FIG. 28 as the process of step S114 of FIG. 22. Needless to say, the process of step S114 in FIG. 22 is not limited to the process shown in FIG. 28.
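Steps S704 to S710 can be sketched as follows. Each morpheme is assumed to be linked with the analysis result of its span of the voice information (here reduced to a fundamental frequency "f0"); the emotion lookup and the strength normalization are illustrative assumptions, since the patent only states that tables associating analysis results with emotions and with weight strengths may be used.

```python
def weight_by_emotion(morpheme_features: list) -> dict:
    """S704-S710: estimate an emotion from the analysis results linked in
    units of morphemes, derive the strength of the weight from the rate
    of change of the analyzed features, and return the resulting weight."""
    if not morpheme_features:
        return {}
    f0_values = [m["f0"] for m in morpheme_features]
    mean_f0 = sum(f0_values) / len(f0_values)
    emotion = "joy" if mean_f0 > 250.0 else "sadness"  # S706: assumed table lookup
    f0_range = max(f0_values) - min(f0_values)
    strength = min(1.0, f0_range / 100.0)              # S708: assumed normalization
    return {"emotion": emotion, "strength": strength}  # S710
```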
  • the information processing apparatus performs a summarization process based on the weights related to the summaries determined in steps S100, S110, and S114 (S116).
  • when the process of step S116 is performed, the information processing apparatus according to the present embodiment determines whether or not to perform translation processing (S118).
  • if it is not determined in step S118 that the translation process is to be performed, the information processing apparatus according to the present embodiment notifies the summary result by the notification control process (S120).
  • if it is determined in step S118 that the translation process is to be performed, the information processing apparatus according to the present embodiment performs the translation process on the summary result and notifies the translation result by the notification control process (S122).
  • An example of the process of step S122 is the process shown in FIG. 30.
  • the information processing apparatus performs morphological analysis, for example, by performing natural language processing on the summary result (S900).
  • the information processing apparatus generates divided texts, each combining a main part of speech (noun, verb, adjective, or adverb) with the other morphemes, until there is no unprocessed summary result (S902).
  • the information processing apparatus determines whether or not the language of the summary result is English (S904).
  • if it is not determined in step S904 that the language of the summary result is English, the information processing apparatus according to the present embodiment performs the process of step S908 described later.
  • if it is determined in step S904 that the language of the summary result is English, the information processing apparatus according to the present embodiment sets words corresponding to 5W1H as divided texts (S906).
  • when it is not determined in step S904 that the language of the summary result is English, or when the process of step S906 is performed, the information processing apparatus according to the present embodiment performs a translation process on each divided text, and holds each translation result linked with the part-of-speech information of the original text before translation (S908).
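Steps S900 to S908 above (omitting the 5W1H handling of step S906) can be sketched as follows; the morphological analyzer, the space-joined surface forms, and the translate callable are simplifying assumptions for illustration.

```python
MAIN_POS = {"noun", "verb", "adjective", "adverb"}

def build_divided_texts(morphemes: list) -> list:
    """S902: combine each main part of speech (noun, verb, adjective,
    adverb) with the neighboring other morphemes into one divided text.
    morphemes is a list of (surface, part_of_speech) pairs produced by
    the morphological analysis of step S900."""
    divided, buffer = [], []
    for surface, pos in morphemes:
        buffer.append(surface)
        if pos in MAIN_POS:
            divided.append({"text": " ".join(buffer), "pos": pos})
            buffer = []
    if buffer:  # trailing morphemes without a main part of speech
        divided.append({"text": " ".join(buffer), "pos": None})
    return divided

def translate_divided_texts(divided: list, translate) -> list:
    """S908: translate each divided text and hold the translation result
    linked with the part-of-speech information before translation;
    translate is any hypothetical string-to-string callable."""
    return [{"translated": translate(d["text"]), "pos": d["pos"]} for d in divided]
```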
  • the information processing apparatus determines whether or not the language of the divided translation text (an example of the translation result) is English (S910).
  • if it is determined in step S910 that the language of the divided translation text is English, the information processing apparatus according to the present embodiment determines the notification order in English (S912).
  • An example of the process in step S912 is the process shown in FIG. 31.
  • the information processing apparatus determines whether there is a divided translated text to be processed (S1000).
  • the divided translation text to be processed in step S1000 corresponds to an unprocessed translation result among the translation results for each translation unit.
  • the information processing apparatus determines, for example, that there is a divided translation text to be processed when there is an unprocessed translation result, and determines that there is no divided translation text to be processed when there is no unprocessed translation result.
  • if it is determined in step S1000 that there is a divided translation text to be processed, the information processing apparatus according to the present embodiment acquires the divided translation text to be processed next (S1002).
  • the information processing apparatus determines whether or not the divided translated text to be processed includes a noun (S1004).
  • if it is determined in step S1004 that the divided translated text to be processed includes a noun, the information processing apparatus according to the present embodiment sets the priority to the maximum value “5” (S1006), and then repeats the processing from step S1000.
  • if it is not determined in step S1004 that the divided translated text to be processed includes a noun, the information processing apparatus according to the present embodiment determines whether or not the divided translated text to be processed includes a verb (S1008).
  • if it is determined in step S1008 that the divided translated text to be processed includes a verb, the information processing apparatus according to the present embodiment sets the priority to “4” (S1010), and then repeats the processing from step S1000.
  • if it is not determined in step S1008 that the divided translated text to be processed includes a verb, the information processing apparatus according to the present embodiment determines whether or not the divided translated text to be processed includes an adjective (S1012).
  • if it is determined in step S1012 that the divided translated text to be processed includes an adjective, the information processing apparatus according to the present embodiment sets the priority to “3” (S1014), and then repeats the processing from step S1000.
  • if it is not determined in step S1012 that the divided translated text to be processed includes an adjective, the information processing apparatus according to the present embodiment determines whether or not the divided translated text to be processed includes an adverb (S1016).
  • if it is determined in step S1016 that the divided translated text to be processed includes an adverb, the information processing apparatus according to the present embodiment sets the priority to “2” (S1018), and then repeats the processing from step S1000.
  • if it is not determined in step S1016 that the divided translated text to be processed includes an adverb, the information processing apparatus according to the present embodiment sets the priority to the minimum value “1” (S1020), and then repeats the processing from step S1000.
  • if it is not determined in step S1000 that there is a divided translated text to be processed, the information processing apparatus according to the present embodiment sorts the notification order according to the set priorities (S1022).
  • the information processing apparatus performs, for example, the process illustrated in FIG. 31 as the process in step S912 in FIG. 30. Needless to say, the process of step S912 in FIG. 30 is not limited to the process shown in FIG. 31.
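The part-of-speech priorities of FIG. 31 translate directly into code; the sketch below assumes each divided translated text carries the set of parts of speech it contains, and that a higher priority means an earlier notification.

```python
def english_priority(divided_translation: dict) -> int:
    """S1004-S1020 (FIG. 31): priority of a divided translated text for
    the English notification order. The checks follow the flowchart
    order, so a text containing both a noun and a verb gets 5."""
    pos = divided_translation["pos_set"]
    if "noun" in pos:
        return 5  # S1006
    if "verb" in pos:
        return 4  # S1010
    if "adjective" in pos:
        return 3  # S1014
    if "adverb" in pos:
        return 2  # S1018
    return 1      # S1020

def sort_for_notification(divided: list, priority) -> list:
    """S1022: sort the notification order according to the set priority.
    sorted() is stable, so texts with equal priority keep their order."""
    return sorted(divided, key=priority, reverse=True)
```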
  • if it is not determined in step S910 that the language of the divided translation text is English, the information processing apparatus according to the present embodiment determines the notification order in Japanese (S914).
  • An example of the process in step S914 is the process shown in FIG. 32.
  • the information processing apparatus determines whether or not there is a divided translated text to be processed, similarly to step S1000 of FIG. 31 (S1100).
  • the divided translation text to be processed in step S1100 corresponds to an unprocessed translation result among the translation results for each translation unit.
  • if it is determined in step S1100 that there is a divided translation text to be processed, the information processing apparatus according to the present embodiment acquires the divided translation text to be processed next (S1102).
  • the information processing apparatus determines whether or not the divided translated text to be processed includes a verb (S1104).
  • if it is determined in step S1104 that the divided translated text to be processed includes a verb, the information processing apparatus according to the present embodiment sets the priority to the maximum value “5” (S1106), and then repeats the processing from step S1100.
  • if it is not determined in step S1104 that the divided translated text to be processed includes a verb, the information processing apparatus according to the present embodiment determines whether or not the divided translated text to be processed includes a noun (S1108).
  • if it is determined in step S1108 that the divided translated text to be processed includes a noun, the information processing apparatus according to the present embodiment sets the priority to “4” (S1110), and then repeats the processing from step S1100.
  • if it is not determined in step S1108 that the divided translated text to be processed includes a noun, the information processing apparatus according to the present embodiment determines whether or not the divided translated text to be processed includes an adjective (S1112).
  • if it is determined in step S1112 that the divided translated text to be processed includes an adjective, the information processing apparatus according to the present embodiment sets the priority to “3” (S1114), and then repeats the processing from step S1100.
  • if it is not determined in step S1112 that the divided translated text to be processed includes an adjective, the information processing apparatus according to the present embodiment determines whether or not the divided translated text to be processed includes an adverb (S1116).
  • if it is determined in step S1116 that the divided translated text to be processed includes an adverb, the information processing apparatus according to the present embodiment sets the priority to “2” (S1118), and then repeats the processing from step S1100.
  • if it is not determined in step S1116 that the divided translated text to be processed includes an adverb, the information processing apparatus according to the present embodiment sets the priority to the minimum value “1” (S1120), and then repeats the processing from step S1100.
  • the information processing apparatus sorts the notification order according to the set priority (S1122).
  • the information processing apparatus performs, for example, the process illustrated in FIG. 32 as the process of step S914 in FIG. 30. Needless to say, the process of step S914 in FIG. 30 is not limited to the process shown in FIG. 32.
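The Japanese ordering of FIG. 32 differs from FIG. 31 only in that verbs are checked before nouns; it can reuse sort_for_notification() from the previous sketch.

```python
def japanese_priority(divided_translation: dict) -> int:
    """S1104-S1120 (FIG. 32): priority of a divided translated text for
    the Japanese notification order; verbs rank above nouns."""
    pos = divided_translation["pos_set"]
    if "verb" in pos:
        return 5  # S1106
    if "noun" in pos:
        return 4  # S1110
    if "adjective" in pos:
        return 3  # S1114
    if "adverb" in pos:
        return 2  # S1118
    return 1      # S1120
```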
  • when the process of step S912 or step S914 is performed, the information processing apparatus according to the present embodiment notifies the divided translated texts for which the notification order has been determined by the notification control process (S916).
  • An example of the process in step S916 is the process shown in FIG. 33.
  • the information processing apparatus determines whether or not there is a divided translated text to be processed, similar to step S1000 in FIG. 31 (S1200).
  • the divided translation text to be processed in step S1200 corresponds to an unprocessed translation result among the translation results for each translation unit.
  • if it is determined in step S1200 that there is a divided translation text to be processed, the information processing apparatus according to the present embodiment acquires the divided translation text to be processed next (S1202).
  • the information processing apparatus acquires the sound pressure from the voice information corresponding to the divided translated text to be processed, and outputs the divided translated text to be processed with its sound pressure increased accordingly (S1204).
  • the information processing apparatus determines whether or not the divided translated text output in step S1204 is the last divided translated text (S1206).
  • the information processing apparatus determines, for example, that the text is not the last divided translated text when there is an unprocessed translation result, and determines that it is the last divided translated text when there is no unprocessed translation result.
  • if it is not determined in step S1206 that the text is the last divided translated text, the information processing apparatus according to the present embodiment outputs a “beep” sound as sound feedback for notifying that the output will continue (S1208), and then repeats the processing from step S1200.
  • if it is determined in step S1206 that the text is the last divided translated text, the information processing apparatus according to the present embodiment outputs a “beep” sound as sound feedback for notifying the end of the output (S1210), and then repeats the processing from step S1200.
  • if it is not determined in step S1200 that there is a divided translated text to be processed, the information processing apparatus according to the present embodiment ends the process of FIG. 33.
  • the information processing apparatus performs, for example, the process shown in FIG. 33 as the process of step S916 in FIG. 30. Needless to say, the process of step S916 in FIG. 30 is not limited to the process shown in FIG. 33.
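The output loop of FIG. 33 can be sketched as follows; speak and beep are hypothetical callables for speech synthesis and for the sound feedback, and the stored sound pressure per text is an assumption based on step S1204.

```python
def notify_divided_translations(ordered: list, speak, beep) -> None:
    """S1200-S1210 (FIG. 33): output each divided translated text in the
    determined order with its sound pressure raised (S1204), play a
    continuation beep after every text except the last (S1208), and an
    end beep after the last one (S1210)."""
    for i, item in enumerate(ordered):
        speak(item["translated"], sound_pressure=item.get("vpwr", 1.0))  # S1204
        beep("end" if i == len(ordered) - 1 else "continue")  # S1210 / S1208
```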
  • the use cases described with reference to FIGS. 1 to 5 can be realized by performing the processes shown in FIGS. 22 to 33. Needless to say, the processing related to the information processing method according to the present embodiment is not limited to the processing shown in FIGS. 22 to 33.
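Putting the steps of FIG. 22 together, the overall flow can be sketched end to end; every entry in steps is a hypothetical callable standing in for one of the processes described above, not an API named in the patent.

```python
def process_utterance(voice_info, steps: dict, do_translation: bool):
    """End-to-end sketch of FIG. 22 (S106-S122)."""
    features = steps["analyze"](voice_info)            # S106: sound pressure, pitch, ...
    steps["store"](voice_info)                         # S108
    weights = steps["weights_from_voice"](features)    # S110 (FIG. 24)
    text = steps["recognize"](voice_info)              # S112
    weights.update(steps["weights_from_text"](text))   # S114 (FIG. 28)
    summary = steps["summarize"](text, weights)        # S116
    if do_translation:                                 # S118
        return steps["notify"](steps["translate"](summary))  # S122 (FIG. 30)
    return steps["notify"](summary)                    # S120
```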
  • FIG. 34 is a block diagram illustrating an example of the configuration of the information processing apparatus 100 according to the present embodiment.
  • the information processing apparatus 100 includes, for example, a communication unit 102 and a control unit 104.
  • the information processing apparatus 100 may also include, for example, a ROM (Read Only Memory; not shown), a RAM (Random Access Memory; not shown), a storage unit (not shown), an operation unit (not shown) that can be operated by the user of the information processing apparatus 100, a display unit (not shown) that displays various screens on a display screen, and the like.
  • the information processing apparatus 100 connects the above constituent elements by, for example, a bus as a data transmission path.
  • the information processing apparatus 100 is driven by, for example, power supplied from an internal power supply such as a battery provided in the information processing apparatus 100, or power supplied from a connected external power supply.
  • a ROM (not shown) stores control data such as a program used by the control unit 104 and calculation parameters.
  • a RAM (not shown) temporarily stores a program executed by the control unit 104.
  • the storage unit (not shown) is a storage unit included in the information processing apparatus 100.
  • the storage unit (not shown) stores various data, such as data related to the information processing method according to the present embodiment, for example a table for setting weights related to the summary, and various applications.
  • examples of the storage unit (not shown) include a magnetic recording medium such as a hard disk, and a non-volatile memory such as a flash memory. Further, the storage unit (not shown) may be detachable from the information processing apparatus 100.
  • examples of the operation unit (not shown) include an operation input device described later.
  • examples of the display unit (not shown) include a display device described later.
  • FIG. 35 is an explanatory diagram illustrating an example of a hardware configuration of the information processing apparatus 100 according to the present embodiment.
  • the information processing apparatus 100 includes, for example, an MPU 150, a ROM 152, a RAM 154, a recording medium 156, an input / output interface 158, an operation input device 160, a display device 162, and a communication interface 164.
  • the information processing apparatus 100 connects each component with a bus 166 as a data transmission path, for example.
  • the MPU 150 is composed of, for example, one or two or more processors configured by an arithmetic circuit such as an MPU, various processing circuits, and the like, and functions as the control unit 104 that controls the information processing apparatus 100 as a whole. Further, the MPU 150 plays a role of, for example, the processing unit 110 described later in the information processing apparatus 100.
  • the processing unit 110 may be configured with a dedicated (or general-purpose) circuit (for example, a processor separate from the MPU 150) that can realize the processing of the processing unit 110.
  • the ROM 152 stores programs used by the MPU 150, control data such as calculation parameters, and the like.
  • the RAM 154 temporarily stores a program executed by the MPU 150, for example.
  • the recording medium 156 functions as a storage unit (not shown), and stores various data, such as data related to the information processing method according to the present embodiment, for example a table for setting weights related to summarization, and various applications.
  • examples of the recording medium 156 include a magnetic recording medium such as a hard disk and a non-volatile memory such as a flash memory. Further, the recording medium 156 may be detachable from the information processing apparatus 100.
  • the input / output interface 158 connects, for example, the operation input device 160 and the display device 162.
  • the operation input device 160 functions as an operation unit (not shown)
  • the display device 162 functions as a display unit (not shown).
  • examples of the input / output interface 158 include a USB (Universal Serial Bus) terminal, a DVI (Digital Visual Interface) terminal, an HDMI (High-Definition Multimedia Interface) (registered trademark) terminal, and various processing circuits.
  • the operation input device 160 is provided on the information processing apparatus 100, for example, and is connected to the input / output interface 158 inside the information processing apparatus 100.
  • Examples of the operation input device 160 include a button, a direction key, a rotary selector such as a jog dial, or a combination thereof.
  • the display device 162 is provided on the information processing apparatus 100, for example, and is connected to the input / output interface 158 inside the information processing apparatus 100.
  • Examples of the display device 162 include a liquid crystal display (Liquid Crystal Display) and an organic EL display (Organic Electro-Luminescence Display; also called an OLED display, Organic Light Emitting Diode Display).
  • the input / output interface 158 can be connected to an external device such as an operation input device (for example, a keyboard or a mouse) external to the information processing apparatus 100 or an external display device.
  • the display device 162 may be a device capable of display and user operation, such as a touch panel.
  • the communication interface 164 is a communication unit included in the information processing apparatus 100, and functions as the communication unit 102 for performing wireless or wired communication with, for example, an external apparatus or an external device via a network (or directly).
  • examples of the communication interface 164 include a communication antenna and an RF (Radio Frequency) circuit (wireless communication), an IEEE 802.15.1 port and a transmission / reception circuit (wireless communication), an IEEE 802.11 port and a transmission / reception circuit (wireless communication), and a LAN (Local Area Network) terminal and a transmission / reception circuit (wired communication).
  • the information processing apparatus 100 performs the processes related to the information processing method according to the present embodiment with, for example, the configuration illustrated in FIG. 35. Note that the hardware configuration of the information processing apparatus 100 according to the present embodiment is not limited to the configuration illustrated in FIG. 35.
  • the information processing apparatus 100 may not include the communication interface 164 when communicating with an external apparatus or the like via a connected external communication device.
  • the communication interface 164 may be configured to be able to communicate with one or more external devices or the like by a plurality of communication methods.
  • the information processing apparatus 100 can have a configuration that does not include the recording medium 156, the operation input device 160, and the display device 162, for example.
  • the information processing apparatus 100 may further include, for example, one or more of various sensors such as a motion sensor and a biological sensor, a voice input device such as a microphone, a voice output device such as a speaker, a vibration device, and an imaging device.
  • part or all of the configuration shown in FIG. 35 may be realized by one or two or more ICs.
  • the communication unit 102 is a communication unit included in the information processing apparatus 100, and performs wireless or wired communication with, for example, an external apparatus or an external device via a network (or directly).
  • the communication of the communication unit 102 is controlled by the control unit 104, for example.
  • examples of the communication unit 102 include a communication antenna and an RF circuit, a LAN terminal, and a transmission / reception circuit, but the configuration of the communication unit 102 is not limited to the above.
  • the communication unit 102 can have a configuration corresponding to an arbitrary standard capable of performing communication such as a USB terminal and a transmission / reception circuit, or an arbitrary configuration capable of communicating with an external device via a network.
  • the communication unit 102 may be configured to be able to communicate with one or more external devices or the like by a plurality of communication methods.
  • the control unit 104 is configured by, for example, an MPU and plays a role of controlling the entire information processing apparatus 100.
  • the control unit 104 includes, for example, a processing unit 110 and plays a role of leading the processing related to the information processing method according to the present embodiment.
  • the processing unit 110 plays a role of leading one or both of the processing related to the first information processing method and the processing related to the second information processing method.
  • the processing unit 110 When performing the process related to the first information processing method described above, the processing unit 110 performs a summarization process for summarizing the content of the utterance indicated by the voice information, based on the acquired information indicating the weight related to the summary.
  • the processing unit 110 performs, for example, the process described in [3-1] as the summary process.
  • the processing unit 110 When performing the processing related to the second information processing method described above, the processing unit 110 performs notification control processing for controlling notification of notification contents based on the summary information.
  • the processing unit 110 performs, for example, the process described in [3-3] as the notification control process.
  • processing unit 110 may further perform a translation process for translating the content of the utterance summarized by the summarization process into another language.
  • the processing unit 110 performs, for example, the process described in [3-2] as the translation process.
  • the processing unit 110 can notify the translation result by the notification control process.
  • the processing unit 110 can also perform various other processes related to the information processing method according to the present embodiment, such as a process related to speech recognition, a process related to speech analysis, a process related to estimation of a user's state, and a process related to estimation of the distance between a user and a communication partner.
  • Various processes related to the information processing method according to the present embodiment may be performed in an external device of the information processing apparatus 100.
  • with, for example, the configuration shown in FIG. 34, the information processing apparatus 100 performs the processing related to the information processing method according to the present embodiment (for example, one or both of the summarization processing related to the first information processing method and the notification control processing related to the second information processing method, or these processes together with the translation processing).
  • therefore, with the configuration illustrated in FIG. 34, the information processing apparatus 100 can summarize the content of an utterance, for example.
  • likewise, with the configuration illustrated in FIG. 34, the information processing apparatus 100 can notify the content of the summarized utterance.
  • the information processing apparatus 100 can achieve the effects that are achieved by performing the processing related to the information processing method according to the present embodiment as described above.
  • the configuration of the information processing apparatus according to the present embodiment is not limited to the configuration shown in FIG. 34.
  • the information processing apparatus can include the processing unit 110 illustrated in FIG. 34 separately from the control unit 104 (for example, realized by another processing circuit). Further, for example, the summary processing according to the first information processing method, the notification control processing according to the second information processing method, and the translation processing according to the present embodiment may be performed in a distributed manner by a plurality of processing circuits.
  • the summary processing according to the first information processing method, the notification control processing according to the second information processing method, and the translation processing according to the present embodiment are obtained by dividing the processing according to the information processing method according to the present embodiment for convenience. Therefore, the configuration for realizing the processing according to the information processing method according to the present embodiment is not limited to the configuration illustrated in FIG. 34, and a configuration corresponding to another way of dividing the processing according to the information processing method according to the present embodiment can be adopted.
  • when the information processing apparatus communicates with an external device via an external communication device having the same function and configuration as the communication unit 102, the information processing apparatus may not include the communication unit 102.
  • the information processing apparatus has been described as the present embodiment, but the present embodiment is not limited to such a form.
  • the present embodiment can be applied to various devices capable of performing one or both of the processing related to the first information processing method and the processing related to the second information processing method, such as a computer such as a personal computer (PC) or a server, any wearable device used by being worn on the user's body such as an eyewear type device, a clock type device, or a bracelet type device, a communication device such as a smartphone, a tablet type device, a game machine, and a moving object such as an automobile.
  • the present embodiment can be applied to a processing IC that can be incorporated in the above-described device, for example.
  • the information processing apparatus may be applied to a processing system that is premised on connection to a network (or communication between apparatuses), such as cloud computing.
  • an example of a processing system in which the processing according to the information processing method according to the present embodiment is performed is a system in which the summary processing and the translation processing according to the first information processing method are performed by one apparatus constituting the processing system, and the notification control processing according to the second information processing method is performed by another apparatus constituting the processing system.
  • [I] Program related to the first information processing method: when a program (computer program) for causing a computer to function as the information processing apparatus according to the present embodiment that performs the processing related to the first information processing method (for example, a program capable of executing the summarization processing related to the first information processing method, or the summarization processing related to the first information processing method and the translation processing related to the present embodiment) is executed by a processor or the like in the computer, the content of an utterance can be summarized.
  • in addition, when the program for causing a computer to function as the information processing apparatus according to the present embodiment that performs the processing related to the first information processing method is executed by a processor or the like in the computer, the effects produced by the processing related to the first information processing method described above can be achieved.
  • [II] Program related to the second information processing method: when a program for causing a computer to function as the information processing apparatus according to the present embodiment that performs the processing related to the second information processing method (for example, a program capable of executing the notification control processing related to the second information processing method, or the translation processing related to the present embodiment and the notification control processing related to the second information processing method) is executed by a processor or the like in the computer, the content of a summarized utterance can be notified.
  • in addition, when the program for causing a computer to function as the information processing apparatus according to the present embodiment that performs the processing related to the second information processing method is executed by a processor or the like in the computer, the effects produced by the processing related to the second information processing method described above can be achieved.
  • the program related to the information processing method according to the present embodiment may include one or both of the program related to the first information processing method and the program related to the second information processing method.
  • although a program (a program capable of executing one or both of the processing related to the first information processing method and the processing related to the second information processing method) for causing a computer to function as the information processing apparatus according to the present embodiment has been described above, the present embodiment can also provide a recording medium in which the program is stored.
  • (1) An information processing apparatus including a processing unit that performs a summarization process for summarizing the content of an utterance indicated by voice information based on a user's utterance, on the basis of acquired information indicating a weight related to the summary.
  • (2) The information processing apparatus according to (1), wherein the processing unit performs the summarization process when it is determined that a predetermined start condition is satisfied.
  • (3) The information processing apparatus according to (2), wherein the start condition is a condition related to a non-speech period in which a state in which no speech is made continues, and the processing unit determines that the start condition is satisfied when the non-speech period exceeds a predetermined period or becomes equal to or longer than the predetermined period.
  • (4) The information processing apparatus according to (2) or (3), wherein the start condition is a condition related to a state of speech recognition for acquiring the content of an utterance from the voice information, and the processing unit determines that the start condition is satisfied based on detection of a stop request for the speech recognition.
  • (5) The information processing apparatus according to any one of (2) to (4), wherein the start condition is a condition related to a state of speech recognition for acquiring the content of an utterance from the voice information, and the processing unit determines that the start condition is satisfied based on detection of completion of the speech recognition.
  • (6) The information processing apparatus according to any one of (2) to (5), wherein the start condition is a condition related to the content of the utterance, and the processing unit determines that the start condition is satisfied based on detection of a predetermined word from the content of the utterance indicated by the voice information.
  • (7) The information processing apparatus according to any one of (2) to (6), wherein the start condition is a condition related to the content of the utterance, and the processing unit determines that the start condition is satisfied based on detection of hesitation based on the voice information.
  • (8) The information processing apparatus according to any one of (2) to (7), wherein the start condition is a condition related to an elapsed time after the voice information is obtained, and the processing unit determines that the start condition is satisfied when the elapsed time exceeds a predetermined period or becomes equal to or longer than the predetermined period.
  • (10) The information processing apparatus according to (9), wherein the summary exclusion condition is a condition related to gesture detection, and the processing unit determines that the summary exclusion condition is satisfied when a predetermined gesture is detected.
  • (11) The information processing apparatus according to any one of (1) to (10), wherein the processing unit changes the summary level of the content of the utterance based on at least one of an utterance period specified based on the voice information and a number of characters specified based on the voice information.
  • (13) The information processing apparatus according to any one of (1) to (12), wherein the processing unit sets the weight related to the summary based on at least one of the voice information, information about a user, information about an application, information about an environment, and information about a device.
  • (14) The information processing apparatus according to (13), wherein the information about the user includes at least one of state information of the user and operation information of the user.
  • (15) The information processing apparatus, wherein the processing unit further performs a translation process for translating the content of the utterance summarized by the summarization process into another language.
  • (16) The information processing apparatus, wherein the processing unit does not perform the translation process when it is determined that a predetermined translation exclusion condition is satisfied.
  • (17) The information processing apparatus according to (15) or (16), wherein the processing unit re-translates the content translated into another language by the translation process into the language before translation, and, when a word included in the re-translated content exists in the content of the utterance indicated by voice information acquired after the re-translation, includes that word in the content of the summarized utterance.
  • (18) The information processing apparatus according to any one of (1) to (17), wherein the processing unit further performs a notification control process for controlling notification of the content of the summarized utterance.
  • An information processing method executed by an information processing apparatus, the method including a step of performing a summarization process for summarizing the content of an utterance indicated by voice information based on a user's utterance, on the basis of acquired information indicating a weight related to the summary.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Provided is an information processing device provided with a processing unit which, on the basis of acquired information indicating the importance of summarization, performs summarization processing in which utterance content indicated by speech information based on an utterance of a user is summarized.

Description

Information processing apparatus, information processing method, and program

The present disclosure relates to an information processing apparatus, an information processing method, and a program.

Technologies for summarizing electronic documents have been developed. As a technique for summarizing an electronic document and adding a tag indicating copyright information to the created summary sentence, for example, the technique described in Patent Document 1 below can be cited.

Patent Document 1: JP 2001-167114 A

When a person who speaks (hereinafter referred to as a "speaker") speaks, it is difficult to utter only the content that the speaker wants to convey.

Therefore, assuming, for example, that communication is carried out by utterance, content other than what the speaker wants to convey (that is, unnecessary content) is often transmitted to the communication partner in addition to the content that the speaker wants to convey. Accordingly, when communication is carried out by utterance, it may happen that the communication partner takes time to understand what the speaker wants to convey.

Also, assuming that the content of an utterance is translated into another language, it may happen, because content other than what the speaker wants to convey is uttered in addition to the content that the speaker wants to convey, that the translation takes time or that the translation result is not the one intended by the speaker.

Here, as a method of further reducing the possibility of events caused by the difficulty of uttering only the content that the speaker wants to convey, such as the communication partner taking time to understand what the speaker wants to convey or the translation taking time, a method of making the content of the speaker's utterance more concise is conceivable.

The present disclosure proposes a new and improved information processing apparatus, information processing method, and program capable of summarizing the content of an utterance.

According to the present disclosure, there is provided an information processing apparatus including a processing unit that performs a summarization process for summarizing the content of an utterance indicated by voice information based on a user's utterance, on the basis of acquired information indicating a weight related to the summary.

Further, according to the present disclosure, there is provided an information processing method executed by an information processing apparatus, the method including a step of performing a summarization process for summarizing the content of an utterance indicated by voice information based on a user's utterance, on the basis of acquired information indicating a weight related to the summary.

Further, according to the present disclosure, there is provided a program for causing a computer to realize a function of performing a summarization process for summarizing the content of an utterance indicated by voice information based on a user's utterance, on the basis of acquired information indicating a weight related to the summary.

According to the present disclosure, the content of an utterance can be summarized.

Note that the above effects are not necessarily limiting; together with or instead of the above effects, any of the effects shown in the present specification, or other effects that can be grasped from the present specification, may be achieved.
The accompanying drawings include:
・Explanatory diagrams for explaining an example of a use case to which the information processing method according to the present embodiment is applied.
・Explanatory diagrams showing an example of a table for setting weights related to summarization according to the present embodiment.
・Explanatory diagrams for explaining an example of the summarization process according to the first information processing method.
・An explanatory diagram showing an example of notification by a visual method realized by the notification control process according to the second information processing method.
・Explanatory diagrams for explaining an example of the notification control process according to the second information processing method.
・Flowcharts showing an example of processing related to the information processing method according to the present embodiment.
・A block diagram showing an example of the configuration of the information processing apparatus according to the present embodiment.
・An explanatory diagram showing an example of the hardware configuration of the information processing apparatus according to the present embodiment.
 Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Note that, in this specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description is omitted.
 In the following, the description proceeds in the order shown below.
  1. Information processing method according to the present embodiment
  2. Information processing apparatus according to the present embodiment
  3. Program according to the present embodiment
(Information processing method according to the present embodiment)
 First, the information processing method according to the present embodiment will be described. In the following, the case where the information processing apparatus according to the present embodiment performs the processing related to the information processing method according to the present embodiment is taken as an example.
 In the following, the information processing method according to the present embodiment is described by dividing it into a first information processing method and a second information processing method. The description below mainly covers the case where the same information processing apparatus performs both the processing related to the first information processing method and the processing related to the second information processing method; however, the information processing apparatus that performs the processing related to the first information processing method may be different from the information processing apparatus that performs the processing related to the second information processing method.
 In the following, a person who is a target of the processing related to the information processing method according to the present embodiment is referred to as a "user". Examples of the user according to the present embodiment include a "speaker (or a person who can become a speaker)" (when the first information processing method described later is performed) and an "operator of an operation device related to notification" (when the second information processing method described later is performed).
[1] Outline of the information processing method according to the present embodiment
[1-1] Outline of the first information processing method
 As described above, one way to further reduce the possibility that an "event caused by the difficulty of uttering only the content that the speaker wants to convey" occurs is to make the content of the speaker's utterance more concise.
 Therefore, the information processing apparatus according to the present embodiment performs, as the processing related to the first information processing method, a process of summarizing the content of an utterance (hereinafter referred to as the "summarization process"). The information processing apparatus according to the present embodiment summarizes the content of the utterance indicated by voice information based on the user's utterance, on the basis of acquired information indicating weights related to summarization. Examples of summarization according to the present embodiment include selecting the content of the utterance based on the weights related to summarization, or extracting a part of the content of the utterance based on the weights related to summarization.
 Examples of the information indicating the weights related to summarization include data indicating the weights related to summarization stored in a table (or database; the same applies hereinafter) for setting the weights related to summarization, which will be described later. The information indicating the weights related to summarization may also be data indicating that a weight related to summarization is relatively large or small. The information indicating the weights related to summarization is acquired, for example, by referring to the table for setting the weights related to summarization described later.
 Here, the voice information according to the present embodiment is voice data including voice based on the speaker's utterance. The voice information according to the present embodiment is generated, for example, when a voice input device such as a microphone picks up voice based on the speaker's utterance. The voice information according to the present embodiment may also be an analog signal, generated according to the voice picked up by the voice input device, that has been converted into a digital signal by an AD (Analog-to-Digital) converter. The voice input device (or the voice input device and the AD converter) may be included in the information processing apparatus according to the present embodiment, or may be a device external to the information processing apparatus according to the present embodiment.
 Examples of the content of the utterance indicated by the voice information include a character string indicated by text data obtained as a result of arbitrary speech recognition processing performed on the voice information (hereinafter referred to as "voice text information"). The information processing apparatus according to the present embodiment recognizes the character string indicated by the voice text information as the content of the utterance indicated by the voice information, and summarizes the character string indicated by the voice text information.
 Here, the speech recognition processing on the voice information may be performed by the information processing apparatus according to the present embodiment, or may be performed by an external apparatus of the information processing apparatus according to the present embodiment. When the information processing apparatus according to the present embodiment performs the speech recognition processing, it summarizes the character string indicated by the voice text information obtained as a result of performing the speech recognition processing on the acquired voice information. When an external apparatus of the information processing apparatus according to the present embodiment performs the speech recognition processing, the information processing apparatus according to the present embodiment summarizes the character string indicated by the voice text information acquired from the external apparatus.
 In the information processing apparatus according to the present embodiment or the external apparatus, the speech recognition processing may be performed repeatedly, for example periodically or non-periodically, or may be performed in response to a predetermined trigger such as the timing at which the voice information is acquired. In the information processing apparatus according to the present embodiment or the external apparatus, the speech recognition processing may also be performed when a predetermined operation, such as an operation to start speech recognition related to summarization, is performed.
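 As a minimal sketch of the two placements just described, the following assumes hypothetical `recognize_locally` and `fetch_voice_text_from_external` functions; the specification does not define these interfaces.

```python
# Minimal sketch of obtaining the voice text information (assumed interfaces,
# not defined by this specification).

def recognize_locally(voice_info: bytes) -> str:
    """Hypothetical speech recognition process running on this apparatus."""
    raise NotImplementedError

def fetch_voice_text_from_external(voice_info: bytes) -> str:
    """Hypothetical acquisition of voice text information from an external apparatus."""
    raise NotImplementedError

def get_voice_text(voice_info: bytes, use_external: bool = False) -> str:
    # Either way, the result is the character string to be summarized.
    if use_external:
        return fetch_voice_text_from_external(voice_info)
    return recognize_locally(voice_info)
```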
 The weights related to summarization according to the present embodiment are an index for extracting more important words (in other words, words that the speaker presumably wants to convey) from the content of the utterance indicated by the voice information. By summarizing the content of the utterance indicated by the voice information based on the weights related to summarization according to the present embodiment, more important words corresponding to the weights related to summarization come to be included in the content of the summarized utterance.
 The weights related to summarization according to the present embodiment are set based on at least one of (one or two or more of) voice information, information about the user, information about an application, information about the environment, and information about a device, for example as shown below.
 Here, the information about the user according to the present embodiment includes, for example, at least one of user state information indicating the user's state and user operation information based on the user's operation.
 Examples of the user's state include an action the user is taking (including movements such as gestures) and the state of the user's emotions. The user's state is estimated by arbitrary action estimation processing or arbitrary emotion estimation processing using one or two or more of, for example, the user's biological information obtained from an arbitrary biosensor, detection results of motion sensors such as a speed sensor or an angular velocity sensor, and captured images captured by an imaging device. The processing related to estimating the user's state may be performed by the information processing apparatus according to the present embodiment, or may be performed by an external apparatus of the information processing apparatus according to the present embodiment. Examples of the user's operation include various operations such as an operation to start speech recognition related to summarization and an operation to start a predetermined application.
 The information about the application indicates, for example, the execution state of the application.
 The information about the environment indicates, for example, the situation around the user (or the situation in which the user is placed). Examples of the information about the environment include data indicating the level of noise around the user. The level of noise around the user is specified, for example, by extracting components other than utterances from the voice information generated by a microphone and performing threshold processing using one or two or more thresholds for level classification. The processing related to acquiring the information about the environment as described above may be performed by the information processing apparatus according to the present embodiment, or may be performed by an external apparatus of the information processing apparatus according to the present embodiment.
 The information about the device indicates, for example, one or both of the type of the device and the state of the device. Examples of the state of the device include the processing load of a processor included in the device.
 A specific example of the processing related to setting the weights related to summarization will be described later.
 By performing the summarization process according to the first information processing method, the content of the utterance indicated by the voice information is summarized. Therefore, the content of the speaker's utterance indicated by the voice information can be made more concise.
 In addition, in the summarization process according to the first information processing method, the content of the utterance is summarized based on, for example, the weights related to summarization set as described above, so that more important words corresponding to the weights related to summarization are included in the content of the summarized utterance.
 Therefore, by performing the summarization process according to the first information processing method, it is possible to obtain a summarization result that can further reduce the possibility that an "event caused by the difficulty of uttering only the content that the speaker wants to convey" occurs, such as "the communication partner requiring time to understand the content that the speaker wants to convey" or "translation taking time".
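 To picture how such a weight-based summary could behave, here is a minimal sketch assuming a simple word-level scoring scheme; the document does not specify the actual selection/extraction algorithm, so both the scoring and the keep ratio below are assumptions.

```python
# Minimal sketch of weight-based extraction (assumed scoring, not the spec's algorithm).
# Words with a high summarization weight are kept; the rest are dropped.

def summarize_by_weights(words: list[str],
                         weights: dict[str, float],
                         keep_ratio: float = 0.5) -> list[str]:
    # Score each word by its summarization weight (0.0 if not registered).
    scored = [(weights.get(w, 0.0), i, w) for i, w in enumerate(words)]
    # Keep the top-scoring words, then restore the original word order.
    n_keep = max(1, int(len(words) * keep_ratio))
    top = sorted(scored, key=lambda t: t[0], reverse=True)[:n_keep]
    kept = sorted(top, key=lambda t: t[1])
    return [w for _, _, w in kept]

# Example: with weights favoring time expressions, "arrive", "at", "noon" survive.
print(summarize_by_weights(
    ["I", "would", "like", "to", "arrive", "at", "noon"],
    {"noon": 1.0, "arrive": 0.8, "at": 0.6}))
```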
[1-2] Outline of the second information processing method
 By performing the summarization process according to the first information processing method, it is possible to obtain the summarized content of the utterance indicated by the voice information.
 The information processing apparatus according to the present embodiment performs, as the processing related to the second information processing method, a process of controlling the notification of notification content based on summary information (hereinafter referred to as the "notification control process").
 Here, the summary information according to the present embodiment indicates the content of the summarized utterance corresponding to voice information based on a first user's utterance. The summary information is obtained, for example, by performing the summarization process according to the first information processing method. Note that the content of the summarized utterance indicated by the summary information is not limited to the above, and may be summarized by any method capable of summarizing the content of the utterance indicated by the voice information based on the user's utterance. In the following, the case where the summary information indicates the content of the summarized utterance obtained by performing the summarization process according to the first information processing method is taken as an example.
 The information processing apparatus according to the present embodiment then controls the notification of the notification content to a second user. Here, the notification content for the second user may be, for example, the content of the summarized utterance indicated by the summary information itself, or it may not be that content itself, such as content whose notification order differs from that of the summarized utterance, or a translation of the content of the summarized utterance. The first user according to the present embodiment and the second user according to the present embodiment may be different or may be the same. An example of the case where the first user and the second user are different is the case where the first user is the speaker and the second user is the communication partner. An example of the case where the first user and the second user are the same is the case where the first user and the second user are the same speaker.
 The information processing apparatus according to the present embodiment causes the notification content to be notified by, for example, one or both of notification by a visual method and notification by an auditory method.
 When causing notification by a visual method, the information processing apparatus according to the present embodiment causes the notification content to be notified, for example, by displaying it on the display screen of a display device. The information processing apparatus according to the present embodiment displays the notification content on the display screen of the display device, for example, by transmitting to the display device a display control signal including display data corresponding to the notification content and a display command.
 Here, examples of the display screen on which the notification content is displayed include that of a display device constituting a display unit (described later) included in the information processing apparatus according to the present embodiment, or that of a display device external to the information processing apparatus according to the present embodiment. When the display screen on which the notification content is displayed belongs to an external display device, the information processing apparatus according to the present embodiment causes, for example, a communication unit (described later) included in the information processing apparatus according to the present embodiment, or a communication device external to the information processing apparatus according to the present embodiment, to transmit the display control signal to the external display device.
 When causing notification by an auditory method, the information processing apparatus according to the present embodiment causes the notification content to be notified, for example, by outputting it as voice (which may include music) from an audio output device such as a speaker. The information processing apparatus according to the present embodiment causes the notification content to be output as voice from the audio output device, for example, by transmitting to the audio output device an audio output control signal including audio data indicating the voice corresponding to the notification content and an audio output command.
 Here, the audio output device that outputs the notification content as voice may be, for example, an audio output device included in the information processing apparatus according to the present embodiment, or an audio output device external to the information processing apparatus according to the present embodiment. When the audio output device that outputs the notification content as voice is an external audio output device, the information processing apparatus according to the present embodiment causes, for example, a communication unit (described later) included in the information processing apparatus according to the present embodiment, or a communication device external to the information processing apparatus according to the present embodiment, to transmit the audio output control signal to the external audio output device.
 Note that the notification method of the notification content in the information processing apparatus according to the present embodiment is not limited to one or both of the notification method by the visual method and the notification method by the auditory method described above. For example, the information processing apparatus according to the present embodiment can also notify breaks in the notification content by a tactile notification method, for example by vibrating a vibration device.
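 To make the control flow concrete, the following is a minimal sketch of a notification controller dispatching over the channels just described; the device interfaces (`display`, `speaker`, `vibrator` objects with `send`/`pulse` methods), the signal shapes, and the "/" break marker are assumptions for illustration, not APIs from the specification.

```python
# Minimal sketch of the notification control process (assumed interfaces).
from dataclasses import dataclass

@dataclass
class DisplayControlSignal:
    display_data: str      # display data corresponding to the notification content
    command: str = "SHOW"  # display command

@dataclass
class AudioOutputControlSignal:
    audio_data: bytes      # audio data indicating voice for the notification content
    command: str = "PLAY"  # audio output command

def notify(content: str, display=None, speaker=None, vibrator=None,
           synthesize=lambda text: text.encode()):
    """Notify `content` visually and/or audibly; optionally mark breaks by vibration."""
    if display is not None:   # visual method
        display.send(DisplayControlSignal(display_data=content))
    if speaker is not None:   # auditory method
        speaker.send(AudioOutputControlSignal(audio_data=synthesize(content)))
    if vibrator is not None:  # tactile method: one pulse per segment of the content
        for _segment in content.split("/"):
            vibrator.pulse()
```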
 By performing the notification control process according to the second information processing method, notification content based on, for example, the content of the summarized utterance obtained by the summarization process according to the first information processing method is notified.
 Here, as described above, the content of the summarized utterance obtained by the summarization process according to the first information processing method corresponds to a summarization result that can further reduce the possibility that an "event caused by the difficulty of uttering only the content that the speaker wants to convey" occurs.
 Therefore, by performing the notification control process according to the second information processing method and thereby notifying the notification content, it is possible to further reduce the possibility that an "event caused by the difficulty of uttering only the content that the speaker wants to convey" occurs, such as "the communication partner requiring time to understand the content that the speaker wants to convey" or "translation taking time".
[1-3] Other processing related to the information processing method according to the present embodiment
 Note that the processing related to the information processing method according to the present embodiment is not limited to the summarization process according to the first information processing method and the notification control process according to the second information processing method.
 For example, the processing related to the information processing method according to the present embodiment may further include a process of translating the content of the utterance summarized by the summarization process according to the first information processing method into another language (hereinafter referred to as the "translation process"). By performing the translation process, the content of the summarized utterance is translated from a first language corresponding to the voice information based on the utterance into a second language different from the first language. In the following, the translated content of the summarized utterance obtained by performing the translation process is referred to as the "translation result".
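 For orientation, the following is a minimal sketch of chaining the summarization process and the translation process; the `translate` function and the summarizer callable are hypothetical stand-ins, since the specification does not prescribe a particular translation engine or interface.

```python
# Minimal sketch: summarize first, then translate the summary (assumed functions).
from typing import Callable

def translate(text: str, source_lang: str, target_lang: str) -> str:
    """Hypothetical machine translation step; backend not specified by the document."""
    raise NotImplementedError

def summarize_and_translate(voice_text: str,
                            summarizer: Callable[[str], str],
                            first_lang: str = "en",
                            second_lang: str = "ja") -> str:
    summary = summarizer(voice_text)                    # summarization process
    return translate(summary, first_lang, second_lang)  # translation result
```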
 Here, the translation process according to the present embodiment may be performed as part of the processing related to the first information processing method, or may be performed as part of the processing related to the second information processing method.
 The processing related to the information processing method according to the present embodiment may further include a recording control process that records one or both of the result of the summarization process according to the first information processing method and the result of the translation process according to the present embodiment on an arbitrary recording medium.
 In the recording control process, for example, "one or both of the result of the summarization process according to the first information processing method and the result of the translation process according to the present embodiment" may be associated with "information about the user, such as position information corresponding to the user (described later) and the user's biological information obtained from an arbitrary biosensor", and recorded as a log. By storing such a log on a recording medium, for example, "the user looking back on a record of a trip or the like after the fact" is realized.
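 A log entry of the kind described could be represented as below; this is a minimal sketch, and the field names are assumptions rather than a format defined in the specification.

```python
# Minimal sketch of a log entry for the recording control process (assumed fields).
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple
import time

@dataclass
class SummaryLogEntry:
    summary_result: str                             # result of the summarization process
    translation_result: Optional[str] = None        # result of the translation process
    position: Optional[Tuple[float, float]] = None  # position information (lat, lon)
    biometrics: Dict[str, float] = field(default_factory=dict)  # e.g. heart rate
    timestamp: float = field(default_factory=time.time)
```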
[2] An example of a use case to which the information processing method according to the present embodiment is applied
 Next, an example of the processing related to the information processing method according to the present embodiment will be described while describing an example of a use case to which the information processing method according to the present embodiment is applied. In the following, as a use case to which the information processing method according to the present embodiment is applied, the case where the information processing method according to the present embodiment is applied to "conversation support" (including the case where translation is performed, as described later) will be described.
 Note that use cases to which the information processing method according to the present embodiment is applied are not limited to "conversation support". For example, the information processing method according to the present embodiment can be applied to any use case, such as those shown below, in which the content of the utterance indicated by the voice information can be summarized.
・"Meeting transcription", realized by summarizing the content of the utterances indicated by voice information representing the audio of a meeting, generated by an IC (Integrated Circuit) recorder or the like
・"Automatic program caption creation", realized by summarizing the content of the utterances indicated by voice information representing the audio of a television program
・One or both of "automatic conference caption creation" and "conference transcription", realized by summarizing the content of the utterances indicated by voice information representing the audio of a video conference
 FIGS. 1 to 5 are explanatory diagrams for explaining an example of a use case to which the information processing method according to the present embodiment is applied.
 The person indicated by "U1" in FIGS. 1, 2, and 5 corresponds to the user according to the present embodiment. The person indicated by "U2" in FIGS. 2 and 5 corresponds to the partner with whom the user U1 communicates. In the following, the person indicated by "U1" in FIGS. 1, 2, and 5 is referred to as "user U1", and the person indicated by "U2" in FIGS. 2 and 5 is referred to as "communication partner U2". In the following, the case where the native language of the communication partner U2 is Japanese is taken as an example.
 FIGS. 1, 2, and 5 show an example in which the user U1 is wearing an eyewear-type apparatus having a display screen. A voice input device such as a microphone, an audio output device such as a speaker, and an imaging device are connected to the eyewear-type apparatus worn by the user U1 shown in FIGS. 1, 2, and 5.
 In the example use case described below, examples of the information processing apparatus according to the present embodiment include a wearable apparatus used while worn on the body of the user U1, such as the eyewear-type apparatus shown in FIG. 1, a communication apparatus such as a smartphone, and a computer such as a server. Note that the information processing apparatus according to the present embodiment is not limited to these examples. Application examples of the information processing apparatus according to the present embodiment will be described later.
 Hereinafter, an example of a use case to which the information processing method according to the present embodiment is applied will be described with reference to FIGS. 1 to 5 as appropriate.
 Assume that the English-speaking user U1 has arrived at a Japanese airport by plane.
(a) An example of processing related to setting the weights related to summarization
 The information processing apparatus according to the present embodiment sets the weights related to summarization, for example, by using a table for setting the weights related to summarization. Here, the table for setting the weights related to summarization may be stored in a storage unit (described later) included in the information processing apparatus according to the present embodiment, or may be stored on a recording medium external to the information processing apparatus according to the present embodiment. The information processing apparatus according to the present embodiment uses the table for setting the weights related to summarization, for example, by referring to the storage unit (described later) or the external recording medium as appropriate.
 The information processing apparatus according to the present embodiment can also set the weights related to summarization, for example, by determining the weights related to summarization using an arbitrary algorithm for determining the weights related to summarization.
 FIGS. 6 to 8 are explanatory diagrams showing examples of tables for setting the weights related to summarization according to the present embodiment.
 FIG. 6 shows an example of a table for specifying the weights related to summarization: a table in which each registered vocabulary item is weighted for each type of weight related to summarization. Here, in FIG. 6, among the combinations of a weight type related to summarization and a vocabulary item, the combinations whose value is "1" correspond to weighted combinations, and the combinations whose value is "0" correspond to unweighted combinations.
 FIGS. 7 and 8 each show an example of a table for specifying the types of weights related to summarization. FIG. 7 shows an example of a table in which schedule content specified from the state of a schedule application (or schedule content estimated from the state of the schedule application) is associated with types of weights related to summarization. FIG. 8 shows an example of a table in which the user's action (an example of the user's state) is associated with types of weights related to summarization.
 The information processing apparatus according to the present embodiment sets the weights related to summarization by using both a table for specifying the types of weights related to summarization, such as those shown in FIGS. 7 and 8, and a table for specifying the weights related to summarization, such as that shown in FIG. 6, as tables for setting the weights related to summarization.
 Needless to say, examples of the table for specifying the types of weights related to summarization according to the present embodiment are not limited to those shown in FIGS. 7 and 8, and examples of the table for specifying the weights related to summarization are not limited to that shown in FIG. 6. The table for setting the weights related to summarization according to the present embodiment may also be provided for each language, such as Japanese, English, and Chinese.
 When the information processing apparatus according to the present embodiment determines the types of weights related to summarization based on at least one of, for example, voice information, information about the user, information about an application, information about the environment, and information about a device, it is possible to set the weights related to summarization using only a table for specifying the weights related to summarization such as that shown in FIG. 6.
 The information processing apparatus according to the present embodiment determines the types of weights related to summarization by selecting, from a table for specifying the weights related to summarization such as that shown in FIG. 6, the types of weights related to summarization associated with a recognition result based on at least one of, for example, voice information, information about the user, information about an application, information about the environment, and information about a device. Then, the information processing apparatus according to the present embodiment refers to, for example, the table for specifying the weights related to summarization such as that shown in FIG. 6, and sets a weight for the vocabulary corresponding to those combinations of the determined weight types and vocabulary items whose value is "1".
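 One way to represent these tables in code is sketched below; the vocabulary items, weight types, and mappings are illustrative stand-ins for the entries of FIGS. 6 to 8, which are not fully reproduced in this text.

```python
# Minimal sketch of the weight-setting tables (illustrative entries, not FIGS. 6-8 verbatim).

# FIG. 6-style table: vocabulary x weight type, 1 = weighted, 0 = not weighted.
WEIGHT_TABLE = {
    "morning": {"time": 1, "place": 0, "game_term": 0},
    "when":    {"time": 1, "place": 0, "game_term": 0},
    "Shibuya": {"time": 0, "place": 1, "game_term": 0},
    "where":   {"time": 0, "place": 1, "game_term": 0},
}

# FIG. 8-style table: user action -> types of weights related to summarization.
ACTION_TO_WEIGHT_TYPES = {
    "moving": ["time"],
    "in_game": ["game_term"],
    "eating": ["cooking"],
}

def set_weights(recognized_action: str) -> dict[str, float]:
    """Select weight types for the recognized action, then weight the matching vocabulary."""
    weight_types = ACTION_TO_WEIGHT_TYPES.get(recognized_action, [])
    return {
        word: 1.0
        for word, row in WEIGHT_TABLE.items()
        if any(row.get(t, 0) == 1 for t in weight_types)
    }
```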
 As specific examples, the information processing apparatus according to the present embodiment sets the weights related to summarization, for example, by performing any of the processes in (a-1) to (a-5) below.
 Note that examples of setting the weights related to summarization are not limited to those shown in (a-1) to (a-5) below. For example, the information processing apparatus according to the present embodiment can also set the weights related to summarization according to the language recognized based on the voice information. Examples of setting the weights related to summarization according to the language include "increasing the weight of verbs when the language recognized based on the voice information is Japanese" and "increasing the weight of nouns when the language recognized based on the voice information is English". The information processing apparatus according to the present embodiment may also set weights related to summarization according to the situation around the user indicated by the information about the environment, or according to the content indicated by the information about the device (for example, the type of the device).
(a-1) First example of setting the weights related to summarization: an example of setting based on the user's state indicated by the user state information included in the information about the user
 For example, when the user U1 operates an apparatus such as a smartphone, starts a schedule application, and confirms a destination, the information processing apparatus according to the present embodiment recognizes that the user U1 is moving toward the destination. The information processing apparatus according to the present embodiment then sets the weights related to summarization corresponding to the recognition result by referring to the table for setting the weights related to summarization.
 As a specific example, based on the recognition result obtained as described above that the user U1 is moving toward the destination, the information processing apparatus according to the present embodiment specifies "time", which corresponds to the action "moving", as the type of weight related to summarization from the table for specifying the types of weights related to summarization shown in FIG. 8. Then, the information processing apparatus according to the present embodiment refers to the table for specifying the weights related to summarization shown in FIG. 6 and sets a weight for the vocabulary corresponding to those combinations of the specified weight type and vocabulary items whose value is "1". When the table for specifying the weights related to summarization shown in FIG. 6 is used, weights are set for the vocabulary items "morning", "when", and so on.
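 Using the `set_weights` sketch above, this first example would play out roughly as follows (again with the illustrative table entries, not the actual contents of FIGS. 6 and 8):

```python
# The recognition result "moving" selects the weight type "time",
# which in turn weights the time-related vocabulary items.
weights = set_weights("moving")
print(weights)  # {'morning': 1.0, 'when': 1.0}
```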
 When the user U1 operates an apparatus such as a smartphone and is running a game application, the information processing apparatus according to the present embodiment recognizes that the user U1 is playing a game. The information processing apparatus according to the present embodiment then sets the weights related to summarization corresponding to the recognition result by referring to the table for setting the weights related to summarization.
 For example, based on the recognition result obtained as described above that the user U1 is playing a game, the information processing apparatus according to the present embodiment specifies "game terms", which corresponds to the action "in game", as the type of weight related to summarization from the table for specifying the types of weights related to summarization shown in FIG. 8. Then, the information processing apparatus according to the present embodiment refers to the table for specifying the weights related to summarization shown in FIG. 6 and sets a weight for the vocabulary corresponding to those combinations of the determined weight type and vocabulary items whose value is "1".
 Based on the recognition result obtained as described above that the user U1 is playing a game, the information processing apparatus according to the present embodiment can also determine a type of weight related to summarization associated with the recognition result, such as "game terms" included in the table for specifying the weights related to summarization shown in FIG. 6, as the type of weight related to summarization. Then, the information processing apparatus according to the present embodiment refers to the table for specifying the weights related to summarization shown in FIG. 6 and sets a weight for the vocabulary corresponding to those combinations of the determined weight type and vocabulary items whose value is "1".
 The information processing apparatus according to the present embodiment can also set the weights related to summarization based on, for example, a recognition result of the state of the user U1 estimated from the detection results of motion sensors, such as an acceleration sensor or an angular velocity sensor, included in an apparatus such as a smartphone used by the user U1.
 For example, when a recognition result that the user U1 is eating is obtained based on the detection results of the motion sensors, "cooking", which corresponds to the action "eating", is specified as the type of weight related to summarization from the table for specifying the types of weights related to summarization shown in FIG. 8. Then, the information processing apparatus according to the present embodiment refers to the table for specifying the weights related to summarization shown in FIG. 6 and sets a weight for the vocabulary corresponding to those combinations of the determined weight type and vocabulary items whose value is "1".
(a-2) Second example of setting the weights related to summarization: an example of setting based on voice information
 The information processing apparatus according to the present embodiment sets the weights related to summarization based on the voice information.
 Based on the voice information, the information processing apparatus according to the present embodiment determines the types of weights related to summarization, for example, as in the list below (a code sketch of these rules follows the list).
・When the average frequency band of the voice indicated by the voice information is, for example, 300 to 550 [Hz]: "male" is determined as the type of weight related to summarization.
・When the average frequency band of the voice indicated by the voice information is, for example, 400 to 700 [Hz]: "female" is determined as the type of weight related to summarization.
・When the sound pressure or volume of the voice indicated by the voice information is equal to or greater than a set first threshold, or greater than the first threshold: one or both of "anger" and "joy" are determined as the types of weights related to summarization.
・When the sound pressure or volume of the voice indicated by the voice information is equal to or less than a set second threshold, or less than the second threshold: one or two or more of "sadness", "discomfort", "pain", and "anxiety" are determined as the types of weights related to summarization.
・When the pitch (height of the sound) or the speech rate (amount of phonemes per unit time) of the voice indicated by the voice information is greater than a set third threshold, or equal to or greater than the third threshold: "excitement" is determined as the type of weight related to summarization.
・When the pitch or speech rate of the voice indicated by the voice information is less than a set fourth threshold, or equal to or less than the fourth threshold: "calm" is determined as the type of weight related to summarization.
 Examples of the first threshold include a fixed value such as 72 [dB]. Examples of the second threshold include a fixed value such as 54 [dB]. Note that the first threshold and the second threshold may change dynamically depending on, for example, the distance between a user such as the user U1 and a communication partner such as the communication partner U2. An example of dynamically changing the first threshold and the second threshold is "raising the thresholds by 6 [dB] every time the distance becomes 0.5 [m] shorter, and lowering them by 6 [dB] every time it becomes 0.5 [m] longer". The distance may be estimated, for example, by arbitrary image processing on a captured image captured by an imaging device, or may be acquired by a distance sensor. When the distance is estimated, the processing related to estimating the distance may be performed by the information processing apparatus according to the present embodiment, or may be performed by an external apparatus of the information processing apparatus according to the present embodiment.
 The third threshold and the fourth threshold may be fixed values set in advance, or may be variable values that can be changed based on a user operation or the like.
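 The distance-dependent adjustment described above could look like the following minimal sketch, taking the 72 [dB] / 54 [dB] fixed values and the 6 [dB] per 0.5 [m] rule from the text, with a 1 [m] reference distance as an added assumption:

```python
# Minimal sketch of distance-dependent thresholds (reference distance is assumed).

def adjusted_threshold(base_db: float, distance_m: float,
                       reference_m: float = 1.0) -> float:
    """Raise by 6 dB per 0.5 m closer than the reference; lower by 6 dB per 0.5 m farther."""
    steps = (reference_m - distance_m) / 0.5
    return base_db + 6.0 * steps

print(adjusted_threshold(72.0, 0.5))  # first threshold at 0.5 m -> 78.0 dB
print(adjusted_threshold(54.0, 2.0))  # second threshold at 2.0 m -> 42.0 dB
```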
 Note that the types of weights related to summarization determined based on the voice information are not limited to the examples shown above.
 For example, it is possible to estimate an emotion (for example, anger, joy, or sadness) based on one or both of the number of morae obtained from the voice information and the position of the accent, and to set a type of weight related to summarization corresponding to the estimated emotion. When setting a type of weight related to summarization corresponding to the estimated emotion, the information processing apparatus according to the present embodiment may change the strength of the weight related to the emotion based on, for example, the rate of change of the fundamental frequency, the rate of change of the sound, or the rate of change of the utterance period obtained from the voice information.
 Here, as in the first example shown in (a-1) above, the information processing apparatus according to the present embodiment may determine the types of weights related to summarization using tables for specifying the types of weights related to summarization such as those shown in FIGS. 7 and 8, or may determine the weights related to summarization using only a table for specifying the weights related to summarization such as that shown in FIG. 6.
 When the types of weights related to summarization are determined, the information processing apparatus according to the present embodiment refers, as in the first example shown in (a-1) above, to the table for specifying the weights related to summarization such as that shown in FIG. 6, and sets a weight for the vocabulary corresponding to those combinations of the specified weight types and vocabulary items whose value is "1".
(a-3) Third example of setting weights related to summarization: setting weights based on the execution state of an application indicated by application-related information
 The information processing apparatus according to the present embodiment sets weights related to summarization based on the execution state of an application.
 For example, when the user U1 operates a device such as a smartphone to launch a schedule application and confirms a destination, the information processing apparatus according to the present embodiment specifies, based on the execution state of the schedule application, "time" and "place" corresponding to the schedule content "place move (biz)" as the types of weights related to summarization, using the weight-type table shown in FIG. 7. The information processing apparatus according to the present embodiment then refers to the weight-specifying table shown in FIG. 6 and sets a weight for each vocabulary term whose combination with the specified weight types has the value "1". When the table shown in FIG. 6 is used, weights are set for the vocabulary terms "a.m.", "Shibuya", "when", "where", and so on.
 The information processing apparatus according to the present embodiment can also determine the types of weights related to summarization based on the properties of the application being executed, and set the weights accordingly, for example as follows (see the sketch after this list):
  ・When a map application is being executed: "time", "place", "person name", and the like are determined as the types of weights related to summarization.
  ・When a transit guidance application is being executed: "time", "place", "train", and the like are determined as the types of weights related to summarization.
  ・When an application for smoothly asking questions about Japan is being executed: "question", "Japan", and the like are determined as the types of weights related to summarization.
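 A minimal sketch of this property-based mapping follows; the application identifiers are assumptions introduced for illustration.

```python
# Minimal sketch of determining weight types from the properties of the
# running application, per the list above. The string identifiers are
# hypothetical; a real system would use its own application metadata.

APP_WEIGHT_TYPES = {
    "map": ["time", "place", "person name"],
    "transit_guide": ["time", "place", "train"],
    "japan_qa": ["question", "Japan"],
}

def weight_types_for(app_id: str) -> list[str]:
    """Return the weight types associated with the running application."""
    return APP_WEIGHT_TYPES.get(app_id, [])

print(weight_types_for("transit_guide"))  # ['time', 'place', 'train']
```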
(a-4) Fourth example of setting weights related to summarization: setting weights based on a user operation indicated by user operation information included in user-related information
 The information processing apparatus according to the present embodiment sets weights related to summarization based on a user operation.
 The information processing apparatus according to the present embodiment determines, for example, the weight type selected by an operation of selecting a weight type related to summarization (an example of a user operation) as the weight type to be used for setting the weights related to summarization.
 Further, when a predetermined operation such as an operation for starting voice recognition related to summarization is performed, the information processing apparatus according to the present embodiment may automatically set the weight type associated in advance with that predetermined operation. As one example, when an operation for starting voice recognition related to summarization is performed, "question" or the like is determined as the type of weight related to summarization.
 Once the weight related to summarization is determined, the information processing apparatus according to the present embodiment refers to the table for specifying the weights related to summarization shown in FIG. 6, as in the first example shown in (a-1) above, and sets a weight for each vocabulary term whose combination with the specified weight type has the value "1" in the table.
(a-5) Fifth example of setting weights related to summarization
 The information processing apparatus according to the present embodiment can set weights related to summarization by combining two or more of (a-1) to (a-4) above.
(b) Example of summarization processing according to the first information processing method
 For example, assume a case where the user U1, wanting to throw away trash at a station while traveling toward a destination and finding no trash can there, asks the communication partner U2 in English why there are no trash cans at the station (FIGS. 1 and 2).
 Here, if the communication partner U2 cannot sufficiently understand English, it is highly likely that the communication partner U2 cannot fully understand what the user U1 is asking.
 Therefore, the information processing apparatus according to the present embodiment performs the summarization processing according to the first information processing method and summarizes the content of the utterance indicated by the voice information generated, for example, by a microphone connected to the eyewear-type apparatus shown in FIG. 1. As described above, the information processing apparatus according to the present embodiment summarizes, for example, the character string indicated by the voice text information based on the voice information.
 More specifically, the information processing apparatus according to the present embodiment summarizes the content of the utterance by an objective function using the weights related to summarization set by the processing shown in (a) above, for example as shown in Equation 1 below.
 [Equation 1]
 Here, "W" in Equation 1 is a weight related to summarization. "a_i" in Equation 1 is a parameter that adjusts the contribution rate of each weight related to summarization and takes, for example, a real value from 0 to 1. "z_{y_i}" in Equation 1 is a binary variable that is "1" if the phrase y_i is included and "0" if the phrase y_i is not included.
 Note that the information processing apparatus according to the present embodiment is not limited to the method using the objective function with the weights related to summarization shown in Equation 1; any method capable of summarizing the content of an utterance using the set weights related to summarization can be used.
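 Since Equation 1 is rendered only as an image in the publication, the following sketch assumes a common form of such weighted objectives, maximizing the sum of a_i · W(y_i) · z_{y_i} over the included phrases under a length budget and solving it greedily; it is an illustration consistent with the symbol definitions above, not the exact equation of the embodiment.

```python
# Greedy sketch of a weighted summarization objective. The assumed form is
# maximize sum_i a_i * W(y_i) * z_{y_i} subject to a character budget;
# the true Equation 1 may differ.

def summarize(phrases, weight, contribution, budget):
    """phrases: list of (phrase, length) tuples.
    weight[p] plays the role of W for phrase p; contribution[p] of a_i.
    Returns the phrases whose z_{y_i} would be set to 1."""
    ranked = sorted(
        phrases,
        key=lambda p: contribution.get(p[0], 1.0) * weight.get(p[0], 0.0),
        reverse=True,
    )
    chosen, used = [], 0
    for phrase, length in ranked:
        if weight.get(phrase, 0.0) > 0 and used + length <= budget:
            chosen.append(phrase)  # include the phrase: z_{y_i} = 1
            used += length
    return chosen
```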
 FIG. 3 shows an example of the result of the summarization processing according to the first information processing method. A of FIG. 3 shows an example of the content of an utterance before summarization. B of FIG. 3 shows an example of the summarized content of the utterance, and C of FIG. 3 shows another example of the summarized content of the utterance.
 By summarizing the content of the utterance as shown in B of FIG. 3, the content becomes simpler than before summarization. Accordingly, even if the communication partner U2 cannot sufficiently understand English, summarizing the content of the utterance as shown in B of FIG. 3 makes it more likely that the communication partner U2 can understand what the user U1 is asking.
 C of FIG. 3 shows an example in which the information processing apparatus according to the present embodiment further performs morphological analysis on the summarization result shown in B of FIG. 3 and takes, as the summarized content of the utterance, divided texts obtained by dividing the summarization result shown in B of FIG. 3 into units combining morphemes based on the result of the morphological analysis.
 For example, when the language of the character string indicated by the voice text information corresponding to the content of the utterance is Japanese, the information processing apparatus according to the present embodiment generates divided texts in units that combine a main part of speech (noun, verb, adjective, or adverb) with the other morphemes attached to it. Further, for example, when the language of the character string indicated by the voice text information corresponding to the content of the utterance is English, the information processing apparatus according to the present embodiment additionally treats the 5W1H words as divided texts.
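 A minimal sketch of forming such divided texts from a morphological analysis follows; the (surface, part-of-speech) input format is an assumption, and any morphological analyzer could supply it.

```python
# Minimal sketch of generating divided texts: each chunk combines one
# main-part-of-speech morpheme with the other morphemes attached to it.
# The input is assumed to be (surface, part_of_speech) pairs from some
# morphological analyzer.

MAIN_POS = {"noun", "verb", "adjective", "adverb"}

def divide(morphemes):
    """Start a new chunk at each main-POS morpheme; trailing morphemes
    (particles, auxiliaries, ...) stay attached to the preceding chunk."""
    chunks, current = [], []
    for surface, pos in morphemes:
        if pos in MAIN_POS and current:
            chunks.append("".join(current))
            current = []
        current.append(surface)
    if current:
        chunks.append("".join(current))
    return chunks

# Example with assumed tags: two chunks, "駅には" and "ゴミ箱がない".
print(divide([("駅", "noun"), ("に", "other"), ("は", "other"),
              ("ゴミ箱", "noun"), ("が", "other"), ("ない", "adjective")]))
```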
 By summarizing the content of the utterance as shown in C of FIG. 3, the content of the utterance becomes simpler than the summarization result shown in B of FIG. 3. Accordingly, even if the communication partner U2 cannot sufficiently understand English, summarizing the content of the utterance as shown in C of FIG. 3 makes it even more likely than with the summarization result shown in B of FIG. 3 that the communication partner U2 can understand what the user U1 is asking.
(c) Example of translation processing
 The information processing apparatus according to the present embodiment may further translate the content of the utterance summarized by the summarization processing shown in (b) above into another language. As described above, the information processing apparatus according to the present embodiment translates the first language corresponding to the utterance into a second language different from the first language.
 The information processing apparatus according to the present embodiment, for example, specifies the position where the user U1 is present and, when the language of the character string indicated by the voice text information corresponding to the content of the utterance differs from the official language at the specified position, translates the summarized content of the utterance into that official language. The position where the user U1 is present is specified based on position information acquired from, for example, a wearable apparatus worn by the user U1, such as the eyewear-type apparatus shown in FIG. 1, or a communication apparatus such as a smartphone carried by the user U1. Examples of the position information include data indicating the detection result of a device capable of specifying a position, such as a GNSS (Global Navigation Satellite System) device (or the estimation result of a device capable of estimating a position by an arbitrary method).
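 A minimal sketch of this location-based decision follows; the country-to-official-language lookup is an assumption standing in for position information plus some language database.

```python
# Minimal sketch of deciding the translation target from the user's
# position. The lookup table is a hypothetical stand-in for resolving a
# GNSS-based position to the local official language.

OFFICIAL_LANGUAGE = {"JP": "ja", "US": "en", "FR": "fr"}  # illustrative

def target_language(utterance_lang: str, country_code: str):
    """Return the language to translate into, or None when the utterance
    already matches the official language at the user's position."""
    official = OFFICIAL_LANGUAGE.get(country_code)
    if official and official != utterance_lang:
        return official
    return None

print(target_language("en", "JP"))  # 'ja': translate English speech to Japanese
print(target_language("ja", "JP"))  # None: no translation needed
```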
 Further, when the language of the character string indicated by the voice text information corresponding to the content of the utterance differs from a set language, the information processing apparatus according to the present embodiment may translate the summarized content of the utterance into that set language.
 The information processing apparatus according to the present embodiment translates the summarized content of the utterance into another language by processing of an arbitrary algorithm capable of translating into the other language.
 FIG. 4 shows an example of the result of the translation processing according to the present embodiment. A of FIG. 4 shows the summarization result shown in C of FIG. 3 as an example of the summarized content of an utterance before translation. B of FIG. 4 shows an example of the translation result obtained by translating the summarization result shown in C of FIG. 3 into Japanese by the translation processing. Hereinafter, a translation result obtained by translating divided texts such as the summarization result shown in C of FIG. 3 may be referred to as "divided translation texts".
 By translating the summarized content of the utterance into Japanese, the native language of the communication partner U2, as shown in B of FIG. 4, the possibility that the communication partner U2 can understand what the user U1 is asking can be raised even further than when the summarized content of the utterance is not translated.
(d) Example of notification control processing according to the second information processing method
 The information processing apparatus according to the present embodiment causes the content of the utterance indicated by the voice information, summarized by the summarization processing shown in (b) above, to be notified. Further, when the summarized content of the utterance has been translated into another language by additionally performing the translation processing shown in (c) above, the information processing apparatus according to the present embodiment causes the translation result to be notified.
 As described above, the information processing apparatus according to the present embodiment causes the summarized content of the utterance (or the translation result) to be notified as notification content by one or both of notification by a visual method and notification by an auditory method, for example.
 FIG. 5 shows an example of the result of the notification control processing according to the present embodiment. FIG. 5 shows an example in which the translation result is audibly notified by outputting a voice indicating the translation result from a voice output device connected to the eyewear-type apparatus worn by the user U1, and also shows an example in which the translation result shown in B of FIG. 4 is notified.
 FIG. 5 shows an example in which, based on the voice information, the sound pressure of the portion corresponding to the utterance portion with strong sound pressure (the "why" portion shown in FIG. 5) is made stronger than the other portions.
 FIG. 5 also shows an example in which, when the voice indicating the translation result is output, the boundaries between the divided texts are notified by inserting sound feedback, as indicated by the symbol "S" in FIG. 5.
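 A minimal sketch of assembling such an audible notification follows; speak() and play_feedback() are assumed stubs standing in for a platform's voice-output API, and the 6 [dB] boost for the emphasized portion is an assumption.

```python
# Minimal sketch of the audible notification: divided translation texts
# separated by a short feedback sound, with the chunk corresponding to
# strong sound pressure in the source speech emphasized.

def speak(text, gain_db=0.0):
    print(f"[TTS {gain_db:+.0f} dB] {text}")  # stub for a TTS engine

def play_feedback(name):
    print(f"[sound: {name}]")  # stub for the feedback sound "S"

def notify(chunks, emphasized_index=None, feedback="chime"):
    for i, chunk in enumerate(chunks):
        gain_db = 6.0 if i == emphasized_index else 0.0  # assumed boost
        speak(chunk, gain_db=gain_db)
        if i < len(chunks) - 1:
            play_feedback(feedback)  # inserted between divided texts

notify(["なぜ", "ない", "ゴミ箱", "駅"], emphasized_index=0)
```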
 Note that the notification realized by the notification control processing according to the second information processing method is not limited to the example shown in FIG. 5. Other examples of notification realized by the notification control processing according to the second information processing method will be described later.
 For example, as shown in FIG. 5, by outputting the summarized content of the utterance translated into Japanese, the native language of the communication partner U2 (the translation result), as notification content by voice from the voice output device, it becomes easier to make the communication partner U2 understand what the user U1 is asking.
 A use case to which the information processing method according to the present embodiment is applied is the "conversation support" use case described above (including cases where translation is performed). As described above, it goes without saying that the use cases to which the information processing method according to the present embodiment is applied are not limited to such "conversation support".
[3] Processing related to the information processing method according to the present embodiment
 Next, the processing related to the information processing method according to the present embodiment will be described more specifically. Below, the summarization processing according to the first information processing method, the translation processing according to the present embodiment, and the notification control processing according to the second information processing method are described.
[3-1] Summarization processing according to the first information processing method
 The information processing apparatus according to the present embodiment summarizes the content of the utterance indicated by the voice information based on the user's utterance, based on information indicating weights related to summarization.
 As described above, the weights related to summarization are set based on, for example, one or more of the voice information, the user's state, the execution state of an application, and a user operation. Also as described above, the information processing apparatus according to the present embodiment summarizes the content of the utterance by, for example, an objective function using the set weights related to summarization, as shown in Equation 1 above.
 Further, the information processing apparatus according to the present embodiment can perform, as the summarization processing, one or more of the following processes (1) to (3), for example.
(1) First example of summarization processing: start timing of the summarization processing
 The information processing apparatus according to the present embodiment performs the summarization processing when it determines that a set predetermined start condition is satisfied.
 Examples of the start condition for the summarization processing according to the present embodiment include the following:
  ・A condition related to the non-utterance period during which no utterance continues
  ・A condition related to the state of voice recognition for acquiring the content of an utterance from voice information
  ・A condition related to the content of an utterance
  ・A condition related to the elapsed time since the voice information was obtained
 FIGS. 9A to 9C are explanatory diagrams for explaining an example of the summarization processing according to the first information processing method, and show an outline of the start timing of the summarization processing. An example of the processing under each start condition is described below with reference to FIGS. 9A to 9C as appropriate.
(1-1) First example of the start condition: case where the start condition is a condition related to the non-utterance period
 An example of the condition related to the non-utterance period is a condition related to the length of the non-utterance period. When the predetermined start condition is a condition related to the non-utterance period, the information processing apparatus according to the present embodiment determines that the start condition is satisfied when the non-utterance period exceeds a set predetermined period, or when the non-utterance period becomes equal to or longer than the set predetermined period.
 Here, the period according to the first example of the start condition may be a fixed period set in advance, or may be a variable period that can be changed based on a user operation or the like.
 Referring to A of FIG. 9A, the "silent interval" shown in A of FIG. 9A corresponds to the non-utterance period.
 The information processing apparatus according to the present embodiment detects, for example, a voice interval in which voice is present based on the voice information. Then, when a silent interval exceeding the set time, or a silent interval equal to or longer than the set time, is detected after the voice interval is detected, the information processing apparatus according to the present embodiment treats this as a start trigger of the summarization processing (hereinafter referred to as a "summary trigger") and starts the summarization processing.
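 A minimal sketch of this silence-based summary trigger follows; the frame-based processing, the 20 [ms] hop, and the 1.0 [s] limit are assumptions.

```python
# Minimal sketch of the first start condition: fire the summary trigger
# when, after a voiced interval, silence lasts longer than a set period.
# Frame-based voice-activity input and the concrete values are assumptions.

SILENCE_LIMIT_S = 1.0  # assumed; may be fixed or user-adjustable

def silence_trigger(frames_voiced, hop_s=0.02):
    """frames_voiced: iterable of booleans (True = speech present).
    Returns True once post-speech silence exceeds SILENCE_LIMIT_S."""
    heard_speech, silence_s = False, 0.0
    for voiced in frames_voiced:
        if voiced:
            heard_speech, silence_s = True, 0.0
        elif heard_speech:
            silence_s += hop_s
            if silence_s > SILENCE_LIMIT_S:
                return True  # summary trigger
    return False

# 10 voiced frames followed by 60 silent frames (1.2 s) fires the trigger.
print(silence_trigger([True] * 10 + [False] * 60))  # True
```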
(1-2) Second example of the start condition: case where the start condition is a first condition related to the state of voice recognition
 An example of the first condition related to the state of voice recognition is a condition related to the detection of a voice recognition stop request. When the predetermined start condition is the first condition related to the state of voice recognition, the information processing apparatus according to the present embodiment determines that the start condition is satisfied based on the detection of a voice recognition stop request, for example, when a voice recognition stop request is detected.
 Referring to B of FIG. 9A, after voice recognition is started based on the "voice recognition start operation" shown in B of FIG. 9A, the information processing apparatus according to the present embodiment treats, as a summary trigger, the detection of a voice recognition stop request including a voice recognition stop command based on the "voice recognition stop operation" shown in B of FIG. 9A, and starts the summarization processing. Here, examples of the voice recognition start operation and the voice recognition stop operation include operations on an arbitrary UI (User Interface) related to voice recognition.
 Note that the voice recognition stop request according to the present embodiment is not limited to being obtained based on a voice recognition stop operation. For example, a voice recognition stop request may be generated by an apparatus performing voice recognition processing when an error occurs during the voice recognition processing or when interrupt processing occurs during the voice recognition processing.
(1-3) Third example of the start condition: case where the start condition is a second condition related to the state of voice recognition
 An example of the second condition related to the state of voice recognition is a condition related to the completion of voice recognition. When the predetermined start condition is the second condition related to the state of voice recognition, the information processing apparatus according to the present embodiment determines that the start condition is satisfied based on the detection of the completion of voice recognition, for example, when the completion of voice recognition is detected.
 Referring to A of FIG. 9B, the information processing apparatus according to the present embodiment treats, as a summary trigger, the case where the result of the voice recognition processing is obtained, as indicated by "voice recognition result acquisition" in A of FIG. 9B, and starts the summarization processing.
(1-4) Fourth example of the start condition: case where the start condition is a first condition related to the content of an utterance
 An example of the first condition related to the content of an utterance is a condition related to the detection of a predetermined word from the content of the utterance indicated by the voice information. When the predetermined start condition is the first condition related to the content of an utterance, the information processing apparatus according to the present embodiment determines that the start condition is satisfied based on the detection of a predetermined word from the content of the utterance indicated by the voice information, for example, when such a word is detected.
 An example of the predetermined word according to the first condition related to the content of an utterance is a word called a filler word. The predetermined word according to the first condition related to the content of an utterance may be a fixed word set in advance that cannot be added, deleted, or changed, or may be capable of being added, deleted, or changed based on a user operation or the like.
 Referring to B of FIG. 9B, the "etto" shown in B of FIG. 9B (a Japanese filler comparable to "um") corresponds to an example of a filler word (an example of the predetermined word).
 The information processing apparatus according to the present embodiment starts the summarization processing by treating, as a summary trigger, the case where a filler word is detected from the character string indicated by the voice text information obtained based on the voice information, for example.
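 A minimal sketch of this filler-word trigger follows; the filler list is an assumption (the text names "etto" as one example) and, per the embodiment, may be fixed or user-editable.

```python
# Minimal sketch of the filler-word start condition: fire the summary
# trigger when the recognized text contains any registered filler word.
# The word list below is an illustrative assumption.

FILLER_WORDS = {"えっと", "あのー", "um", "uh"}

def filler_trigger(recognized_text: str) -> bool:
    """Return True when a filler word appears in the voice text."""
    return any(filler in recognized_text for filler in FILLER_WORDS)

print(filler_trigger("えっと、駅にゴミ箱がない理由は"))  # True
```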
(1-5) Fifth example of the start condition: case where the start condition is a second condition related to the content of an utterance
 An example of the second condition related to the content of an utterance is a condition related to the detection of a speech disfluency (hesitation) from the content of the utterance indicated by the voice information. When the predetermined start condition is the second condition related to the content of an utterance, the information processing apparatus according to the present embodiment determines that the start condition is satisfied based on a disfluency being detected based on the voice information, for example, when a disfluency is detected based on the voice information.
 The information processing apparatus according to the present embodiment detects a disfluency based on the voice information by an arbitrary method capable of detecting, or estimating, a disfluency based on the voice information, such as a method of detecting a voiced pause (including syllable lengthening) from the voice information, or a method of detecting, from the character string indicated by the voice text information obtained based on the voice information, words associated with disfluency.
 Referring to A of FIG. 9C, the information processing apparatus according to the present embodiment treats, as a summary trigger, the case where it is estimated that a disfluency has occurred, for example, and starts the summarization processing.
(1-6) Sixth example of the start condition: case where the start condition is a condition related to the elapsed time since the voice information was obtained
 An example of the condition related to the elapsed time since the voice information was obtained is a condition related to the length of the elapsed time. When the predetermined start condition is a condition related to the elapsed time since the voice information was obtained, the information processing apparatus according to the present embodiment determines that the start condition is satisfied when the elapsed time exceeds a set predetermined period, or when the elapsed time becomes equal to or longer than the set predetermined period.
 Here, the period according to the sixth example of the start condition may be a fixed period set in advance, or may be a variable period that can be changed based on a user operation or the like.
 Referring to B of FIG. 9C, the information processing apparatus according to the present embodiment treats, as a summary trigger, the case where a set fixed time has elapsed since it was detected that the voice information was obtained, for example, and starts the summarization processing.
(1-7) Seventh example of the start condition
 The start condition may be a condition combining two or more of the start conditions according to the first example shown in (1-1) above through the sixth example shown in (1-6) above. The information processing apparatus according to the present embodiment starts the summarization processing by treating, as a summary trigger, the case where any one of the combined start conditions is satisfied, for example; a sketch of such a combination is shown below.
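 A minimal sketch of combining start conditions follows; modeling each condition as a zero-argument callable is an implementation assumption.

```python
# Minimal sketch of the seventh example: combine several start conditions
# and fire the summary trigger when any one of them is satisfied.

def combined_trigger(*conditions) -> bool:
    """Each condition is a zero-argument callable returning bool."""
    return any(condition() for condition in conditions)

# Usage, reusing the earlier sketches (elapsed_s and LIMIT_S assumed):
# combined_trigger(lambda: silence_trigger(frames),
#                  lambda: filler_trigger(text),
#                  lambda: elapsed_s > LIMIT_S)
```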
(2) Second example of summarization processing: exception processing in which summarization is not performed
 The information processing apparatus according to the present embodiment does not perform the summarization processing when it determines that a set exclusion condition for the summarization processing (hereinafter referred to as a "summary exclusion condition") is satisfied.
 An example of the summary exclusion condition according to the present embodiment is a condition related to gesture detection. The information processing apparatus according to the present embodiment determines that the summary exclusion condition is satisfied when a set predetermined gesture is detected.
 The predetermined gesture according to the summary exclusion condition may be a fixed gesture set in advance, or may be capable of being added, deleted, or changed based on a user operation or the like. The information processing apparatus according to the present embodiment determines whether the predetermined gesture according to the summary exclusion condition has been performed by, for example, performing image processing on a captured image obtained by imaging with an imaging device, or estimating motion based on the detection result of a motion sensor such as an acceleration sensor or an angular velocity sensor.
 Note that the summary exclusion condition according to the present embodiment is not limited to a condition related to gesture detection as described above.
 For example, the summary exclusion condition according to the present embodiment may be any condition set as a summary exclusion condition, such as "an operation disabling the function of performing the summarization processing has been detected, for example a press of a button for disabling that function" or "the processing load of the information processing apparatus according to the present embodiment has become larger than a set threshold".
(3) Third example of summarization processing: processing that dynamically changes the summarization level
 The information processing apparatus according to the present embodiment changes the level of summarization of the content of an utterance (or the degree of summarization of the content of an utterance; the same applies hereinafter) based on one or both of the utterance period specified based on the voice information and the number of characters specified based on the voice information. In other words, the information processing apparatus according to the present embodiment changes the level of summarization of the content of the utterance based on at least one of the utterance period specified based on the voice information and the number of characters specified based on the voice information.
 The information processing apparatus according to the present embodiment changes the level of summarization of the content of the utterance by, for example, limiting the number of characters in the summarized content of the utterance, for instance by keeping that number from exceeding a set upper limit. By limiting the number of characters in the summarized content of the utterance, the number of characters in the summarized content, that is, the amount of the summary, can be reduced automatically.
 Here, the utterance period is specified, for example, by detecting a voice interval in which voice is present based on the voice information. The number of characters corresponding to the utterance is specified by counting the number of characters in the character string indicated by the voice text information based on the voice information.
 When changing the level of summarization of the content of the utterance based on the utterance period, the information processing apparatus according to the present embodiment changes the level of summarization, for example, when the utterance period exceeds a set predetermined period, or when the utterance period becomes equal to or longer than the set predetermined period. Here, the period used in this case may be a fixed period set in advance, or may be a variable period that can be changed based on a user operation or the like.
 Further, when changing the level of summarization of the content of the utterance based on the number of characters specified based on the voice information, the information processing apparatus according to the present embodiment changes the level of summarization, for example, when the number of characters becomes larger than a set threshold, or when the number of characters becomes equal to or larger than the set threshold. Here, the threshold used in this case may be a fixed threshold set in advance, or may be a variable threshold that can be changed based on a user operation or the like.
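 A minimal sketch of this level change follows; the concrete period, character threshold, and budgets are assumptions, since the embodiment only states that they may be fixed or user-adjustable.

```python
# Minimal sketch of dynamically changing the summarization level: when the
# utterance period or the recognized character count exceeds its threshold,
# tighten the character budget of the summary. All values are assumptions.

PERIOD_LIMIT_S = 10.0  # assumed utterance-period threshold
CHAR_LIMIT = 60        # assumed character-count threshold

def summary_budget(utterance_period_s: float, char_count: int) -> int:
    """Return the maximum number of characters the summary may contain."""
    if utterance_period_s > PERIOD_LIMIT_S or char_count > CHAR_LIMIT:
        return 20  # assumed tighter budget: stronger summarization
    return 40      # assumed default budget

print(summary_budget(12.5, 80))  # 20: long utterance, stronger summary
print(summary_budget(4.0, 30))   # 40: default summarization level
```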
[3-2] Translation processing according to the present embodiment
 As shown in (c) above, the information processing apparatus according to the present embodiment can further perform translation processing that translates the content of the utterance summarized by the summarization processing according to the first information processing method into another language. As described above, the information processing apparatus according to the present embodiment translates the first language corresponding to the utterance into a second language different from the first language.
 In the translation processing, a confidence of the translation result may be set for each translation unit.
 A translation unit is the unit in which translation is performed in the translation processing. Examples of the translation unit include set fixed units, such as per word or per one or more phrases. The translation unit may also be set dynamically according to, for example, the language corresponding to the utterance (the first language), and may be changeable based on, for example, a user's setting operation.
 The confidence of a translation result is, for example, an index indicating the certainty of the translation result, and is expressed, for example, as a value from 0 [%] (indicating the lowest confidence) to 100 [%] (indicating the highest confidence). The confidence of a translation result is obtained using the result of arbitrary machine learning, such as machine learning that uses feedback on translation results. Note that the confidence of a translation result is not limited to being obtained using machine learning, and may be obtained by any method capable of determining the certainty of a translation result.
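 A minimal sketch of attaching a per-unit confidence follows; translate_unit() is an assumed stub standing in for a real translation engine and its confidence estimate.

```python
# Minimal sketch of setting a confidence for each translation unit.
# translate_unit() is a hypothetical stand-in for a translation engine;
# the embodiment obtains confidence via machine learning or another method.

def translate_unit(unit):
    """Stub: return a translated string and a confidence in 0..100."""
    return f"<{unit}>", 80.0  # placeholder output and assumed score

def translate_with_confidence(units):
    """units: list of source-language translation units.
    Returns a list of (translated_text, confidence_percent) pairs."""
    return [translate_unit(unit) for unit in units]

print(translate_with_confidence(["trash can", "station"]))
```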
 Further, the information processing apparatus according to the present embodiment can perform, as the translation processing, one or both of the following (i) and (ii), for example.
(i) First example of translation processing: exception processing in which translation is not performed
 The information processing apparatus according to the present embodiment does not perform the translation processing when it determines that a set exclusion condition for the translation processing is satisfied.
 An example of the exclusion condition for the translation processing according to the present embodiment is a condition related to gesture detection. The information processing apparatus according to the present embodiment determines that the exclusion condition for the translation processing is satisfied when a set predetermined gesture is detected.
 The predetermined gesture according to the translation processing may be a fixed gesture set in advance, or may be capable of being added, deleted, or changed based on a user operation or the like. Examples of fixed gestures set in advance include body and hand gestures related to non-verbal communication, such as hand signs. The information processing apparatus according to the present embodiment determines whether the predetermined gesture according to the translation processing has been performed by, for example, performing image processing on a captured image obtained by imaging with an imaging device, or estimating motion based on the detection result of a motion sensor such as an acceleration sensor or an angular velocity sensor.
 Note that the exclusion condition for the translation processing according to the present embodiment is not limited to a condition related to gesture detection as described above.
 For example, the exclusion condition for the translation processing according to the present embodiment may be any condition set as an exclusion condition for the translation processing, such as "an operation disabling the function of performing the translation processing has been detected, for example a press of a button for disabling that function" or "the processing load of the information processing apparatus according to the present embodiment has become larger than a set threshold". The exclusion condition for the translation processing according to the present embodiment may be the same condition as the summary exclusion condition according to the present embodiment described above, or may be a different condition.
(ii) Second example of translation processing: processing in re-translation
 The information processing apparatus according to the present embodiment can also re-translate content that has been translated into another language back into the language before translation.
 The information processing apparatus according to the present embodiment re-translates the content translated into another language back into the language before translation, for example, when an operation for performing re-translation processing is detected, such as a press of a button for performing re-translation.
 Note that the trigger for re-translation is not limited to the detection of an operation for performing re-translation processing as described above. For example, the information processing apparatus according to the present embodiment can also perform re-translation automatically based on the confidence of the translation result set for each translation unit. The information processing apparatus according to the present embodiment performs re-translation by treating, as a re-translation trigger, the case where, among the confidences of the translation results set for the translation units, there is a confidence equal to or lower than a set threshold, or lower than that threshold, for example.
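 A minimal sketch of this automatic re-translation trigger follows; the threshold value is an assumption.

```python
# Minimal sketch of the automatic re-translation trigger: re-translate when
# any translation unit's confidence falls to or below a set threshold.

RETRANSLATE_THRESHOLD = 50.0  # assumed threshold, in percent

def needs_retranslation(results) -> bool:
    """results: list of (translated_text, confidence_percent) pairs."""
    return any(conf <= RETRANSLATE_THRESHOLD for _, conf in results)

print(needs_retranslation([("<trash can>", 80.0), ("<station>", 45.0)]))  # True
```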
 Further, when the content translated into another language is re-translated into the language before translation, the information processing apparatus according to the present embodiment may perform summarization processing using the result of the re-translation.
 As one example, when the content of an utterance indicated by voice information acquired after re-translation contains words included in the re-translated content, the information processing apparatus according to the present embodiment includes those words in the summarized content of the utterance. By performing summarization processing using the result of re-translation in this way, it is possible, for example, to make an adjustment so that, when the same wording as before the re-translation appears in the content uttered by the user, that wording is not deleted in the summary corresponding to the current utterance.
[3-3] Notification control processing according to the second information processing method
 The information processing apparatus according to the present embodiment causes the content of the utterance indicated by the voice information, summarized by the summarization processing according to the first information processing method, to be notified.
 As described above, when the summarized content of the utterance has been translated into another language by the translation processing according to the present embodiment, the information processing apparatus according to the present embodiment causes the translation result to be notified.
 Also as described above, the information processing apparatus according to the present embodiment causes the notification content to be notified by one or both of notification by a visual method and notification by an auditory method, for example.
 FIG. 10 is an explanatory diagram showing an example of notification by a visual method realized by the notification control processing according to the second information processing method. FIG. 10 shows an example in which the information processing apparatus according to the present embodiment displays the translation result on the display screen of a smartphone.
 Further, the information processing apparatus according to the present embodiment can perform, as the notification control processing, one or more of the following processes (I) to (VII), for example. Below, the case where the information processing apparatus according to the present embodiment causes a translation result to be notified is taken as an example. Note that the information processing apparatus according to the present embodiment can also cause the summarized content of the utterance before translation to be notified in the same manner as when a translation result is notified.
 FIGS. 11 to 21 are explanatory diagrams for explaining an example of the notification control processing according to the second information processing method. An example of the notification control processing according to the second information processing method is described below with reference to FIGS. 11 to 21 as appropriate.
(I) First example of notification control processing: notification in the word order of the translation language
 The information processing apparatus according to the present embodiment causes the translation result to be notified in the word order corresponding to the other language into which the translation was made.
 For example, when the content of the utterance has been summarized into divided texts as shown in C of FIG. 3 and the other language is English, the information processing apparatus according to the present embodiment causes the translation result to be notified in the following order:
  ・Nouns
  ・Verbs
  ・Adjectives
  ・Adverbs
  ・Others
 Further, for example, when the content of the utterance has been summarized into divided texts as shown in C of FIG. 3 and the other language is Japanese, the information processing apparatus according to the present embodiment causes the translation result to be notified in the following order (a sketch of this ordering follows the list):
  ・Verbs
  ・Nouns
  ・Adjectives
  ・Adverbs
  ・Others
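 A minimal sketch of this language-dependent ordering follows; the availability of a part-of-speech tag for each divided translation text is an assumption.

```python
# Minimal sketch of notifying translation results in the word order that
# fits the target language, per the two lists above. Each chunk is assumed
# to carry a part-of-speech tag from the analysis stage.

POS_ORDER = {
    "en": ["noun", "verb", "adjective", "adverb", "other"],
    "ja": ["verb", "noun", "adjective", "adverb", "other"],
}

def order_for_language(chunks, lang):
    """chunks: list of (text, pos) pairs. Returns the texts sorted by the
    language-specific part-of-speech priority."""
    priority = {pos: i for i, pos in enumerate(POS_ORDER[lang])}
    return [text for text, pos in
            sorted(chunks, key=lambda c: priority.get(c[1], len(priority)))]

chunks = [("throw away", "verb"), ("trash can", "noun"), ("why", "other")]
print(order_for_language(chunks, "en"))  # noun first
print(order_for_language(chunks, "ja"))  # verb first
```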
 By causing the translation result to be notified in the word order corresponding to the other language into which the translation was made as described above, it is possible, for example, to make the word order of an auditory notification such as that shown in FIG. 5 differ from the word order of the translation result shown in B of FIG. 4.
 Here, the word order corresponding to the other language into which the translation was made may be a fixed word order set in advance, or may be changeable based on a user operation or the like.
(II) Second example of notification control processing: notification control processing based on the confidence of each translation unit
 As described above, in the translation processing, a confidence of the translation result can be set for each translation unit. When a confidence of the translation result is set for each translation unit in the translation processing, the information processing apparatus according to the present embodiment causes the translation result to be notified based on the confidence of each translation unit in the summarized content of the utterance.
 The information processing apparatus according to the present embodiment causes the translation result to be notified based on the confidence of each translation unit by performing one or both of the following processes (II-1) and (II-2), for example.
(II-1) First example of notification control processing based on the confidence of each translation unit
 The information processing apparatus according to the present embodiment causes translation results with high confidence to be notified preferentially.
 For example, when the translation result is visually notified by displaying it on the display screen of a display device, the information processing apparatus according to the present embodiment realizes the preferential notification of translation results with high confidence by the manner of display. When the translation result is audibly notified by voice from a voice output device, the information processing apparatus according to the present embodiment may realize the preferential notification of translation results with high confidence by, for example, the order of notification.
In the following, an example of the notification realized by the notification control processing based on the reliability of each translation unit according to the first example is described, taking as an example the case where the translation result is visually notified by being displayed on the display screen of a display device.
FIG. 11 shows a first example of displaying the translation result on the display screen of a display device, namely an example in which translation results with high reliability are notified preferentially. In the example shown in FIG. 11, each of “recommended”, “sightseeing”, “directions”, “tell me”, and “Asakusa” corresponds to the translation result of one translation unit, and progressively lower reliabilities are set in the order “recommended”, “sightseeing”, “directions”, “tell me”, “Asakusa”.
As shown in A of FIG. 11, for example, the information processing apparatus according to the present embodiment displays the translation result of each translation unit on the display screen such that the results are arranged hierarchically in order of decreasing reliability.
Here, the hierarchical display is realized by, for example, threshold processing using the reliability of each translation unit and one or more thresholds that determine the layer in which a result is displayed. The thresholds for the hierarchical display may be fixed values set in advance, or variable values that can be changed based on a user operation or the like.
When, as a result of the threshold processing, the translation results of a plurality of translation units are to be displayed in the same layer, the information processing apparatus according to the present embodiment displays those translation results in a set predetermined order, for example “arranged from left to right in order of decreasing reliability within the area of the display screen corresponding to the layer”.
When, as a result of the threshold processing, there are a plurality of translation results whose reliability is greater than a predetermined threshold, or is equal to or greater than the predetermined threshold, the information processing apparatus according to the present embodiment may display those translation results together in a predetermined area of the display screen, as shown in B of FIG. 11, for example. Here, the predetermined threshold is one or more of the one or more thresholds used in the threshold processing, and the predetermined area is, for example, “the area of the display screen corresponding to the layer associated with the threshold processing based on the predetermined threshold”.
By performing the display shown in FIG. 11, for example, it is realized that “the translation results of the translation units for which a high reliability (corresponding to a score) was set in the translation process are displayed higher, and translation results whose reliability exceeds the predetermined threshold are displayed together”. Needless to say, the display used when preferentially notifying translation results with high reliability is not limited to the example shown in FIG. 11.
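Purely as an illustrative aid, the threshold processing for the hierarchical display may be sketched as follows, assuming per-unit reliabilities normalized to [0, 1]; the concrete threshold values, the layer count, and the function name are assumptions for illustration.

```python
# A minimal sketch of hierarchical display by threshold processing.
# Two thresholds yield three layers; layer 0 is the topmost
# (highest-reliability) layer.
def assign_layers(results, thresholds=(0.8, 0.5)):
    layers = {i: [] for i in range(len(thresholds) + 1)}
    for text, reliability in results:
        layer = next((i for i, t in enumerate(thresholds) if reliability >= t),
                     len(thresholds))
        layers[layer].append((text, reliability))
    # Within one layer, arrange from left to right in order of
    # decreasing reliability.
    for items in layers.values():
        items.sort(key=lambda r: r[1], reverse=True)
    return layers

results = [("recommended", 0.95), ("sightseeing", 0.9), ("directions", 0.7),
           ("tell me", 0.6), ("Asakusa", 0.3)]
for layer, items in sorted(assign_layers(results).items()):
    print(layer, [text for text, _ in items])
```

In this sketch the two units whose reliability exceeds the highest threshold both land in layer 0, which corresponds to displaying them together in the predetermined area, as in B of FIG. 11.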
(II-2) Second Example of Notification Control Processing Based on the Reliability of Each Translation Unit
The information processing apparatus according to the present embodiment causes the translation result to be notified with emphasis according to its reliability.
For example, when the translation result is visually notified by displaying it on the display screen of a display device, the information processing apparatus according to the present embodiment realizes the reliability-dependent emphasis through the manner of display. When the translation result is audibly notified by voice from a voice output device, the information processing apparatus according to the present embodiment may realize the reliability-dependent emphasis by, for example, changing the sound pressure, the volume, or the like of the voice based on the reliability.
In the following, an example of the notification realized by the notification control processing based on the reliability of each translation unit according to the second example is described, again taking as an example the case where the translation result is visually notified by being displayed on the display screen of a display device.
The information processing apparatus according to the present embodiment emphasizes the translation result according to its reliability by, for example, “displaying the translation result of each translation unit at a size corresponding to its reliability”.
FIG. 12 shows a second example of displaying the translation result on the display screen of a display device, namely a first example in which the translation result is displayed with emphasis according to its reliability. In the example shown in FIG. 12, each of “recommended”, “sightseeing”, “directions”, “tell me”, and “Asakusa” corresponds to the translation result of one translation unit, and progressively lower reliabilities are set in that order.
FIG. 12 also shows an example in which the information processing apparatus according to the present embodiment, in addition to the notification control processing based on the reliability of each translation unit according to the first example, further displays the translation result of each translation unit at a size corresponding to its reliability. Needless to say, when performing the notification control processing based on the reliability of each translation unit according to the second example, the information processing apparatus according to the present embodiment does not have to preferentially notify highly reliable translation results through a hierarchical display such as that shown in FIG. 11.
As shown in A of FIG. 12, for example, the information processing apparatus according to the present embodiment displays the translation result of each translation unit at a size corresponding to its reliability, for example by referring to “a table (or database) in which reliabilities are associated with the display sizes used when displaying the translation result of each translation unit on the display screen”.
By performing the display shown in FIG. 12, for example, it is realized that “the translation results of the translation units for which a high reliability (corresponding to a score) was set in the translation process are displayed higher, and the higher a translation result is displayed, the more conspicuously large its size is made”. Needless to say, the display used when displaying the translation result of each translation unit at a size corresponding to its reliability is not limited to the example shown in FIG. 12.
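Purely as an illustrative aid, the size-based emphasis may be sketched as follows; the lookup table below stands in for the “table (or database)” associating reliability with display size, and the point sizes are assumed values.

```python
# A minimal sketch of size-based emphasis: each entry is
# (lower bound of reliability, font size in points).
SIZE_TABLE = [(0.8, 32), (0.6, 24), (0.4, 18), (0.0, 12)]

def display_size(reliability):
    for lower_bound, size in SIZE_TABLE:
        if reliability >= lower_bound:
            return size
    return SIZE_TABLE[-1][1]

for text, reliability in [("recommended", 0.95), ("tell me", 0.6), ("Asakusa", 0.3)]:
    print(text, display_size(reliability))
```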
The information processing apparatus according to the present embodiment may also emphasize the translation result according to its reliability by, for example, “displaying the translation results of the translation units such that translation results with higher reliability are displayed nearer the front of the display screen”.
FIG. 13 shows a third example of displaying the translation result on the display screen of a display device, namely a second example in which the translation result is displayed with emphasis according to its reliability. In the example shown in FIG. 13, each of “recommended”, “sightseeing”, “directions”, “tell me”, “Asakusa”, and so on corresponds to the translation result of one translation unit, and progressively lower reliabilities are set in that order.
FIG. 13 also shows an example in which the information processing apparatus according to the present embodiment, in addition to the notification control processing based on the reliability of each translation unit according to the first example, further displays translation results with higher reliability nearer the front of the display screen. As noted above, when performing the notification control processing based on the reliability of each translation unit according to the second example, the information processing apparatus according to the present embodiment does not have to preferentially notify highly reliable translation results through a hierarchical display such as that shown in FIG. 11.
As shown in A of FIG. 13, for example, the information processing apparatus according to the present embodiment displays translation results with high reliability nearer the front of the display screen, for example by referring to “a table (or database) in which reliabilities are associated with the coordinate values in the depth direction used when displaying the translation result of each translation unit on the display screen”.
By performing the display shown in FIG. 13, for example, it is realized that “the translation results of the translation units for which a high reliability (corresponding to a score) was set in the translation process are displayed toward the front in the depth direction of the display screen, so that the higher the reliability set for a translation unit, the more conspicuous its translation result”. Needless to say, the display used when displaying the translation results so that more reliable results appear nearer the front of the display screen is not limited to the example shown in FIG. 13.
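Purely as an illustrative aid, the depth-based emphasis may be sketched as follows, assuming a display with a depth (z) coordinate where smaller values are drawn nearer the viewer; the linear mapping stands in for the table associating reliability with a depth-direction coordinate value.

```python
# A minimal sketch of depth-based emphasis.
def depth_coordinate(reliability, near=0.0, far=100.0):
    """Higher reliability -> smaller z -> displayed nearer the front."""
    return near + (1.0 - reliability) * (far - near)

for text, reliability in [("recommended", 0.95), ("tell me", 0.6), ("Asakusa", 0.3)]:
    print(text, round(depth_coordinate(reliability), 1))
```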
The information processing apparatus according to the present embodiment may also emphasize the translation result according to its reliability by, for example, “displaying the translation result of each translation unit in one or both of a color corresponding to its reliability and a transparency corresponding to its reliability”.
FIG. 14 shows a fourth example of displaying the translation result on the display screen of a display device, namely a third example in which the translation result is displayed with emphasis according to its reliability. In the example shown in FIG. 14, each of “recommended”, “sightseeing”, “directions”, “tell me”, and “Asakusa” corresponds to the translation result of one translation unit, and progressively lower reliabilities are set in that order.
FIG. 14 also shows an example in which the information processing apparatus according to the present embodiment, in addition to the notification control processing based on the reliability of each translation unit according to the first example, further displays the translation result of each translation unit in one or both of a reliability-dependent color and a reliability-dependent transparency. As noted above, when performing the notification control processing based on the reliability of each translation unit according to the second example, the information processing apparatus according to the present embodiment does not have to preferentially notify highly reliable translation results through a hierarchical display such as that shown in FIG. 11.
As shown in A of FIG. 14, for example, the information processing apparatus according to the present embodiment displays the translation result of each translation unit in a color corresponding to its reliability. The information processing apparatus according to the present embodiment may also display the translation result of each translation unit at a transparency corresponding to its reliability, or in both a reliability-dependent color and a reliability-dependent transparency.
The information processing apparatus according to the present embodiment displays the translation result of each translation unit in one or both of a reliability-dependent color and a reliability-dependent transparency by referring to, for example, “a table (or database) in which reliabilities are associated with the colors and the transparencies used when displaying the translation result of each translation unit on the display screen”.
By performing the display shown in FIG. 14, for example, it is realized that “the higher the reliability (corresponding to a score) set for a translation unit in the translation process, the more its color, its transparency, or both are emphasized so that its translation result stands out”. Needless to say, the display used when displaying the translation result of each translation unit in a reliability-dependent color and/or transparency is not limited to the example shown in FIG. 14.
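Purely as an illustrative aid, the color- and transparency-based emphasis may be sketched as follows, producing RGBA values where an alpha of 255 is fully opaque; both mappings are assumed stand-ins for the table associating reliability with color and transparency.

```python
# A minimal sketch of color- and transparency-based emphasis.
def display_style(reliability):
    alpha = int(55 + 200 * reliability)   # lower reliability -> more transparent
    red = int(255 * reliability)          # higher reliability -> warmer color
    return (red, 0, 255 - red, alpha)     # (R, G, B, A)

for text, reliability in [("recommended", 0.95), ("tell me", 0.6), ("Asakusa", 0.3)]:
    print(text, display_style(reliability))
```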
(III) Third Example of Notification Control Processing: Notification Control Processing Based on Voice Information
When the notification content is visually notified by displaying it on the display screen of a display device, the information processing apparatus according to the present embodiment controls how the notification content is displayed based on the voice information.
The information processing apparatus according to the present embodiment controls the display of the notification content based on the voice information by, for example, “displaying the notification content at a size corresponding to the sound pressure or volume specified from the voice information”, for example by referring to “a table (or database) in which sound pressures or volumes are associated with the display sizes and font sizes used when displaying the divided text”.
When the summarized utterance content has been translated into another language by the translation process according to the present embodiment, the information processing apparatus according to the present embodiment can control how the translation result is displayed based on the voice information, in the same way as it controls the display of the notification content described above.
FIG. 15 shows a fifth example of displaying the translation result on the display screen of a display device, namely an example in which the translation result is displayed with emphasis based on the voice information. In the example shown in FIG. 15, each of “recommended”, “sightseeing”, “directions”, “tell me”, and “Asakusa” corresponds to the translation result of one translation unit, and, for example, the sound pressure or volume is progressively lower in the order “tell me”, “directions”, “recommended”, “sightseeing”, “Asakusa”.
As shown in A of FIG. 15, for example, the information processing apparatus according to the present embodiment displays the translation result of each translation unit (the translated, summarized utterance content) at a size corresponding to the sound pressure or volume specified from the voice information, for example by referring to “a table (or database) in which sound pressures or volumes are associated with the display sizes and font sizes used when displaying the translation result of each translation unit”.
By performing the display shown in FIG. 15, for example, it is realized that “parts uttered at a higher sound pressure (or volume) are displayed with a larger font and display size so that they stand out more”. Needless to say, the display used when the manner of display is controlled based on the voice information is not limited to the example shown in FIG. 15.
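Purely as an illustrative aid, the loudness-based sizing may be sketched as follows, assuming the sound pressure behind each translation unit is available in decibels; the dB range and point sizes are assumed stand-ins for the table described above.

```python
# A minimal sketch of sizing by sound pressure (or volume).
def size_from_sound_pressure(db, floor_db=40.0, ceil_db=80.0, min_pt=12, max_pt=36):
    clamped = min(max(db, floor_db), ceil_db)
    ratio = (clamped - floor_db) / (ceil_db - floor_db)
    return int(min_pt + ratio * (max_pt - min_pt))

for text, db in [("tell me", 78.0), ("directions", 70.0), ("Asakusa", 45.0)]:
    print(text, size_from_sound_pressure(db))
```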
(IV) Fourth Example of Notification Control Processing: Notification Control Processing Based on Operations Performed on the Display Screen
When the notification content is visually notified by displaying it on the display screen of a display device, the information processing apparatus according to the present embodiment changes the content displayed on the display screen based on operations performed on the display screen.
Here, the operations performed on the display screen include any operation that can be performed on the display screen, for example operations using an operation input device such as a button, a direction key, a mouse, or a keyboard, and operations on the display screen itself (when the display device is a touch panel).
The information processing apparatus according to the present embodiment changes the content displayed on the display screen based on operations performed on the display screen by performing, for example, one or both of the processes (IV-1) and (IV-2) below.
(IV-1) First Example of Notification Control Processing Based on Operations Performed on the Display Screen
The information processing apparatus according to the present embodiment changes the content displayed on the display screen based on operations performed on the display screen. Examples of changing the content displayed on the display screen according to the present embodiment include one or both of the following:
・Changing the display position of the notification content on the display screen (or changing the display position of the translation result on the display screen)
・Deleting part of the notification content displayed on the display screen (or deleting part of the translation result displayed on the display screen)
By changing the display position of the notification content on the display screen (or the display position of the translation result on the display screen) based on operations performed on the display screen, the information processing apparatus according to the present embodiment makes it possible, for example, to manually change the content presented to the communication partner. By deleting part of the notification content displayed on the display screen (or part of the translation result displayed on the display screen) based on such operations, it becomes possible, for example, to manually delete a translation result in which a mistranslation has occurred.
FIGS. 16A to 16C each show an example of the display screen when the content displayed on the display screen is changed based on operations performed on the display screen. FIG. 16A shows an example of the display when the translation result of each translation unit produced by the translation process has been re-translated. FIG. 16B shows an example of the display when part of the translation results of the translation units (the translated, summarized utterance content) displayed on the display screen is deleted. FIG. 16C shows an example of the display when the display positions of the translation results of the translation units displayed on the display screen are changed.
Take, for example, the case where the user wishes to delete “recommended”, which is part of the translation results of the translation units displayed on the display screen. When the user selects “recommended” as indicated by reference symbol O in A of FIG. 16B, a window W for choosing whether to delete it is displayed, as shown in A of FIG. 16B. When the user selects “Yes” in the window W, “recommended”, which is part of the translation result, is deleted, as shown in B of FIG. 16B. Needless to say, the example of deleting part of the translation results of the translation units displayed on the display screen is not limited to the example shown in FIG. 16B.
Also take, for example, the case where the user wishes to swap the display positions of “recommended” and “tell me” among the translation results of the translation units displayed on the display screen. When the user selects “tell me” as indicated by reference symbol O1 in A of FIG. 16C and then designates the position indicated by reference symbol O2 in B of FIG. 16C by a drag operation, the display positions of “recommended” and “tell me” are swapped, as shown in B of FIG. 16C. Needless to say, the example of changing the display positions of the translation results of the translation units displayed on the display screen is not limited to the example shown in FIG. 16C.
(IV-2) Second Example of Notification Control Processing Based on Operations Performed on the Display Screen
When the summarized utterance content (or the translation result) is displayed on the display screen of a display device as the notification content, it may happen that the summarized utterance content (or the translation result) cannot be displayed on a single screen. When this happens, the information processing apparatus according to the present embodiment displays one portion of the notification content on the display screen.
When one portion of the notification content is displayed on the display screen, the information processing apparatus according to the present embodiment changes the content displayed on the display screen based on operations performed on the display screen, for example by switching the notification content displayed on the display screen from that one portion to another portion.
FIGS. 17 and 18 each show an example of the display screen when the translation results of the translation units produced by the translation process (the translated, summarized utterance content) are changed based on operations performed on the display screen. FIG. 17 shows an example of a display screen whose displayed content can be changed by a slider-type UI, as shown in A of FIG. 17. FIG. 18 shows an example of a display screen whose displayed content can be changed by a revolver-type UI, in which the display changes by rotating in the depth direction of the display screen.
Take, for example, the case where the display shown in FIG. 17 is presented and the user wishes to change the content displayed on the display screen. The user changes the translation results displayed on the display screen from one portion to another by operating the slider-type UI, for example by a touch operation on an arbitrary part of the slider shown in A of FIG. 17.
Similarly, when the display shown in FIG. 18 is presented and the user wishes to change the content displayed on the display screen, the user changes the translation results displayed on the display screen from one portion to another by operating the revolver-type UI, for example by performing a flick operation as indicated by reference symbol O1 in FIG. 18.
Needless to say, the example of changing the translation results displayed on the display screen is not limited to the examples shown in FIGS. 17 and 18.
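Purely as an illustrative aid, the portion-by-portion display described above may be sketched as follows; the number of units per screen is an assumed value, and a slider or revolver UI would drive the portion index shown here.

```python
# A minimal sketch of displaying one portion of the notification content
# at a time when it does not fit on a single screen.
def portion(units, index, per_screen=4):
    start = index * per_screen
    return units[start:start + per_screen]

units = ["recommended", "sightseeing", "directions", "tell me",
         "Asakusa", "today", "afternoon"]
print(portion(units, 0))  # the portion displayed first
print(portion(units, 1))  # the portion displayed after a slider or flick operation
```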
(V) Fifth Example of Notification Control Processing: Notification Control Processing Based on Voice Operations
The information processing apparatus according to the present embodiment may cause the translation result to be audibly notified by voice from a voice output device based on voice operations.
FIG. 19 shows an example of the translation result being audibly notified based on a voice operation, namely an example in which the content to be notified to the communication partner is selected by voice from among the translation results of the translation units produced by the translation process.
For example, when the translation results of the translation units produced by the translation process are “recommended”, “sightseeing”, “directions”, and “tell me”, the information processing apparatus according to the present embodiment causes the re-translated result to be notified by voice, as indicated by reference symbol “I1” in A of FIG. 19. At this time, the information processing apparatus according to the present embodiment may insert sound feedback, indicated by reference symbol “S” in A of FIG. 19, at each boundary between divided texts.
After the re-translated result has been notified by voice, when a voice selection operation such as that indicated by reference symbol “O” in B of FIG. 19 is detected, the information processing apparatus according to the present embodiment causes the voice output device to output voice indicating the translation result corresponding to that voice selection operation, as indicated by reference symbol “I2” in B of FIG. 19. Here, B of FIG. 19 shows an example of a voice selection operation in which the item to be notified to the communication partner is designated by number. Needless to say, the voice selection operation according to the present embodiment is not limited to the example described above.
FIG. 20 shows another example of the translation result being audibly notified based on a voice operation, namely an example in which content is excluded by voice from the translation results of the translation units produced by the translation process before notification to the communication partner.
For example, when the translation results of the translation units produced by the translation process are “recommended”, “sightseeing”, “directions”, and “tell me”, the information processing apparatus according to the present embodiment causes the re-translated result to be notified by voice, as indicated by reference symbol “I1” in A of FIG. 20. As in A of FIG. 19, the information processing apparatus according to the present embodiment may insert sound feedback at each boundary between divided texts.
After the re-translated result has been notified by voice, when a voice exclusion operation such as that indicated by reference symbol “O” in B of FIG. 20 is detected, the information processing apparatus according to the present embodiment causes the voice output device to output voice indicating the translation results corresponding to that voice exclusion operation, as indicated by reference symbol “I2” in B of FIG. 20. Here, B of FIG. 20 shows an example of a voice exclusion operation in which an item that does not need to be notified to the communication partner is designated by number. Needless to say, the voice exclusion operation according to the present embodiment is not limited to the example described above.
Needless to say, the examples of voice operations and of notification based on voice operations are not limited to those shown in FIGS. 19 and 20.
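Purely as an illustrative aid, the number-based selection and exclusion operations may be sketched as follows, assuming an upstream speech recognizer has already converted the user's utterance into a 1-based index; the function names are assumptions introduced for illustration.

```python
# A minimal sketch of voice-operation-based selection (FIG. 19)
# and exclusion (FIG. 20) by number.
def select_by_number(results, index):
    """Keep only the result the user designated by number."""
    return [results[index - 1]]

def exclude_by_number(results, index):
    """Drop the result the user designated as unnecessary."""
    return [r for i, r in enumerate(results, start=1) if i != index]

results = ["recommended", "sightseeing", "directions", "tell me"]
print(select_by_number(results, 2))   # -> ['sightseeing']
print(exclude_by_number(results, 1))  # -> ['sightseeing', 'directions', 'tell me']
```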
(VI) Sixth Example of Notification Control Processing: Notification Control Processing That Dynamically Controls the Notification Order
The information processing apparatus according to the present embodiment can also dynamically control the order in which the notification content is notified.
The information processing apparatus according to the present embodiment controls the notification order of the notification content based on, for example, at least one of information corresponding to the first user and information corresponding to the second user. The information corresponding to the first user includes, for example, at least one of information about the first user, information about an application, and information about a device. Likewise, the information corresponding to the second user includes at least one of information about the second user, information about an application, and information about a device.
The information about the first user indicates, for example, one or both of the situation in which the first user is placed and the state of the first user, and the information about the second user likewise indicates one or both of the situation in which the second user is placed and the state of the second user. As described above, the information about an application indicates, for example, the execution state of the application, and the information about a device indicates, for example, one or both of the type of the device and the state of the device.
The situation in which a user (the first user or the second user) is placed is estimated by processing according to any method capable of estimating it, for example a method that estimates it based on the noise around the user detected from the voice information (for example, sounds other than the voice based on the utterance), or a method that estimates the user's situation based on the position indicated by position information. The process of estimating the user's situation may be performed by the information processing apparatus according to the present embodiment or by a device external to it.
As described above, the state of the user is estimated by any behavior estimation process or any emotion estimation process using one or more of, for example, the user's biological information, the detection results of a motion sensor, and captured images taken by an imaging device.
FIG. 21 shows an example of the display when the notification order is dynamically controlled. A of FIG. 21 shows an example in which the translation results of the translation units produced by the translation process (the translated, summarized utterance content) are displayed based on the state of the user. B of FIG. 21 shows an example in which the translation results of the translation units are displayed based on the execution state of an application. C of FIG. 21 shows an example in which the translation results of the translation units are displayed based on the situation in which the user is placed.
A of FIG. 21 shows an example of the display based on the state of the user when the translation results of the translation units are “recommended”, “sightseeing”, “directions”, and “tell me”.
For example, when the state of the user is recognized as “impatient” based on biological information, the detection results of a motion sensor, or the like, the information processing apparatus according to the present embodiment displays the verb preferentially, for example by displaying it at the far left of the display screen, as shown in A of FIG. 21. The information processing apparatus according to the present embodiment specifies the notification order by referring to, for example, “a table (or database) in which user states are associated with information indicating the display order”.
B of FIG. 21 shows an example of the display based on the execution state of an application when the translation results of the translation units are “Hokkaido”, “producing area”, “delicious”, and “fish”.
For example, when the type of application being executed on a device associated with the user, such as a smartphone the user carries, is recognized as a “meal browser”, the information processing apparatus according to the present embodiment displays the adjective preferentially, for example by displaying it at the far left of the display screen, as shown in B of FIG. 21. The information processing apparatus according to the present embodiment specifies the notification order by referring to, for example, “a table (or database) in which application types are associated with information indicating the display order”.
C of FIG. 21 shows an example of the display based on the situation in which the user is placed when the translation results of the translation units are “hurry”, “Shibuya”, “gather”, and “no time”.
For example, when the noise detected from the voice information (for example, sounds other than the voice based on the utterance) is larger than a set threshold, the information processing apparatus according to the present embodiment recognizes that the user is in a noisy situation. The information processing apparatus according to the present embodiment then displays the noun (or proper noun) preferentially, for example by displaying it at the far left of the display screen, as shown in C of FIG. 21, and specifies the notification order by referring to, for example, “a table (or database) in which user environments are associated with information indicating the display order”.
Note that the example of dynamically controlling the notification order is not limited to the example shown in FIG. 21.
For example, when the notification order is dynamically controlled based on two or more of the situation in which the user is placed, the state of the user, and the execution state of an application (an example of dynamically controlling the notification order based on a plurality of pieces of information), the information processing apparatus according to the present embodiment specifies the notification order based on the priorities (or priority rankings) set for the user's situation, the user's state, and the application's execution state, and causes the notification content corresponding to the index with the higher priority (or priority ranking) to be notified preferentially.
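Purely as an illustrative aid, the dynamic control of the notification order may be sketched as follows; each recognized condition maps to a preferred part of speech and carries a priority, and all table contents and priority values are assumptions introduced for illustration.

```python
# A minimal sketch of dynamic control of the notification order.
ORDER_TABLE = {
    "user_state:impatient": "verb",     # A of FIG. 21
    "app:meal_browser": "adjective",    # B of FIG. 21
    "situation:noisy": "noun",          # C of FIG. 21
}
PRIORITY = {"situation:noisy": 3, "user_state:impatient": 2, "app:meal_browser": 1}

def notification_order(units, active_conditions):
    """Move units whose part of speech matches the highest-priority
    active condition to the front."""
    if not active_conditions:
        return list(units)
    top = max(active_conditions, key=lambda c: PRIORITY.get(c, 0))
    preferred = ORDER_TABLE[top]
    return sorted(units, key=lambda u: u[1] != preferred)

units = [("Shibuya", "noun"), ("hurry", "verb"), ("no time", "other")]
print(notification_order(units, ["user_state:impatient", "situation:noisy"]))
```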
Although FIG. 21 shows an example of notification by a visual method, the information processing apparatus according to the present embodiment can, as described above, also perform notification by an auditory method.
As described above, the information processing apparatus according to the present embodiment can also dynamically control the notification order based on information about devices. An example of dynamically controlling the notification order based on information about a device is dynamically controlling the notification order according to the processing load of a processor.
(VII) Seventh Example of Notification Control Processing: Notification Control Processing That Dynamically Controls the Notification Content
The information processing apparatus according to the present embodiment can also dynamically control the amount of information in the notification content.
The information processing apparatus according to the present embodiment dynamically controls the amount of information in the notification content based on, for example, one or more of the summary information, the information corresponding to the first user, the information corresponding to the second user, and the voice information. Examples of dynamically changing the amount of information include (VII-1) to (VII-5) below. Needless to say, the examples of dynamically changing the amount of information are not limited to (VII-1) to (VII-5) below.
(VII-1) Example of dynamically changing the notification content based on the summary information
・When the summarized utterance content indicated by the summary information contains a demonstrative such as “that” or “it”, the information processing apparatus according to the present embodiment does not cause the demonstrative (or the translation result of the demonstrative) to be notified.
・When the summarized utterance content indicated by the summary information contains words corresponding to a greeting, the information processing apparatus according to the present embodiment does not cause the words corresponding to the greeting (or the translation result of those words) to be notified.
(VII-2) Example of dynamically changing the notification content based on the information corresponding to the first user
・When the facial expression of the first user is determined to be laughter, the information processing apparatus according to the present embodiment reduces the amount of information used when notifying the notification content.
・When the first user's line of sight is determined to be directed upward (an example of a determination that the utterance is close to talking to oneself), the information processing apparatus according to the present embodiment does not cause the notification content to be notified.
・When a gesture corresponding to a demonstrative such as “that”, “it”, or “this” (for example, a pointing gesture) is detected, the information processing apparatus according to the present embodiment does not cause the notification content to be notified.
・When the first user is determined to be in a noisy situation, the information processing apparatus according to the present embodiment causes all of the notification content to be notified.
(VII-3) Example of dynamically changing the notification content based on the information corresponding to the second user
・When the facial expression of the second user is determined to be laughter, the information processing apparatus according to the present embodiment reduces the amount of information used when notifying the notification content.
・When the second user is the communication partner and it is determined that the second user may not understand the utterance content (for example, when the second user's line of sight is determined not to be directed at the first user), the information processing apparatus according to the present embodiment increases the amount of information used when notifying the notification content.
・When the second user is the communication partner and it is determined that the second user is yawning (for example, when the second user is determined to be bored), the information processing apparatus according to the present embodiment reduces the amount of information used when notifying the notification content.
・When the second user is the communication partner and it is determined that the second user has nodded or given a back-channel response, the information processing apparatus according to the present embodiment increases the amount of information used when notifying the notification content.
・When the second user is the communication partner and the size of the second user's pupils is determined to be larger than, or equal to or larger than, a predetermined size (an example of a determination that the second user is interested), the information processing apparatus according to the present embodiment increases the amount of information used when notifying the notification content.
・When the second user is the communication partner and it is determined that the second user may not understand the utterance content (for example, when the second user's hands are determined not to be moving), the information processing apparatus according to the present embodiment increases the amount of information used when notifying the notification content.
・When the second user is the communication partner and the second user's body is determined to be leaning forward (an example of a determination that the second user is interested), the information processing apparatus according to the present embodiment increases the amount of information used when notifying the notification content.
・When the second user is determined to be in a noisy situation, the information processing apparatus according to the present embodiment causes all of the notification content to be notified.
(VII-4) Example of dynamically changing the notification content based on the voice information
・When the volume of the utterance detected from the voice information is greater than a predetermined threshold, or is equal to or greater than the predetermined threshold, the information processing apparatus according to the present embodiment does not cause the notification content to be notified.
・Alternatively, when the volume of the utterance detected from the voice information is greater than a predetermined threshold, or is equal to or greater than the predetermined threshold, the information processing apparatus according to the present embodiment causes part or all of the notification content to be notified.
(VII-5) Example of dynamically changing the notification content based on a combination of a plurality of pieces of information
・When the first user and the second user are different people, the information processing apparatus according to the present embodiment increases the amount of information used when notifying the notification content when it is determined that the first user's line of sight and the second user's line of sight have met (an example of dynamically changing the notification content based on the information corresponding to the first user and the information corresponding to the second user).
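Purely as an illustrative aid, the dynamic control of the amount of information may be sketched as follows, using boolean signals derived from the users and the audio input; the rules mirror a few of the examples above, and the step sizes and default count are assumed values.

```python
# A minimal sketch of dynamic control of the amount of notified information.
def units_to_notify(units, signals, default=3):
    count = default
    if signals.get("partner_smiling"):            # (VII-3): reduce on laughter
        count -= 1
    if signals.get("partner_not_understanding"):  # increase on apparent confusion
        count += 1
    if signals.get("partner_nodding"):            # increase on nods or back-channels
        count += 1
    if signals.get("noisy_environment"):          # notify everything in noise
        count = len(units)
    return units[:max(0, count)]

units = ["recommended", "sightseeing", "directions", "tell me", "Asakusa"]
print(units_to_notify(units, {"partner_smiling": True}))    # fewer units
print(units_to_notify(units, {"noisy_environment": True}))  # all units
```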
[4] Specific Example of Processing According to the Information Processing Method According to the Present Embodiment
Next, a specific example of the processing according to the information processing method according to the present embodiment described above is given. In the following, an example of the processing in the use case described with reference to FIGS. 1 to 5 is shown as a specific example of the processing according to the information processing method according to the present embodiment.
FIGS. 22 to 33 are flowcharts showing an example of processing according to the information processing method according to the present embodiment. Hereinafter, an example of this processing will be described with reference to FIGS. 22 to 33 as appropriate.
The information processing apparatus according to the present embodiment sets a weight related to summarization (hereinafter sometimes referred to as the "weight for the summarization function", or simply the "weight") (S100, presetting). The information processing apparatus according to the present embodiment sets the weight by determining it and holding it in a recording medium such as a storage unit (described later). An example of the process in step S100 is the process shown in FIG. 23.
Referring to FIG. 23, the information processing apparatus according to the present embodiment acquires data indicating schedule contents from a schedule application (S200).
The information processing apparatus according to the present embodiment determines the type of weight related to summarization based on the behavior recognized from the acquired data indicating the schedule contents and on the table shown in FIG. 8 for specifying the type of weight related to summarization (hereinafter sometimes referred to as the "behavior information summary weight table") (S202).
Then, the information processing apparatus according to the present embodiment determines the weight related to summarization based on the type of weight determined in step S202 and on the table shown in FIG. 6 for specifying the weight related to summarization (hereinafter sometimes referred to as the "summary table") (S204).
The information processing apparatus according to the present embodiment performs, for example, the process shown in FIG. 23 as the process of step S100 in FIG. 22. Needless to say, the process of step S100 in FIG. 22 is not limited to the process shown in FIG. 23.
Referring to FIG. 22 again, an example of processing according to the information processing method according to the present embodiment will be described. The information processing apparatus according to the present embodiment enables voice input, for example, by starting an application related to voice input (S102).
The information processing apparatus according to the present embodiment determines whether voice information has been acquired (S104). If it is not determined in step S104 that voice information has been acquired, the information processing apparatus according to the present embodiment does not proceed to the processes in and after step S106 until, for example, it is determined that voice information has been acquired.
If it is determined in step S104 that voice information has been acquired, the information processing apparatus according to the present embodiment analyzes the voice information (S106). By analyzing the voice information, the information processing apparatus according to the present embodiment obtains, for example, the sound pressure, the pitch, and the average frequency band. The information processing apparatus according to the present embodiment then holds the voice information in a recording medium such as a storage unit (described later) (S108).
The information processing apparatus according to the present embodiment sets a weight related to summarization based on the voice information and the like (S110). An example of the process in step S110 is the process shown in FIG. 24.
Referring to FIG. 24, the information processing apparatus according to the present embodiment sets a weight related to summarization based on, for example, the average frequency of the voice indicated by the voice information (hereinafter sometimes referred to as the "input voice") (S300). An example of the process in step S300 is the process shown in FIG. 25.
Although FIG. 24 shows an example in which the process of step S302 is performed after the process of step S300, the process of step S110 in FIG. 22 is not limited to the process shown in FIG. 24. For example, since the process of step S300 and the process of step S302 are independent, the information processing apparatus according to the present embodiment can perform the process of step S300 after the process of step S302, or can perform the processes of steps S300 and S302 in parallel.
Referring to FIG. 25, the information processing apparatus according to the present embodiment determines whether the average frequency band of the voice is 300 [Hz] to 550 [Hz] (S400).
If it is determined in step S400 that the average frequency band of the voice is 300 [Hz] to 550 [Hz], the information processing apparatus according to the present embodiment determines "male" as the type of weight related to summarization (S402).
If it is not determined in step S400 that the average frequency band of the voice is 300 [Hz] to 550 [Hz], the information processing apparatus according to the present embodiment determines whether the average frequency band of the voice is 400 [Hz] to 700 [Hz] (S404).
If it is determined in step S404 that the average frequency band of the voice is 400 [Hz] to 700 [Hz], the information processing apparatus according to the present embodiment determines "female" as the type of weight related to summarization (S406).
If it is not determined in step S404 that the average frequency band of the voice is 400 [Hz] to 700 [Hz], the information processing apparatus according to the present embodiment does not determine a weight related to summarization.
The information processing apparatus according to the present embodiment performs, for example, the process shown in FIG. 25 as the process of step S300 in FIG. 24. Needless to say, the process of step S300 in FIG. 24 is not limited to the process shown in FIG. 25.
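The branch of FIG. 25 can be expressed compactly as below. This is a minimal sketch assuming only the frequency bands stated above (300 to 550 Hz for "male", 400 to 700 Hz for "female"); the function name and return convention are introduced here for illustration. Note that the two bands overlap, and, as in the flowchart, the 300 to 550 Hz check takes precedence.

```python
from typing import Optional

def weight_type_from_average_frequency(avg_freq_hz: float) -> Optional[str]:
    """Map the average frequency band of the input voice to a summary weight type.

    Mirrors steps S400-S406: the 300-550 Hz check is evaluated first, so a
    voice at, e.g., 480 Hz is classified as "male" even though it also falls
    in the 400-700 Hz band.
    """
    if 300.0 <= avg_freq_hz <= 550.0:
        return "male"      # S402
    if 400.0 <= avg_freq_hz <= 700.0:
        return "female"    # S406
    return None            # no weight type is determined

print(weight_type_from_average_frequency(480.0))  # -> "male"
print(weight_type_from_average_frequency(620.0))  # -> "female"
```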
Referring to FIG. 24 again, an example of the process of step S110 in FIG. 22 will be described. The information processing apparatus according to the present embodiment sets a weight related to summarization based on, for example, the sound pressure of the voice indicated by the voice information (S302). An example of the process in step S302 is the process shown in FIG. 26.
Referring to FIG. 26, the information processing apparatus according to the present embodiment determines thresholds related to sound pressure based on the distance between the uttering user and the communication partner (S500). An example of the process in step S500 is the process shown in FIG. 27.
Referring to FIG. 27, the information processing apparatus according to the present embodiment acquires the distance D to the current communication partner by image recognition based on a captured image captured by an imaging device (S600).
The information processing apparatus according to the present embodiment performs, for example, the calculation of Equation 2 below (S602).
(Equation 2)
Then, the information processing apparatus according to the present embodiment determines the thresholds related to sound pressure by performing, for example, the calculation of Equation 3 below and adjusting the upper threshold VPWR_thresh_upper and the lower threshold VPWR_thresh_lower related to sound pressure (S604).
(Equation 3)
The information processing apparatus according to the present embodiment performs, for example, the process shown in FIG. 27 as the process of step S500 in FIG. 26. Needless to say, the process of step S500 in FIG. 26 is not limited to the process shown in FIG. 27.
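Since Equations 2 and 3 are not reproduced in this text, the following sketch only illustrates the overall shape of steps S600 to S604 under an assumed simple linear scaling of the thresholds with distance; the baseline values, the scaling factor, and the function name are all assumptions introduced for illustration and are not the embodiment's actual formulas.

```python
def adjust_sound_pressure_thresholds(distance_m: float,
                                     base_upper_db: float = 70.0,
                                     base_lower_db: float = 50.0,
                                     db_per_meter: float = 3.0) -> tuple[float, float]:
    """Adjust the sound-pressure thresholds according to the distance D
    between the uttering user and the communication partner (S600-S604).

    The actual Equations 2 and 3 of the embodiment are not reproduced here;
    a linear increase with distance is used purely as a stand-in, reflecting
    the intuition that people speak louder to a more distant partner.
    """
    vpwr_thresh_upper = base_upper_db + db_per_meter * distance_m
    vpwr_thresh_lower = base_lower_db + db_per_meter * distance_m
    return vpwr_thresh_upper, vpwr_thresh_lower

# Example: partner detected 2 m away by image recognition.
upper, lower = adjust_sound_pressure_thresholds(2.0)
print(upper, lower)  # -> 76.0 56.0
```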
Referring to FIG. 26 again, an example of the process of step S302 in FIG. 24 will be described. The information processing apparatus according to the present embodiment determines whether the sound pressure of the voice indicated by the voice information is equal to or higher than the threshold VPWR_thresh_upper related to sound pressure (S502).
If it is determined in step S502 that the sound pressure of the voice indicated by the voice information is equal to or higher than the threshold VPWR_thresh_upper, the information processing apparatus according to the present embodiment determines "anger" and "joy" as the types of weight related to summarization (S504).
If it is not determined in step S502 that the sound pressure of the voice indicated by the voice information is equal to or higher than the threshold VPWR_thresh_upper, the information processing apparatus according to the present embodiment determines whether the sound pressure of the voice indicated by the voice information is equal to or lower than the threshold VPWR_thresh_lower related to sound pressure (S506).
If it is determined in step S506 that the sound pressure of the voice indicated by the voice information is equal to or lower than the threshold VPWR_thresh_lower, the information processing apparatus according to the present embodiment determines "sadness", "discomfort", "pain", and "anxiety" as the types of weight related to summarization (S508).
If it is not determined in step S506 that the sound pressure of the voice indicated by the voice information is equal to or lower than the threshold VPWR_thresh_lower, the information processing apparatus according to the present embodiment does not determine a weight related to summarization.
The information processing apparatus according to the present embodiment performs, for example, the process shown in FIG. 26 as the process of step S302 in FIG. 24. Needless to say, the process of step S302 in FIG. 24 is not limited to the process shown in FIG. 26.
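Steps S502 to S508 amount to a three-way classification by sound pressure. The following is a minimal sketch assuming only the emotion labels listed above; the function name is introduced here for illustration.

```python
def emotion_weight_types_from_sound_pressure(sound_pressure_db: float,
                                             thresh_upper_db: float,
                                             thresh_lower_db: float) -> list[str]:
    """Classify the utterance by sound pressure into emotion weight types
    (S502-S508). Between the two thresholds, no weight type is determined."""
    if sound_pressure_db >= thresh_upper_db:
        return ["anger", "joy"]                               # S504
    if sound_pressure_db <= thresh_lower_db:
        return ["sadness", "discomfort", "pain", "anxiety"]   # S508
    return []  # neither condition holds: no weight is determined

print(emotion_weight_types_from_sound_pressure(80.0, 76.0, 56.0))  # -> ['anger', 'joy']
```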
Referring to FIG. 24 again, an example of the process of step S110 in FIG. 22 will be described. The information processing apparatus according to the present embodiment, for example, analyzes the voice information and holds the number of morae and the locations of accents (S304). Note that the process of step S304 may be performed within the process of step S106 in FIG. 22.
The information processing apparatus according to the present embodiment performs, for example, the process shown in FIG. 24 as the process of step S110 in FIG. 22. Needless to say, the process of step S110 in FIG. 22 is not limited to the process shown in FIG. 24.
Referring to FIG. 22 again, an example of processing according to the information processing method according to the present embodiment will be described. The information processing apparatus according to the present embodiment performs voice recognition on the voice information (S112). Voice text information is acquired by performing the process of step S112.
When the process of step S112 has been performed, the information processing apparatus according to the present embodiment sets a weight related to summarization based on the voice recognition result and the like (S114). An example of the process in step S114 is the process shown in FIG. 28.
Referring to FIG. 28, the information processing apparatus according to the present embodiment sets a weight related to summarization based on the language of the character string indicated by the voice text information (S700). An example of the process in step S700 is the process shown in FIG. 29.
Although FIG. 28 shows an example in which the processes of steps S704 to S710 are performed after the processes of steps S700 and S702, the process of step S114 in FIG. 22 is not limited to the process shown in FIG. 28. For example, since the processes of steps S700 and S702 and the processes of steps S704 to S710 are independent, the information processing apparatus according to the present embodiment can perform the processes of steps S700 and S702 after the processes of steps S704 to S710, or can perform the processes of steps S700 and S702 and the processes of steps S704 to S710 in parallel.
Referring to FIG. 29, the information processing apparatus according to the present embodiment estimates the language of the character string indicated by the voice text information (S800). The information processing apparatus according to the present embodiment estimates the language by a process according to an arbitrary method capable of estimating a language from a character string, such as estimation by matching against a language dictionary.
When the language has been estimated in step S800, the information processing apparatus according to the present embodiment determines whether the estimated language is Japanese (S802).
If it is determined in step S802 that the estimated language is Japanese, the information processing apparatus according to the present embodiment determines the weight related to summarization so that the weight of "Japanese verbs" becomes high (S804).
If it is not determined in step S802 that the estimated language is Japanese, the information processing apparatus according to the present embodiment determines whether the estimated language is English (S806).
If it is determined in step S806 that the estimated language is English, the information processing apparatus according to the present embodiment determines the weight related to summarization so that the weight of "English nouns and verbs" becomes high (S808).
If it is not determined in step S806 that the estimated language is English, the information processing apparatus according to the present embodiment does not determine a weight related to summarization.
The information processing apparatus according to the present embodiment performs, for example, the process shown in FIG. 29 as the process of step S700 in FIG. 28. Needless to say, the process of step S700 in FIG. 28 is not limited to the process shown in FIG. 29.
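A minimal sketch of steps S800 to S808 follows. The dictionary-matching language detector is replaced by a crude stand-in, and the weight values and the dictionary structure of the returned weights are assumptions introduced here for illustration.

```python
from typing import Optional

def language_part_of_speech_weights(voice_text: str) -> Optional[dict]:
    """Set summary weights by estimated language (S800-S808)."""
    language = estimate_language(voice_text)
    if language == "ja":
        return {("ja", "verb"): 2.0}                        # raise Japanese verbs (S804)
    if language == "en":
        return {("en", "noun"): 2.0, ("en", "verb"): 2.0}   # raise English nouns/verbs (S808)
    return None  # other languages: no weight is determined

def estimate_language(text: str) -> str:
    """Crude stand-in for dictionary matching: any kana/kanji implies Japanese."""
    if any("\u3040" <= ch <= "\u30ff" or "\u4e00" <= ch <= "\u9fff" for ch in text):
        return "ja"
    return "en" if text.strip() else "unknown"

print(language_part_of_speech_weights("今日は映画を見たい"))  # -> {('ja', 'verb'): 2.0}
```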
Referring to FIG. 28 again, an example of the process of step S114 in FIG. 22 will be described. The information processing apparatus according to the present embodiment, for example, analyzes the voice information and holds the number of morae and the locations of accents (S702). Note that the process of step S702 may be performed within the process of step S106 in FIG. 22.
The information processing apparatus according to the present embodiment divides the character string indicated by the voice text information (hereinafter sometimes referred to as the "voice text result") into morpheme units by natural language processing, and links each morpheme with the corresponding analysis result of the voice information (S704).
The information processing apparatus according to the present embodiment estimates an emotion based on the analysis results of the voice information linked in units of morphemes in step S704 (S706). The information processing apparatus according to the present embodiment estimates the emotion by an arbitrary method capable of estimating an emotion from the analysis results of the voice information, such as a method using a table in which analysis results of voice information are associated with emotions.
The information processing apparatus according to the present embodiment also determines the strength of the weight related to summarization (the strength of the weight related to emotion) based on the analysis results of the voice information linked in units of morphemes in step S704 (S708). The information processing apparatus according to the present embodiment determines the strength of the weight based on, for example, the rate of change of the fundamental frequency, the rate of change of the sound, and the rate of change of the utterance time among the analysis results of the voice information. The strength of the weight is determined by an arbitrary method capable of determining it from the analysis results of the voice information, such as a method using a table in which analysis results of voice information are associated with strengths of the weight related to summarization.
The information processing apparatus according to the present embodiment determines the weight related to summarization based on the emotion estimated in step S706 (S710). The information processing apparatus according to the present embodiment may also adjust the weight determined based on the estimated emotion by the strength of the weight determined in step S708.
The information processing apparatus according to the present embodiment performs, for example, the process shown in FIG. 28 as the process of step S114 in FIG. 22. Needless to say, the process of step S114 in FIG. 22 is not limited to the process shown in FIG. 28.
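The flow of steps S704 to S710 can be sketched as below. Since the embodiment leaves the concrete tables open, the data shapes, the two-entry emotion table, and the averaging used to scale the weight by the strength value are all assumptions introduced here for illustration.

```python
from dataclasses import dataclass

@dataclass
class MorphemeAcoustics:
    """Acoustic analysis result linked to one morpheme (S704)."""
    surface: str
    f0_change_rate: float        # rate of change of the fundamental frequency
    volume_change_rate: float    # rate of change of the sound
    duration_change_rate: float  # rate of change of the utterance time

# Assumed lookup table associating coarse acoustic patterns with emotions (S706).
EMOTION_TABLE = {"rising_fast": "joy", "falling_slow": "sadness"}

def estimate_emotion(m: MorphemeAcoustics) -> str:
    key = "rising_fast" if m.f0_change_rate > 0 else "falling_slow"
    return EMOTION_TABLE[key]

def summary_weight(m: MorphemeAcoustics) -> tuple[str, float]:
    """Determine the emotion weight (S710), scaled by a strength derived from
    the rates of change (S708); the averaging here is only a stand-in."""
    emotion = estimate_emotion(m)
    strength = (abs(m.f0_change_rate) + abs(m.volume_change_rate)
                + abs(m.duration_change_rate)) / 3.0
    return emotion, 1.0 + strength  # base weight adjusted by the strength

print(summary_weight(MorphemeAcoustics("嬉しい", 0.4, 0.2, 0.1)))  # -> ('joy', 1.2333...)
```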
Referring to FIG. 22 again, an example of processing according to the information processing method according to the present embodiment will be described. The information processing apparatus according to the present embodiment performs summarization processing based on the weights related to summarization determined in steps S100, S110, and S114 (S116).
When the process of step S116 is completed, the information processing apparatus according to the present embodiment determines whether to perform translation processing (S118).
If it is not determined in step S118 that translation processing is to be performed, the information processing apparatus according to the present embodiment causes the summarization result to be notified by notification control processing (S120).
If it is determined in step S118 that translation processing is to be performed, the information processing apparatus according to the present embodiment performs translation processing on the summarization result and causes the translation result to be notified by notification control processing (S122). An example of the process in step S122 is the process shown in FIG. 30.
Referring to FIG. 30, the information processing apparatus according to the present embodiment performs morphological analysis, for example, by performing natural language processing on the summarization result (S900).
The information processing apparatus according to the present embodiment generates divided texts, each combining a main part of speech (noun, verb, adjective, or adverb) with other morphemes, until no unprocessed summarization result remains (S902).
The information processing apparatus according to the present embodiment determines whether the language of the summarization result is English (S904).
If it is not determined in step S904 that the language of the summarization result is English, the information processing apparatus according to the present embodiment performs the process of step S908 described later.
If it is determined in step S904 that the language of the summarization result is English, the information processing apparatus according to the present embodiment treats words corresponding to the five Ws and one H (5W1H) as divided texts (S906).
If it is not determined in step S904 that the language of the summarization result is English, or when the process of step S906 has been performed, the information processing apparatus according to the present embodiment performs translation processing on each divided text and holds each translation result linked with the part-of-speech information of the original text before translation (S908).
The information processing apparatus according to the present embodiment determines whether the language of the divided translated texts (an example of the translation result) is English (S910).
If it is determined in step S910 that the language of the divided translated texts is English, the information processing apparatus according to the present embodiment determines the notification order for English (S912). An example of the process in step S912 is the process shown in FIG. 31.
Referring to FIG. 31, the information processing apparatus according to the present embodiment determines whether there is a divided translated text to be processed (S1000). Here, a divided translated text to be processed in step S1000 corresponds to an unprocessed translation result among the translation results for each translation unit. The information processing apparatus according to the present embodiment determines, for example, that a divided translated text to be processed exists when an unprocessed translation result exists, and that none exists when no unprocessed translation result exists.
If it is determined in step S1000 that a divided translated text to be processed exists, the information processing apparatus according to the present embodiment acquires the divided translated text to be processed next (S1002).
The information processing apparatus according to the present embodiment determines whether the divided translated text to be processed includes a noun (S1004).
If it is determined in step S1004 that the divided translated text to be processed includes a noun, the information processing apparatus according to the present embodiment sets the priority to the maximum value "5" (S1006), and then repeats the processing from step S1000.
If it is not determined in step S1004 that the divided translated text to be processed includes a noun, the information processing apparatus according to the present embodiment determines whether the divided translated text to be processed includes a verb (S1008).
If it is determined in step S1008 that the divided translated text to be processed includes a verb, the information processing apparatus according to the present embodiment sets the priority to "4" (S1010), and then repeats the processing from step S1000.
If it is not determined in step S1008 that the divided translated text to be processed includes a verb, the information processing apparatus according to the present embodiment determines whether the divided translated text to be processed includes an adjective (S1012).
If it is determined in step S1012 that the divided translated text to be processed includes an adjective, the information processing apparatus according to the present embodiment sets the priority to "3" (S1014), and then repeats the processing from step S1000.
If it is not determined in step S1012 that the divided translated text to be processed includes an adjective, the information processing apparatus according to the present embodiment determines whether the divided translated text to be processed includes an adverb (S1016).
If it is determined in step S1016 that the divided translated text to be processed includes an adverb, the information processing apparatus according to the present embodiment sets the priority to "2" (S1018), and then repeats the processing from step S1000.
If it is not determined in step S1016 that the divided translated text to be processed includes an adverb, the information processing apparatus according to the present embodiment sets the priority to the minimum value "1" (S1020), and then repeats the processing from step S1000.
If it is not determined in step S1000 that a divided translated text to be processed exists, the information processing apparatus according to the present embodiment sorts the notification order according to the set priorities (S1022).
The information processing apparatus according to the present embodiment performs, for example, the process shown in FIG. 31 as the process of step S912 in FIG. 30. Needless to say, the process of step S912 in FIG. 30 is not limited to the process shown in FIG. 31.
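The priority assignment and sort of FIG. 31 (and, with a different part-of-speech order, FIG. 32) can be sketched as follows. This is a minimal sketch assuming the priorities stated above; the data shapes and function names are introduced here for illustration.

```python
# Part-of-speech priority order for English notification (FIG. 31):
# noun=5, verb=4, adjective=3, adverb=2, otherwise 1.
EN_PRIORITY = {"noun": 5, "verb": 4, "adjective": 3, "adverb": 2}

def assign_priority(pos_tags: list[str], priority_table: dict) -> int:
    """First matching part of speech in descending priority wins (S1004-S1020)."""
    for pos, prio in sorted(priority_table.items(), key=lambda kv: -kv[1]):
        if pos in pos_tags:
            return prio
    return 1  # minimum priority (S1020)

def notification_order(divided_texts: list[tuple[str, list[str]]],
                       priority_table: dict) -> list[str]:
    """Sort divided translated texts by priority, highest first (S1022).
    Python's sort is stable, so texts of equal priority keep their order."""
    prioritized = [(assign_priority(tags, priority_table), text)
                   for text, tags in divided_texts]
    return [text for _, text in sorted(prioritized, key=lambda pt: -pt[0])]

texts = [("quickly", ["adverb"]), ("the station", ["noun"]), ("go", ["verb"])]
print(notification_order(texts, EN_PRIORITY))  # -> ['the station', 'go', 'quickly']
```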
Referring to FIG. 30 again, an example of the process of step S122 in FIG. 22 will be described. If it is not determined in step S910 that the language of the divided translated texts is English, the information processing apparatus according to the present embodiment determines the notification order for Japanese (S914). An example of the process in step S914 is the process shown in FIG. 32.
Referring to FIG. 32, the information processing apparatus according to the present embodiment determines whether there is a divided translated text to be processed, as in step S1000 of FIG. 31 (S1100). Here, a divided translated text to be processed in step S1100 corresponds to an unprocessed translation result among the translation results for each translation unit.
If it is determined in step S1100 that a divided translated text to be processed exists, the information processing apparatus according to the present embodiment acquires the divided translated text to be processed next (S1102).
The information processing apparatus according to the present embodiment determines whether the divided translated text to be processed includes a verb (S1104).
If it is determined in step S1104 that the divided translated text to be processed includes a verb, the information processing apparatus according to the present embodiment sets the priority to the maximum value "5" (S1106), and then repeats the processing from step S1100.
If it is not determined in step S1104 that the divided translated text to be processed includes a verb, the information processing apparatus according to the present embodiment determines whether the divided translated text to be processed includes a noun (S1108).
If it is determined in step S1108 that the divided translated text to be processed includes a noun, the information processing apparatus according to the present embodiment sets the priority to "4" (S1110), and then repeats the processing from step S1100.
If it is not determined in step S1108 that the divided translated text to be processed includes a noun, the information processing apparatus according to the present embodiment determines whether the divided translated text to be processed includes an adjective (S1112).
If it is determined in step S1112 that the divided translated text to be processed includes an adjective, the information processing apparatus according to the present embodiment sets the priority to "3" (S1114), and then repeats the processing from step S1100.
If it is not determined in step S1112 that the divided translated text to be processed includes an adjective, the information processing apparatus according to the present embodiment determines whether the divided translated text to be processed includes an adverb (S1116).
If it is determined in step S1116 that the divided translated text to be processed includes an adverb, the information processing apparatus according to the present embodiment sets the priority to "2" (S1118), and then repeats the processing from step S1100.
If it is not determined in step S1116 that the divided translated text to be processed includes an adverb, the information processing apparatus according to the present embodiment sets the priority to the minimum value "1" (S1120), and then repeats the processing from step S1100.
If it is not determined in step S1100 that a divided translated text to be processed exists, the information processing apparatus according to the present embodiment sorts the notification order according to the set priorities (S1122).
The information processing apparatus according to the present embodiment performs, for example, the process shown in FIG. 32 as the process of step S914 in FIG. 30. Needless to say, the process of step S914 in FIG. 30 is not limited to the process shown in FIG. 32.
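Under the same assumptions as the sketch given after the description of FIG. 31, the Japanese ordering of FIG. 32 differs only in its priority table (verbs ranked highest), so it can reuse the same notification_order helper:

```python
# Part-of-speech priority order for Japanese notification (FIG. 32):
# verb=5, noun=4, adjective=3, adverb=2, otherwise 1.
JA_PRIORITY = {"verb": 5, "noun": 4, "adjective": 3, "adverb": 2}

texts = [("駅に", ["noun"]), ("行きたい", ["verb"])]
print(notification_order(texts, JA_PRIORITY))  # -> ['行きたい', '駅に']
```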
Referring to FIG. 30 again, an example of the process of step S122 in FIG. 22 will be described. When the process of step S912 or the process of step S914 is completed, the information processing apparatus according to the present embodiment causes the divided translated texts whose notification order has been determined to be notified by notification control processing (S916). An example of the process in step S916 is the process shown in FIG. 33.
Referring to FIG. 33, the information processing apparatus according to the present embodiment determines whether there is a divided translated text to be processed, as in step S1000 of FIG. 31 (S1200). Here, a divided translated text to be processed in step S1200 corresponds to an unprocessed translation result among the translation results for each translation unit.
If it is determined in step S1200 that a divided translated text to be processed exists, the information processing apparatus according to the present embodiment acquires the divided translated text to be processed next (S1202).
The information processing apparatus according to the present embodiment acquires the sound pressure from the voice information corresponding to the divided translated text to be processed, and causes the divided translated text to be output with its sound pressure raised (S1204).
The information processing apparatus according to the present embodiment determines whether the divided translated text output in step S1204 is the last divided translated text (S1206). The information processing apparatus according to the present embodiment determines, for example, that it is not the last divided translated text when an unprocessed translation result exists, and that it is the last divided translated text when no unprocessed translation result exists.
If it is not determined in step S1206 that it is the last divided translated text, the information processing apparatus according to the present embodiment causes a single beep to be output as sound feedback for conveying that more will follow (S1208), and then repeats the processing from step S1200.
If it is determined in step S1206 that it is the last divided translated text, the information processing apparatus according to the present embodiment causes a double beep to be output as sound feedback for conveying that this is the last one (S1210), and then repeats the processing from step S1200.
If it is not determined in step S1200 that a divided translated text to be processed exists, the information processing apparatus according to the present embodiment ends the process of FIG. 33.
The information processing apparatus according to the present embodiment performs, for example, the process shown in FIG. 33 as the process of step S916 in FIG. 30. Needless to say, the process of step S916 in FIG. 30 is not limited to the process shown in FIG. 33.
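The notification loop of FIG. 33 can be sketched as below; the output helpers play_text and play_beep are hypothetical stand-ins for the apparatus's audio output, introduced here for illustration.

```python
def notify_divided_texts(ordered_texts: list[str]) -> None:
    """Output each divided translated text with raised sound pressure and
    follow it with sound feedback (S1200-S1210): a single beep while more
    texts remain, a double beep after the last one."""
    for i, text in enumerate(ordered_texts):
        play_text(text, volume_boost_db=6.0)  # S1204: output with raised sound pressure
        is_last = (i == len(ordered_texts) - 1)
        play_beep(count=2 if is_last else 1)  # S1210 if last, S1208 otherwise

def play_text(text: str, volume_boost_db: float) -> None:
    print(f"[speak +{volume_boost_db:.0f} dB] {text}")  # stand-in for TTS output

def play_beep(count: int) -> None:
    print("beep " * count)  # stand-in for the sound feedback

notify_divided_texts(["the station", "go", "quickly"])
```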
For example, the use cases described with reference to FIGS. 1 to 5 can be realized by performing the processes shown in FIGS. 22 to 33. Needless to say, the processing according to the information processing method according to the present embodiment is not limited to the processes shown in FIGS. 22 to 33.
[5] Example of Effects Achieved by Using the Information Processing Method According to the Present Embodiment
When the information processing apparatus according to the present embodiment performs processing according to the information processing method according to the present embodiment, for example, the following effects are achieved. Needless to say, the effects achieved by using the information processing method according to the present embodiment are not limited to the effects described below.
- Even when the speaker speaks in a disorganized manner, only the main points are translated, and it becomes possible to convey to the recipient what the speaker wants to convey.
- Because only the main points are translated, the recipient's confirmation time can be shortened, and smooth translated communication can be realized.
- In some cases, the amount of text subject to translation processing itself can be greatly reduced, which makes it possible to improve the accuracy of the translation itself.
- Because the content of the utterance is summarized before being translated, the recipient does not have to receive unnecessary words and therefore finds it easier to understand. As a result, those who are not good at foreign languages can be encouraged to speak across the language barrier.
(Information Processing Apparatus According to the Present Embodiment)
Next, an example of the configuration of an information processing apparatus according to the present embodiment capable of performing the processing according to the information processing method according to the present embodiment described above will be described. In the following, as an example of the configuration of the information processing apparatus according to the present embodiment, an information processing apparatus capable of performing one or both of the processing according to the first information processing method described above and the processing according to the second information processing method described above is shown.
FIG. 34 is a block diagram showing an example of the configuration of the information processing apparatus 100 according to the present embodiment. The information processing apparatus 100 includes, for example, a communication unit 102 and a control unit 104.
The information processing apparatus 100 may also include, for example, a ROM (Read Only Memory; not shown), a RAM (Random Access Memory; not shown), a storage unit (not shown), an operation unit (not shown) operable by the user of the information processing apparatus 100, and a display unit (not shown) for displaying various screens on a display screen. The information processing apparatus 100 connects the above constituent elements by, for example, a bus as a data transmission path. The information processing apparatus 100 is driven by, for example, power supplied from an internal power supply such as a battery provided in the information processing apparatus 100, or power supplied from a connected external power supply.
The ROM (not shown) stores control data such as programs and calculation parameters used by the control unit 104. The RAM (not shown) temporarily stores programs and the like executed by the control unit 104.
The storage unit (not shown) is storage means provided in the information processing apparatus 100 and stores various data, such as data related to the information processing method according to the present embodiment, for example, tables for setting weights related to summarization, and various applications. Here, examples of the storage unit (not shown) include a magnetic recording medium such as a hard disk and a nonvolatile memory such as a flash memory. The storage unit (not shown) may also be detachable from the information processing apparatus 100.
Examples of the operation unit (not shown) include an operation input device described later, and examples of the display unit (not shown) include a display device described later.
[Hardware Configuration Example of the Information Processing Apparatus 100]
FIG. 35 is an explanatory diagram showing an example of the hardware configuration of the information processing apparatus 100 according to the present embodiment. The information processing apparatus 100 includes, for example, an MPU 150, a ROM 152, a RAM 154, a recording medium 156, an input/output interface 158, an operation input device 160, a display device 162, and a communication interface 164. The information processing apparatus 100 connects its constituent elements by, for example, a bus 166 as a data transmission path.
The MPU 150 is configured by, for example, one or more processors configured by an arithmetic circuit such as an MPU, various processing circuits, and the like, and functions as the control unit 104 that controls the information processing apparatus 100 as a whole. In the information processing apparatus 100, the MPU 150 also plays the role of, for example, the processing unit 110 described later. Note that the processing unit 110 may be configured by a dedicated (or general-purpose) circuit capable of realizing its processing (for example, a processor separate from the MPU 150).
The ROM 152 stores programs used by the MPU 150, control data such as calculation parameters, and the like. The RAM 154 temporarily stores, for example, programs executed by the MPU 150.
The recording medium 156 functions as the storage unit (not shown) and stores various data, such as data related to the information processing method according to the present embodiment, for example, tables for setting weights related to summarization, and various applications. Here, examples of the recording medium 156 include a magnetic recording medium such as a hard disk and a nonvolatile memory such as a flash memory. The recording medium 156 may also be detachable from the information processing apparatus 100.
The input/output interface 158 connects, for example, the operation input device 160 and the display device 162. The operation input device 160 functions as the operation unit (not shown), and the display device 162 functions as the display unit (not shown). Here, examples of the input/output interface 158 include a USB (Universal Serial Bus) terminal, a DVI (Digital Visual Interface) terminal, an HDMI (High-Definition Multimedia Interface) (registered trademark) terminal, and various processing circuits.
The operation input device 160 is provided, for example, on the information processing apparatus 100 and is connected to the input/output interface 158 inside the information processing apparatus 100. Examples of the operation input device 160 include buttons, direction keys, a rotary selector such as a jog dial, and combinations thereof.
The display device 162 is provided, for example, on the information processing apparatus 100 and is connected to the input/output interface 158 inside the information processing apparatus 100. Examples of the display device 162 include a liquid crystal display and an organic EL display (Organic Electro-Luminescence Display; also called an OLED (Organic Light Emitting Diode) display).
Needless to say, the input/output interface 158 can also be connected to external devices, such as an operation input device (for example, a keyboard or a mouse) external to the information processing apparatus 100 or an external display device. The display device 162 may also be a device capable of both display and user operation, such as a touch panel.
The communication interface 164 is communication means provided in the information processing apparatus 100, and functions as the communication unit 102 for communicating wirelessly or by wire with, for example, an external apparatus or an external device via a network (or directly). Here, examples of the communication interface 164 include a communication antenna and an RF (Radio Frequency) circuit (wireless communication), an IEEE 802.15.1 port and a transmission/reception circuit (wireless communication), an IEEE 802.11 port and a transmission/reception circuit (wireless communication), and a LAN (Local Area Network) terminal and a transmission/reception circuit (wired communication).
The information processing apparatus 100 performs the processing according to the information processing method according to the present embodiment, for example, with the configuration shown in FIG. 35. Note that the hardware configuration of the information processing apparatus 100 according to the present embodiment is not limited to the configuration shown in FIG. 35.
For example, the information processing apparatus 100 does not have to include the communication interface 164 when communicating with an external apparatus or the like via a connected external communication device. The communication interface 164 may also be configured to be able to communicate with one or more external apparatuses or the like by a plurality of communication schemes.
The information processing apparatus 100 can also have a configuration that does not include, for example, the recording medium 156, the operation input device 160, or the display device 162.
The information processing apparatus 100 may further include, for example, one or more of various sensors such as a motion sensor and a biological sensor, a voice input device such as a microphone, a voice output device such as a speaker, a vibration device, an imaging device, and the like.
Part or all of the configuration shown in FIG. 35 (or the configuration according to a modification) may also be realized by one or more ICs.
Referring to FIG. 34 again, an example of the configuration of the information processing apparatus 100 will be described. The communication unit 102 is communication means provided in the information processing apparatus 100, and communicates wirelessly or by wire with, for example, an external apparatus or an external device via a network (or directly). The communication of the communication unit 102 is controlled by, for example, the control unit 104.
Here, examples of the communication unit 102 include a communication antenna and an RF circuit, and a LAN terminal and a transmission/reception circuit, but the configuration of the communication unit 102 is not limited to the above. For example, the communication unit 102 can have a configuration corresponding to an arbitrary standard capable of performing communication, such as a USB terminal and a transmission/reception circuit, or an arbitrary configuration capable of communicating with an external apparatus via a network. The communication unit 102 may also be configured to be able to communicate with one or more external apparatuses or the like by a plurality of communication schemes.
The control unit 104 is configured by, for example, an MPU and plays the role of controlling the information processing apparatus 100 as a whole. The control unit 104 also includes, for example, the processing unit 110 and plays the leading role in performing the processing according to the information processing method according to the present embodiment. The processing unit 110 plays the leading role in performing, for example, one or both of the processing according to the first information processing method described above and the processing according to the second information processing method described above.
 上述した第1の情報処理方法に係る処理を行う場合、処理部110は、取得した要約に関する重みを示す情報に基づいて、音声情報が示す発話の内容を要約する要約処理を行う。処理部110は、要約処理として、例えば上記[3-1]に示した処理を行う。 When performing the process related to the first information processing method described above, the processing unit 110 performs a summarization process for summarizing the content of the utterance indicated by the voice information, based on the acquired information indicating the weight related to the summary. The processing unit 110 performs, for example, the process described in [3-1] as the summary process.
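 As an illustration only (the patent does not disclose a concrete algorithm), the following Python sketch shows one way a processing unit might realize weight-based summarization: each word of the recognized utterance is scored against a table of weights, and only the highest-weighted words are kept up to a character budget. The weight table, the default weight, the threshold, and the whitespace tokenization are all assumptions introduced for this example.

```python
# Illustrative sketch of weight-based summarization (hypothetical; not the
# disclosed implementation). Words of the recognized utterance are scored
# by a weight table, and the summary keeps the top-weighted words in their
# original order until a character budget is exhausted.
from typing import Dict, List

def summarize(words: List[str], weights: Dict[str, float],
              max_chars: int) -> str:
    # Score each word; words absent from the table get a default low weight.
    scored = sorted(enumerate(words),
                    key=lambda iw: weights.get(iw[1], 0.1),
                    reverse=True)
    kept: set = set()
    length = 0
    # Greedily keep the highest-weighted words until the character limit
    # (cf. the summary level of configuration (12) below) is reached.
    for index, word in scored:
        if length + len(word) > max_chars:
            break
        kept.add(index)
        length += len(word)
    return " ".join(w for i, w in enumerate(words) if i in kept)

# Example: content words weighted higher than fillers such as "um".
weights = {"station": 1.0, "go": 0.9, "want": 0.8, "um": 0.0, "to": 0.2}
print(summarize("um I want to go to the station".split(), weights, 20))
```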
 When performing the processing related to the second information processing method described above, the processing unit 110 performs notification control processing for controlling notification of the notification content on the basis of the summary information. As the notification control processing, the processing unit 110 performs, for example, the processing described in [3-3] above.
 The processing unit 110 may further perform translation processing for translating the content of the utterance summarized by the summarization processing into another language. As the translation processing, the processing unit 110 performs, for example, the processing described in [3-2] above.
 When the summarized content of the utterance has been translated into another language by the translation processing, the processing unit 110 can cause the translation result to be notified by the notification control processing.
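 To make the division of roles concrete, here is a hedged sketch of how the summarize → translate → notify chain could be wired together. The `translate` and `notify` callables are placeholders standing in for whatever translation engine and output device (display, speech synthesis, etc.) an implementation actually uses; they are assumptions for illustration, not part of the disclosed apparatus.

```python
# Hypothetical wiring of the processing unit's pipeline: summarization
# (first information processing method), optional translation, and
# notification control (second information processing method).
from typing import Callable, List, Optional

def process_utterance(words: List[str], weights, max_chars: int,
                      translate: Optional[Callable[[str], str]] = None,
                      notify: Callable[[str], None] = print) -> None:
    summary = summarize(words, weights, max_chars)  # see the sketch above
    if translate is not None:
        summary = translate(summary)  # e.g. Japanese -> English
    notify(summary)  # notification control: display, speaker, vibration, ...

# Usage with a stub translator that merely tags the text.
process_utterance("um I want to go to the station".split(),
                  {"station": 1.0, "go": 0.9, "want": 0.8},
                  20, translate=lambda s: f"[EN] {s}")
```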
 The processing unit 110 can also perform various processes related to the information processing method according to the present embodiment, such as processing related to voice recognition, processing related to voice analysis, processing related to estimation of the user's state, and processing related to estimation of the distance between the user and a communication partner. Note that these various processes related to the information processing method according to the present embodiment may be performed in an apparatus external to the information processing apparatus 100.
 With the configuration shown in FIG. 34, for example, the information processing apparatus 100 performs the processing related to the information processing method according to the present embodiment (for example, "one or both of the summarization processing according to the first information processing method and the notification control processing according to the second information processing method", or "one or both of the summarization processing according to the first information processing method and the notification control processing according to the second information processing method, together with the translation processing").
 Therefore, when performing the summarization processing according to the first information processing method as the processing related to the information processing method according to the present embodiment, the information processing apparatus 100 can summarize the content of an utterance with, for example, the configuration shown in FIG. 34.
 Further, when performing the notification control processing according to the second information processing method as the processing related to the information processing method according to the present embodiment, the information processing apparatus 100 can cause the summarized content of an utterance to be notified with, for example, the configuration shown in FIG. 34.
 In addition, with the configuration shown in FIG. 34, for example, the information processing apparatus 100 can achieve the effects that are achieved by performing the processing related to the information processing method according to the present embodiment as described above.
 Note that the configuration of the information processing apparatus according to the present embodiment is not limited to the configuration shown in FIG. 34.
 For example, the information processing apparatus according to the present embodiment can include the processing unit 110 shown in FIG. 34 separately from the control unit 104 (for example, realized by a separate processing circuit). Further, for example, the summarization processing according to the first information processing method, the notification control processing according to the second information processing method, and the translation processing according to the present embodiment may be performed in a distributed manner by a plurality of processing circuits.
 The summarization processing according to the first information processing method, the notification control processing according to the second information processing method, and the translation processing according to the present embodiment are delimitations of the processing related to the information processing method according to the present embodiment made for convenience. Therefore, the configuration for realizing the processing related to the information processing method according to the present embodiment is not limited to the configuration shown in FIG. 34, and a configuration corresponding to how the processing related to the information processing method according to the present embodiment is divided can be adopted.
 Further, for example, when communicating with an external apparatus via an external communication device having functions and a configuration similar to those of the communication unit 102, the information processing apparatus according to the present embodiment need not include the communication unit 102.
 The information processing apparatus has been described above as the present embodiment, but the present embodiment is not limited to such a form. The present embodiment can be applied to various devices capable of performing the processing related to the information processing method according to the present embodiment (for example, one or both of the processing related to the first information processing method and the processing related to the second information processing method), such as "a computer such as a PC (Personal Computer) or a server", "any wearable apparatus worn on the user's body, such as an eyewear-type apparatus, a watch-type apparatus, or a bracelet-type apparatus", "a communication apparatus such as a smartphone", "a tablet-type apparatus", "a game machine", or "a mobile object such as an automobile". The present embodiment can also be applied to, for example, a processing IC that can be incorporated into such devices.
 Further, the information processing apparatus according to the present embodiment may be applied to a processing system premised on connection to a network (or communication between apparatuses), such as cloud computing. An example of a processing system in which the processing related to the information processing method according to the present embodiment is performed is "a system in which the summarization processing and the translation processing according to the first information processing method are performed by one apparatus constituting the processing system, and the notification control processing according to the second information processing method is performed by another apparatus constituting the processing system".
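 As a toy illustration of the cloud-style division just described (one apparatus summarizes and translates, another notifies), the sketch below separates the two roles into classes. The in-process hand-off stands in for whatever network transport a real processing system would use, and both class designs are assumptions made purely for this example; `summarize` refers to the earlier sketch.

```python
# Toy split of the processing system: a "server" apparatus performs the
# summarization and translation, a "client" apparatus performs notification.
class SummarizingApparatus:
    def __init__(self, weights, max_chars, translate):
        self.weights = weights
        self.max_chars = max_chars
        self.translate = translate

    def handle(self, words):
        # Summarize (first information processing method), then translate.
        return self.translate(summarize(words, self.weights, self.max_chars))

class NotifyingApparatus:
    def notify(self, text):
        # Notification control (second information processing method);
        # a real device might render text or drive speech synthesis.
        print(f"[notify] {text}")

server = SummarizingApparatus({"station": 1.0, "go": 0.9}, 20,
                              lambda s: f"[EN] {s}")
client = NotifyingApparatus()
client.notify(server.handle("um I want to go to the station".split()))
```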
(Program according to the present embodiment)
[I] Program according to the first information processing method (computer program)
 A program for causing a computer to function as the information processing apparatus according to the present embodiment that performs the processing related to the first information processing method (for example, a program capable of executing the processing related to the first information processing method, such as "the summarization processing according to the first information processing method" or "the summarization processing according to the first information processing method and the translation processing according to the present embodiment") is executed by a processor or the like in the computer, whereby the content of an utterance can be summarized.
 In addition, by a program for causing a computer to function as the information processing apparatus according to the present embodiment that performs the processing related to the first information processing method being executed by a processor or the like in the computer, the effects achieved by the processing related to the first information processing method described above can be achieved.
[II] Program according to the second information processing method
 A program for causing a computer to function as the information processing apparatus according to the present embodiment that performs the processing related to the second information processing method (for example, a program capable of executing the processing related to the second information processing method, such as "the notification control processing according to the second information processing method" or "the translation processing according to the present embodiment and the notification control processing according to the second information processing method") is executed by a processor or the like in the computer, whereby the summarized content of an utterance can be notified.
 In addition, by a program for causing a computer to function as the information processing apparatus according to the present embodiment that performs the processing related to the second information processing method being executed by a processor or the like in the computer, the effects achieved by the processing related to the second information processing method described above can be achieved.
[III] Program according to the information processing method according to the present embodiment
 The program according to the information processing method according to the present embodiment may include both the program according to the first information processing method and the program according to the second information processing method.
 The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the technical scope of the present disclosure is not limited to these examples. It is obvious that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various changes or modifications within the scope of the technical ideas described in the claims, and it is understood that these also naturally belong to the technical scope of the present disclosure.
 For example, while the above description indicates that a program for causing a computer to function as the information processing apparatus according to the present embodiment (a program capable of executing one or both of the processing related to the first information processing method and the processing related to the second information processing method) is provided, the present embodiment can further provide a recording medium storing the program.
 The configurations described above are examples of the present embodiment and, naturally, belong to the technical scope of the present disclosure.
 The effects described in this specification are merely explanatory or illustrative, and are not limiting. That is, the technology according to the present disclosure may achieve other effects that are apparent to those skilled in the art from the description of this specification, in addition to or instead of the above effects.
 The following configurations also belong to the technical scope of the present disclosure.
(1)
 An information processing apparatus including a processing unit configured to perform summarization processing for summarizing the content of an utterance indicated by voice information based on a user's utterance, on the basis of acquired information indicating weights related to summarization.
(2)
 The information processing apparatus according to (1), in which the processing unit performs the summarization processing when it determines that a predetermined start condition is satisfied.
(3)
 The information processing apparatus according to (2), in which the start condition is a condition related to a non-utterance period in which a state without utterance continues, and
 the processing unit determines that the start condition is satisfied when the non-utterance period exceeds a predetermined period or when the non-utterance period becomes equal to or longer than the predetermined period.
(4)
 The information processing apparatus according to (2) or (3), in which the start condition is a condition related to a state of voice recognition for acquiring the content of an utterance from the voice information, and
 the processing unit determines that the start condition is satisfied on the basis of detection of a request to stop the voice recognition.
(5)
 The information processing apparatus according to any one of (2) to (4), in which the start condition is a condition related to a state of voice recognition for acquiring the content of an utterance from the voice information, and
 the processing unit determines that the start condition is satisfied on the basis of detection of completion of the voice recognition.
(6)
 The information processing apparatus according to any one of (2) to (5), in which the start condition is a condition related to the content of an utterance, and
 the processing unit determines that the start condition is satisfied on the basis of detection of a predetermined word in the content of the utterance indicated by the voice information.
(7)
 The information processing apparatus according to any one of (2) to (6), in which the start condition is a condition related to the content of an utterance, and
 the processing unit determines that the start condition is satisfied on the basis of detection of hesitation (a filled pause) on the basis of the voice information.
(8)
 The information processing apparatus according to any one of (2) to (7), in which the start condition is a condition related to an elapsed time since the voice information was obtained, and
 the processing unit determines that the start condition is satisfied when the elapsed time exceeds a predetermined period or when the elapsed time becomes equal to or longer than the predetermined period.
(9)
 The information processing apparatus according to any one of (1) to (8), in which the processing unit does not perform the summarization processing when it determines that a predetermined summarization exclusion condition is satisfied.
(10)
 The information processing apparatus according to (9), in which the summarization exclusion condition is a condition related to detection of a gesture, and
 the processing unit determines that the summarization exclusion condition is satisfied when a predetermined gesture is detected.
(11)
 The information processing apparatus according to any one of (1) to (10), in which the processing unit changes the level of summarization of the content of the utterance on the basis of at least one of an utterance period specified on the basis of the voice information and a number of characters specified on the basis of the voice information.
(12)
 The information processing apparatus according to (11), in which the processing unit changes the level of summarization of the content of the utterance by limiting the number of characters indicated by the summarized content of the utterance.
(13)
 The information processing apparatus according to any one of (1) to (12), in which the processing unit sets the weights related to summarization on the basis of at least one of the voice information, information about the user, information about an application, information about an environment, and information about a device.
(14)
 The information processing apparatus according to (13), in which the information about the user includes at least one of state information of the user and operation information of the user.
(15)
 The information processing apparatus according to any one of (1) to (14), in which the processing unit further performs translation processing for translating the content of the utterance summarized by the summarization processing into another language.
(16)
 The information processing apparatus according to (15), in which the processing unit does not perform the translation processing when it determines that a predetermined translation exclusion condition is satisfied.
(17)
 The information processing apparatus according to (15) or (16), in which the processing unit
 re-translates content translated into another language by the translation processing back into the language before the translation, and
 when a word included in the re-translated content is present in the content of an utterance indicated by voice information acquired after the re-translation, includes the word included in the re-translated content in the summarized content of the utterance.
(18)
 The information processing apparatus according to any one of (1) to (17), in which the processing unit further performs notification control processing for controlling notification of the summarized content of the utterance.
(19)
 An information processing method executed by an information processing apparatus, the method including a step of performing summarization processing for summarizing the content of an utterance indicated by voice information based on a user's utterance, on the basis of acquired information indicating weights related to summarization.
(20)
 A program for causing a computer to realize a function of performing summarization processing for summarizing the content of an utterance indicated by voice information based on a user's utterance, on the basis of acquired information indicating weights related to summarization.
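 Configurations (2) through (10) above describe when summarization starts or is suppressed. Purely as an illustrative sketch, the decision could be expressed as the predicate below; the thresholds, the detector flags, and the priority of the gesture-based exclusion over the start conditions are assumptions made for this example, not the disclosed implementation.

```python
# Hypothetical start/exclusion-condition check for the summarization
# processing of configurations (2)-(10). All thresholds and detector
# flags are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UtteranceState:
    silence_sec: float          # non-utterance period, configuration (3)
    stop_requested: bool        # voice recognition stop request, (4)
    recognition_done: bool      # voice recognition completed, (5)
    trigger_word_seen: bool     # predetermined word detected, (6)
    hesitation_seen: bool       # filled pause detected, (7)
    elapsed_sec: float          # time since voice information obtained, (8)
    gesture_seen: bool          # predetermined gesture detected, (10)

SILENCE_LIMIT_SEC = 2.0   # assumed threshold
ELAPSED_LIMIT_SEC = 10.0  # assumed threshold

def should_summarize(s: UtteranceState) -> bool:
    # The exclusion condition takes priority: a predetermined gesture
    # suppresses summarization, per configurations (9) and (10).
    if s.gesture_seen:
        return False
    # Any satisfied start condition triggers summarization.
    return (s.silence_sec >= SILENCE_LIMIT_SEC
            or s.stop_requested
            or s.recognition_done
            or s.trigger_word_seen
            or s.hesitation_seen
            or s.elapsed_sec >= ELAPSED_LIMIT_SEC)

print(should_summarize(UtteranceState(2.5, False, False, False, False,
                                      3.0, False)))  # True (silence limit)
```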
100  Information processing apparatus
102  Communication unit
104  Control unit
110  Processing unit

Claims (20)

  1.  An information processing apparatus comprising a processing unit configured to perform summarization processing for summarizing the content of an utterance indicated by voice information based on a user's utterance, on the basis of acquired information indicating weights related to summarization.
  2.  The information processing apparatus according to claim 1, wherein the processing unit performs the summarization processing when it determines that a predetermined start condition is satisfied.
  3.  The information processing apparatus according to claim 2, wherein the start condition is a condition related to a non-utterance period in which a state without utterance continues, and
      the processing unit determines that the start condition is satisfied when the non-utterance period exceeds a predetermined period or when the non-utterance period becomes equal to or longer than the predetermined period.
  4.  The information processing apparatus according to claim 2, wherein the start condition is a condition related to a state of voice recognition for acquiring the content of an utterance from the voice information, and
      the processing unit determines that the start condition is satisfied on the basis of detection of a request to stop the voice recognition.
  5.  The information processing apparatus according to claim 2, wherein the start condition is a condition related to a state of voice recognition for acquiring the content of an utterance from the voice information, and
      the processing unit determines that the start condition is satisfied on the basis of detection of completion of the voice recognition.
  6.  The information processing apparatus according to claim 2, wherein the start condition is a condition related to the content of an utterance, and
      the processing unit determines that the start condition is satisfied on the basis of detection of a predetermined word in the content of the utterance indicated by the voice information.
  7.  The information processing apparatus according to claim 2, wherein the start condition is a condition related to the content of an utterance, and
      the processing unit determines that the start condition is satisfied on the basis of detection of hesitation (a filled pause) on the basis of the voice information.
  8.  The information processing apparatus according to claim 2, wherein the start condition is a condition related to an elapsed time since the voice information was obtained, and
      the processing unit determines that the start condition is satisfied when the elapsed time exceeds a predetermined period or when the elapsed time becomes equal to or longer than the predetermined period.
  9.  The information processing apparatus according to claim 1, wherein the processing unit does not perform the summarization processing when it determines that a predetermined summarization exclusion condition is satisfied.
  10.  The information processing apparatus according to claim 9, wherein the summarization exclusion condition is a condition related to detection of a gesture, and
      the processing unit determines that the summarization exclusion condition is satisfied when a predetermined gesture is detected.
  11.  The information processing apparatus according to claim 1, wherein the processing unit changes the level of summarization of the content of the utterance on the basis of at least one of an utterance period specified on the basis of the voice information and a number of characters specified on the basis of the voice information.
  12.  The information processing apparatus according to claim 11, wherein the processing unit changes the level of summarization of the content of the utterance by limiting the number of characters indicated by the summarized content of the utterance.
  13.  The information processing apparatus according to claim 1, wherein the processing unit sets the weights related to summarization on the basis of at least one of the voice information, information about the user, information about an application, information about an environment, and information about a device.
  14.  The information processing apparatus according to claim 13, wherein the information about the user includes at least one of state information of the user and operation information of the user.
  15.  The information processing apparatus according to claim 1, wherein the processing unit further performs translation processing for translating the content of the utterance summarized by the summarization processing into another language.
  16.  The information processing apparatus according to claim 15, wherein the processing unit does not perform the translation processing when it determines that a predetermined translation exclusion condition is satisfied.
  17.  The information processing apparatus according to claim 15, wherein the processing unit
      re-translates content translated into another language by the translation processing back into the language before the translation, and
      when a word included in the re-translated content is present in the content of an utterance indicated by voice information acquired after the re-translation, includes the word included in the re-translated content in the summarized content of the utterance.
  18.  The information processing apparatus according to claim 1, wherein the processing unit further performs notification control processing for controlling notification of the summarized content of the utterance.
  19.  An information processing method executed by an information processing apparatus, the method comprising a step of performing summarization processing for summarizing the content of an utterance indicated by voice information based on a user's utterance, on the basis of acquired information indicating weights related to summarization.
  20.  A program for causing a computer to realize a function of performing summarization processing for summarizing the content of an utterance indicated by voice information based on a user's utterance, on the basis of acquired information indicating weights related to summarization.
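 Claim 17 describes a back-translation consistency check: the translated summary is translated back into the source language, and words from the back-translation that the user then repeats in a subsequent utterance are retained in the summarized content. The following sketch illustrates that flow with stub translators; the stubs and the word-overlap test are illustrative assumptions only, not the claimed method's actual translation engine.

```python
# Illustrative back-translation check in the spirit of claim 17.
# `to_target` and `to_source` stand in for a real translation engine.
from typing import Callable, List

def retained_words(summary: str,
                   next_utterance: List[str],
                   to_target: Callable[[str], str],
                   to_source: Callable[[str], str]) -> List[str]:
    translated = to_target(summary)            # translate the summary
    back = set(to_source(translated).split())  # re-translate to the source language
    # Words of the re-translated content that also appear in the utterance
    # acquired after the re-translation are kept in the summarized content.
    return [w for w in next_utterance if w in back]

# Stub engines that pass text through unchanged, for demonstration.
identity = lambda s: s
print(retained_words("go station", ["please", "go", "station"],
                     identity, identity))  # ['go', 'station']
```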
PCT/JP2016/080485 2016-01-25 2016-10-14 Information processing device, information processing method, and program WO2017130474A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP16888059.9A EP3410432A4 (en) 2016-01-25 2016-10-14 Information processing device, information processing method, and program
JP2017563679A JP6841239B2 (en) 2016-01-25 2016-10-14 Information processing equipment, information processing methods, and programs
US16/068,987 US11120063B2 (en) 2016-01-25 2016-10-14 Information processing apparatus and information processing method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016011224 2016-01-25
JP2016-011224 2016-01-25

Publications (1)

Publication Number Publication Date
WO2017130474A1 true WO2017130474A1 (en) 2017-08-03

Family

ID=59397722

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/080485 WO2017130474A1 (en) 2016-01-25 2016-10-14 Information processing device, information processing method, and program

Country Status (4)

Country Link
US (1) US11120063B2 (en)
EP (1) EP3410432A4 (en)
JP (1) JP6841239B2 (en)
WO (1) WO2017130474A1 (en)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019101754A (en) * 2017-12-01 2019-06-24 キヤノン株式会社 Summarization device and method for controlling the same, summarization system, and program
KR102530391B1 (en) * 2018-01-25 2023-05-09 삼성전자주식회사 Application processor including low power voice trigger system with external interrupt, electronic device including the same and method of operating the same
JP7131077B2 (en) * 2018-05-24 2022-09-06 カシオ計算機株式会社 CONVERSATION DEVICE, ROBOT, CONVERSATION DEVICE CONTROL METHOD AND PROGRAM
US11429795B2 (en) * 2020-01-13 2022-08-30 International Business Machines Corporation Machine translation integrated with user analysis
CN112085090A (en) * 2020-09-07 2020-12-15 百度在线网络技术(北京)有限公司 Translation method and device and electronic equipment
KR20230067321A (en) * 2021-11-09 2023-05-16 삼성전자주식회사 Electronic device and controlling method of electronic device


Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9552354B1 (en) * 2003-09-05 2017-01-24 Spoken Traslation Inc. Method and apparatus for cross-lingual communication
US7624093B2 (en) * 2006-01-25 2009-11-24 Fameball, Inc. Method and system for automatic summarization and digest of celebrity news
US7885807B2 (en) * 2006-10-18 2011-02-08 Hierodiction Software Gmbh Text analysis, transliteration and translation method and apparatus for hieroglypic, hieratic, and demotic texts from ancient Egyptian
US8682661B1 (en) * 2010-08-31 2014-03-25 Google Inc. Robust speech recognition
US10235346B2 (en) * 2012-04-06 2019-03-19 Hmbay Patents Llc Method and apparatus for inbound message summarization using message clustering and message placeholders
US20150348538A1 (en) * 2013-03-14 2015-12-03 Aliphcom Speech summary and action item generation
KR20140119841A (en) * 2013-03-27 2014-10-13 한국전자통신연구원 Method for verifying translation by using animation and apparatus thereof
CN106462909B (en) * 2013-12-20 2020-07-10 罗伯特·博世有限公司 System and method for enabling contextually relevant and user-centric presentation of content for conversations
US10409919B2 (en) * 2015-09-28 2019-09-10 Konica Minolta Laboratory U.S.A., Inc. Language translation for display device
US10043517B2 (en) * 2015-12-09 2018-08-07 International Business Machines Corporation Audio-based event interaction analytics
JP6604836B2 (en) * 2015-12-14 2019-11-13 株式会社日立製作所 Dialog text summarization apparatus and method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000089789A (en) * 1998-09-08 2000-03-31 Fujitsu Ltd Voice recognition device and recording medium
JP2006058567A (en) * 2004-08-19 2006-03-02 Ntt Docomo Inc Voice information summarizing system and voice information summarizing method
JP2007156888A (en) * 2005-12-06 2007-06-21 Oki Electric Ind Co Ltd Information presentation system and information presentation program
JP2010256391A (en) * 2009-04-21 2010-11-11 Takeshi Hanamura Voice information processing device
WO2012023450A1 (en) * 2010-08-19 2012-02-23 日本電気株式会社 Text processing system, text processing method, and text processing program

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
OHNO T., ET AL.: "Real-time Captioning based on Simultaneous Summarization of Spoken Monologue", INFORMATION PROCESSING SOCIETY OF JAPAN, SIG NOTES, no. 73, 7 August 2006 (2006-08-07), pages 51-56, XP009507779 *
See also references of EP3410432A4 *
SEIICHI YAMAMOTO: "Present state and future works of spoken language translation technologies", IEICE TECHNICAL REPORT, vol. 100, no. 523, 15 December 2000 (2000-12-15), pages 49-54, XP009507789 *
SHOGO HATA ET AL.: "Sentence Boundary Detection Focused on Confidence Measure of Automatic Speech Recognition", IPSJ SIG NOTES, vol. 2009-SL, no. 20, 15 February 2010 (2010-02-15), pages 1-6, XP009507777 *
TATSUNORI MORI: "A Term Weighting Method based on Information Gain Ratio for Summarizing Documents retrieved by IR Systems", JOURNAL OF NATURAL LANGUAGE PROCESSING, vol. 9, no. 4, 10 July 2002 (2002-07-10), pages 3-32, XP055403101 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019082981A (en) * 2017-10-30 2019-05-30 株式会社テクノリンク Inter-different language communication assisting device and system
WO2020111880A1 (en) * 2018-11-30 2020-06-04 Samsung Electronics Co., Ltd. User authentication method and apparatus
US11443750B2 (en) 2018-11-30 2022-09-13 Samsung Electronics Co., Ltd. User authentication method and apparatus

Also Published As

Publication number Publication date
US20190019511A1 (en) 2019-01-17
JPWO2017130474A1 (en) 2018-11-22
JP6841239B2 (en) 2021-03-10
EP3410432A1 (en) 2018-12-05
US11120063B2 (en) 2021-09-14
EP3410432A4 (en) 2019-01-30

Similar Documents

Publication Publication Date Title
WO2017130474A1 (en) Information processing device, information processing method, and program
US20220230374A1 (en) User interface for generating expressive content
KR102197869B1 (en) Natural assistant interaction
CN110288994B (en) Detecting triggering of a digital assistant
JP6701066B2 (en) Dynamic phrase expansion of language input
US20180330729A1 (en) Text normalization based on a data-driven learning network
KR20230015413A (en) Digital Assistant User Interfaces and Response Modes
JP2023022150A (en) Bidirectional speech translation system, bidirectional speech translation method and program
KR20090129192A (en) Mobile terminal and voice recognition method
CN110992927B (en) Audio generation method, device, computer readable storage medium and computing equipment
KR102193029B1 (en) Display apparatus and method for performing videotelephony using the same
US9558733B1 (en) Audibly indicating secondary content with spoken text
KR101819458B1 (en) Voice recognition apparatus and system
CN110612567A (en) Low latency intelligent automated assistant
TW201510774A (en) Apparatus and method for selecting a control object by voice recognition
CN108628819B (en) Processing method and device for processing
KR102123059B1 (en) User-specific acoustic models
WO2017130483A1 (en) Information processing device, information processing method, and program
US9865250B1 (en) Audibly indicating secondary content with spoken text
WO2019073668A1 (en) Information processing device, information processing method, and program
JP5008248B2 (en) Display processing apparatus, display processing method, display processing program, and recording medium
CN112099721A (en) Digital assistant user interface and response mode
JP2005222316A (en) Conversation support device, conference support system, reception work support system, and program
JP2018072509A (en) Voice reading device, voice reading system, voice reading method and program
KR100777569B1 (en) The speech recognition method and apparatus using multimodal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16888059

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017563679

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2016888059

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2016888059

Country of ref document: EP

Effective date: 20180827