WO2017187674A1 - Information processing apparatus, information processing system, and program
- Publication number
- WO2017187674A1 (PCT/JP2017/001912)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- output
- translation
- notification
- processing apparatus
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/58—Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
Definitions
- the present disclosure relates to an information processing apparatus, an information processing system, and a program.
- Patent Document 1 discloses a technique for recognizing spoken speech, translating it into another language, and outputting the translation result as synthesized sound.
- the present disclosure proposes a new and improved information processing apparatus, information processing system, and program capable of proceeding with conversation more smoothly.
- an information processing apparatus is provided that includes an acquisition unit that acquires status information including information on a status related to translation output, and a notification information generation unit that generates notification information based on the status information.
- an information processing system is provided that comprises an acquisition unit that acquires status information including information on a status related to translation output, and a notification information generation unit that generates notification information based on the status information.
- a program is provided for causing a computer to realize a function of acquiring status information including information on a status related to translation output, and a function of generating notification information based on the status information.
- the translation system makes it possible to more smoothly advance a conversation, for example, by giving a notification based on the translation output status to the speaker who made the utterance.
- FIG. 1 is an explanatory diagram showing an overview of a translation system according to an embodiment of the present disclosure.
- the translation system according to the present embodiment is an information processing system having a speaker side terminal 1 and a listener side terminal 2.
- the speaker U1 and the listener U2 shown in FIG. 1 are users who use different languages. Further, as shown in FIG. 1, the speaker U1 wears the speaker-side terminal 1, and the listener U2 wears the listener-side terminal 2.
- the translation system translates, for example, an utterance by the speaker U1 into a language used by the listener U2, and outputs a translation result (translation output).
- the utterance W1 performed by the speaker U1 in the period T11 is picked up by the speaker side terminal 1, and the translation processing for the utterance W1 is performed.
- the translation output R1 is output as sound from the listener side terminal 2 in the period T12.
- the speaker side terminal 1 outputs a notification to the speaker U1 in accordance with the translation output status of the listener side terminal 2.
- a notification A1 (alert sound) indicating the end of the translation output R1 is output from the speaker-side terminal 1 in a period T13 substantially simultaneously with the end of the translation output R1 (end of the period T12).
- in the example of FIG. 1, the notification A1 and the translation output R1 are output by voice, but the present technology is not limited to this example.
- the notification A1 and the translation output R1 may be displayed and output using icons, dialogs, or the like on the display units of the speaker side terminal 1 and the listener side terminal 2.
- the notification A1 may be output by the vibration function of the speaker side terminal 1.
- FIG. 1 illustrates an example in which the speaker side terminal 1 and the listener side terminal 2 are spectacle-type devices, but the present technology is not limited to this example, and the speaker side terminal 1 and the listener side terminal 2 can be realized by various information processing apparatuses.
- the speaker side terminal 1 and the listener side terminal 2 may be a mobile phone, a smartphone, a hearing aid, a tablet PC (Personal Computer), a collar type device, a headset, or the like.
- the speaker side terminal 1 outputs a notification to the speaker U1 based on the status of the translation output, so that the speaker U1 can grasp the situation related to the translation output and advance the conversation more smoothly.
- the configuration of the present embodiment having such effects will be described in detail.
- FIG. 2 is a block diagram illustrating a configuration example of the speaker side terminal 1 according to the present embodiment.
- the speaker side terminal 1 is an information processing apparatus including a control unit 10, a communication unit 14, a sensor unit 16, and an output unit 18.
- the control unit 10 controls each component of the speaker side terminal 1.
- the control unit 10 also functions as a speech recognition unit 101, a translation processing unit 102, a communication control unit 103, an acquisition unit 104, a notification information generation unit 105, and an output control unit 106, as shown in FIG.
- the speech recognition unit 101 recognizes a user's utterance (speech) collected by a microphone included in the sensor unit 16 described later, converts the speech into a character string, and acquires an utterance text.
- the speech recognition unit 101 may identify the person who is speaking based on the characteristics of the voice, or may estimate the direction of the sound source. Further, the speech recognition unit 101 may determine whether or not the collected voice is an utterance by the user.
- the speech recognition unit 101 may provide the acquisition unit 104 with information on the status related to speech recognition (for example, whether or not speech recognition is currently being performed, the progress status of the speech recognition, the end time, etc.).
- the translation processing unit 102 performs a translation process for translating the utterance text acquired by the speech recognition unit 101 into a translation destination language (for example, a language used by a listener who is a user of the listener-side terminal 2).
- the translation destination language for the translation processing by the translation processing unit 102 may be acquired from the listener side terminal 2 via the communication unit 14, may be manually set by the user, or may be set in advance, for example.
- the translation processing unit 102 may provide the acquisition unit 104 with information on the status related to the translation processing (for example, whether the translation processing is currently being performed, the progress status of the translation processing, the end time, etc.).
- the communication control unit 103 controls communication by the communication unit 14. For example, the communication control unit 103 may control the communication unit 14 to transmit information on the translation result (translation result information) by the translation processing unit 102 to the listener side terminal 2. Further, the communication control unit 103 may control the communication unit 14 to receive status information related to the translation output from the listener side terminal 2.
- the acquisition unit 104 acquires status information indicating the current status.
- the situation information acquired by the acquisition unit 104 may include, for example, information on the situation related to the translation output by the listener side terminal 2 received via the communication unit 14.
- the situation information acquired by the acquisition unit 104 may further include information on a situation related to speech recognition provided from the speech recognition unit 101, or information on a situation related to translation processing provided from the translation processing unit 102. According to such a configuration, for example, as described later, the situation related to the translation can be notified in more detail, and the speaker can grasp the situation related to the translation of his or her utterance in more detail.
- the notification information generation unit 105 generates notification information for notifying the user (speaker) of the speaker side terminal 1 based on the situation information acquired by the acquisition unit 104.
- the notification information generated by the notification information generation unit 105 may include, for example, a notification method (alert sound output, display output, etc.), information on the time or period during which the notification is output, and the like.
- the notification information generated by the notification information generation unit 105 may include, for example, information indicating the end time of translation output by the listener side terminal 2. Further, the notification information generated by the notification information generation unit 105 may include, for example, information for notifying the end of translation output by the listener side terminal 2. According to such a configuration, the speaker can grasp that the translation output related to his or her utterance has been completed, which can be used, for example, to determine whether or not to continue speaking.
- the notification information generated by the notification information generation unit 105 may include information indicating a progress rate, for example.
- the progress rate may be, for example, a progress rate related to translation output, or may be a progress rate related to a process that combines translation processing or voice recognition in addition to translation output. According to such a configuration, for example, as described later, it becomes possible to notify the situation related to the translation in more detail, and the speaker can grasp the situation related to the translation of his utterance in more detail.
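As an illustrative sketch (not part of the disclosed embodiment), a progress rate that combines speech recognition, translation processing, and translation output could be computed as a weighted average of per-stage progress; the stage names and weights below are assumptions:

```python
# Hypothetical sketch: combine per-stage progress values into one overall
# progress rate for the notification information. Stage names and weights
# are illustrative assumptions, not taken from the disclosure.

STAGE_WEIGHTS = {
    "speech_recognition": 0.2,
    "translation": 0.3,
    "translation_output": 0.5,
}

def overall_progress(stage_progress: dict) -> float:
    """Weighted average of per-stage progress values, each in [0.0, 1.0]."""
    total = sum(STAGE_WEIGHTS.values())
    return sum(
        STAGE_WEIGHTS[stage] * stage_progress.get(stage, 0.0)
        for stage in STAGE_WEIGHTS
    ) / total

# Example: recognition and translation finished, output halfway done.
progress = overall_progress({
    "speech_recognition": 1.0,
    "translation": 1.0,
    "translation_output": 0.5,
})
print(round(progress, 2))  # 0.75
```

A progress rate over the translation output alone corresponds to using a single stage with weight 1.0.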
- the output control unit 106 controls output from the output unit 18.
- the output control unit 106 controls the output unit 18 to output a notification to the user (speaker) based on the notification information generated by the notification information generation unit 105.
- the output control unit 106 may output a notification indicating the end of translation output by the listener side terminal 2.
- according to such a configuration, the speaker can grasp that the translation output related to his or her utterance has been completed, which can be used, for example, to determine whether or not to continue speaking.
- the output control unit 106 may output a notification indicating the remaining time until the end of the translation output by the listener side terminal 2. According to such a configuration, the speaker can grasp the situation related to the translation of his or her utterance in more detail. An example of such a notification will be described later with reference to FIG. 5.
- the output control unit 106 may output a notification indicating the progress rate.
- the speaker can grasp the situation related to the translation of his or her utterance in more detail. An example of such a notification will be described later with reference to FIGS. 6 and 7.
- the communication unit 14 is a communication interface that mediates communication with other devices.
- the communication unit 14 supports an arbitrary wireless communication protocol or wired communication protocol, and establishes a communication connection with another device via a communication network (not shown), for example.
- the speaker side terminal 1 can transmit the translation result information to the listener side terminal 2, and can receive the situation information related to the translation output from the listener side terminal 2.
- the sensor unit 16 is a sensor device that senses information around the speaker-side terminal 1.
- the sensor unit 16 may include one or more microphones that collect ambient sounds.
- the sensor unit 16 may include a camera that acquires an image, a distance sensor, a human sensor, a biological information sensor, an acceleration sensor, a gyro sensor, and the like.
- the output unit 18 is controlled by the output control unit 106 and outputs a notification to the user (speaker) of the speaker side terminal 1.
- the output unit 18 may include, for example, a speaker capable of outputting sound, a display or projector capable of displaying output, a light emitting device (for example, LED) capable of outputting light, a vibration device capable of outputting vibration, and the like.
- FIG. 3 is a block diagram illustrating a configuration example of the listener side terminal 2 according to the present embodiment.
- the listener side terminal 2 is an information processing apparatus including a control unit 20, a communication unit 24, and an output unit 28.
- the control unit 20 controls each component of the listener side terminal 2.
- the control unit 20 also functions as a communication control unit 201 and an output control unit 202 as shown in FIG.
- the communication control unit 201 controls communication by the communication unit 24.
- the communication control unit 201 may control the communication unit 24 to receive translation result information from the speaker side terminal 1. Further, the communication control unit 201 may control the communication unit 24 to transmit the situation information (an example of the situation information) related to the translation output to the speaker side terminal 1.
- the output control unit 202 controls output from the output unit 28.
- the output control unit 202 controls the output unit 28 to output the translation result (translation output) based on the translation result information received from the speaker side terminal 1.
- the output control unit 202 may display and output the translation result as text, or may output the translation result by synthesized speech.
- the output control unit 202 provides the communication control unit 201 with information indicating the status of the translation output by the output unit 28 (for example, whether or not translation output is currently being performed, the progress status of the translation output, the end time, etc.).
- the communication unit 24 is a communication interface that mediates communication with other devices.
- the communication unit 24 supports an arbitrary wireless communication protocol or a wired communication protocol, and establishes a communication connection with another device via a communication network (not shown), for example.
- the listener-side terminal 2 can thereby transmit status information related to the translation output to the speaker-side terminal 1, and receive translation result information from the speaker-side terminal 1.
- the output unit 28 is controlled by the output control unit 202 to output the translation result to the user (listener) of the listener side terminal 2.
- the output unit 28 may include, for example, a speaker capable of outputting sound, a display or projector capable of display output, or a light emitting device (for example, an LED) capable of emitting light.
- the listener side terminal 2 may have each function of the control unit 10 described above, or the speaker side terminal 1 may have each function of the control unit 20 described above.
- the listener side terminal 2 may have the functions of the voice recognition unit 101 or the translation processing unit 102 described above.
- another information processing apparatus connected via the communication unit 14 or the communication unit 24 may have the functions of the control unit 10 and the control unit 20 described above.
- for example, a server (not shown; an example of an information processing apparatus) may have the functions of the speech recognition unit 101, the translation processing unit 102, the acquisition unit 104, and the notification information generation unit 105, and the notification information generated by the server may be provided to the speaker side terminal 1.
- the translation system according to the present embodiment may be realized by including a plurality of user terminals (information processing apparatuses) having both functions of the speaker side terminal 1 and the listener side terminal 2.
- two users may wear the user terminals, respectively, so that a two-way conversation may be performed via the translation system according to the present embodiment.
- both users participating in the conversation can speak more smoothly while confirming the translation output status to the other party, thereby enabling a more interactive conversation.
- Example of operation: Next, an operation example of the present embodiment will be described. First, the processing flow according to the present embodiment will be described with reference to FIG. 4, and examples of notifications output by the present embodiment will be described with reference to the subsequent figures.
- FIG. 4 is a sequence diagram illustrating an example of a processing flow according to the present embodiment.
- a connection is established between the speaker side terminal 1 and the listener side terminal 2 (S100, S102).
- the speaker side terminal 1 requests the information of the translation destination language from the listener side terminal 2 (S104), and the information of the translation destination language is transmitted from the listener side terminal 2 to the speaker side terminal 1 (S106).
- the sensor unit 16 of the speaker side terminal 1 acquires the utterance (voice) by the user (speaker) of the speaker side terminal 1 (S108), and the speech recognition unit 101 of the speaker side terminal 1 recognizes the utterance, converts it into a character string, and acquires an utterance text (S110). Subsequently, the translation processing unit 102 translates the utterance text into the translation destination language (S112). The translation result information is then transmitted to the listener side terminal 2 (S114), and the listener side terminal 2 outputs the translation result (S116).
- the listener side terminal 2 transmits the status information related to the translation output to the speaker side terminal 1 while performing the translation output of step S116 (S118).
- in FIG. 4, the transmission of the status information related to the translation output is shown only once, but the status information may be transmitted a plurality of times while the translation output is being performed.
- the notification information generation unit 105 of the speaker side terminal 1 generates notification information for notifying the user (speaker) of the speaker side terminal 1 based on the status information acquired by the acquisition unit 104 (S120). Subsequently, the output control unit 106 of the speaker side terminal 1 controls the output unit 18 to output a notification based on the notification information (S122).
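The S100 to S122 flow above can be sketched as plain function calls between two hypothetical terminal classes; all class and method names below are illustrative assumptions, with speech recognition and translation stubbed out:

```python
# Illustrative sketch of the sequence-diagram flow, modeled in-process
# instead of over a network. Class and method names are assumptions.

class ListenerTerminal:
    def __init__(self, language: str):
        self.language = language

    def get_target_language(self) -> str:  # S104/S106: language exchange
        return self.language

    def output_translation(self, text: str) -> dict:  # S116: translation output
        print(f"[listener] {text}")
        # S118: report the translation output status back to the speaker side.
        return {"translation_output": "done", "text_length": len(text)}

class SpeakerTerminal:
    def __init__(self, listener: ListenerTerminal):
        self.listener = listener

    def handle_utterance(self, speech: str) -> str:
        target = self.listener.get_target_language()      # S104/S106
        utterance_text = speech                           # S110 (recognition stubbed)
        translated = f"[{target}] {utterance_text}"       # S112 (translation stubbed)
        status = self.listener.output_translation(translated)  # S114-S118
        return self.make_notification(status)             # S120/S122

    def make_notification(self, status: dict) -> str:
        if status["translation_output"] == "done":
            return "notification: translation output finished"
        return "notification: translation output in progress"

listener = ListenerTerminal("en")
speaker = SpeakerTerminal(listener)
print(speaker.handle_utterance("konnichiwa"))
```

In the actual system the two objects would live on separate devices and exchange these calls as messages over the communication units.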
- speech recognition and translation processing may be performed sequentially on the parts of the utterance acquired while the utterance is still in progress.
- the flow of processing may differ depending on the configuration.
- for example, when the listener-side terminal 2 has the functions of the speech recognition unit 101 and the translation processing unit 102, the processes in steps S110 and S112 may be performed by the listener-side terminal 2.
- notification variations according to the present embodiment will be described.
- the translation system according to the present embodiment is not limited to the example described with reference to FIG. 1 and may output various notifications.
- another example of the notification according to the present embodiment will be described with reference to FIGS.
- the notifications described below are output in accordance with the notification information generated by the notification information generation unit 105.
- FIG. 5 is an explanatory diagram illustrating another example of the notification according to the present embodiment.
- the utterance W2 performed by the speaker U1 in the period T21 is picked up by the speaker side terminal 1, translation processing is performed on the utterance W2, and the translation output R2 is output as sound from the listener side terminal 2 in the period T22.
- the speaker side terminal 1 outputs a notification indicating the remaining time until the end of the translation output to the speaker U1 according to the state of the translation output by the listener side terminal 2.
- a notification A2 indicating the remaining time until the end of the translation output R2 (end of the period T22) is output from the speaker side terminal 1 in the period T23.
- the speaker U1 can grasp the situation related to the translation output of his or her utterance in more detail.
- FIG. 6 is an explanatory diagram illustrating another example of the notification according to the present embodiment.
- the period T31, utterance W3, period T32, and translation output R3 shown in FIG. 6 are the same as the period T21, utterance W2, period T22, and translation output R2 described with reference to FIG.
- the output control unit 106 outputs a notification indicating the progress rate.
- a notification (notification screen G3) indicating the progress rate related to the translation output is displayed on the speaker side terminal 1 in the period T33 corresponding to (for example, the same as) the period T32.
- the speaker can grasp the situation related to the translation output of his or her utterance in more detail.
- FIG. 7 is an explanatory diagram showing an example of a notification screen showing the progress rate displayed on the speaker side terminal 1.
- the notification screen indicating the progress rate may include a bar-shaped progress bar P312 indicating the progress rate or text P314 indicating the progress rate, as in the notification screen G31 illustrated in FIG.
- the notification screen indicating the progress rate may include a circular progress bar P322 indicating the progress rate or a text P324 indicating the progress rate, as in the notification screen G32 illustrated in FIG.
- the notification screen indicating the progress rate displayed on the speaker side terminal 1 is not limited to the examples shown in FIGS. 6 and 7, and may take various forms.
- the notification indicating the progress rate may be output by voice.
- for example, the numerical value of the progress rate may be output by synthesized speech, or the notification indicating the progress rate may be output as a beep sound that gradually rises.
- FIG. 8 is an explanatory diagram showing an example in which the translation result is output not only to the listener but also to the speaker.
- the period T41, utterance W4, period T42, and translation output R42 shown in FIG. 8 are the same as the period T21, utterance W2, period T22, and translation output R2 described with reference to FIG.
- the speaker side terminal 1 outputs the translation result by the translation processing unit 102.
- the speaker side terminal 1 outputs the translation output R43 by voice in a period T43 corresponding to (for example, the same as) the period T42.
- the speaker U1 can grasp the status of the translation output to the listener U2, and can confirm what kind of translation output has actually been performed.
- FIG. 9 is an explanatory diagram illustrating an example in which the retranslation result is output to the speaker.
- the period T51, the utterance W5, the period T52, and the translation output R52 illustrated in FIG. 9 are the same as the period T21, the utterance W2, the period T22, and the translation output R2 described with reference to FIG.
- the speaker side terminal 1 outputs the retranslation result obtained by retranslating the translation result by the translation processing unit 102 into the language used by the speaker.
- the speaker side terminal 1 outputs the translation output R53 by voice in a period T53 corresponding to (for example, the same as) the period T52.
- the retranslation processing may be performed by, for example, the translation processing unit 102 described with reference to FIG.
- in addition, the voice obtained by collecting the utterance W5 may be output.
- the acquisition unit 104 may acquire status information related to translation output from a plurality of other devices (for example, a plurality of listener side terminals 2) that perform translation output. Further, the notification information generation unit 105 may generate notification information for each of the plurality of other devices based on the status information.
- the output control unit 106 may cause a notification to be output based on the output status of one of the plurality of other devices (for example, the device with the latest translation output end time), or may cause a notification to be output for each of the plurality of other devices based on the output status of each device.
- when outputting a notification for each of the plurality of other devices based on the output status of each device, the output control unit 106 may use, for example, 3D sound output technology to distinguish the notifications related to the respective devices and output them as voice.
- the output control unit 106 may distinguish and output the notifications related to each device (each listener) based on the sound presentation pattern and pitch.
- the sound presentation pattern may be, for example, the length or number of alert sounds, or a combination thereof.
- for example, alert sounds with different presentation patterns, such as a single short beep, a double beep, and a long beep, may be output to distinguish the notifications related to the respective devices.
- the above-described 3D sound output technology may be combined so that notifications relating to the respective devices may be distinguished and output as audio.
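One possible sketch of assigning distinguishable sound presentation patterns to multiple listener-side devices follows; the pattern strings and the cycling scheme are illustrative assumptions, not part of the disclosure:

```python
# Illustrative sketch: map each listener-side device to a distinct alert
# pattern so the speaker can tell the notifications apart. The pattern
# names and cycling behavior are assumptions.

PATTERNS = ["pip", "pip-pip", "peep"]

def assign_patterns(device_ids: list) -> dict:
    """Map each device to a sound presentation pattern, cycling if needed."""
    return {dev: PATTERNS[i % len(PATTERNS)] for i, dev in enumerate(device_ids)}

print(assign_patterns(["2A", "2B", "2C"]))
# {'2A': 'pip', '2B': 'pip-pip', '2C': 'peep'}
```

Pitch, 3D sound position, or AR display position could be assigned per device in the same way.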
- the output control unit 106 may use AR (Augmented Reality) technology to display the notifications related to the respective devices in a distinguishable manner.
- FIG. 10 is an explanatory diagram for explaining display output using the AR technique.
- the listeners U21 to U23 shown in FIG. 10 are visually recognized as real objects by the user (speaker) of the speaker side terminal 1 through the output unit 18, which is a transmissive display. Also, as shown in FIG. 10, the listeners U21 to U23 have the listener side terminals 2A to 2C, respectively.
- the output unit 18 displays progress bars P21 to P23, which indicate the progress rate of the translation output related to each of the listener side terminals 2A to 2C, at positions corresponding to the listeners U21 to U23 who are the users of the listener side terminals 2A to 2C (for example, above their heads).
- when the output control unit 106 outputs a notification based on the output status of one of the plurality of other devices (for example, the device with the latest translation output end time), the output unit 18 may display a single progress bar.
- the notification information generation unit 105 may generate notification information based further on the contents of the translation output.
- hereinafter, an example in which notification information is generated based on the content of the translation output and a notification is output will be described.
- the content of the translation output described below may be analyzed by, for example, processing based on the translation result text, the utterance text before translation, or the speech itself.
- the notification information generation unit 105 may determine whether or not the content of the translation output is repeated, and may generate notification information according to the determination result.
- notification information may be generated such that the translation result is output from the output unit 18 when the content of the translation output is not repeated, and the alert sound is output from the output unit 18 when the content is repeated.
- whether or not the content of the translation output is repeated may be determined by comparing the translation results or by comparing the utterance content.
- the notification information generation unit 105 of the speaker side terminal 1 may generate the notification information so that the notification varies depending on the amount of content of the translation output (for example, the amount of translation result text). For example, when the content amount of the translation output is larger than a predetermined threshold, notification information may be generated so that a notification indicating the remaining time described with reference to FIG. 5, or a notification indicating the progress rate described with reference to FIGS. 6 and 7, is output. Also, notification information may be generated so that a notification indicating the end of translation output is output when the amount of content of the translation output is smaller than the predetermined threshold.
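A minimal sketch of selecting the notification type from the amount of translation result text; the character-count threshold below is a hypothetical value, not one given in the disclosure:

```python
# Illustrative sketch: pick a notification style from the amount of
# translated text. The threshold value is a hypothetical assumption.

LENGTH_THRESHOLD = 40  # characters; illustrative value

def choose_notification(translation_text: str) -> str:
    if len(translation_text) > LENGTH_THRESHOLD:
        # Long output: keep the speaker informed while it plays back
        # (remaining-time or progress-rate notification).
        return "progress"
    # Short output: a single end-of-output alert is enough.
    return "end_alert"

print(choose_notification("Hi."))        # end_alert
print(choose_notification("x" * 100))    # progress
```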
- the notification information generation unit 105 may generate notification information based on an important part detected from the content of the translation output. For example, notification information may be generated such that an alert sound (notification) is output when output of all parts detected as important parts of the content of the translation output is completed.
- FIG. 11 is a table showing an example of weight types and vocabularies for which the weights are increased.
- the vocabulary weight may be set in accordance with, for example, user context information (for example, schedule content, behavior, location, who the user is, the type of application used, etc.).
- FIG. 12 is a table showing the types of weights that are increased according to the schedule contents.
- FIG. 13 is a table showing the types of weights that are increased according to actions. Further, when the map application is activated in the speaker side terminal 1, the weight of the vocabulary belonging to types such as “time”, “location”, “direction”, and the like may be increased.
- the user context information may be recognized based on user input, or may be recognized based on information acquired by the sensor unit 16 or the like.
- the vocabulary weight may be set based on voice information, for example, based on sound pressure, frequency, pitch, speech rate, the number of morae, accent location, and the like. For example, when the sound pressure is greater than or equal to a predetermined threshold (for example, 72 dB), the weight of vocabulary belonging to types such as "anger" and "joy" may be increased. When the sound pressure is equal to or lower than a predetermined threshold (for example, 54 dB), the weight of vocabulary belonging to types such as "sadness", "discomfort", "pain", and "anxiety" may be increased. Also, emotions such as "anger", "joy", and "sadness" may be estimated based on the number of morae, the location of the accent, and the like, and the weight of vocabulary belonging to the estimated emotion type may be increased.
- the vocabulary weight may be set based on the content of the utterance or translation output. For example, the weight may be increased according to the part of speech included in the utterance text or the translation result text. For example, when the translation destination language is Japanese, the verb weight may be increased, and when the translation destination language is English, the noun weight may be increased.
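The weighting heuristics described above might be sketched as follows; the dB thresholds and type labels follow the examples in the text, while the function name and the concrete weight values are illustrative assumptions:

```python
# Illustrative sketch of the vocabulary-weighting heuristics described in
# the text. Thresholds (72 dB / 54 dB) and type labels come from the
# examples above; the function name and multiplier values are hypothetical.

def vocabulary_weights(sound_pressure_db, target_language):
    """Return a dict mapping vocabulary type -> weight multiplier."""
    weights = {}
    # Sound-pressure-based emotion weighting
    if sound_pressure_db >= 72:
        for t in ("anger", "joy"):
            weights[t] = weights.get(t, 1.0) * 2.0
    elif sound_pressure_db <= 54:
        for t in ("sadness", "discomfort", "pain", "anxiety"):
            weights[t] = weights.get(t, 1.0) * 2.0
    # Part-of-speech weighting by translation destination language
    if target_language == "ja":
        weights["verb"] = weights.get("verb", 1.0) * 1.5
    elif target_language == "en":
        weights["noun"] = weights.get("noun", 1.0) * 1.5
    return weights

print(vocabulary_weights(75, "en"))
# {'anger': 2.0, 'joy': 2.0, 'noun': 1.5}
```

A real system would further combine these multipliers with the context-based weights of FIGS. 11 to 13; multiplication is just one simple way to compose independent signals.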
- In this way, the speaker can grasp the status of the translation output for important parts, and the conversation can proceed more smoothly.
- the notification information generation unit 105 may generate notification information further based on the speaker user information related to the translation output.
- For example, the notification information may be generated based on the speaker's past utterance tendencies; for instance, it may be generated such that the notification is output based on the average duration of translation output for the speaker's past utterances.
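A minimal sketch of this idea, assuming a simple running average over recorded durations (the class name and the fallback default are hypothetical):

```python
# Sketch: estimate when to issue a notification from the average
# translation-output duration of the speaker's past utterances.
# Names and the default fallback value are illustrative.

class SpeakerHistory:
    def __init__(self):
        self.durations = []  # seconds taken by past translation outputs

    def record(self, seconds):
        """Store the measured duration of one finished translation output."""
        self.durations.append(seconds)

    def expected_duration(self, default=5.0):
        """Average past duration; fall back to a default when no history exists."""
        if not self.durations:
            return default
        return sum(self.durations) / len(self.durations)

h = SpeakerHistory()
for d in (4.0, 6.0, 5.0):
    h.record(d)
print(h.expected_duration())  # 5.0
```

The expected duration could then be used, for example, to schedule the notification shortly before the translation output is predicted to end.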
- FIG. 14 is a block diagram showing a configuration example of the listener side terminal 2-2 when the speaker does not have a terminal.
- the listener side terminal 2-2 is an information processing apparatus including a control unit 21, a sensor unit 26, and an output unit 29.
- the control unit 21 controls each component of the listener side terminal 2-2. Further, as illustrated in FIG. 14, the control unit 21 also functions as a voice recognition unit 211, a translation processing unit 212, an output control unit 213, an acquisition unit 214, and a notification information generation unit 215.
- The configurations of the speech recognition unit 211, the translation processing unit 212, the acquisition unit 214, the notification information generation unit 215, the sensor unit 26, and the output unit 29 are the same as those of the speech recognition unit 101, the translation processing unit 102, the acquisition unit 104, the notification information generation unit 105, the sensor unit 16, and the output unit 18 described with reference to FIG. 2.
- The output control unit 213 illustrated in FIG. 14 has both the notification output function of the output control unit 106 described with reference to FIG. 2 and the translation output function of the output control unit 202 described with reference to FIG. 3.
- the output control unit 213 may control the output unit 29 to output a notification by light emission output, alert sound output, or the like.
- With this configuration, even when the speaker has no terminal, the listener side terminal 2-2 can output both the translation output and a notification so that the speaker can grasp the translation status.
- FIG. 15 is a block diagram illustrating an example of a hardware configuration of the information processing apparatus according to the present embodiment.
- The information processing apparatus 900 illustrated in FIG. 15 can realize, for example, the speaker side terminal 1, the listener side terminal 2, or the listener side terminal 2-2 illustrated in FIGS. 2, 3, and 14, respectively.
- Information processing by the speaker side terminal 1, the listener side terminal 2, or the listener side terminal 2-2 according to the present embodiment is realized by cooperation of software and hardware described below.
- the information processing apparatus 900 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, a RAM (Random Access Memory) 903, and a host bus 904a.
- the information processing apparatus 900 includes a bridge 904, an external bus 904b, an interface 905, an input device 906, an output device 907, a storage device 908, a drive 909, a connection port 911, a communication device 913, and a sensor 915.
- the information processing apparatus 900 may include a processing circuit such as a DSP or an ASIC in place of or in addition to the CPU 901.
- the CPU 901 functions as an arithmetic processing unit and a control unit, and controls the overall operation in the information processing apparatus 900 according to various programs. Further, the CPU 901 may be a microprocessor.
- the ROM 902 stores programs used by the CPU 901, calculation parameters, and the like.
- the RAM 903 temporarily stores programs used in the execution of the CPU 901, parameters that change as appropriate during the execution, and the like.
- CPU901 can form the control part 10 shown in FIG. 2, the control part 20 shown in FIG. 3, and the control part 21 shown in FIG. 14, for example.
- the CPU 901, ROM 902, and RAM 903 are connected to each other by a host bus 904a including a CPU bus.
- The host bus 904a is connected to an external bus 904b such as a PCI (Peripheral Component Interconnect/Interface) bus via the bridge 904.
- the host bus 904a, the bridge 904, and the external bus 904b do not necessarily have to be configured separately, and these functions may be mounted on one bus.
- The input device 906 is realized by devices through which the user inputs information, such as a mouse, a keyboard, a touch panel, buttons, a microphone, switches, and levers.
- the input device 906 may be, for example, a remote control device using infrared rays or other radio waves, or may be an external connection device such as a mobile phone or a PDA that supports the operation of the information processing device 900.
- the input device 906 may include, for example, an input control circuit that generates an input signal based on information input by the user using the above-described input means and outputs the input signal to the CPU 901.
- a user of the information processing apparatus 900 can input various data and instruct a processing operation to the information processing apparatus 900 by operating the input device 906.
- The output device 907 is formed of a device that can notify the user of the acquired information visually or audibly. Examples of such devices include display devices such as CRT display devices, liquid crystal display devices, plasma display devices, EL display devices, and lamps; audio output devices such as speakers and headphones; and printer devices.
- the output device 907 outputs results obtained by various processes performed by the information processing device 900.
- the display device visually displays results obtained by various processes performed by the information processing device 900 in various formats such as text, images, tables, and graphs.
- the audio output device converts an audio signal composed of reproduced audio data, acoustic data, and the like into an analog signal and outputs it aurally.
- The output device 907 can form, for example, the output unit 18 shown in FIG. 2, the output unit 28 shown in FIG. 3, and the output unit 29 shown in FIG. 14.
- the storage device 908 is a data storage device formed as an example of a storage unit of the information processing device 900.
- the storage apparatus 908 is realized by, for example, a magnetic storage device such as an HDD, a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
- the storage device 908 may include a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, a deletion device that deletes data recorded on the storage medium, and the like.
- the storage device 908 stores programs executed by the CPU 901, various data, various data acquired from the outside, and the like.
- the drive 909 is a storage medium reader / writer, and is built in or externally attached to the information processing apparatus 900.
- the drive 909 reads information recorded on a removable storage medium such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and outputs the information to the RAM 903.
- the drive 909 can also write information to a removable storage medium.
- connection port 911 is an interface connected to an external device, and is a connection port with an external device capable of transmitting data by USB (Universal Serial Bus), for example.
- the communication device 913 is a communication interface formed by a communication device or the like for connecting to the network 920, for example.
- the communication device 913 is, for example, a communication card for wired or wireless LAN (Local Area Network), LTE (Long Term Evolution), Bluetooth (registered trademark), or WUSB (Wireless USB).
- the communication device 913 may be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), a modem for various communication, or the like.
- the communication device 913 can transmit and receive signals and the like according to a predetermined protocol such as TCP / IP, for example, with the Internet and other communication devices.
- the communication device 913 can form, for example, the communication unit 14 shown in FIG. 2 and the communication unit 24 shown in FIG.
- the network 920 is a wired or wireless transmission path for information transmitted from a device connected to the network 920.
- the network 920 may include a public line network such as the Internet, a telephone line network, a satellite communication network, various LANs including Ethernet (registered trademark), a WAN (Wide Area Network), and the like.
- the network 920 may include a dedicated line network such as an IP-VPN (Internet Protocol-Virtual Private Network).
- the sensor 915 is various sensors such as an acceleration sensor, a gyro sensor, a geomagnetic sensor, an optical sensor, a sound sensor, a distance measuring sensor, and a force sensor.
- the sensor 915 acquires information on the state of the information processing apparatus 900 itself, such as the posture and movement speed of the information processing apparatus 900, and information on the surrounding environment of the information processing apparatus 900, such as brightness and noise around the information processing apparatus 900.
- Sensor 915 may also include a GPS sensor that receives GPS signals and measures the latitude, longitude, and altitude of the device.
- each of the above components may be realized using a general-purpose member, or may be realized by hardware specialized for the function of each component. Therefore, it is possible to change the hardware configuration to be used as appropriate according to the technical level at the time of carrying out this embodiment.
- a computer program for realizing each function of the information processing apparatus 900 according to the present embodiment as described above can be produced and mounted on a PC or the like.
- a computer-readable recording medium storing such a computer program can be provided.
- the recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like.
- the above computer program may be distributed via a network, for example, without using a recording medium.
- The present technology is not limited to such an example. For example, it may also be applied to unidirectional speech, such as when a single speaker gives a lecture to a large number of listeners (an audience).
- each step in the above embodiment does not necessarily have to be processed in time series in the order described as a sequence diagram.
- each step in the processing of the above embodiment may be processed in an order different from the order described as the sequence diagram or may be processed in parallel.
- (6) The information processing apparatus according to any one of (1) to (5), wherein the acquisition unit acquires the situation information from another apparatus that performs the translation output.
- (7) The information processing apparatus according to (6), wherein the acquisition unit acquires the situation information from a plurality of other apparatuses that perform the translation output, and the notification information generation unit generates the notification information for each of the plurality of other apparatuses based on the situation information.
- (8) The information processing apparatus according to any one of (1) to (7), wherein the notification information generation unit generates the notification information further based on content of the translation output.
- (9) The information processing apparatus according to any one of (1) to (8), wherein the notification information generation unit generates the notification information further based on user information of a speaker related to the translation output.
- (14) An information processing system comprising: an acquisition unit that acquires situation information including information on a situation relating to translation output; and a notification information generation unit that generates notification information based on the situation information.
- (15) A program for causing a computer to realize: a function of acquiring situation information including information on a situation relating to translation output; and a function of generating notification information based on the situation information.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Machine Translation (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
<<1. Overview>>
<<2. Configuration examples>>
<2-1. Speaker side terminal>
<2-2. Listener side terminal>
<2-3. Supplement>
<<3. Operation example>>
<3-1. Flow of processing>
<3-2. Variations of notifications>
<<4. Modifications>>
<4-1. Modification 1>
<4-2. Modification 2>
<4-3. Modification 3>
<4-4. Modification 4>
<4-5. Modification 5>
<<5. Hardware configuration example>>
<<6. Conclusion>>
First, an overview of an embodiment of the present disclosure will be described with reference to FIG. 1. The translation system (information processing system) according to the present embodiment issues a notification based on the translation output status to the speaker who made an utterance, thereby making it possible, for example, to advance the conversation more smoothly.
The overview of the translation system according to an embodiment of the present disclosure has been described above. Next, configuration examples of the speaker side terminal 1 and the listener side terminal 2 included in the translation system according to the present embodiment will be described in order with reference to FIGS. 2 and 3.
FIG. 2 is a block diagram showing a configuration example of the speaker side terminal 1 according to the present embodiment. As shown in FIG. 2, the speaker side terminal 1 is an information processing apparatus including a control unit 10, a communication unit 14, a sensor unit 16, and an output unit 18.
The configuration example of the speaker side terminal 1 according to the present embodiment has been described above. Next, a configuration example of the listener side terminal 2 according to the present embodiment will be described. FIG. 3 is a block diagram showing a configuration example of the listener side terminal 2 according to the present embodiment. As shown in FIG. 3, the listener side terminal 2 is an information processing apparatus including a control unit 20, a communication unit 24, and an output unit 28.
The configuration examples of the speaker side terminal 1 and the listener side terminal 2 included in the translation system according to the present embodiment have been described above; however, the above configurations are examples, and the present embodiment is not limited thereto.
Next, an operation example of the present embodiment will be described with reference to FIGS. 4 to 7. First, the flow of processing according to the present embodiment will be described with reference to FIG. 4, and then examples of notifications output by the present embodiment will be described with reference to FIGS. 5 to 7.
FIG. 4 is a sequence diagram showing an example of the flow of processing according to the present embodiment. As shown in FIG. 4, first, a connection is established between the speaker side terminal 1 and the listener side terminal 2 (S100, S102). Subsequently, the speaker side terminal 1 requests information on the translation destination language from the listener side terminal 2 (S104), and the information on the translation destination language is transmitted from the listener side terminal 2 to the speaker side terminal 1 (S106).
Next, variations of the notification according to the present embodiment will be described. The translation system according to the present embodiment is not limited to the example described with reference to FIG. 1 and may output a variety of notifications. Other examples of notifications according to the present embodiment will be described below with reference to FIGS. 5 to 7. The notifications described below may differ depending on, for example, the notification information generated by the notification information generation unit 105.
FIG. 5 is an explanatory diagram showing another example of the notification according to the present embodiment. In the example shown in FIG. 5, the utterance W2 made by the speaker U1 during a period T21 is picked up by the speaker side terminal 1, and after translation processing is performed on the utterance W2, the translation output R2 is output as speech from the listener side terminal 2 during a period T22.
FIG. 6 is an explanatory diagram showing another example of the notification according to the present embodiment. The period T31, utterance W3, period T32, and translation output R3 shown in FIG. 6 are the same as the period T21, utterance W2, period T22, and translation output R2 described with reference to FIG. 5, and their description is therefore omitted.
An embodiment of the present disclosure has been described above. Several modifications of the embodiment of the present disclosure will be described below. Each of the modifications described below may be applied to the embodiment of the present disclosure alone or in combination. Further, each modification may be applied in place of the configuration described in the embodiment of the present disclosure, or in addition to it.
In the above, an example in which a notification based on the status of the translation output is output to the speaker has been described, but the present technology is not limited to this example. For example, instead of or in addition to the notification based on the status of the translation output, the translation result may be output to the speaker.
In the above, the case in which an embodiment of the present disclosure is applied to a conversation between two users has been described as an example, but the present technology is not limited to this example. For example, the present technology may be applied when there are three or more users who use different languages, and the number of languages used by the users may also be three or more. In such a case, the translation output status for each apparatus (each user) may differ depending on, for example, the translation destination language and the processing capability of each user's terminal. The translation output status of each apparatus may also differ when translation processing is performed in turn for each translation destination language.
Further, the notification information generation unit 105 may generate the notification information further based on the content of the translation output. Several examples in which the notification information is generated and a notification is output based on the content of the translation output are described below. The content of the translation output described below may be analyzed, for example, by processing based on the content of the translation result, the content of the utterance text before translation, or the utterance speech.
For example, the speaker side terminal 1 may determine whether the content of the translation output is a repetition of the content of a past translation output, and the notification information generation unit 105 may generate the notification information according to the result of the determination. For example, the notification information may be generated such that the translation result is output from the output unit 18 when the content of the translation output is not a repetition, and an alert sound is output from the output unit 18 when it is a repetition.
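One hedged way to implement such a repetition check (the normalization strategy and the history length are illustrative choices, not specified in the text):

```python
# Sketch: decide whether a translation output repeats recent content.
# The normalization (lowercase, strip punctuation) and the bounded
# history size are illustrative choices.

import re
from collections import deque

class RepetitionDetector:
    def __init__(self, history_size=10):
        # Keep only the most recent outputs, oldest dropped automatically.
        self.history = deque(maxlen=history_size)

    @staticmethod
    def _normalize(text):
        """Lowercase and collapse non-word characters to single spaces."""
        return re.sub(r"\W+", " ", text.lower()).strip()

    def is_repetition(self, text):
        """True if a normalized match exists in the recent history."""
        key = self._normalize(text)
        repeated = key in self.history
        self.history.append(key)
        return repeated

d = RepetitionDetector()
print(d.is_repetition("Where is the station?"))  # False: first occurrence
print(d.is_repetition("Where is the station?"))  # True: repeated, alert sound
```

An exact-match check like this only catches literal repeats; a production system might instead use a similarity measure to also catch paraphrased repetitions.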
Further, the notification information generation unit 105 of the speaker side terminal 1 may generate the notification information such that the notification differs according to the amount of content of the translation output (for example, the amount of translation result text). For example, when the amount of content of the translation output is larger than a predetermined threshold, notification information may be generated such that a notification indicating the remaining time described with reference to FIG. 5 or a notification indicating the progress rate described with reference to FIGS. 6 and 7 is output. When the amount of content of the translation output is smaller than the predetermined threshold, notification information may be generated such that a notification indicating the end of the translation output is output.
Further, the notification information generation unit 105 may generate the notification information based on an analysis of the content of the translation output. For example, when the content of the translation output is a question, it is necessary to wait for the listener's reaction, and therefore notification information may be generated such that an alert sound (notification) is output substantially simultaneously with the end of the translation output. On the other hand, when the content of the translation output is spoken unilaterally by the speaker, notification information may be generated such that the alert sound is output a predetermined time before the end of the translation output so as to prompt the speaker's next utterance.
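A minimal sketch of such an analysis, assuming a simple question heuristic based on the sentence-final punctuation of the translation result (the lead time and the function name are illustrative):

```python
# Sketch: pick the alert timing from a simple analysis of the translation
# result text. The question heuristic (final "?" / "？") and the lead
# time are illustrative assumptions.

def alert_offset_seconds(translation_text, lead_time=1.0):
    """Seconds before the end of translation output to sound the alert.

    0.0       -> alert roughly at the end (question: wait for the listener)
    lead_time -> alert early to prompt the speaker's next utterance
    """
    text = translation_text.strip()
    is_question = text.endswith("?") or text.endswith("？")
    return 0.0 if is_question else lead_time

print(alert_offset_seconds("Where is the station?"))    # 0.0
print(alert_offset_seconds("The meeting starts at 3.")) # 1.0
```

Punctuation is of course a crude proxy for "question"; a fuller analysis could use the utterance text or a syntactic parse as the text above suggests.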
Further, the notification information generation unit 105 may generate the notification information further based on user information of the speaker related to the translation output. For example, the notification information may be generated based on the speaker's past utterance tendencies, and may be generated such that the notification is output based on the average duration of translation output for past utterances.
In the above, an example in which both the speaker and the listener have terminals has been described, but the present technology is not limited to this example. For example, the present technology can also be applied when the speaker does not have a terminal (when only the listener has a terminal). A configuration example of the listener side terminal for the case where the speaker does not have a terminal is described below.
The embodiment of the present disclosure has been described above. Finally, the hardware configuration of the information processing apparatus according to the present embodiment will be described with reference to FIG. 15. FIG. 15 is a block diagram showing an example of the hardware configuration of the information processing apparatus according to the present embodiment. The information processing apparatus 900 shown in FIG. 15 can realize, for example, the speaker side terminal 1, the listener side terminal 2, or the listener side terminal 2-2 shown in FIGS. 2, 3, and 14, respectively. Information processing by the speaker side terminal 1, the listener side terminal 2, or the listener side terminal 2-2 according to the present embodiment is realized by cooperation between software and the hardware described below.
As described above, according to the embodiment of the present disclosure, it is possible to advance a conversation more smoothly.
(1)
An information processing apparatus comprising:
an acquisition unit that acquires situation information including information on a situation relating to translation output; and
a notification information generation unit that generates notification information based on the situation information.
(2)
The information processing apparatus according to (1), wherein the notification information includes information for causing an end of the translation output to be notified.
(3)
The information processing apparatus according to (1) or (2), wherein the notification information includes information indicating a progress rate.
(4)
The information processing apparatus according to any one of (1) to (3), wherein the situation information further includes information on a situation relating to translation processing.
(5)
The information processing apparatus according to any one of (1) to (4), wherein the situation information further includes information on a situation relating to speech recognition.
(6)
The information processing apparatus according to any one of (1) to (5), wherein the acquisition unit acquires the situation information from another apparatus that performs the translation output.
(7)
The information processing apparatus according to (6), wherein the acquisition unit acquires the situation information from a plurality of other apparatuses that perform the translation output, and the notification information generation unit generates the notification information for each of the plurality of other apparatuses based on the situation information.
(8)
The information processing apparatus according to any one of (1) to (7), wherein the notification information generation unit generates the notification information further based on content of the translation output.
(9)
The information processing apparatus according to any one of (1) to (8), wherein the notification information generation unit generates the notification information further based on user information of a speaker related to the translation output.
(10)
The information processing apparatus according to any one of (1) to (9), further comprising an output control unit that causes a notification to be output based on the notification information.
(11)
The information processing apparatus according to (10), wherein the output control unit causes the notification indicating an end of the translation output to be output.
(12)
The information processing apparatus according to (10) or (11), wherein the output control unit causes the notification indicating a remaining time until the end of the translation output to be output.
(13)
The information processing apparatus according to any one of (10) to (12), wherein the output control unit causes the notification indicating a progress rate to be output.
(14)
An information processing system comprising:
an acquisition unit that acquires situation information including information on a situation relating to translation output; and
a notification information generation unit that generates notification information based on the situation information.
(15)
A program for causing a computer to realize:
a function of acquiring situation information including information on a situation relating to translation output; and
a function of generating notification information based on the situation information.
2 Listener side terminal
10 Control unit
14 Communication unit
16 Sensor unit
18 Output unit
20 Control unit
24 Communication unit
26 Sensor unit
28 Output unit
101 Speech recognition unit
102 Translation processing unit
103 Communication control unit
104 Acquisition unit
105 Notification information generation unit
106 Output control unit
201 Communication control unit
202 Output control unit
Claims (15)
- An information processing apparatus comprising:
an acquisition unit that acquires situation information including information on a situation relating to translation output; and
a notification information generation unit that generates notification information based on the situation information.
- The information processing apparatus according to claim 1, wherein the notification information includes information for causing an end of the translation output to be notified.
- The information processing apparatus according to claim 1, wherein the notification information includes information indicating a progress rate.
- The information processing apparatus according to claim 1, wherein the situation information further includes information on a situation relating to translation processing.
- The information processing apparatus according to claim 1, wherein the situation information further includes information on a situation relating to speech recognition.
- The information processing apparatus according to claim 1, wherein the acquisition unit acquires the situation information from another apparatus that performs the translation output.
- The information processing apparatus according to claim 6, wherein the acquisition unit acquires the situation information from a plurality of other apparatuses that perform the translation output, and the notification information generation unit generates the notification information for each of the plurality of other apparatuses based on the situation information.
- The information processing apparatus according to claim 1, wherein the notification information generation unit generates the notification information further based on content of the translation output.
- The information processing apparatus according to claim 1, wherein the notification information generation unit generates the notification information further based on user information of a speaker related to the translation output.
- The information processing apparatus according to claim 1, further comprising an output control unit that causes a notification to be output based on the notification information.
- The information processing apparatus according to claim 10, wherein the output control unit causes the notification indicating an end of the translation output to be output.
- The information processing apparatus according to claim 10, wherein the output control unit causes the notification indicating a remaining time until the end of the translation output to be output.
- The information processing apparatus according to claim 10, wherein the output control unit causes the notification indicating a progress rate to be output.
- An information processing system comprising:
an acquisition unit that acquires situation information including information on a situation relating to translation output; and
a notification information generation unit that generates notification information based on the situation information.
- A program for causing a computer to realize:
a function of acquiring situation information including information on a situation relating to translation output; and
a function of generating notification information based on the situation information.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP17788964.9A EP3451188A4 (en) | 2016-04-27 | 2017-01-20 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM AND PROGRAM THEREFOR |
JP2018514111A JP7070402B2 (ja) | 2016-04-27 | 2017-01-20 | 情報処理装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016089262 | 2016-04-27 | ||
JP2016-089262 | 2016-04-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017187674A1 true WO2017187674A1 (ja) | 2017-11-02 |
Family
ID=60161477
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/001912 WO2017187674A1 (ja) | 2016-04-27 | 2017-01-20 | 情報処理装置、情報処理システム、及びプログラム |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3451188A4 (ja) |
JP (1) | JP7070402B2 (ja) |
WO (1) | WO2017187674A1 (ja) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62241069A (ja) * | 1986-04-11 | 1987-10-21 | Toshiba Corp | 機械翻訳システム |
JPH07181992A (ja) * | 1993-12-22 | 1995-07-21 | Toshiba Corp | 文書読上げ装置及び方法 |
JPH08185308A (ja) * | 1994-12-27 | 1996-07-16 | Sharp Corp | 電子通訳機 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9614969B2 (en) * | 2014-05-27 | 2017-04-04 | Microsoft Technology Licensing, Llc | In-call translation |
-
2017
- 2017-01-20 JP JP2018514111A patent/JP7070402B2/ja active Active
- 2017-01-20 WO PCT/JP2017/001912 patent/WO2017187674A1/ja active Application Filing
- 2017-01-20 EP EP17788964.9A patent/EP3451188A4/en not_active Withdrawn
Non-Patent Citations (1)
Title |
---|
See also references of EP3451188A4 * |
Also Published As
Publication number | Publication date |
---|---|
JPWO2017187674A1 (ja) | 2019-02-28 |
EP3451188A4 (en) | 2019-05-08 |
EP3451188A1 (en) | 2019-03-06 |
JP7070402B2 (ja) | 2022-05-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11114091B2 (en) | Method and system for processing audio communications over a network | |
US11056116B2 (en) | Low latency nearby group translation | |
JP6402748B2 (ja) | 音声対話装置および発話制御方法 | |
WO2018107489A1 (zh) | 一种聋哑人辅助方法、装置以及电子设备 | |
JP6548045B2 (ja) | 会議システム、会議システム制御方法、およびプログラム | |
JPWO2017130486A1 (ja) | 情報処理装置、情報処理方法およびプログラム | |
JP2017509917A (ja) | 空間音響特性に少なくとも部分的に基づく動作指令の決定 | |
JP6904357B2 (ja) | 情報処理装置、情報処理方法、及びプログラム | |
JP2018174439A (ja) | 会議支援システム、会議支援方法、会議支援装置のプログラム、および端末のプログラム | |
WO2021153101A1 (ja) | 情報処理装置、情報処理方法および情報処理プログラム | |
WO2016157993A1 (ja) | 情報処理装置、情報処理方法およびプログラム | |
JP2020113150A (ja) | 音声翻訳対話システム | |
JP6065768B2 (ja) | 情報処理装置、情報処理方法およびプログラム | |
WO2020079918A1 (ja) | 情報処理装置及び情報処理方法 | |
WO2017029850A1 (ja) | 情報処理装置、情報処理方法およびプログラム | |
JP7218143B2 (ja) | 再生システムおよびプログラム | |
KR20220107052A (ko) | 청취 디바이스, 청취 디바이스의 조정 방법 | |
WO2017187674A1 (ja) | 情報処理装置、情報処理システム、及びプログラム | |
JP2018081147A (ja) | コミュニケーション装置、サーバ、制御方法、および情報処理プログラム | |
JP7316971B2 (ja) | 会議支援システム、会議支援方法、およびプログラム | |
JP7384730B2 (ja) | 会議支援システム、会議支援方法、およびプログラム | |
JP7293863B2 (ja) | 音声処理装置、音声処理方法およびプログラム | |
WO2022113189A1 (ja) | 音声翻訳処理装置 | |
WO2021153102A1 (ja) | 情報処理装置、情報処理システム、情報処理方法および情報処理プログラム | |
JP7152454B2 (ja) | 情報処理装置、情報処理方法、情報処理プログラム及び情報処理システム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 2018514111 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2017788964 Country of ref document: EP |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17788964 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017788964 Country of ref document: EP Effective date: 20181127 |