WO2017122657A1 - Speech translation device, speech translation method, and speech translation program - Google Patents

Speech translation device, speech translation method, and speech translation program

Info

Publication number
WO2017122657A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
input
user
history
speech
Prior art date
Application number
PCT/JP2017/000564
Other languages
English (en)
Japanese (ja)
Inventor
知高 大越
諒俊 武藤
Original Assignee
株式会社リクルートライフスタイル (Recruit Lifestyle Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社リクルートライフスタイル (Recruit Lifestyle Co., Ltd.)
Publication of WO2017122657A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; Sound output
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/40: Processing or translation of natural language
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition

Definitions

  • The present invention relates to a speech translation device, a speech translation method, and a speech translation program.
  • A speech translation technique has been proposed in which the content of input speech is converted into text, machine-translated into the language of the other party and displayed on the screen, or played back using speech synthesis technology (see, for example, Patent Document 1).
  • Speech translation applications that run on information terminals such as smartphones and embody such speech translation techniques have also been put into practical use (see, for example, Non-Patent Document 1).
  • An object of the present invention is to provide a speech translation device, a speech translation method, and a speech translation program that can prevent the occurrence of mistranslation.
  • A speech translation apparatus according to one aspect of the present disclosure includes an input unit for inputting a user's speech, a storage unit for storing the content of the input speech, a translation unit that translates the content of the input speech into content in a different language, an output unit that outputs the translated content (parallel translation) as speech and/or text, and a history display unit that displays the history of the input content.
  • The storage unit stores specific input content in the history separately from the other input content, based on the user's instruction or on the input frequency.
  • Examples of the “specific input content” include frequent phrases (commonly used phrases) that the user utters in conversation and the content of fixed phrases.
  • The speech translation device may further include an information acquisition unit that acquires information on the user's attributes (for example, gender, occupation, type of business, business category, etc.), and the storage unit may be configured to store the specific input content in association with the user's attributes.
  • The history display unit may switch the display of the history according to the user's attributes.
  • The speech translation apparatus may further include a library creation unit that creates a library for each attribute from the specific input content stored in association with the user's attributes.
  • The library for each attribute can be shared by the user and other users (that is, among a plurality of users).
  • A speech translation method according to one aspect of the present disclosure uses a speech translation device including an input unit, a storage unit, a translation unit, an output unit, and a history display unit, and includes the steps of: inputting the user's speech; storing the content of the input speech; translating the content of the input speech into content in a different language; outputting the translated content as speech and/or text; and displaying the history of the input content.
  • In the storing step, specific input content is stored separately from the other input content in the history, based on the user's instruction or on the input frequency.
  • In the step of displaying the history, the specific input content is displayed so that the user can select it.
  • In the step of translating, when specific input content is selected, the specific input content is translated into content in a different language.
  • A speech translation program according to one aspect of the present disclosure causes a computer (which may be a single computer or a plurality of computers, of a single type or of plural types; the same applies hereinafter) to function as an input unit for inputting a user's voice, a storage unit for storing the content of the input voice, a translation unit for translating the content of the input speech into content in a different language, an output unit for outputting the translated content as speech and/or text, and a history display unit for displaying the history of the input content. The speech translation program causes the storage unit to store specific input content separately from the other input content in the history, based on a user instruction or on the input frequency. It also causes the history display unit to display the specific input content so that the user can select it, and, when specific input content is selected, causes the translation unit to translate the specific input content into content in a different language.
  • Here, “user” includes a person who uses a service related to the speech translation apparatus, and a person who installs and uses an application embodying the speech translation program on a computer such as an information terminal. The user's attributes can be acquired, for example, by having the user fill in an information registration screen or answer a questionnaire about attributes when using the speech translation device.
  • According to the present disclosure, a history of the input content of speech uttered by the user is stored, specific input content such as frequent phrases is stored from that history, and the specific input content is displayed so that the user can select it. By selecting a desired phrase from the specific input content, the user is saved the trouble of uttering frequent phrases and the like each time; as a result, the burden on the user can be reduced and convenience can be improved. In addition, since the occurrence of mistranslation can be prevented, the accuracy of speech translation can be improved easily and effectively. Furthermore, preventing mistranslation in this way speeds up processing, saves memory, reduces the amount of communication data, and increases processing reliability.
  • FIG. 1 is a system block diagram schematically illustrating a preferred embodiment of a network configuration relating to a speech translation apparatus according to the present disclosure.
  • FIG. 2 is a system block diagram schematically showing an example of the configuration of a user apparatus (information terminal) in the speech translation apparatus according to the present disclosure. FIG. 3 is a system block diagram schematically showing an example of the configuration of a server in the speech translation apparatus according to the present disclosure. FIG. 4 is a flowchart showing an example of (part of) the process flow in the speech translation apparatus according to the present disclosure.
  • FIGS. 5(A) to 5(D) are plan views showing an example of the transition of the display screen on an information terminal.
  • FIG. 1 is a system block diagram schematically illustrating a preferred embodiment of the network configuration relating to the speech translation apparatus according to the present disclosure.
  • As shown, the speech translation apparatus 100 includes a server 20 that is electronically connected via a network N to an information terminal 10 (user apparatus) used by a user (a speaker or a conversation partner) (though the configuration is not limited to this).
  • The information terminal 10 employs, for example, a user interface such as a touch panel and a display with high visibility.
  • The information terminal 10 here is a portable tablet terminal device, including a mobile phone represented by a smartphone, that has a function for communicating with the network N.
  • The information terminal 10 further includes a processor 11, a storage resource 12, a voice input/output device 13, a communication interface 14, an input device 15, a display device 16, and a camera 17.
  • The information terminal 10 operates under the installed speech translation application software (at least a part of the speech translation program according to the embodiment of the present disclosure), so that it functions as part or all of the speech translation device according to the embodiment of the present disclosure.
  • The processor 11 includes an arithmetic logic unit and various registers (program counter, data register, instruction register, general-purpose registers, etc.). The processor 11 interprets and executes the speech translation application software, which is the program P10 stored in the storage resource 12, and performs various kinds of processing.
  • The speech translation application software as the program P10 can be distributed from the server 20 through the network N, for example, and may be installed and updated manually or automatically.
  • The network N includes, for example, wired networks (a local area network (LAN), a wide area network (WAN), a value-added network (VAN), etc.) and wireless networks (a mobile communication network, a satellite communication network, Bluetooth (registered trademark), WiFi (Wireless Fidelity), HSDPA (High Speed Downlink Packet Access), etc.).
  • The storage resource 12 is a logical device provided by the storage area of a physical device (for example, a computer-readable recording medium such as a semiconductor memory); it stores the operating system program, driver programs, various data, and the like used for the processing of the information terminal 10.
  • Examples of the driver programs include an input/output device driver program for controlling the voice input/output device 13, an input device driver program for controlling the input device 15, and an output device driver program for controlling the display device 16.
  • The voice input/output device 13 is, for example, a general-purpose microphone together with a sound player capable of reproducing sound data.
  • The communication interface 14 provides, for example, a connection interface with the server 20 and comprises a wireless communication interface and/or a wired communication interface.
  • The input device 15 provides an interface for accepting input operations such as tapping an icon, a button, or a virtual keyboard displayed on the display device 16; in addition to the touch panel, various input devices externally attached to the information terminal 10 can also be used.
  • The display device 16 provides an image display interface that presents various kinds of information to the user or the conversation partner as necessary; examples include an organic EL display, a liquid crystal display, and a CRT display.
  • The camera 17 captures still images and moving images of various subjects.
  • The server 20 is constituted by, for example, a host computer with high arithmetic processing capability, and realizes its server functions by running a predetermined server program on that host computer; it may be a single host computer or a plurality of host computers functioning as, for example, a speech recognition server, a translation server, and a speech synthesis server (shown as a single unit in the drawing, though not limited to this).
  • Each server 20 includes a processor 21, a communication interface 22, and a storage resource 23 (storage unit).
  • The processor 21 is composed of an arithmetic logic unit for processing arithmetic operations, logical operations, bit operations, and the like, and of various registers (program counter, data register, instruction register, general-purpose registers, etc.); it interprets and executes the program P20 stored in the storage resource 23 and outputs the prescribed computation results.
  • The communication interface 22 is a hardware module for connecting to the information terminal 10 via the network N, for example a modulation/demodulation device such as an ISDN modem, an ADSL modem, a cable modem, an optical modem, or a soft modem.
  • The storage resource 23 is a logical device provided by the storage area of a physical device (a computer-readable recording medium such as a disk drive or semiconductor memory); it stores one or more programs P20, various modules L20, various databases D20, and various models M20.
  • The program P20 is the above-described server program that is the main program of the server 20.
  • The various modules L20 are modularized subprograms that are called and executed as appropriate during the operation of the program P20, and perform a series of information processing tasks on the requests and information transmitted from the information terminal 10.
  • Examples of the module L20 include a speech recognition module, a translation module, and a speech synthesis module.
  • The various databases D20 include various corpora required for speech translation processing (for example, in the case of Japanese-English speech translation: a Japanese speech corpus, an English speech corpus, a Japanese character (vocabulary) corpus, an English character (vocabulary) corpus, a Japanese dictionary, an English dictionary, a Japanese-English bilingual dictionary, and a Japanese-English bilingual corpus), a speech database described later, a management database for managing information related to users, and the like.
  • Examples of the various models M20 include an acoustic model and a language model used for the speech recognition described later.
  • FIG. 4 is a flowchart showing an example of a process flow (part) in the speech translation apparatus 100 of the present embodiment.
  • FIGS. 5(A) to 5(D) are plan views illustrating an example of display screen transitions on the information terminal 10.
  • Here, as an example, a case is assumed in which a user who speaks Japanese (for example, a store clerk) and a customer who speaks a different language hold a conversation (but the scenario is not limited to this).
  • First, a language selection screen for the customer is displayed on the display device 16 (FIG. 5(A); step SJ1).
  • On this language selection screen, a Japanese text T21 asking the customer about his or her language, an English text T22 for the same purpose, and language buttons 61 indicating a plurality of typical languages (here, for example, English, Chinese (in, for example, two typefaces), and Korean) are displayed.
  • The Japanese text T21 and the English text T22 are displayed by the processor 11 and the display device 16 in separate areas of the screen of the information terminal 10, oriented in opposite directions (different directions; upside down relative to each other in the figure).
  • Thus the user can easily confirm the Japanese text T21, while the customer can easily confirm the English text T22.
  • Since the text T21 and the text T22 are displayed in separate areas, there is also the advantage that they can be clearly distinguished and confirmed.
  • The user can show the text T22 on the language selection screen to the customer and have the customer tap the English button, or can select the customer's language himself or herself.
  • Next, a standby screen for voice input in Japanese and English is displayed as the home screen (FIG. 5(B); step SJ2).
  • On the standby screen, a text T23 asking whether the user's or the customer's language will be spoken, a Japanese input button 62a for inputting Japanese speech, and an English input button 62b for inputting English speech are displayed.
  • The standby screen also displays a history button 63 for displaying the history of input content, a language selection button 64 for returning to the language selection screen and switching (reselecting) the customer's language, and a setting button 65 for configuring various settings of the application software.
  • FIG. 4 shows the branch (step SU2) that depends on whether or not the user taps the history button 63; in normal speech translation processing, voice input is performed from the standby screen shown in FIG. 5(B).
  • The flow of speech translation processing in that case (that is, “No” in step SU2) will be described first.
  • The processor 21 of the server 20 receives the voice signal through the communication interface 22 and performs speech recognition processing (step SJ4). At this time, the processor 21 calls the necessary module L20, database D20, and model M20 (speech recognition module, Japanese speech corpus, acoustic model, language model, etc.) from the storage resource 23 and converts the “sound” of the input speech into a “reading” (characters). In this way, the processor 21, or the server 20 as a whole, functions as a “speech recognition server”.
  • Next, the processor 21 proceeds to multilingual translation processing, in which the “reading” (characters) of the recognized speech is translated into another language (step SJ5).
  • In this step, the processor 21 calls the necessary module L20 and databases D20 (translation module, Japanese character corpus, Japanese dictionary, English dictionary, Japanese-English bilingual dictionary, Japanese-English bilingual corpus, etc.) from the storage resource 23, appropriately segments the recognized “reading” (character string) of the input speech and converts it into Japanese phrases, clauses, sentences, and the like, extracts the English corresponding to the conversion result, and orders the result according to English grammar.
  • In this way, the processor 21 also functions as a “translation unit”, and the server 20 as a whole also functions as a “translation server”. If the input voice is not recognized correctly, the voice can be re-input (the screen display for this is not shown).
  • The processor 21 also stores the content of the recognized input voice in the storage resource 23.
  • Next, the processor 21 proceeds to speech synthesis processing (step SJ6).
  • In this step, the processor 21 calls the necessary module L20, database D20, and model M20 (speech synthesis module, English speech corpus, acoustic model, language model, etc.) from the storage resource 23 and converts the English phrases, clauses, sentences, and the like that constitute the translation result into natural speech.
  • In this way, the processor 21 also functions as a “speech synthesis unit”,
  • and the server 20 as a whole also functions as a “speech synthesis server”.
  • The processor 21 then generates a voice signal for voice output based on the synthesized voice and transmits it to the information terminal 10 through the communication interface 22 and the network N.
  • The processor 11 of the information terminal 10 receives the audio signal through the communication interface 14 and performs audio output processing (step SJ7).
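Taken together, steps SJ4 to SJ7 form a recognize, translate, synthesize pipeline on the server side. The following sketch illustrates that flow in outline only; the function names, the stub bodies, and the tiny phrase table are illustrative assumptions, not the actual modules L20 of the embodiment.

```python
# Illustrative sketch of the SJ4-SJ7 server-side flow (speech recognition ->
# multilingual translation -> speech synthesis). All names and stubs are
# hypothetical placeholders, not the actual modules L20 of the embodiment.

PHRASE_TABLE = {  # stand-in for the Japanese-English bilingual corpus
    "いらっしゃいませ": "Welcome!",
    "お支払いは現金ですか": "Will you be paying in cash?",
}

def recognize_speech(audio: bytes) -> str:
    """Stub for the speech recognition module (step SJ4): sound -> reading."""
    # A real implementation would use the acoustic and language models M20.
    return audio.decode("utf-8")

def translate_text(text: str) -> str:
    """Stub for the multilingual translation module (step SJ5)."""
    return PHRASE_TABLE.get(text, f"<no translation for: {text}>")

def synthesize_speech(text: str) -> bytes:
    """Stub for the speech synthesis module (step SJ6): text -> waveform."""
    return text.encode("utf-8")

def handle_voice_input(audio: bytes) -> tuple[str, bytes]:
    """Run the full pipeline and return what is sent back for output (SJ7)."""
    reading = recognize_speech(audio)              # step SJ4
    translation = translate_text(reading)          # step SJ5
    voice_signal = synthesize_speech(translation)  # step SJ6
    return translation, voice_signal

if __name__ == "__main__":
    text, signal = handle_voice_input("いらっしゃいませ".encode("utf-8"))
    print(text)  # -> Welcome!
```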
  • On the history display screen, a display order selection button 66 for switching the order of the listed input content between, for example, “latest order” and “frequency order” is displayed above the list of texts.
  • The user can switch between the “latest order” list and the “frequency order” list by tapping the display order selection button 66.
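As a small illustration of the two display orders, the sketch below sorts hypothetical history entries either by recency or by input count. The HistoryEntry fields are assumptions made for the example, not the actual data layout of the storage resource 23.

```python
# Sketch of the "latest order" / "frequency order" toggle behind the display
# order selection button 66. The entry fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class HistoryEntry:
    text: str
    count: int        # how many times this content has been input
    last_used: float  # timestamp of the most recent input

def sort_history(entries: list[HistoryEntry], order: str) -> list[HistoryEntry]:
    if order == "latest":
        return sorted(entries, key=lambda e: e.last_used, reverse=True)
    if order == "frequency":
        return sorted(entries, key=lambda e: e.count, reverse=True)
    raise ValueError(f"unknown display order: {order}")
```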
  • On the history display screen, a pin-shaped design P is additionally displayed alongside the text of each input content.
  • Among the input content displayed on the history display screen, content that the user utters frequently or that matches fixed phrases can, so to speak, be “clipped” by using this pin.
  • Suppose that the user taps the pin-shaped design P of the input content (specific input content) shown in texts T31, T32, and T33 among the input content listed in FIG. 5(C) (“Yes” in step SU3).
  • Then, the processor 11 of the information terminal 10 moves the input content of texts T31, T32, and T33 to the upper area R1 of the screen and displays it together there, while moving the other input content to the lower area R2 and grouping it there, so that the two are visually distinguished from each other (step SJ4). In the vicinity of the upper area R1, a text T23 clearly indicating that this input content has been clipped with a pin is also displayed.
  • At the same time, the processor 11 of the information terminal 10 transmits to the server 20 a command signal indicating that the input content of texts T31, T32, and T33 has been selected by the user.
  • On receiving it, the processor 21 of the server 20 sets a flag on the input content (specific input content) of texts T31, T32, and T33 held in the storage resource 23, thereby distinguishing it from the other input content.
  • For the input content of texts T31, T32, and T33 that has been clipped with pins, an x-mark design 67 is additionally displayed in place of the pin-shaped design P.
  • The user can unpin the texts T31, T32, and T33 by tapping the x-mark design 67 as necessary.
  • In that case, the processor 21 of the server 20 removes the flag from the flagged input content stored in the storage resource 23, for example in response to a command signal from the processor 11 of the information terminal 10.
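The pin and x-mark interactions thus amount to setting and clearing a per-entry flag on the server. The sketch below models that flag with an in-memory store; the class and method names are hypothetical, and a real implementation would persist the flag in the storage resource 23.

```python
# Sketch of the clip ("pin") flag kept per history entry. An in-memory dict
# stands in for the storage resource 23; all names are illustrative.

class HistoryStore:
    def __init__(self) -> None:
        self._clipped: dict[str, bool] = {}  # input content -> flag

    def add(self, text: str) -> None:
        self._clipped.setdefault(text, False)

    def clip(self, text: str) -> None:
        """User tapped the pin-shaped design P: mark as specific input."""
        self._clipped[text] = True

    def unclip(self, text: str) -> None:
        """User tapped the x-mark design 67: clear the flag."""
        self._clipped[text] = False

    def clipped(self) -> list[str]:
        """Entries to group in the upper area R1 of the history screen."""
        return [t for t, flagged in self._clipped.items() if flagged]

    def unclipped(self) -> list[str]:
        """Entries to group in the lower area R2."""
        return [t for t, flagged in self._clipped.items() if not flagged]
```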
  • Thereafter, instead of uttering a question to the customer, the user can select the desired input content from the pinned texts T31, T32, and T33.
  • When the user does so (for example, taps and selects the text T31; “Yes” in step SU4), a command signal is transmitted from the processor 11 of the information terminal 10 to the server 20.
  • The processor 21 of the server 20 that has received the command signal then sequentially executes multilingual translation processing (step SJ5), speech synthesis processing (step SJ6), and speech output processing (step SJ7) on the content of the selected text T31. In this way, the user can output a parallel translation of a desired phrase or the like (specific input content) without performing voice input.
  • When no input content to be clipped is selected in step SU3 (“No” in step SU3), or when specific input content is not selected in place of an utterance in step SU4 (“No” in step SU4), the processor 21 of the server 20 sequentially executes the normal speech translation processing shown in steps SJ3 to SJ7 described above. Specifically, when the user taps the close button 68 on the history display screen shown in FIG. 5(C) or FIG. 5(D), the standby screen shown in FIG. 5(B) is displayed again, and normal speech translation processing can resume.
  • A standby screen for selecting a target language for speech translation is displayed on the display device 16 of the information terminal 10.
  • In addition, an information registration screen for inputting information related to the user is displayed on the display device 16 of the information terminal 10.
  • The information registration screen accepts attribute information such as the occupation of the user (or of the user's store), the type of business, the business category, age, sex, birthplace, and residence.
  • When the user inputs this information, the processor 11 of the information terminal 10 generates an information signal based on the input and transmits it to the server 20 through the communication interface 14 and the network N.
  • In this respect, the information terminal 10 itself, or its processor 11, also functions as an “information acquisition unit”.
  • When the processor 21 of the server 20 receives the information signal through the communication interface 22, processing temporarily shifts to the steps from step SJ2 onward shown in FIG. 4. Then, when the user selects input content to be clipped, for example the content displayed in texts T31, T32, and T33 in step SU3 (“Yes” in step SU3), the history display screen shown in FIG. 5(C) or FIG. 5(D) is displayed, as in the first or second embodiment. Meanwhile, the processor 21 of the server 20 sets a flag on the input content (specific input content) of texts T31, T32, and T33 held in the storage resource 23 to distinguish it from the other input content, and stores it again in association with the user's attributes.
  • The processor 21 can then extract or narrow down specific input content based on any of the user's attributes (in particular, the occupation, type of business, or business category of the user or the user's store).
  • Furthermore, the processor 21 may collect the specific input content extracted or narrowed down by user attribute, together with the corresponding translated content, into a library for each attribute and store it in the storage resource 23. Such a library for each attribute becomes even more useful if it is shared among a plurality of users.
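As a minimal sketch of that per-attribute grouping, clipped phrases tagged with a user attribute can be collected into libraries keyed by the attribute and then offered to every user who shares it. The data structures here are assumptions made for illustration.

```python
# Sketch of building per-attribute libraries from clipped phrases. The
# (phrase, attribute) pair representation is an illustrative assumption.

from collections import defaultdict

def build_attribute_libraries(
    clipped: list[tuple[str, str]],  # (phrase, user attribute)
) -> dict[str, set[str]]:
    libraries: dict[str, set[str]] = defaultdict(set)
    for phrase, attribute in clipped:
        libraries[attribute].add(phrase)
    return dict(libraries)

libs = build_attribute_libraries([
    ("お支払いは現金ですか", "restaurant"),
    ("いらっしゃいませ", "restaurant"),
    ("ご試着なさいますか", "apparel"),
])
# libs["restaurant"] can now be shared with every user whose business
# category is "restaurant", not only the user who clipped the phrases.
```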
  • As described above, with the speech translation apparatus, the speech translation method using it, and the speech translation program, specific input content such as frequent phrases and fixed sentences can be identified in the history of the input content of speech uttered by the user, clipped, and stored. The user can therefore easily call up frequent phrases, fixed phrases, and the like, and is saved the time and effort of uttering them each time.
  • As a result, the burden on the user can be reduced, convenience can be improved, and the occurrence of mistranslation can be effectively prevented, so that an improvement in the accuracy of speech translation can be realized easily and effectively.
  • Moreover, the clipped specific input content is stored in association with the user's attributes and displayed accordingly on the history display screen, so that frequent phrases, fixed phrases, and the like matching the user's attributes can be selected efficiently.
  • Frequent phrases and fixed phrases needed by the user can thus be found easily, further reducing the burden on the user and further improving convenience.
  • In particular, frequent phrases and the like are expected to be still more standardized when organized on the basis of occupation, type of business, and business category as user attributes.
  • Each of the above embodiments is merely an example for explaining the present invention, and the present invention is not limited to those embodiments.
  • The present disclosure can be modified in various ways without departing from its gist.
  • Those skilled in the art can replace the resources (hardware resources or software resources) described in the embodiments with equivalents, and such replacements are also included in the scope of the present disclosure.
  • For example, although the speech recognition, translation, and speech synthesis processes are each executed by the server 20 in the above embodiments, these processes may instead be executed in the information terminal 10.
  • The modules L20 used for these processes may be stored in the storage resource 12 of the information terminal 10 rather than in the storage resource 23 of the server 20.
  • Likewise, the databases D20 such as the voice database and/or the models M20 such as the acoustic model may be stored in the storage resource 12 of the information terminal 10 or in the storage resource 23 of the server 20.
  • In that case, the speech translation apparatus may omit the network N and the server 20.
  • In addition, specific input content may be extracted by the processor 21 of the server 20 based on its input frequency, and a database or library of such automatically clipped content may be generated automatically, as sketched below.
  • The input content extracted by the processor 21 based on input frequency can be displayed, for example, on the screen on which the display order selection button 66 shown in FIG. 5(C) or FIG. 5(D) is displayed.
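A minimal sketch of such frequency-based extraction, with an assumed threshold and count structure, might look as follows.

```python
# Sketch of automatic clipping by input frequency: entries input at least
# `threshold` times are flagged as specific input content without any tap
# on the pin. The threshold value and dict layout are assumptions.

def auto_clip(input_counts: dict[str, int], threshold: int = 5) -> set[str]:
    """Return the input content frequent enough to clip automatically."""
    return {text for text, n in input_counts.items() if n >= threshold}

print(auto_clip({"いらっしゃいませ": 12, "今日は雨ですね": 1}))
# -> {'いらっしゃいませ'}
```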
  • Furthermore, a translation result in a given language, once produced, may be stored together with (in association with) the clipped specific input content. For example, in the flow shown in FIG. 4, when the user taps and selects the text T31 (“Yes” in step SU4), the multilingual translation processing (step SJ5) may then be skipped and the speech synthesis processing (step SJ6) performed directly.
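The sketch below illustrates that shortcut with a simple cache keyed by source text and target language; the keying scheme and function signature are assumptions. The first selection runs the translation step, and later selections reuse the stored result and can proceed straight to synthesis.

```python
# Sketch of storing a translation with the clipped entry so that step SJ5
# can be skipped on later selections. The cache keying is an assumption.

from typing import Callable

translation_cache: dict[tuple[str, str], str] = {}

def translate_with_cache(text: str, target_lang: str,
                         translate_fn: Callable[[str, str], str]) -> str:
    key = (text, target_lang)
    if key not in translation_cache:   # first time: run step SJ5
        translation_cache[key] = translate_fn(text, target_lang)
    return translation_cache[key]      # afterwards: step SJ5 is skipped
```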
  • The information terminal 10 is not limited to a portable device, and may be a desktop personal computer, a notebook personal computer, a tablet personal computer, a laptop personal computer, or the like.
  • In other words, the speech translation apparatus comprises: an input unit for inputting the user's voice; a storage unit for storing the content of the input voice; a translation unit that translates the content of the input speech into content in a different language; an output unit that outputs the translated content as audio and/or text; and a history display unit for displaying the history of the input content. The history display unit displays, attached to each input content in the history, a design indicating that the user can select specific input content from the history. When the user selects the specific input content from the history using the design, the storage unit stores the specific input content separately from the other input content. The history display unit displays the specific input content and the other input content so that they are visually distinguished, and displays the specific input content so that the user can select a desired item from it. When the desired input content is selected by the user, the translation unit may translate the desired input content into content in a different language.
  • When the history display unit displays the specific input content and the other input content in a visually distinguished manner, a design indicating that the user can remove unneeded input content from the specific input content may be displayed along with the specific input content.
  • The apparatus may further include an information acquisition unit that acquires information about the user's attributes.
  • In that case, the storage unit stores the specific input content in association with the user's attributes.
  • The history display unit may switch the display of the history according to the user's attributes.
  • A library creation unit that creates a library for each attribute from the specific input content stored in association with the user's attributes may be further provided.
  • The library for each attribute may be shared by the user and other users.
  • Likewise, the speech translation method uses a speech translation device including an input unit, a storage unit, a translation unit, an output unit, and a history display unit, and includes the steps of: inputting the user's voice; storing the content of the input voice; translating the content of the input speech into content in a different language; outputting the translated content as audio and/or text; and displaying the history of the input content. In the step of displaying the history, a design indicating that the user can select specific input content from the history is displayed along with each input content in the history. In the storing step, when the user selects the specific input content from the history using the design, the specific input content is stored separately from the other input content. In the step of displaying the history, the specific input content and the other input content are displayed so as to be visually distinguished, and the specific input content is displayed so that the user can select a desired item from it. In the step of translating, when the desired input content is selected by the user, the desired input content may be translated into content in a different language.
  • Likewise, the speech translation program causes a computer to function as: an input unit for inputting the user's voice; a storage unit for storing the content of the input voice; a translation unit that translates the content of the input speech into content in a different language; an output unit that outputs the translated content as audio and/or text; and a history display unit for displaying the history of the input content. The history display unit displays, along with each input content in the history, a design indicating that the user can select specific input content from the history. When the user selects the specific input content from the history using the design, the storage unit stores the specific input content separately from the other input content. The history display unit displays the specific input content and the other input content so that they are visually distinguished, and displays the specific input content so that the user can select a desired item from it. When the desired input content is selected by the user, the translation unit may translate the desired input content into content in a different language.
  • According to the present disclosure, the burden on the user in speech translation processing can be reduced and convenience can be improved, and the accuracy of speech translation can be improved easily and effectively by preventing the occurrence of mistranslation.
  • The present invention can therefore be widely used in activities such as designing, manufacturing, providing, and selling programs, apparatuses, systems, and methods in the field of services relating to conversations between people who do not understand each other's languages.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)

Abstract

A speech translation device according to an embodiment of the present invention comprises: an input unit for inputting a user's speech; a storage unit for storing the content of the input speech; a translation unit for translating the content of the input speech into a different language; an output unit for outputting the translated content (parallel translation) as speech and/or text; and a history display unit for displaying the history of the input content. The storage unit stores specific input content in the history separately from the other input content, in response to an instruction given by the user or according to its input frequency. When the specific input content is selected, the translation unit translates the specific input content into a different language.
PCT/JP2017/000564 2016-01-13 2017-01-11 Speech translation device, speech translation method, and speech translation program WO2017122657A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016004337A JP5998298B1 (ja) 2016-01-13 2016-01-13 Speech translation device, speech translation method, and speech translation program
JP2016-004337 2016-01-13

Publications (1)

Publication Number Publication Date
WO2017122657A1 (fr) 2017-07-20

Family

ID=56997641

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/000564 WO2017122657A1 (fr) 2016-01-13 2017-01-11 Speech translation device, speech translation method, and speech translation program

Country Status (2)

Country Link
JP (1) JP5998298B1 (fr)
WO (1) WO2017122657A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018072568A (ja) * 2016-10-28 2018-05-10 Voice input device, voice input method, and voice input program
JP6243071B1 (ja) * 2017-04-03 2017-12-06 Communication content translation processing method, communication content translation processing program, and recording medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009110420A (ja) * 2007-10-31 2009-05-21 Electronic device, control method therefor, and computer program

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3926365B2 (ja) * 2001-01-24 2007-06-06 Voice conversion device
JP5019367B2 (ja) * 2007-05-01 2012-09-05 Electronic device and control method therefor

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009110420A (ja) * 2007-10-31 2009-05-21 Electronic device, control method therefor, and computer program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OZAKI, SHUN ET AL.: "Development of Mobile Multilingual Medical Communication Support System and Its Introduction for Medical Field", IPSJ SIG NOTES 2012 (HEISEI 24) NENDO 5, 15 February 2013 (2013-02-15), pages 1 - 8 *

Also Published As

Publication number Publication date
JP5998298B1 (ja) 2016-09-28
JP2017126152A (ja) 2017-07-20

Similar Documents

Publication Publication Date Title
US20200410174A1 (en) Translating Languages
US9355094B2 (en) Motion responsive user interface for realtime language translation
US9484034B2 (en) Voice conversation support apparatus, voice conversation support method, and computer readable medium
CN107632982B (zh) Method and apparatus for voice control of a foreign-language translation device
JP6141483B1 (ja) Speech translation device, speech translation method, and speech translation program
JP6290479B1 (ja) Speech translation device, speech translation method, and speech translation program
JP5998298B1 (ja) Speech translation device, speech translation method, and speech translation program
WO2017135214A1 (fr) Speech translation system, method, and program
JP6353860B2 (ja) Speech translation device, speech translation method, and speech translation program
JP6310950B2 (ja) Speech translation device, speech translation method, and speech translation program
JP6676093B2 (ja) Cross-language communication support device and system
JP6250209B1 (ja) Speech translation device, speech translation method, and speech translation program
JP6198879B1 (ja) Speech translation device, speech translation method, and speech translation program
JP6110539B1 (ja) Speech translation device, speech translation method, and speech translation program
JP6383748B2 (ja) Speech translation device, speech translation method, and speech translation program
JP6334589B2 (ja) Fixed-phrase creation device and program, and conversation support device and program
JP2004295578A (ja) Translation device
JP6174746B1 (ja) Speech translation device, speech translation method, and speech translation program
Sharma et al. Exploration of speech enabled system for English
JP2018173910A (ja) Speech translation system and speech translation program
JP6298806B2 (ja) Speech translation system, control method therefor, and speech translation program
Jeevitha et al. A study on innovative trends in multimedia library using speech enabled softwares
JP2002288170A (ja) Multilingual communication support system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17738411

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17738411

Country of ref document: EP

Kind code of ref document: A1