JP5653392B2 - Speech translation apparatus, method and program - Google Patents

Speech translation apparatus, method and program

Info

Publication number
JP5653392B2
Authority
JP
Japan
Prior art keywords
example
character string
language
similar
translation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2012146880A
Other languages
Japanese (ja)
Other versions
JP2014010623A (en)
Inventor
住田 一男
鈴木 博和
降幡 建太郎
釜谷 聡史
知野 哲朗
永江 尚義
有賀 康顕
益子 貴史
Original Assignee
株式会社東芝
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社東芝
Priority to JP2012146880A
Publication of JP2014010623A
Application granted
Publication of JP5653392B2
Application status: Active

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING; COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
            • G06F17/20 Handling natural language data
              • G06F17/28 Processing or translating of natural language
                • G06F17/289 Use of machine translation, e.g. multi-lingual retrieval, server side translation for client devices, real-time translation
                • G06F17/2809 Data driven translation
                  • G06F17/2827 Example based machine translation; Alignment
                  • G06F17/2836 Machine assisted translation, e.g. translation memory
      • G10 MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
          • G10L15/00 Speech recognition
            • G10L15/005 Language recognition
            • G10L15/26 Speech to text systems
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
      • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
        • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
          • Y10S707/00 Data processing: database and file management or data structures
            • Y10S707/99931 Database or file accessing
              • Y10S707/99933 Query processing, i.e. searching
                • Y10S707/99934 Query formulation, input preparation, or translation

Description

  Embodiments described herein relate generally to a speech translation apparatus, method, and program.

  With globalization in recent years, expectations are rising for speech translation devices that support communication between users with different native languages, and services that provide speech translation functions are already in operation. However, it is difficult to perform speech recognition or machine translation without error. One known technique therefore allows the user of the target language (the language into which the spoken source language is translated) to mark a part of the translation that cannot be understood, prompting the speaker of the source language to correct it.

Japanese Patent No. 4042360

  With such correction, however, the source language user must revise the character string on the source language side, and the target language user must check the translated text one sentence at a time and enter the result of that confirmation, which makes it difficult to realize a highly responsive conversation.

  The present disclosure has been made to solve the above-described problem, and an object thereof is to provide a speech translation apparatus, method, and program that enable smooth and highly responsive speech translation.

  The speech translation apparatus according to this embodiment includes an acquisition unit, a speech recognition unit, a translation unit, a search unit, a selection unit, and an example presentation unit. The acquisition unit acquires an utterance in a first language as an audio signal. The speech recognition unit sequentially performs speech recognition on the audio signal to obtain a first language character string, which is a character string of the speech recognition result. The translation unit translates the first language character string into a second language different from the first language to obtain a second language character string, which is a character string of the translation result. The search unit searches, for each first language character string, for a similar example, which is an example in the first language similar to the first language character string, and, when the similar example exists, obtains a parallel translation example, which is the result of translating the similar example into the second language. The selection unit selects, according to a user instruction, at least one of a first language character string for which the similar example exists and a second language character string for which the parallel translation example exists as a selected character string. The example presentation unit presents one or more similar examples and parallel translation examples related to the selected character string.

FIG. 1 is a block diagram showing the speech translation apparatus according to the present embodiment.
FIG. 2 is a diagram showing an example of the source language examples and target language examples stored in the example storage unit.
FIG. 3 is a flowchart showing the operation of the speech translation apparatus according to the present embodiment.
FIG. 4 is a flowchart showing the details of the example search process.
FIG. 5 is a flowchart showing the details of the presentation process for similar examples and parallel translation examples.
FIG. 6 is a diagram showing an implementation example of the speech translation apparatus according to the present embodiment.
FIG. 7 is a diagram showing an example of the screen display of the touch panel display.
FIGS. 8 to 14 are diagrams showing the first to seventh steps in the operation of the speech translation apparatus according to the present embodiment.
FIGS. 15 and 16 are diagrams showing the first and second steps in the operation when the user on the source language side selects an example.
FIG. 17 is a diagram showing a display example when no suitable example exists.
FIG. 18 is a diagram showing an example of the table stored in the example storage unit according to the second embodiment.
FIG. 19 is a diagram showing a specific example of the operation of the speech translation apparatus according to the second embodiment.
FIG. 20 is a diagram showing a speech recognition system including the speech translation apparatus according to the third embodiment.

In recent years, speech translation application software that runs on smartphones (high-performance portable terminals) has been commercialized, and services that provide speech translation functions are in operation. In such applications and services, the user speaks in short units, such as one sentence or a few sentences; the speech is converted into a corresponding character string by speech recognition, translated into a character string of another language by machine translation, and the translated character string is read out by speech synthesis. The source language user is thus required to speak in short units, and the target language user is required to check the translation result or listen to the synthesized speech in those units.
For this reason, in a conversation conducted with such conventional application software, waiting time occurs frequently, and it is difficult to converse with good responsiveness. It would be desirable for the content of an unrestricted utterance to be conveyed to the other party without forcing the user to speak one sentence at a time, but such a function has not been provided.

Hereinafter, the speech translation apparatus, method, and program according to the present embodiment will be described in detail with reference to the drawings. In the following embodiments, the same reference numerals denote the same parts, and duplicate descriptions are omitted as appropriate. In the present embodiment, translation between Japanese and English is described as an example, with Japanese as the source language (the spoken language) and English as the target language (the language into which the source language is translated). The source and target languages are not limited to these two languages; any languages may be targeted.
(First embodiment)
A speech translation apparatus according to the first embodiment will be described with reference to the block diagram of FIG.
A speech translation apparatus 100 according to the first embodiment includes a speech acquisition unit 101, a speech recognition unit 102, a machine translation unit 103, a display unit 104, an example storage unit 105, an example search unit 106, a pointing instruction detection unit 107, a character string selection unit 108, and an example presentation unit 109.

The speech acquisition unit 101 acquires the voice uttered by the user in the source language (also referred to as the first language) as a speech signal.
The speech recognition unit 102 receives the speech signal from the speech acquisition unit 101, performs speech recognition processing on it, and obtains a source language character string, which is a character string of the speech recognition result. While the speech signal is being input from the speech acquisition unit 101, the speech recognition unit 102 sequentially performs speech recognition for each processing unit, and passes the result to the subsequent stage every time a source language character string is obtained. A processing unit for speech recognition is delimited at pauses or linguistic breaks in the speech, at points where speech recognition candidates are determined, or at fixed time intervals. The user may also be notified by an event that a speech recognition result is available. Since general processing may be used for the speech recognition itself, a detailed description is omitted here.
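As a rough illustration of this sequential behavior only, the following Python sketch shows one way such a loop could be organized; the recognizer interface used here (feed, detected_pause, partial_result, reset_segment) and the time-out value are assumptions for illustration, not part of the embodiment.

    import time

    class IncrementalRecognizer:
        # Hypothetical sketch of the sequential recognition described above.
        # A processing unit is closed at a pause or linguistic break, or after
        # a fixed interval; the partial result is then passed to the next stage.
        def __init__(self, asr_engine, max_interval_sec=3.0):
            self.asr = asr_engine              # assumed engine exposing feed()/partial_result()
            self.max_interval = max_interval_sec

        def run(self, audio_chunks, on_result):
            unit_start = time.time()
            for chunk in audio_chunks:         # chunks come from the speech acquisition unit
                self.asr.feed(chunk)
                boundary = self.asr.detected_pause()                 # pause or linguistic break
                timed_out = time.time() - unit_start >= self.max_interval
                if boundary or timed_out:
                    text = self.asr.partial_result()                 # source language character string
                    if text:
                        on_result(text)        # notify the subsequent stage (e.g., by an event)
                    self.asr.reset_segment()
                    unit_start = time.time()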

The machine translation unit 103 receives the source language character string from the speech recognition unit 102, machine-translates it into a character string of the target language (also referred to as the second language), and obtains a target language character string, which is the character string of the translation result. Since general machine translation processing may be used, a detailed description is omitted here.
The display unit 104 is, for example, a display; it receives the source language character string from the speech recognition unit 102 and the target language character string from the machine translation unit 103, and displays them. It also receives and displays similar examples and parallel translation examples from an example presentation unit 109 described later. A similar example is an example in the source language that is similar to the source language character string. A parallel translation example is the result of translating a similar example into the target language.

The example storage unit 105 stores a source language example (hereinafter also referred to as a source language example) and a target language example (hereinafter also referred to as a target language example) in association with each other. An example of the source language and an example of the target language stored in the example storage unit 105 will be described later with reference to FIG.
The example search unit 106 receives the source language character string from the speech recognition unit 102 and searches for a similar example similar to the source language character string from the source language examples stored in the example storage unit 105.
The pointing instruction detection unit 107 acquires position information corresponding to the position instructed by the user on the display unit 104.

The character string selection unit 108 receives the position information from the pointing instruction detection unit 107 and selects, as a selected character string, the source language character string or the target language character string corresponding to that position from among the character strings displayed on the display unit 104.
The example presentation unit 109 receives the selected character string from the character string selection unit 108 and the similar example and the parallel translation example related to the selected character string from the example search unit 106, and causes the display unit 104 to display the similar example and the parallel translation example. Also, the example presentation unit 109 highlights the selected character string, the selected similar example, and the parallel translation example.

Next, an example of the source language example and the target language example stored in the example storage unit 105 will be described with reference to FIG.
As shown in FIG. 2, a source language example 201, which is an example in the source language, and a target language example 202, which is the corresponding example in the target language, are stored in association with each other. Specifically, for example, a source language example 201 "I can't walk too much" (in Japanese) is stored in association with its translation, the target language example 202 "I can't walk so long distance.".
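For illustration only, the example storage unit 105 can be pictured as a table of aligned pairs; the following Python rendering uses the example pairs that appear later in FIG. 11, with the Japanese source side shown by its English gloss, and the field names are assumptions.

    # Hypothetical in-memory form of the example storage unit 105: each entry
    # pairs a source language example (Japanese in the patent, glossed here)
    # with its target language example.
    EXAMPLE_STORE = [
        {"source": "I can't walk too much",
         "target": "I can't walk so long distance."},
        {"source": "I don't want to walk too much",
         "target": "I don't want to walk."},
        {"source": "I want to walk tomorrow",
         "target": "Tomorrow, I'd like to walk."},
    ]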

Next, the operation of the speech translation apparatus 100 according to the present embodiment will be described with reference to the flowchart of FIG. 3. Although not shown in the flowchart, the speech recognition unit 102 and the machine translation unit 103 operate in parallel, so their processing is assumed to have been started before the processing of FIG. 3.
In step S301, the speech recognition unit 102 obtains a source language character string as a result of performing speech recognition processing.
In step S302, the display unit 104 displays the source language character string.
In step S303, the machine translation unit 103 obtains a target language character string as a result of machine translation processing.

  In step S304, the display unit 104 displays the target language character string. Alternatively, the display unit 104 may skip displaying the source language character string in step S302 and instead display the source language character string and the target language character string together after the target language character string has been obtained.

In step S305, the example search unit 106 performs an example search process. The example search process will be described later with reference to the flowchart of FIG.
In step S306, the pointing instruction detection unit 107 detects whether there is an instruction from the user, that is, whether the user has pointed at a target language character string whose meaning is unclear. For example, when the display unit 104 is a touch panel display, an instruction from the user is detected when the user touches a symbol indicating that a similar example and a parallel translation example exist. If an instruction from the user is detected, the process proceeds to step S307. If no instruction is detected, the process returns to step S301, and the same processing is repeated.

In step S307, the speech recognition unit 102 temporarily stops the speech recognition process.
In step S308, the example presentation unit 109 performs the example presentation process. The details of the example presentation process will be described later with reference to the flowchart of FIG. 5.
In step S309, the speech recognition unit 102 restarts the speech recognition process, and the same processing is repeated from step S301. The operation of the speech translation apparatus ends when there is no more utterance input or when the user instructs it to end the speech recognition process.
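Read as pseudocode, the loop of steps S301 to S309 can be sketched as follows; the unit interfaces (recognizer, translator, example_search, ui) are assumptions introduced only to make the control flow concrete, not the actual implementation.

    def speech_translation_loop(recognizer, translator, example_search, ui):
        # Simplified sketch of FIG. 3 under assumed interfaces.
        while recognizer.has_more_speech():
            src = recognizer.next_result()          # S301: source language character string
            ui.show_source(src)                     # S302
            tgt = translator.translate(src)         # S303: target language character string
            ui.show_target(tgt)                     # S304
            examples = example_search.search(src)   # S305: example search process (FIG. 4)
            if examples:
                ui.mark_with_symbol(src, tgt)       # symbol indicating examples exist (S403)
            if ui.user_pointed():                   # S306: pointing at an unintelligible part
                recognizer.pause()                  # S307
                ui.present_examples(examples)       # S308: example presentation process (FIG. 5)
                recognizer.resume()                 # S309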

Next, details of the operation of step S305 will be described with reference to the flowchart of FIG.
In step S401, the example search unit 106 receives a source language character string.
In step S402, the example search unit 106 searches the example storage unit 105 to determine whether a similar example exists for the received source language character string. In the similar example search, for example, the edit distance between the source language character string and each source language example may be calculated, and a source language example whose degree of matching is equal to or greater than a threshold may be determined to be a similar example. Alternatively, morphological analysis may be performed, and an example may be determined to be similar when the degree of word overlap is equal to or greater than a threshold. If a similar example exists, the process proceeds to step S403. If no similar example exists, the processing of steps S305 and S306 ends.
In step S403, the example presentation unit 109 causes the display unit 104 to display, in association with the source language character string for which a similar example exists, a symbol indicating that the similar example exists, and, in association with the corresponding target language character string, a symbol indicating that a parallel translation example exists.
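As one concrete but merely illustrative realization of the similarity test in step S402, the sketch below scores each stored source language example against the recognition result with a character-level ratio and keeps those above a threshold; the threshold value and the use of difflib are assumptions, and a word-overlap test after morphological analysis could be substituted.

    import difflib

    def find_similar_examples(source_string, example_store, threshold=0.75):
        # Return (similar example, parallel translation example) pairs whose
        # similarity to the speech recognition result meets the threshold.
        hits = []
        for entry in example_store:
            ratio = difflib.SequenceMatcher(None, source_string, entry["source"]).ratio()
            if ratio >= threshold:
                hits.append((ratio, entry["source"], entry["target"]))
        hits.sort(reverse=True)   # most similar first, matching the list order of step S501
        return [(src, tgt) for _, src, tgt in hits]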

Next, the presentation processing of the similar example and the parallel translation example in step S308 will be described with reference to the flowchart of FIG. Hereinafter, unless otherwise noted, similar examples and parallel translation examples are collectively referred to as examples.
In step S501, the example presentation unit 109 displays the examples together with a notification. The notification is a confirmation message indicating that there has been an instruction from the user asking to confirm the meaning. Only one example may be displayed, or a plurality of examples may be presented as a list. Any presentation method may be used: for example, the examples may be listed in descending order of similarity to the speech recognition result, all examples may be presented, or the history of previously presented examples may be consulted in deciding which examples to present.

In step S502, the pointing instruction detection unit 107 detects whether an example in the list has been pointed to, that is, whether an example has been selected. If an example is selected, the process proceeds to step S503. If no example is selected, the process proceeds to step S504.
In step S503, the example presentation unit 109 highlights the selected example. Specifically, when a parallel translation example is pointed to, it may be highlighted by, for example, reversing its character color. When a parallel translation example is highlighted, the corresponding similar example is also highlighted, and vice versa.
In step S504, the example presentation unit 109 presents a confirmation message (also simply referred to as a notification). The confirmation message is a message that asks the user to decide whether or not the selected example is appropriate.

In step S505, the pointing instruction detection unit 107 detects whether there is an instruction regarding deletion. An instruction regarding deletion is detected when, for example, the delete button is selected. If there is an instruction regarding deletion, the process proceeds to step S506. If there is no instruction regarding deletion, the process returns to step S502, and the same processing is repeated.
In step S506, the example presentation unit 109 causes the display unit 104 to display a confirmation message indicating that the content has not been transmitted to the other party, assuming that there is no appropriate example in the presented examples.
In step S507, the pointing instruction detection unit 107 detects whether or not the user has pointed in response to the confirmation message. If there is pointing, the process proceeds to step S508. If there is no pointing, the process waits until the user points.

In step S508, the pointing instruction detection unit 107 detects whether pointing from the user indicates affirmation. If the pointing from the user does not indicate affirmation, the process proceeds to step S509. If the pointing from the user indicates affirmation, the process proceeds to step S510.
In step S509, the example presentation unit 109 hides the confirmation message, cancels the highlighted display of the selected example, returns to the normal display, returns to step S502, and performs the same processing.
In step S510, the example presentation unit 109 adds and displays the selected example at a corresponding location in the display area.
In step S511, the example presentation unit 109 deletes the source language character string and the target language character string to be processed.
In step S512, the example presentation unit 109 hides the example list displayed in step S501. Thus, the example presentation process ends.

Next, an implementation example of the speech translation apparatus will be described with reference to FIG.
FIG. 6 shows an example in which the speech translation apparatus 100 according to the present embodiment is mounted on so-called tablet-shaped hardware. A speech translation apparatus 600 shown in FIG. 6 includes a housing 601, a touch panel display 602, and a microphone 603.
The housing 601 is mounted with a touch panel display 602 and a microphone 603.
The touch panel display 602, for example a capacitive type, has a pointing function (pointing instruction detection unit) that can detect which location has been pointed to when the display is touched with a finger, and a display function (display unit) that displays characters, images, and the like.
A general microphone may be used as the microphone 603, and description thereof is omitted here.

Next, an example of the screen display of the touch panel display 602 will be described with reference to FIG.
As an example of the screen layout, as shown in FIG. 7, a display area 701 in which the source language character string is displayed occupies the left half of the screen, and a display area 702 in which the target language character string is displayed occupies the right half. In addition, an utterance start button 703, a language switching button 704, a delete button 705, and an end button 706 are displayed at the right edge of the screen.

The utterance start button 703 is an area that is pointed to when the user instructs the start of an utterance. The language switching button 704 is an area that the user points to in order to switch between the source language and the target language. The delete button 705 is an area that is pointed to in order to delete an example or the like. The end button 706 is an area that is pointed to in order to end the speech recognition process.
The layout and configuration are not limited to those illustrated in FIG. 7; any arrangement and configuration may be used, such as a group of buttons that pops up as necessary. Instead of a touch panel display, the screen display and the input device may also be separate, such as a combination of a screen and a keyboard.

Next, a specific example of the operation of the speech translation apparatus according to this embodiment will be described with reference to FIGS. Here, an operation example using the speech translation apparatus 600 shown in FIG. 6 will be described.
FIG. 8 shows a display example when the user on the target language side speaks. In this case the utterance in the target language is machine-translated into the source language, and the processing described above applies with the roles of the source language (Japanese) and the target language (English) interchanged. Specifically, when the user utters the speech 801 "Have you already gone around here?", the speech recognition result 802-E "Have you already gone around here?" is displayed in the display area 702, and its machine translation result 802-J, a Japanese sentence with that meaning, is displayed in the display area 701.

  FIG. 9 shows a display example when the user on the source language side speaks. Specifically, the speech acquisition unit 101 acquires, as the uttered speech 901, a Japanese utterance meaning roughly "I want to look around, but I don't want to walk too much, so a bus tour would be good", and the source language character strings obtained as successive speech recognition results, 902-J "I want to look around", 903-J "I don't want to walk too much", and 904-J "I like a bus tour", are displayed in the display area 701. In addition, the target language character strings that are the machine translation results corresponding to these recognition results, 902-E "I would like to look around.", 903-E "Amari doesn't want to walk.", and 904-E "A bus tour is good.", are displayed in the display area 702. A symbol 905 indicates that a similar example and a parallel translation example exist. Here, it is assumed that the target language character string 903-E does not make sense to the target language user because of a machine translation error.

  FIG. 10 shows a case where the user on the target language side points to the target language character string 903-E that does not make sense. As the pointing method, for example, the symbol 905 may be touched, or the cursor 1001 may be aligned with the symbol 905. At that time, a confirmation message 1002-E and a corresponding confirmation message 1002-J are displayed. In the example of FIG. 10, the confirmation message 1002-J "What do you want to say?" is displayed in the display area 701, and the confirmation message 1002-E "Can you see what the partner wants to say?" is displayed in the display area 702.

  In FIG. 11, as a result of the user selecting the target language character string, similar examples of the source language character string and the corresponding parallel translation examples are displayed in the display areas 701 and 702, respectively. Specifically, by referring to the example storage unit 105, the similar examples 1101-J "I can't walk too much", 1102-J "I don't want to walk too much", and 1103-J "I want to walk tomorrow", and the parallel translation examples 1101-E "I can't walk so long distance.", 1102-E "I don't want to walk.", and 1103-E "Tomorrow, I'd like to walk." are displayed.

  FIG. 12 shows a case where the target language user selects a parallel translation example; the selected parallel translation example 1201-E and the corresponding similar example 1201-J are highlighted together. Here, "I can't walk so long distance." is selected and highlighted as the parallel translation example 1201-E, and the corresponding similar example 1201-J "I can't walk too much" is also highlighted. When the parallel translation example is selected, a confirmation message 1202 "Is this what you want to say?" is displayed in the display area 701 on the source language side. When many similar examples and parallel translation examples are displayed, the scroll bar 1104 may be used to scroll through them.

  In FIG. 13, the user on the source language side indicates by pointing whether or not to accept the content of the highlighted similar example. Specifically, in FIG. 13, "Yes" or "No" in the confirmation message 1202 is touched or designated with the cursor 1001, and the pointing instruction detection unit 107 detects which of "Yes" and "No" the user has selected.

  In FIG. 14, the user on the source language side has selected "Yes": the list display of the similar examples and parallel translation examples is cleared, the selected similar example and the corresponding parallel translation example are additionally displayed in the display areas 701 and 702, respectively, and at the same time the original source language character string and the original target language character string containing the translation error are deleted. For example, the source language character string 1401-J "I don't want to walk too much" is struck through, and the similar example "I can't walk too much" is displayed above it. Likewise, the target language character string 1401-E "Amari doesn't want to walk." is struck through, and the parallel translation example "I can't walk so long distance." is displayed above it. In this way, even when the target language user cannot understand the meaning of a translation result, selecting an example causes the corresponding example to be shown to the source language user. Since the source language user only needs to judge whether the selected similar example is appropriate, the intended content can be conveyed easily, regardless of the source language user's ability to rephrase the sentence.

The above example shows a case where the target language side user selects the bilingual example, but the source language side user may select the similar example. A specific example in which the user on the source language side selects a similar example will be described with reference to FIGS. 15 and 16.
As shown in FIG. 15, the user on the source language side selects a similar example. Here, when the similar example 1501-J "I don't want to walk too much" is selected, it is highlighted, and the corresponding parallel translation example 1501-E "I don't want to walk." in the display area 702 on the target language side is also highlighted. In addition, a confirmation message 1502 "Can you see what the partner wants to say?" is displayed in the display area on the target language side.

  In FIG. 16, the user on the target language side points, with the cursor 1001 or the like, to indicate whether or not he or she accepts the content of the highlighted parallel translation example. In this way, when a sentence for which a similar example exists is included in the source language character strings of the content spoken by the source language user, the source language user can also select the similar example and rephrase the utterance with it.

  Next, FIG. 17 shows a case where there is no appropriate example in the similar example and the parallel translation example.

When the user on the target language side or the user on the source language side determines that there is no appropriate example and does not select one, no example is inserted for the source language character string and the target language character string being processed. Instead, the source language character string and the target language character string being processed are deleted, and a confirmation message 1701 is displayed. For example, the confirmation message 1701 may display content such as "I'm sorry, it was not conveyed."
In this case, the content of the target language character string being processed is not conveyed to the user on the target language side, but the user on the source language side can at least see that the content of the machine-translated utterance was not conveyed to the target language user, and can therefore respond by uttering something else.

  According to the first embodiment described above, the apparatus searches for a similar example for each source language character string and, when a similar example exists and the user selects it, presents the similar example and the parallel translation example. In this way, the users on both sides can cooperate in selecting an example for an unintelligible part of the source language character string of the speech recognition result or the target language character string of the machine translation result, and can thus converse smoothly. Furthermore, since speech recognition is stopped and examples are presented only when a parallel translation example is selected, the conversation can proceed without impairing its responsiveness.

(Second Embodiment)
The second embodiment differs from the first embodiment in that the example storage unit 105 stores an annotation in association with a source language example or a target language example. When an expression in the source language is translated into the target language, its meaning may be ambiguous. For example, when a Japanese user utters an expression rendered here as "It's fine", it is ambiguous whether the user intends a refusal ("not necessary") or an acceptance ("that's OK"). Similarly, for the English expression "You're welcome.", it is ambiguous whether the user intends a response to thanks ("Don't mention it") or a welcome ("Welcome to you").
Therefore, in the second embodiment, by associating an annotation with a source language example or a target language example, an example that correctly reflects the intention of the user who speaks the source language or of the user who speaks the target language can be presented.

The speech translation apparatus according to the second embodiment is the same as the speech translation apparatus 100 according to the first embodiment, but the example stored in the example storage unit 105 and the operation of the example search unit 106 are different.
The example storage unit 105 associates the source language example and the annotation, and stores the target language example and the annotation in association with each other.
When there is a similar example in the source language character string, the example search unit 106 further searches for whether there is an annotation in the similar example.

Next, an example of a table stored in the example storage unit 105 according to the second embodiment will be described with reference to FIG.
As shown in FIG. 18, a source language example 1801 is stored in association with an annotation 1802, and a target language example 1803 is stored in association with an annotation 1804. Specifically, the source language example 1805-J (rendered here as "It's fine") is stored in association with the annotation 1805-1 "OK", and the same expression, as the source language example 1806-J, is stored in association with the annotation 1806-1 "unnecessary". In this way, a source language example that has a plurality of meanings is annotated once for each of those meanings.
Here, the target language example that is the translation of a source language example having such annotations stores the translation corresponding to the annotation, not a translation of the source language example alone. That is, "That's good." is stored as the target language example 1805-E corresponding to the source language example 1805-J and the annotation 1805-1 "OK", and "No thank you." is stored as the target language example 1806-E corresponding to the source language example 1806-J and the annotation 1806-1 "unnecessary".

  Similarly, when annotations exist for a target language example, the target language example 1807-E "You're welcome." is associated with the annotation 1807-1 "Welcome to you.", and the target language example 1808-E "You're welcome." is associated with the annotation 1808-1 "Don't mention it." As in the case of source language examples with annotations, the source language example corresponding to such a target language example stores the source-language translation that corresponds to the annotation. For example, the source language example 1807-J, a Japanese expression that is the source-language translation of the annotation 1807-1 "Welcome to you.", is stored in association with the target language example 1807-E "You're welcome." and the annotation 1807-1.

  Similarly, the source language example 1808-J, a Japanese expression that is the source-language translation of the annotation 1808-1 "Don't mention it.", is stored in association with the target language example 1808-E "You're welcome." and the annotation 1808-1. Thus, when annotations exist for the same source language example, the target language examples stored in association with it are the translations corresponding to each annotation; conversely, when annotations exist for the same target language example, the source language examples stored in association with it are the translations corresponding to each annotation.
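A hypothetical rendering of the table of FIG. 18 may make this asymmetry easier to see: the annotation disambiguates the ambiguous side, and the opposite side stores the translation matching that annotation. The field names and the English glosses of the Japanese entries below are assumptions for illustration.

    # Hypothetical rendering of FIG. 18. The same ambiguous expression appears
    # once per intended meaning, and the other language stores the translation
    # that corresponds to the annotation.
    ANNOTATED_EXAMPLES = [
        # the same source expression (glossed "It's fine") with two intents
        {"source": "It's fine", "source_note": "OK",          "target": "That's good."},
        {"source": "It's fine", "source_note": "unnecessary", "target": "No thank you."},
        # the same target expression with two intents
        {"source": "I welcome you", "target": "You're welcome.", "target_note": "Welcome to you."},
        {"source": "Not at all",    "target": "You're welcome.", "target_note": "Don't mention it."},
    ]

    def presentation_label(row):
        # Build a list label of the kind shown in FIG. 19, e.g. "It's fine [OK]".
        note = row.get("source_note")
        return f'{row["source"]} [{note}]' if note else row["source"]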

Next, a specific example of the operation of the speech translation apparatus according to the second embodiment will be described with reference to FIG.
FIG. 19 is similar to the example shown in FIG. 11, but shows a case in which an annotation is displayed together with a similar example when the list of examples is displayed. Specifically, the similar examples rendered here as "It's fine (OK)" and "It's fine (unnecessary)" are displayed as a list. It is desirable to distinguish the symbol 1901, used when an annotation exists for the similar example, from the symbol used when no annotation exists; for example, the symbol may be outlined when there is no annotation and filled when there is one. This allows the user to recognize that the meaning is ambiguous and that an annotation exists.

  In the example of FIG. 19, two similar examples, 1902-J "It's fine [OK]" and 1903-J "It's fine [unnecessary]", are presented, together with the parallel translation examples 1902-E1 "That's fine.", 1902-E2 "All right.", and 1903-E "No thank you.". This is because, when several parallel translation examples correspond to the same similar example and annotation, it is sufficient to display that similar example and annotation only once.

  According to the second embodiment described above, when an annotation is associated with an example, the annotation is displayed together with the example, so that users on both the target language side and the source language side can refer to the annotation and select the example that carries the appropriate meaning for an ambiguous expression.

(Third embodiment)
The first and second embodiments described above assume a configuration in a single device, but the processing may be distributed to a plurality of devices. In the third embodiment, it is assumed that processing is realized separately for a server and a client.
In general, when speech translation processing is performed on a device such as a mobile phone or tablet PC with limited computing and storage resources, restrictions are imposed on the amount of data and on the freedom of the search space. Therefore, the processing load on the client can be reduced by running the heaviest processing, namely speech recognition, machine translation, and example search, on a server whose computing and storage resources can be expanded easily.

Here, a speech recognition system including the speech translation apparatus according to the third embodiment will be described with reference to the block diagram of FIG.
The voice recognition system shown in FIG. 20 includes a server 2000 and a client 2500.

The server 2000 includes a speech recognition unit 2001, a machine translation unit 2002, an example search unit 2003, an example storage unit 2004, a server communication unit 2005, and a server control unit 2006.
The speech recognition unit 2001, the machine translation unit 2002, the example search unit 2003, and the example storage unit 2004 perform the same operations as the speech recognition unit 102, the machine translation unit 103, the example search unit 106, and the example storage unit 105 according to the first embodiment, and a description thereof is therefore omitted here.
A server communication unit 2005 transmits / receives data to / from a client communication unit 2506 described later.
A server control unit 2006 controls the operation of the entire server.

The client 2500 includes a voice acquisition unit 2501, a display unit 2502, a pointing instruction detection unit 2503, a character string selection unit 2504, an example presentation unit 2505, a client communication unit 2506, and a client control unit 2507.
The speech acquisition unit 2501, the display unit 2502, the pointing instruction detection unit 2503, the character string selection unit 2504, and the example presentation unit 2505 perform the same processing as the speech acquisition unit 101, the display unit 104, the pointing instruction detection unit 107, the character string selection unit 108, and the example presentation unit 109 according to the first embodiment, and a description thereof is therefore omitted here.
A client communication unit 2506 transmits and receives data to and from the server communication unit 2005.
A client control unit 2507 performs overall control of the client 2500.

Next, an example of speech translation processing by the server 2000 and the client 2500 will be described.
In the client 2500, the audio acquisition unit 2501 acquires audio from the user, and the client communication unit 2506 transmits an audio signal to the server 2000.
In the server 2000, the server communication unit 2005 receives the speech signal from the client 2500, and the speech recognition unit 2001 performs speech recognition processing on it. The machine translation unit 2002 then performs machine translation processing on the speech recognition result. The server communication unit 2005 transmits the speech recognition result and the machine translation result to the client 2500. Further, the example search unit 2003 searches for similar examples similar to the speech recognition result, and if any exist, the similar examples and the corresponding parallel translation examples are also transmitted to the client 2500.

  In the client 2500, the client communication unit 2506 receives the speech recognition result, the machine translation result, and the similar examples and parallel translation examples, and the display unit 2502 displays the speech recognition result and the machine translation result. When the pointing instruction detection unit 2503 detects an instruction from the user, the example presentation unit 2505 presents the parallel translation examples and similar examples related to the selected character string.

  When similar examples exist for the speech recognition result, the client 2500 may be configured to receive only an arbitrary number of the extracted similar examples and corresponding parallel translation examples rather than all of them. In this case, the client 2500 transmits a request to the server 2000 to receive further similar examples, or corresponding parallel translation examples, that have not yet been received. The example search unit 2003 of the server 2000 extracts the not-yet-extracted similar examples and corresponding parallel translation examples, and the server communication unit 2005 transmits them. In the client 2500, the client communication unit 2506 receives these similar examples and parallel translation examples, and the newly received similar examples and parallel translation examples are displayed.

  Alternatively, the server 2000 may transmit to the client 2500 only a flag indicating that a similar example exists. When the user points, the client 2500 sends the server 2000 a request for the similar examples and parallel translation examples related to the selected character string, and the server 2000 sends them to the client 2500 in response. In this way, the example search process is performed only when necessary, so the speech translation processing on the client can operate at higher speed.
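One way such an on-demand exchange could be organized is sketched below; the function names and message fields are illustrative assumptions, since the embodiment does not prescribe a particular protocol.

    # Hypothetical message flow for the flag-and-request variant described above.

    def server_handle_utterance(audio, recognizer, translator, example_search):
        # Server side: recognize and translate, but report only a flag for examples.
        src = recognizer.recognize(audio)
        tgt = translator.translate(src)
        has_examples = bool(example_search.search(src))
        return {"source": src, "target": tgt, "has_examples": has_examples}

    def server_handle_example_request(selected_source, example_search):
        # Server side: run the full example search only when the client asks for it.
        pairs = example_search.search(selected_source)
        return {"examples": [{"similar": s, "parallel": p} for s, p in pairs]}

    def client_on_pointing(selected_source, send_request):
        # Client side: when the user points at a marked character string,
        # request the similar examples and parallel translation examples.
        reply = send_request({"type": "examples", "source": selected_source})
        return reply["examples"]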

  According to the third embodiment described above, the processing load on the client can be reduced by running the processing with the largest load, namely speech recognition, machine translation, and example search, on a server whose computing and storage resources can be expanded easily.

The instructions in the processing procedures shown in the above-described embodiments can be executed on the basis of a program, that is, software. A general-purpose computer system can store this program in advance and read it to obtain the same effects as those of the speech translation apparatus described above. The instructions described in the above-described embodiments are recorded, as a program executable by a computer, on a magnetic disk (flexible disk, hard disk, etc.), an optical disc (CD-ROM, CD-R, CD-RW, DVD-ROM, DVD±R, DVD±RW, Blu-ray (registered trademark) Disc, etc.), a semiconductor memory, or a similar recording medium. Any storage format may be used as long as the recording medium is readable by the computer or an embedded system. If the computer reads the program from the recording medium and causes the CPU to execute the instructions described in the program, the same operation as the speech translation apparatus of the above-described embodiments can be realized. The computer may, of course, acquire or read the program through a network.
In addition, part of each process for implementing this embodiment may be executed by the OS (operating system), database management software, or MW (middleware) such as network software running on the computer, based on the instructions of the program installed from the recording medium into the computer or embedded system.
Furthermore, the recording medium in the present embodiment is not limited to a medium independent of a computer or an embedded system, but also includes a recording medium in which a program transmitted via a LAN, the Internet, or the like is downloaded and stored or temporarily stored.
Further, the number of recording media is not limited to one; the case where the processing of this embodiment is executed from a plurality of media is also included in the recording medium of this embodiment, and the media may have any configuration.

The computer or embedded system in the present embodiment executes each process of the present embodiment based on the program stored in the recording medium, and may have any configuration, from a single apparatus such as a personal computer or microcomputer to a system in which a plurality of apparatuses are connected via a network.
The computer in this embodiment is not limited to a personal computer; it also includes an arithmetic processing device, a microcomputer, or the like included in an information processing device.

  Although several embodiments of the present invention have been described, these embodiments are presented by way of example and are not intended to limit the scope of the invention. These novel embodiments can be implemented in various other forms, and various omissions, replacements, and changes can be made without departing from the spirit of the invention. These embodiments and modifications thereof are included in the scope and gist of the invention, and are included in the invention described in the claims and the equivalents thereof.

100, 600: speech translation apparatus; 101, 2501: speech acquisition unit; 102, 2001: speech recognition unit; 103, 2002: machine translation unit; 104, 2502: display unit; 105, 2004: example storage unit; 106, 2003: example search unit; 107, 2503: pointing instruction detection unit; 108, 2504: character string selection unit; 109, 2505: example presentation unit; 201, 1801, 1805, 1806, 1807, 1808: source language examples; 202, 1803, 1805, 1806, 1807, 1808: target language examples; 601: housing; 602: touch panel display; 603: microphone; 701, 702: display areas; 703: utterance start button; 704: language switching button; 705: delete button; 706: end button; 801, 901: uttered speech; 802-E: speech recognition result; 802-J: machine translation result; 902-J, 903-J, 904-J, 1401-J: source language character strings; 902-E, 903-E, 904-E, 1401-E: target language character strings; 905, 1901: symbols; 1001: cursor; 1002, 1202, 1502, 1701: confirmation messages; 1101-J, 1102-J, 1103-J, 1201-J, 1501-J, 1902-J, 1903-J: similar examples; 1101-E, 1102-E, 1103-E, 1201-E, 1501-E, 1902-E1, 1902-E2, 1903-E: parallel translation examples; 1104: scroll bar; 1802, 1804, 1805-1, 1806-1, 1807-1, 1808-1: annotations; 2000: server; 2005: server communication unit; 2006: server control unit; 2500: client; 2506: client communication unit; 2507: client control unit.

Claims (11)

  1. An acquisition unit for acquiring speech in a first language as an audio signal;
    A speech recognition unit that sequentially performs speech recognition on the speech signal and obtains a first language character string that is a character string of a speech recognition result;
    A translation unit that translates the first language character string into a second language different from the first language, and obtains a second language character string that is a character string of a translation result;
    A search unit that searches, for each first language character string, for a similar example that is an example in the first language similar to the first language character string, and obtains, when the similar example exists, a parallel example that is the result of translating the similar example into the second language;
    A selection unit that selects, as a selection character string, at least one of a first language character string in which the similar example exists and a second language character string in which the parallel example exists;
    A speech translation apparatus, comprising: an example presentation unit that presents one or more similar examples and parallel translation examples related to the selected character string.
  2. The speech translation apparatus according to claim 1, further comprising a display unit that displays the first language character string and the similar example, and the second language character string and the parallel example, respectively,
    wherein, when a similar example exists for the first language character string, the example presentation unit causes the display unit to display a first symbol, indicating that the example exists, in association with the first language character string and the corresponding second language character string.
  3.   The speech translation apparatus according to claim 1, wherein the example presentation unit presents a list of a plurality of similar examples and a plurality of parallel translation examples when the selected character string is selected.
  4.   The speech translation apparatus according to any one of claims 1 to 3, wherein, when either the similar example or the parallel translation example is selected, the example presentation unit highlights both the similar example and the parallel translation example, and presents a first notification prompting a determination as to whether the highlighted similar example or the highlighted parallel translation example is appropriate.
  5.   5. The speech translation apparatus according to claim 1, further comprising a storage unit that stores the similar example and the bilingual example corresponding to the similar example in association with each other.
  6. The speech translation apparatus according to claim 5, wherein the storage unit stores the similar example, the parallel translation example, and an annotation explaining the intention of at least one of the similar example and the parallel translation example in association with one another, and
    the display unit displays both the similar example and the annotation when the annotation relates to the similar example, and displays both the parallel translation example and the annotation when the annotation relates to the parallel translation example.
  7. The speech translation apparatus according to claim 5, wherein the storage unit stores the similar example, the parallel translation example, and an annotation explaining the intention of at least one of the similar example and the parallel translation example in association with one another, and
    when a similar example exists for the first language character string and the annotation is associated with the similar example, the example presentation unit causes the display unit to display a second symbol, indicating that the annotation exists, in association with the first language character string and the corresponding second language character string.
  8.   The speech translation apparatus according to any one of claims 2 to 7, wherein the example presentation unit displays, on the display unit, a second notification prompting confirmation in the first language when the second language character string is selected.
  9. An acquisition unit for acquiring speech in a first language as an audio signal;
    A display unit for displaying a first language character string, which is a character string of a speech recognition result obtained by sequentially performing speech recognition on the speech signal, and a second language character string, which is a character string of a translation result obtained by translating the first language character string into a second language different from the first language;
    A detection unit for detecting a position on the display unit instructed by a user;
    A selection unit that selects at least one of the first language character string and the second language character string as a selection character string based on the position;
    With respect to the selected character string, one or more similar examples that are examples in the first language that are similar to the first language character string, and one or more parallel examples that are the result of translating the similar examples into the second language; An example presentation unit for presenting
    The speech translation device, wherein the display unit further displays the presented similar example and the parallel translation example.
  10. A speech translation method comprising:
    acquiring an utterance in a first language as a speech signal;
    sequentially performing speech recognition on the speech signal to obtain a first language character string, which is a character string of a speech recognition result;
    translating the first language character string into a second language different from the first language to obtain a second language character string, which is a character string of a translation result;
    searching, for each first language character string, for a similar example, which is an example in the first language similar to the first language character string, and, when the similar example exists, obtaining the similar example and a bilingual example, which is a result of translating the similar example into the second language;
    selecting, in accordance with a user instruction, at least one of a first language character string for which the similar example exists and a second language character string for which the bilingual example exists as a selected character string; and
    presenting one or more similar examples and bilingual examples related to the selected character string.
  11. A speech translation program for causing a computer to function as:
    acquisition means for acquiring an utterance in a first language as a speech signal;
    speech recognition means for sequentially performing speech recognition on the speech signal to obtain a first language character string, which is a character string of a speech recognition result;
    translation means for translating the first language character string into a second language different from the first language to obtain a second language character string, which is a character string of a translation result;
    search means for searching, for each first language character string, for a similar example, which is an example in the first language similar to the first language character string, and, when the similar example exists, obtaining the similar example and a bilingual example, which is a result of translating the similar example into the second language;
    selection means for selecting, in accordance with a user instruction, at least one of a first language character string for which the similar example exists and a second language character string for which the bilingual example exists as a selected character string; and
    example presentation means for presenting one or more similar examples and bilingual examples related to the selected character string.
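To give a concrete picture of the annotated example store described in claims 5 to 7, the following is a minimal illustrative sketch, not the implementation disclosed in the specification: the class names, the use of Python, the SequenceMatcher-based similarity measure, and the sample sentences are all assumptions made here purely for illustration.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher
from typing import List, Optional

@dataclass
class ExampleEntry:
    """One record of the example store: a first-language example, its
    translation, and an optional annotation explaining its intent."""
    similar_example: str               # example sentence in the first language
    bilingual_example: str             # its translation into the second language
    annotation: Optional[str] = None   # usage/intent note, if any

class ExampleStore:
    """Stores similar/bilingual example pairs and looks them up by string
    similarity against a recognized first-language character string."""

    def __init__(self, entries: List[ExampleEntry]):
        self.entries = entries

    def search(self, first_lang_text: str, threshold: float = 0.6) -> List[ExampleEntry]:
        """Return entries whose first-language example resembles the input,
        most similar first; SequenceMatcher stands in for whatever similarity
        measure a real system would use."""
        scored = [
            (SequenceMatcher(None, first_lang_text, e.similar_example).ratio(), e)
            for e in self.entries
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [entry for score, entry in scored if score >= threshold]

# Toy usage: an annotated entry behaves like a plain one; the annotation simply
# travels with the hit so a display unit could render it alongside the example.
store = ExampleStore([
    ExampleEntry("切符はどこで買えますか", "Where can I buy a ticket?",
                 annotation="Polite form, suitable at stations or box offices."),
    ExampleEntry("これはいくらですか", "How much is this?"),
])
for hit in store.search("切符を買えますか"):
    print(hit.similar_example, "->", hit.bilingual_example, hit.annotation or "")
```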
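Similarly, the method of claim 10 and the program of claim 11 can be pictured as a small pipeline. The sketch below is only a schematic under assumed interfaces: the recognizer, translator, and example search are injected callables, and names such as SpeechTranslationSession are invented here rather than taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class DisplayedPair:
    """One row of the display: a recognized first-language string, its
    translation, and any (similar example, bilingual example) pairs found."""
    first_lang: str
    second_lang: str
    examples: List[Tuple[str, str]] = field(default_factory=list)

class SpeechTranslationSession:
    """Wires together the units named in claims 9 to 11; the claims do not
    prescribe any particular recognition or translation engine."""

    def __init__(self,
                 recognize: Callable[[bytes], List[str]],
                 translate: Callable[[str], str],
                 find_examples: Callable[[str], List[Tuple[str, str]]]):
        self.recognize = recognize          # speech recognition means
        self.translate = translate          # translation means
        self.find_examples = find_examples  # search means
        self.display: List[DisplayedPair] = []

    def on_audio(self, audio: bytes) -> None:
        """Acquire a speech signal and, for each sequentially recognized
        string, translate it and look up similar/bilingual examples."""
        for first_lang in self.recognize(audio):
            self.display.append(DisplayedPair(
                first_lang=first_lang,
                second_lang=self.translate(first_lang),
                examples=self.find_examples(first_lang),
            ))

    def on_select(self, index: int) -> List[Tuple[str, str]]:
        """Selection means: the user picks a displayed string (e.g. via a touch
        position mapped to a row index); example presentation means returns the
        similar/bilingual examples to show for it."""
        return self.display[index].examples

# Toy usage with stub engines (placeholders, not real recognizers or translators):
session = SpeechTranslationSession(
    recognize=lambda audio: ["切符はどこで買えますか"],
    translate=lambda s: "Where can I buy a ticket?",
    find_examples=lambda s: [("切符を買いたいのですが", "I would like to buy a ticket.")],
)
session.on_audio(b"")        # pretend audio bytes arrived
print(session.on_select(0))  # examples for the first displayed row
```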
JP2012146880A 2012-06-29 2012-06-29 Speech translation apparatus, method and program Active JP5653392B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2012146880A JP5653392B2 (en) 2012-06-29 2012-06-29 Speech translation apparatus, method and program

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2012146880A JP5653392B2 (en) 2012-06-29 2012-06-29 Speech translation apparatus, method and program
US13/859,152 US9002698B2 (en) 2012-06-29 2013-04-09 Speech translation apparatus, method and program
CN201310130904.1A CN103514153A (en) 2012-06-29 2013-04-16 Speech translation apparatus, method and program
US14/670,064 US20150199341A1 (en) 2012-06-29 2015-03-26 Speech translation apparatus, method and program

Publications (2)

Publication Number Publication Date
JP2014010623A JP2014010623A (en) 2014-01-20
JP5653392B2 (en) 2015-01-14

Family

ID=49778997

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2012146880A Active JP5653392B2 (en) 2012-06-29 2012-06-29 Speech translation apparatus, method and program

Country Status (3)

Country Link
US (2) US9002698B2 (en)
JP (1) JP5653392B2 (en)
CN (1) CN103514153A (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5653392B2 (en) * 2012-06-29 2015-01-14 株式会社東芝 Speech translation apparatus, method and program
CA2906399A1 (en) 2013-03-15 2014-10-09 Translate Abroad, Inc. Systems and methods for displaying foreign character sets and their translations in real time on resource-constrained mobile devices
US8965129B2 (en) 2013-03-15 2015-02-24 Translate Abroad, Inc. Systems and methods for determining and displaying multi-line foreign language translations in real time on mobile devices
US9747899B2 (en) * 2013-06-27 2017-08-29 Amazon Technologies, Inc. Detecting self-generated wake expressions
JP6235280B2 (en) 2013-09-19 2017-11-22 株式会社東芝 Simultaneous audio processing apparatus, method and program
JP6178198B2 (en) 2013-09-30 2017-08-09 株式会社東芝 Speech translation system, method and program
JP2015153108A (en) 2014-02-13 2015-08-24 株式会社東芝 Voice conversion support device, voice conversion support method, and program
US9524293B2 (en) * 2014-08-15 2016-12-20 Google Inc. Techniques for automatically swapping languages and/or content for machine translation
JP2016095727A (en) * 2014-11-14 2016-05-26 シャープ株式会社 Display device, server, communication support system, communication support method, and control program
USD749115S1 (en) 2015-02-20 2016-02-09 Translate Abroad, Inc. Mobile device with graphical user interface
JP6090757B2 (en) * 2015-04-14 2017-03-08 シントレーディング株式会社 Interpreter distribution device, interpreter distribution method, and program
US9836457B2 (en) 2015-05-25 2017-12-05 Panasonic Intellectual Property Corporation Of America Machine translation method for performing translation between languages
USD797764S1 (en) * 2015-11-05 2017-09-19 Samsung Electronics Co., Ltd. Display screen or portion thereof with animated graphical user interface
USD791182S1 (en) * 2015-11-26 2017-07-04 Guangzhou Shenma Mobile Information Technology Co., Ltd. Display screen with graphical user interface
USD791823S1 (en) * 2015-11-26 2017-07-11 Guangzhou Shenma Mobile Information Technology Co., Ltd. Display screen with graphical user interface
WO2017138076A1 (en) * 2016-02-08 2017-08-17 三菱電機株式会社 Input display control device, input display control method, and input display system
CN106055544A (en) * 2016-06-18 2016-10-26 哈尔滨理工大学 Foreign language learning translation device
JP6448838B2 (en) * 2018-06-12 2019-01-09 三菱電機株式会社 Display control apparatus, display control method, and program

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10162005A (en) * 1996-11-27 1998-06-19 Sony Corp Storage medium, retreival device and retrieval method
JP2003029776A (en) * 2001-07-12 2003-01-31 Matsushita Electric Ind Co Ltd Voice recognition device
JP4042360B2 (en) * 2001-07-18 2008-02-06 日本電気株式会社 Automatic interpretation system, method and program
US20030154080A1 (en) * 2002-02-14 2003-08-14 Godsey Sandra L. Method and apparatus for modification of audio input to a data processing system
JP4559946B2 (en) * 2005-09-29 2010-10-13 株式会社東芝 Input device, input method, and input program
CN101008942A (en) * 2006-01-25 2007-08-01 北京金远见电脑技术有限公司 Machine translation device and method thereof
JP4786384B2 (en) * 2006-03-27 2011-10-05 株式会社東芝 Audio processing apparatus, audio processing method, and audio processing program
JP4557919B2 (en) * 2006-03-29 2010-10-06 株式会社東芝 Audio processing apparatus, audio processing method, and audio processing program
CN101295296A (en) * 2007-04-28 2008-10-29 东 舒 Simultaneous translator
JP5100445B2 (en) * 2008-02-28 2012-12-19 株式会社東芝 Machine translation apparatus and method
JP2009205579A (en) * 2008-02-29 2009-09-10 Toshiba Corp Speech translation device and program
JP5413622B2 (en) * 2009-04-30 2014-02-12 日本電気株式会社 Language model creation device, language model creation method, and program
JP5403696B2 (en) * 2010-10-12 2014-01-29 株式会社Nec情報システムズ Language model generation apparatus, method and program thereof
JP2013206253A (en) * 2012-03-29 2013-10-07 Toshiba Corp Machine translation device, method and program
JP5653392B2 (en) * 2012-06-29 2015-01-14 株式会社東芝 Speech translation apparatus, method and program

Also Published As

Publication number Publication date
US20150199341A1 (en) 2015-07-16
JP2014010623A (en) 2014-01-20
US20140006007A1 (en) 2014-01-02
CN103514153A (en) 2014-01-15
US9002698B2 (en) 2015-04-07

Similar Documents

Publication Publication Date Title
US9165257B2 (en) Typing assistance for editing
AU2012227212B2 (en) Consolidating speech recognition results
Schalkwyk et al. “Your word is my command”: Google search by voice: A case study
JP3822990B2 (en) Translation device, recording medium
KR101324910B1 (en) Automatically creating a mapping between text data and audio data
JP5362095B2 (en) Input method editor
TWI293455B (en) System and method for disambiguating phonetic input
JP2014517397A (en) Context-aware input engine
JP2007280364A (en) Method and device for switching/adapting language model
US8954329B2 (en) Methods and apparatus for acoustic disambiguation by insertion of disambiguating textual information
US9519641B2 (en) Photography recognition translation
US9298287B2 (en) Combined activation for natural user interface systems
US20090249198A1 (en) Techniques for input recogniton and completion
TWI488174B (en) Automatically creating a mapping between text data and audio data
JP2010524138A (en) Multiple mode input method editor
KR101781557B1 (en) Method and system for facilitating text input
US9378739B2 (en) Identifying corresponding positions in different representations of a textual work
KR20130082339A (en) Method and apparatus for performing user function by voice recognition
KR20100029221A (en) Detecting name entities and new words
US20160203125A1 (en) Building conversational understanding systems using a toolset
US20140304605A1 (en) Information processing apparatus, information processing method, and computer program
KR20140097516A (en) Real-time natural language processing of datastreams
EP3243199B1 (en) Headless task completion within digital personal assistants
US20140379334A1 (en) Natural language understanding automatic speech recognition post processing
US8386231B2 (en) Translating languages in response to device motion

Legal Events

Date Code Title Description
RD04  Notification of resignation of power of attorney | Free format text: JAPANESE INTERMEDIATE CODE: A7424 | Effective date: 20131219
RD04  Notification of resignation of power of attorney | Free format text: JAPANESE INTERMEDIATE CODE: A7424 | Effective date: 20131226
RD04  Notification of resignation of power of attorney | Free format text: JAPANESE INTERMEDIATE CODE: A7424 | Effective date: 20140109
A621  Written request for application examination | Free format text: JAPANESE INTERMEDIATE CODE: A621 | Effective date: 20140325
A977  Report on retrieval | Free format text: JAPANESE INTERMEDIATE CODE: A971007 | Effective date: 20140723
A131  Notification of reasons for refusal | Free format text: JAPANESE INTERMEDIATE CODE: A131 | Effective date: 20140729
A521  Written amendment | Free format text: JAPANESE INTERMEDIATE CODE: A523 | Effective date: 20140929
TRDD  Decision of grant or rejection written
A01   Written decision to grant a patent or to grant a registration (utility model) | Free format text: JAPANESE INTERMEDIATE CODE: A01 | Effective date: 20141021
A61   First payment of annual fees (during grant procedure) | Free format text: JAPANESE INTERMEDIATE CODE: A61 | Effective date: 20141118
R151  Written notification of patent or utility model registration | Ref document number: 5653392 | Country of ref document: JP | Free format text: JAPANESE INTERMEDIATE CODE: R151