JP2008083446A - Pronunciation learning support device and pronunciation learning support program

Pronunciation learning support device and pronunciation learning support program

Info

Publication number
JP2008083446A
Authority
JP
Japan
Prior art keywords
pronunciation
phrase
user voice
area
specified
Prior art date
Legal status
Granted
Application number
JP2006263945A
Other languages
Japanese (ja)
Other versions
JP4840052B2 (en)
Inventor
Toshihisa Nakamura
利久 中村
Original Assignee
Casio Comput Co Ltd
カシオ計算機株式会社
Priority date
Filing date
Publication date
Application filed by Casio Comput Co Ltd (カシオ計算機株式会社)
Priority to JP2006263945A
Publication of JP2008083446A
Application granted
Publication of JP4840052B2
Application status: Active
Anticipated expiration

Abstract

[Problem] To make it possible to learn the pronunciation used in a desired area.
[Solution] In the electronic dictionary device 1, after the dictionary database 85a of the English-Japanese dictionary “Lee ○ 's” for American English is designated, that is, after “America” is designated as the pronunciation learning target country (FIG. (a)), the learning target phrase “blessed” is input by operating the character keys 13c and the like (FIG. (b)), and the phonetic symbol data and explanation information data of the headword corresponding to the learning target phrase “blessed” are displayed (FIG. (c)). When the pronunciation learning key 13j is operated, user voice data is generated from the user voice input to the recording unit 4 (FIG. (d)); the user voice and the model voice are then frequency-analyzed based on the user voice data and the model voice data corresponding to the pronunciation learning target country “America”, the analysis results are compared and evaluated, and the evaluation result is displayed on the display 10 (FIG. (e)).
[Selection] Figure 8

Description

  The present invention relates to a pronunciation learning support device and a pronunciation learning support program for supporting pronunciation learning.

  Conventionally, pronunciation learning support devices such as electronic dictionary devices capable of outputting voice have enhanced the pronunciation learning effect by comparing and evaluating a model voice stored internally against the user voice uttered by the user (for example, see Patent Document 1).

Incidentally, the pronunciation of a phrase may differ from one area to another, where an area may be a country or region, a race, a generation, or an occupation. In other words, if areas are grouped by some predetermined concept, the pronunciation of a word can differ from area to area. Taking English as an example, there are areas by country and region such as the UK, the United States, Canada, Australia, and Hong Kong; areas by race such as white, black, Indian, and Japanese speakers; areas by generation such as children, young people, and the elderly; and areas by occupation such as manual workers and IT-related workers, and the pronunciation of English differs in each of these areas.
Japanese Patent Laid-Open No. 11-296060

  However, the electronic dictionary device of Patent Document 1 stores only typical model voices for American English and British English, so it cannot be used to learn the pronunciation of a desired area such as a particular country, race, or occupation.

  An object of the present invention is to provide a pronunciation learning support device and a pronunciation learning support program with which the pronunciation used in a desired area (a country, race, occupation, or the like) can be learned.

The invention according to claim 1 is a pronunciation learning support device (for example, the electronic dictionary device 1 of FIG. 1), comprising:
area-specific pronunciation information storage means (for example, the dictionary databases 85a to 85e in FIG. 2) that stores phrases and model pronunciation information (for example, phonetic symbols) of the phrases in association with each area;
phrase/area input means (for example, the input unit 5 in FIG. 2; steps S1 to S2 in FIG. 6) that inputs, based on a user operation, any phrase and area stored in the area-specific pronunciation information storage means as a specified phrase and a specified area;
user voice input means (for example, the recording unit 4 in FIG. 2; step S6 in FIG. 6) that captures the user voice for the specified phrase; and
user voice evaluation means (for example, the CPU 6 and the pronunciation learning support program 84 in FIG. 2; step S8 in FIG. 6) that evaluates the pronunciation of the user voice captured by the user voice input means based on the model pronunciation information corresponding to the specified phrase and the specified area.

  Here, the areas include areas by country or region, areas by race, areas by generation, and areas by occupation.

The invention according to claim 2 is the pronunciation learning support device according to claim 1, further comprising model voice output means (for example, the voice output unit 3 in FIG. 2) that outputs a model voice of the specified phrase based on the model pronunciation information corresponding to the specified phrase and the specified area.

The invention according to claim 3 is the pronunciation learning support device according to claim 1 or 2, further comprising pronunciation tendency storage means (for example, the pronunciation tendency storage table 87 in FIG. 2) that stores pronunciation tendency information related to the pronunciation tendency of the area to which the user belongs, wherein the user voice evaluation means includes weighting evaluation means (for example, the CPU 6 and the pronunciation learning support program 84 in FIG. 2; step S8 in FIG. 6) that evaluates the pronunciation of the user voice while applying weights based on the pronunciation tendency information.

The invention according to claim 4 is the pronunciation learning support device according to any one of claims 1 to 3, further comprising phonetic symbol display means (for example, the display unit 2 in FIG. 2) that displays the phonetic symbols of the specified phrase, wherein the phonetic symbol display means displays the phonetic symbols in the same notation regardless of the specified area.

  Here, displaying the phonetic symbols in the same notation regardless of the specified area means that the same notation system is used for the phonetic symbols; it does not mean that phonetic symbols which differ between specified areas are displayed identically.

The invention according to claim 5 is a pronunciation learning support device (for example, the electronic dictionary device 1A of FIG. 1), comprising:
area-specific pronunciation information storage means (for example, the dictionary databases 85a to 85e in FIG. 2) that stores phrases and model pronunciation information (for example, phonetic symbols) of the phrases in association with each area (for example, country) grouped by a predetermined concept;
phrase input means (for example, the input unit 5 in FIG. 2; step S2 in FIG. 6) that inputs, based on a user operation, any phrase stored in the area-specific pronunciation information storage means as a specified phrase;
user voice input means (for example, the recording unit 4 in FIG. 2; step S6 in FIG. 6) that captures the user voice for the specified phrase; and
user voice evaluation means (for example, the CPU 6 and the pronunciation learning support program 84A in FIG. 2; steps T1, T3, T5, and T7 in FIG. 14) that evaluates, for each area, the pronunciation of the user voice captured by the user voice input means based on the model pronunciation information of each area corresponding to the specified phrase.

The invention according to claim 6 is the pronunciation learning support device according to any one of claims 1 to 5, further comprising reference pronunciation information storage means (for example, the dictionary databases 85a and 85c in FIG. 2) that stores phrases and reference pronunciation information of the phrases in association with each other, wherein the area-specific pronunciation information storage means includes correction information storage means (for example, the pronunciation correction tables 86a and 86b in FIG. 2) that stores correction information for the reference pronunciation information as model pronunciation information of a predetermined area.

The invention according to claim 7 is a pronunciation learning support program (for example, the pronunciation learning support program 84 in FIG. 2) that causes a computer to realize:
an area-specific pronunciation information storage function that stores phrases and model pronunciation information (for example, phonetic symbols) of the phrases in association with each area (for example, country);
a phrase/area input function (for example, steps S1 to S2 in FIG. 6) that inputs, based on a user operation, any phrase and area stored by the area-specific pronunciation information storage function as a specified phrase and a specified area;
a user voice input function (for example, step S6 in FIG. 6) that captures the user voice for the specified phrase; and
a user voice evaluation function (for example, step S8 in FIG. 6) that evaluates the pronunciation of the user voice captured by the user voice input function based on the model pronunciation information corresponding to the specified phrase and the specified area.

The invention according to claim 8 is a pronunciation learning support program (for example, the pronunciation learning support program 84A) that causes a computer to realize:
an area-specific pronunciation information storage function that stores phrases and model pronunciation information (for example, phonetic symbols) of the phrases in association with each area (for example, country);
a phrase input function (for example, step S2 in FIG. 6) that inputs, based on a user operation, any phrase stored by the area-specific pronunciation information storage function as a specified phrase;
a user voice input function (for example, step S6 in FIG. 6) that captures the user voice for the specified phrase; and
a user voice evaluation function (for example, steps T1, T3, T5, and T7 in FIG. 14) that evaluates, for each area, the pronunciation of the user voice captured by the user voice input function based on the model pronunciation information of each area corresponding to the specified phrase.

  According to the first and seventh aspects of the present invention, the pronunciation of the user voice is evaluated based on the model pronunciation information corresponding to the specified phrase and the specified area, so even if the pronunciation differs from area to area, the pronunciation used in a desired area can be learned.

  According to the second aspect of the present invention, since the model voice of the specified phrase is output based on the model pronunciation information corresponding to the specified area, the pronunciation learning efficiency can be further improved.

  According to the third aspect of the invention, since the pronunciation of the user voice is evaluated by weighting based on the pronunciation tendency information in the region to which the user belongs, the pronunciation learning effect can be further enhanced.

  According to the fourth aspect of the present invention, the phonetic symbols are displayed in the same notation regardless of the specified area, so the user is spared the trouble of learning the phonetic notation of each area, and the pronunciation learning effect can be further enhanced.

  According to the fifth and eighth aspects of the present invention, the pronunciation of the user voice is evaluated for each area based on the model pronunciation information of each area corresponding to the specified phrase, so even if the pronunciation differs from area to area, the pronunciation used in a desired area can be learned.

  According to the sixth aspect of the present invention, correction information for the reference pronunciation information is stored as the model pronunciation information of a predetermined area. Therefore, even for an area whose pronunciation has not been systematically compiled word by word, as long as the pronunciation of that area follows change rules with respect to the model pronunciation of another area, those change rules can be used as the correction information and the pronunciation used in that area can be learned.

  Hereinafter, an embodiment of an electronic dictionary device to which a pronunciation learning support device according to the present invention is applied will be described with reference to the drawings.

<First Embodiment>
[Appearance configuration]
FIG. 1A is a perspective external view of the electronic dictionary device 1 in the present embodiment.
As shown in this figure, the electronic dictionary device 1 includes a display 10, a speaker 11, a microphone 12, and a key group 13.

  The display 10 is a part that displays various data such as characters and codes according to the operation of the key group 13 by the user, and is configured by an LCD (Liquid Crystal Display), an ELD (Electronic Luminescent Display), or the like.

The speaker 11 is a part that outputs a voice of a phrase according to the operation of the key group 13 by the user.
The microphone 12 is a part that captures external sound, such as the model voice output from the speaker 11 and the user voice uttered by the user. In the present embodiment, the microphone 12 is integrated with the speaker 11.

  The key group 13 has various keys for the user to operate the electronic dictionary device 1. Specifically, as shown in FIG. 1B, the key group 13 includes a translation / decision key 13b, a character key 13c, a dictionary selection key 13d, a cursor key 13e, a shift key 13f, and a return key 13g. A voice output key 13h, a recording key 13i, a pronunciation learning key 13j, and the like.

  The translation / decision key 13b is a key used for executing a search, determining a headword, and the like. The character key 13c is a key used for inputting characters by the user, and includes “A” to “Z” keys in the present embodiment. The dictionary selection key 13d is a key used for selecting dictionary databases 85a to 85e (see FIG. 2) described later.

  The cursor key 13e is a key used for moving the cursor indicated by reverse display or the like in the display 10. The shift key 13f is a key used when a Japanese word is set as a search target. The return key 13g is a key used when returning to the previously displayed screen.

  The audio output key 13h is a key used when causing the speaker 11 to output the model voice of a phrase. The recording key 13i is a key used when recording external sound through the microphone 12.

  The pronunciation learning key 13j is a key used when executing a pronunciation learning support process (see FIG. 6) described later.

[Internal configuration]
FIG. 2 is a block diagram showing a schematic configuration of the electronic dictionary device 1.
As shown in this figure, the electronic dictionary device 1 includes a display unit 2, an audio output unit 3, a recording unit 4, an input unit 5, a CPU 6, a flash ROM 8 and a RAM 7.

  The display unit 2 includes the display 10 described above, and displays various information on the display 10 based on a display signal input from the CPU 6.

  The audio output unit 3 includes the speaker 11 described above, and causes the speaker 11 to reproduce audio data based on an audio output signal input from the CPU 6.

  The recording unit 4 includes the microphone 12 described above, and records the external sound captured by the microphone 12 based on a recording signal input from the CPU 6 to create audio data.

  The input unit 5 includes the key group 13 described above, and outputs a signal corresponding to the pressed key to the CPU 6.

  The CPU 6 executes processing based on predetermined programs in accordance with input instructions, issues instructions to each functional unit, transfers data, and so on, thereby controlling the electronic dictionary device 1 as a whole. Specifically, the CPU 6 reads the various programs stored in the flash ROM 8 in accordance with operation signals and the like input from the input unit 5, and executes processing according to those programs. The CPU 6 then stores the processing results in the RAM 7 and, as appropriate, outputs signals for displaying or outputting the processing results to the display unit 2 and the audio output unit 3, causing them to display or output the corresponding contents.

  The flash ROM 8 is a memory that stores programs and data for realizing various functions of the electronic dictionary device 1. In the present embodiment, the flash ROM 8 stores a dictionary search program 81, a speech synthesis program 82, a pronunciation learning support program 84 according to the present invention, a dictionary database group 85, a pronunciation correction table group 86, a pronunciation tendency storage table 87, and the like.

  The dictionary search program 81 is a program for causing the CPU 6 to execute a conventionally known dictionary search process, that is, a process for searching and displaying explanatory information corresponding to a designated headword designated by a user operation.

  The speech synthesis program 82 is a program that causes the CPU 6 to execute processing for converting phonetic symbols into speech data. Note that a conventionally known process can be used as such a process.
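  The patent treats this conversion as a conventionally known process and does not specify an algorithm. The following is a minimal Python sketch of one naive approach, concatenating prerecorded audio snippets per phonetic symbol; the PHONEME_WAVEFORMS table, its symbol set, and the function name are hypothetical placeholders, not part of the patent.

```python
import numpy as np

SAMPLE_RATE = 16000

# Hypothetical lookup of prerecorded waveforms, one per IPA symbol.
# In a real device these would be short recorded or synthesized units.
PHONEME_WAVEFORMS = {
    "d": np.zeros(800),   # placeholder audio for /d/
    "e": np.zeros(1600),  # placeholder audio for /e/
    "i": np.zeros(1600),  # placeholder audio for /i/
}

def synthesize_model_voice(phonetic_symbols):
    """Turn a sequence of phonetic symbols into model voice data (cf. step S4, simplified)."""
    units = [PHONEME_WAVEFORMS[s] for s in phonetic_symbols if s in PHONEME_WAVEFORMS]
    if not units:
        return np.zeros(0)
    return np.concatenate(units)

model_voice = synthesize_model_voice(["d", "e", "i"])  # model voice for [dei]
```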

  The pronunciation learning support program 84 is a program for causing the CPU 6 to execute a later-described pronunciation learning support process (see FIG. 6).

  The dictionary database group 85 includes at least one dictionary database. In the present embodiment it includes dictionary databases 85a and 85b of the English-Japanese dictionaries “Lee's” and “G * As” for learning the English spoken in the United States (hereinafter, American English), a dictionary database 85c of the English-English dictionary “Ok Ford” for learning the English spoken in the UK (hereinafter, British English), an English-English dictionary database 85d for learning the English spoken in Australia (hereinafter, Australian English), an English-English dictionary database 85e for learning the English spoken in Canada (hereinafter, Canadian English), and the like.

  As illustrated in FIG. 3, each of the dictionary databases 85a to 85e stores a plurality of headwords, phonetic symbols serving as the model pronunciation information of those headwords, and explanation information describing the headwords in detail, in association with one another.
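  As a concrete illustration, the record layout of FIG. 3 could be modeled as below. This is a sketch in Python with hypothetical field names and sample entries; it is not data taken from the actual dictionaries.

```python
from dataclasses import dataclass

@dataclass
class DictionaryEntry:
    headword: str            # the headword
    phonetic_symbols: str    # model pronunciation information (phonetic symbols)
    explanation: str         # explanation information describing the headword

# One dictionary database per area, e.g. 85a (American English), 85c (British English).
dictionary_db_85a = {
    "day": DictionaryEntry("day", "dei", "a period of twenty-four hours ..."),
}
dictionary_db_85c = {
    "day": DictionaryEntry("day", "dei", "a period of twenty-four hours ..."),
}
```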

  Here, the dictionary databases 85a and 85b for American English store phonetic symbols corresponding to the model pronunciation in American English, and the dictionary database 85c for British English stores phonetic symbols corresponding to the model pronunciation in British English.

  On the other hand, for Australian English and Canadian English the pronunciation of each word has not been systematically compiled; instead, predetermined words follow change rules with respect to the model pronunciation in British English or American English. For example, Australian English has a rule that the model pronunciation [dei] of the English word “day” in American English changes to [dai]. Therefore, the dictionary database 85d for Australian English stores phonetic symbols corresponding to the model pronunciation in British English, and the dictionary database 85e for Canadian English stores phonetic symbols corresponding to the model pronunciation in American English.

  In the present embodiment, the International Phonetic Alphabet (IPA) is used for the phonetic symbols, so the phonetic symbols are written in a notation common throughout the world.

  The pronunciation correction table group 86 includes an Australian English pronunciation correction table 86a and a Canadian English pronunciation correction table 86b.

  These pronunciation correction tables 86a and 86b store pronunciation correction information for converting the model pronunciation of the reference varieties, British English and American English, into the model pronunciation of Australian English and Canadian English, respectively. More specifically, as shown in FIG. 4, the Australian English pronunciation correction table 86a stores, as pronunciation correction information for predetermined phrases, the correspondence between the phonetic symbols of the model pronunciation in British English and the phonetic symbols of the model pronunciation in Australian English. Similarly, the Canadian English pronunciation correction table 86b stores, as pronunciation correction information for predetermined phrases, the correspondence between the phonetic symbols of the model pronunciation in American English and the phonetic symbols of the model pronunciation in Canadian English.
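  The correspondence of FIG. 4 can be thought of as change rules applied to a reference transcription. The Python sketch below shows how the British-English symbols stored for the Australian English dictionary database could be corrected into Australian model pronunciation; note that the patent stores correspondences per predetermined phrase, whereas this sketch applies them as simple string substitutions for brevity, and only the [dei] to [dai] rule comes from the description.

```python
# Pronunciation correction table 86a (simplified): British English symbol fragments
# mapped to their Australian English counterparts. Any rule other than "ei" -> "ai"
# would be a hypothetical addition.
AUSTRALIAN_CORRECTIONS = {
    "ei": "ai",
}

def correct_phonetic_symbols(base_symbols: str, corrections: dict) -> str:
    """Apply area-specific change rules to a reference transcription (cf. step U23)."""
    corrected = base_symbols
    for src, dst in corrections.items():
        corrected = corrected.replace(src, dst)
    return corrected

# British model pronunciation of "day" is [dei]; the Australian model becomes [dai].
print(correct_phonetic_symbols("dei", AUSTRALIAN_CORRECTIONS))  # -> "dai"
```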

  The pronunciation tendency storage table 87 stores pronunciation tendency information related to the pronunciation tendencies of each country. More specifically, as shown in FIG. 5, it stores the country in which the electronic dictionary device 1 is used, phonetic symbols, and their weighting coefficients in association with one another. In the present embodiment, the weighting coefficient is a value larger than 1 for phonetic symbols that tend to be pronounced incorrectly in the corresponding country of use, and a value smaller than 1 for phonetic symbols that tend to be pronounced correctly.

  As shown in FIG. 2 described above, the RAM 7 is a memory that temporarily holds the various programs executed by the CPU 6 and data related to the execution of those programs. In the present embodiment, the RAM 7 is provided with a designated dictionary type storage area 71, a learning target phrase storage area 72, a phonetic symbol storage area 73, a model voice data storage area 74, a user voice data storage area 75, and a country of use storage area 76.

  In the designated dictionary type storage area 71, dictionary types of dictionary databases 85a to 85e selected in a pronunciation learning support process (see FIG. 6) described later are stored.

  In the learning target phrase storage area 72, a learning target phrase specified as a learning target phrase in a pronunciation learning support process (see FIG. 6) described later is stored.

  The phonetic symbol storage area 73 stores phonetic symbol data of the model pronunciation of the learning target phrase in the pronunciation learning support process (see FIG. 6) described later.

  The model voice data storage area 74 stores voice data of model voice (hereinafter referred to as model voice data) in a pronunciation learning support process (see FIG. 6) described later.

The user voice data storage area 75 stores voice data of user voice (hereinafter referred to as user voice data) in pronunciation learning support processing (see FIG. 6) described later.
In the country of use storage area 76, the country in which the electronic dictionary device 1 is used is stored.

[Pronunciation learning support processing]
Next, the operation of the electronic dictionary device 1 will be described. FIG. 6 is a flowchart for explaining the operation of the pronunciation learning support process in which the CPU 6 reads the pronunciation learning support program 84 from the flash ROM 8 and executes it.

  As shown in this figure, first, when one of the dictionary databases 85a to 85e is designated by operating the dictionary selection key 13d or the like, that is, when the country targeted for pronunciation learning (hereinafter, the pronunciation learning target country) is designated from among “America”, “UK”, “Australia”, and “Canada” (step S1), the CPU 6 stores the dictionary type of the designated dictionary database 85a to 85e in the designated dictionary type storage area 71.

  Next, when a learning target phrase is input or designated by operating the character keys 13c and the like and the translation/decision key 13b is operated (step S2), the CPU 6 stores the learning target phrase in the learning target phrase storage area 72. The CPU 6 then executes the dictionary search program 81 to read the phonetic symbol data and explanation information data of the headword corresponding to the learning target phrase from the designated dictionary database 85a to 85e (step S3), displays the headword, phonetic symbols, and explanation information on the display 10, and stores the phonetic symbol data in the phonetic symbol storage area 73.

  At this time, if the dictionary database 85d for Australian English or 85e for Canadian English is designated and pronunciation correction information for the learning target phrase is stored in the pronunciation correction table 86a or 86b, the CPU 6 corrects the phonetic symbol data in the phonetic symbol storage area 73 based on that pronunciation correction information to generate phonetic symbol data of the model pronunciation in Australian English or Canadian English, displays the obtained phonetic symbols on the display 10, and stores the phonetic symbol data in the phonetic symbol storage area 73 to update it.

  Next, when the pronunciation learning key 13j is operated, the CPU 6 generates model voice data based on the phonetic symbol data in the phonetic symbol storage area 73, that is, the phonetic symbol data of the model pronunciation corresponding to the pronunciation learning target country, and stores it in the model voice data storage area 74 (step S4).

  Next, the CPU 6 determines whether the user voice for the learning target phrase has been input to the recording unit 4 (step S5). If it determines that the user voice has not been input (step S5; No), the CPU 6 repeats the process of step S5.

  On the other hand, if it determines in step S5 that the user voice has been input (step S5; Yes), the CPU 6 generates user voice data from the input user voice and stores it in the user voice data storage area 75 (step S6).

  Next, based on the user voice data in the user voice data storage area 75 and the model voice data in the model voice data storage area 74, that is, the model voice data of the model pronunciation corresponding to the pronunciation learning target country, the CPU 6 frequency-analyzes the user voice and the model voice (step S7) and compares and evaluates the analysis results (step S8).

  More specifically, the CPU 6 reads the pronunciation tendency information of the country of use from the pronunciation tendency storage table 87 based on the country-of-use information stored in the country of use storage area 76, and evaluates the user's pronunciation by comparing the frequency analysis results of the user voice and the model voice while applying weights based on the pronunciation tendency information.

  For example, as shown in FIG. 7, when the learning target phrase is “refrigerator”, the CPU 6 first compares the frequency analysis results of the user voice and the model voice and calculates an evaluation score (SP) for each syllable of the learning target phrase. A conventionally known method can be used to calculate the evaluation score (SP). The CPU 6 also reads, as the pronunciation tendency information of the country in which the electronic dictionary device 1 is used, the weighting coefficient (K) corresponding to the phonetic symbol of each syllable from the pronunciation tendency storage table 87. Next, the CPU 6 multiplies the evaluation score (SP) by the weighting coefficient (K) for each syllable to calculate a weight-corrected evaluation score (SO). The CPU 6 then calculates the average of the evaluation scores (SO) and uses it as the evaluation result.

  Here, as described above, the weighting coefficient has a value larger than 1 for phonetic symbols that tend to be pronounced incorrectly in the country of use. According to the above evaluation method, therefore, the evaluation is calculated leniently for pronunciation parts that tend to be weak and strictly for pronunciation parts that tend to be good. For example, in Australia “day” tends to be pronounced inaccurately as [dai], so when the country of use is Australia the evaluation of the [dei] portion is softened even if the user pronounces it as [dai].
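  The scoring just described (per-syllable score SP, weighting coefficient K from the pronunciation tendency storage table 87, weight-corrected score SO, and their average) can be summarized in the following Python sketch. The spectral-distance scoring, the table values, and all names are illustrative assumptions; the patent leaves the calculation of SP to conventionally known methods.

```python
import numpy as np

# Pronunciation tendency storage table 87 (simplified): (country of use, phonetic symbol)
# -> weighting coefficient K. K > 1 for symbols users in that country tend to mispronounce,
# K < 1 for symbols they tend to pronounce correctly. The values below are made up.
PRONUNCIATION_TENDENCY_TABLE = {
    ("Japan", "r"): 1.3,
    ("Japan", "i"): 0.9,
    ("Australia", "ei"): 1.4,
}

def frequency_analysis(signal):
    """Step S7 (simplified): magnitude spectrum of one syllable's waveform."""
    return np.abs(np.fft.rfft(np.asarray(signal, dtype=float)))

def syllable_score(user_spectrum, model_spectrum):
    """Evaluation score SP for one syllable: 100 minus a normalized spectral distance.
    A stand-in for the conventionally known comparison; assumes equal-length spectra."""
    distance = np.linalg.norm(user_spectrum - model_spectrum) / (np.linalg.norm(model_spectrum) + 1e-9)
    return max(0.0, 100.0 * (1.0 - distance))

def evaluate_pronunciation(country, syllables, user_spectra, model_spectra):
    """Weighted evaluation (cf. FIG. 7): SO = SP * K per syllable, result = average of SO."""
    weighted_scores = []
    for symbol, user_spec, model_spec in zip(syllables, user_spectra, model_spectra):
        sp = syllable_score(user_spec, model_spec)                     # evaluation score SP
        k = PRONUNCIATION_TENDENCY_TABLE.get((country, symbol), 1.0)   # weighting coefficient K
        weighted_scores.append(sp * k)                                 # weight-corrected score SO
    return sum(weighted_scores) / len(weighted_scores)
```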

  Then, the CPU 6 displays the evaluation result on the display 10 (step S9), and ends the pronunciation learning support process.

[Operation example]
Next, the pronunciation learning support process will be specifically described.

(Operation example (1))
First, as shown in FIGS. 8A to 8C, when the dictionary database 85a of the English-Japanese dictionary “Lee's” for American English is designated by operating the dictionary selection key 13d or the like, that is, when “America” is designated as the pronunciation learning target country (step S1), and the learning target phrase “blessed” is then input by operating the character keys 13c and the translation/decision key 13b is operated (step S2), the phonetic symbol data and explanation information data of the headword corresponding to the learning target phrase “blessed” are read from the dictionary database 85a (step S3), the headword, phonetic symbols, and explanation information are displayed on the display 10, and the phonetic symbol data is stored in the phonetic symbol storage area 73.

  Next, as shown in FIG. 8D, when the pronunciation learning key 13j is operated, model voice data is generated for the learning target phrase “blessed” based on the phonetic symbol data in the phonetic symbol storage area 73. At the same time, user voice data is generated from the user voice input to the recording unit 4 (steps S4 and S5).

  Then, as shown in FIG. 8E, the user voice and the model voice are frequency-analyzed based on the user voice data and the model voice data of the model pronunciation corresponding to the pronunciation learning target country “America” (step S7). After these analysis results are compared and evaluated (step S8), the evaluation result is displayed on the display 10 (step S9).

(Operation example (2))
First, as shown in FIGS. 9A to 9C, when the dictionary database 85c of the English-English dictionary “Ok Ford” for British English is designated by operating the dictionary selection key 13d or the like, that is, when “UK” is designated as the pronunciation learning target country (step S1), and the learning target phrase “blessed” is then input by operating the character keys 13c and the translation/decision key 13b is operated (step S2), the phonetic symbol data and explanation information data of the headword corresponding to the learning target phrase “blessed” are read from the designated dictionary database 85c (step S3), the headword, phonetic symbols, and explanation information are displayed on the display 10, and the phonetic symbol data is stored in the phonetic symbol storage area 73.

  Next, as shown in FIG. 9 (d), when the pronunciation learning key 13j is operated, model voice data is generated for the learning target phrase “blessed” based on the phonetic symbol data in the phonetic symbol storage area 73. At the same time, user voice data is generated from the user voice input to the recording unit 4 (steps S4 and S5). In this operation example (2), the phonetic symbols of the learning target phrase “blessed” are displayed at the timing of capturing the user voice.

  Then, as shown in FIG. 9E, the user voice and the model voice are frequency-analyzed based on the user voice data and the model voice data of the model pronunciation corresponding to the pronunciation learning target country “UK” (step S7). After these analysis results are compared and evaluated (step S8), the evaluation result is displayed on the display 10 (step S9).

(Operation example (3))
First, as shown in FIGS. 10 (a) to 10 (c), when the dictionary database 85b of the English-Japanese dictionary “G.US” for American English is designated by operating the dictionary selection key 13d or the like, that is, when “USA” is designated as the pronunciation learning target country (step S1), and the learning target phrase “air” is then input by operating the character keys 13c and the translation/decision key 13b is operated (step S2), the phonetic symbol data and explanation information data of the headword corresponding to the learning target phrase “air” are read from the designated dictionary database 85b (step S3), the headword, phonetic symbols, and explanation information are displayed on the display 10, and the phonetic symbol data is stored in the phonetic symbol storage area 73.

  Next, as shown in FIG. 10D, when the pronunciation learning key 13j is operated, the model voice data is generated for the learning target phrase “air” based on the phonetic symbol data in the phonetic symbol storage area 73. At the same time, user voice data is generated from the user voice input to the recording unit 4 (steps S4 and S5).

  Then, as shown in FIG. 10E, the user voice and the model voice are frequency-analyzed based on the user voice data and the model voice data of the model pronunciation corresponding to the pronunciation learning target country “America” (step S7). After these analysis results are compared and evaluated (step S8), the evaluation result is displayed on the display 10 (step S9).

(Operation example (4))
First, as shown in FIGS. 11A to 11C, when the dictionary database 85c of the English-English dictionary “Ok Ford” for British English is designated by operating the dictionary selection key 13d or the like, that is, when “UK” is designated as the pronunciation learning target country (step S1), and the learning target phrase “air” is then input by operating the character keys 13c and the translation/decision key 13b is operated (step S2), the phonetic symbol data and explanation information data of the headword corresponding to the learning target phrase “air” are read from the designated dictionary database 85c (step S3), the headword, phonetic symbols, and explanation information are displayed on the display 10, and the phonetic symbol data is stored in the phonetic symbol storage area 73.

  Next, as shown in FIG. 11 (d), when the pronunciation learning key 13j is operated, model voice data is generated for the learning target phrase “air” based on the phonetic symbol data in the phonetic symbol storage area 73. At the same time, user voice data is generated from the user voice input to the recording unit 4 (steps S4 and S5).

  Then, as shown in FIG. 11E, the user voice and the model voice are frequency-analyzed based on the user voice data and the model voice data of the model pronunciation corresponding to the pronunciation learning target country “UK” (step S7). After these analysis results are compared and evaluated (step S8), the evaluation result is displayed on the display 10 (step S9).

(Operation example (5))
First, as shown in FIGS. 12A to 12C, when the dictionary database 85c of the English-English dictionary “Ok Ford” for British English is designated by operating the dictionary selection key 13d or the like, that is, when “UK” is designated as the pronunciation learning target country (step S1), and the learning target phrase “refrigerator” is then input by operating the character keys 13c and the translation/decision key 13b is operated (step S2), the phonetic symbol data and explanation information data of the headword corresponding to the learning target phrase “refrigerator” are read from the designated dictionary database 85c (step S3), the headword, phonetic symbols, and explanation information are displayed on the display 10, and the phonetic symbol data is stored in the phonetic symbol storage area 73.

  Next, as shown in FIG. 12 (d), when the pronunciation learning key 13j is operated, model voice data is generated for the learning target phrase “refrigerator” based on the phonetic symbol data in the phonetic symbol storage area 73. At the same time, user voice data is generated from the user voice input to the recording unit 4 (steps S4 and S5).

  Then, as shown in FIG. 12E, the user voice and the model voice are frequency-analyzed based on the user voice data and the model voice data of the model pronunciation corresponding to the pronunciation learning target country “UK” (step S7). After these analysis results are compared and evaluated (step S8), the evaluation result is displayed on the display 10 (step S9).

  According to the electronic dictionary device 1 described above, as shown in steps S3 to S8 of FIG. 6 and in FIGS. 8 to 12, the pronunciation of the user voice is evaluated based on the model voice data of the model pronunciation corresponding to the learning target phrase and the pronunciation learning target country (the designated dictionary database 85a to 85e), so the pronunciation used in a desired country can be learned even when pronunciation differs from country to country.

  Further, as shown in step S8 of FIG. 6 and FIG. 7, the pronunciation of the user voice is evaluated while performing weighting based on the pronunciation tendency information in the country to which the user belongs, that is, the country in which the electronic dictionary device 1 is used. Therefore, the pronunciation learning effect can be further enhanced.

  In addition, as shown in step S3 of FIG. 6, the phonetic symbols of the model pronunciation in Australian English and Canadian English are obtained by storing the model phonetic symbols of British English and American English together with pronunciation correction information for converting the model pronunciation of British English and American English into the model pronunciation of Australian English and Canadian English. Therefore, even though the pronunciation of Australian English and Canadian English has not been systematically compiled, those pronunciations can be learned.

  Also, as shown in FIGS. 8 to 12, the phonetic symbols are displayed in the same International Phonetic Alphabet notation regardless of the pronunciation learning target country, so the user is spared the trouble of learning the phonetic notation of each country, and the pronunciation learning effect can be further enhanced.

  In the first embodiment described above, the pronunciation learning target country is designated by designating one of the dictionary databases 85a to 85e. However, as shown in FIG. 13, it is also possible, regardless of which dictionary database 85a to 85e is designated, to display a selection screen for the learning target country and model pronunciation before capturing the user voice and to designate the learning target country through this selection.

<Second Embodiment>
Next, a second embodiment of an electronic dictionary device to which the pronunciation learning support device according to the present invention is applied will be described with reference to FIGS. 2 and 14 to 17. Components similar to those of the first embodiment are given the same reference numerals, and their description is omitted.

[Internal configuration]
As shown in FIG. 2, the electronic dictionary device 1A according to the present embodiment includes a flash ROM 8A, and the flash ROM 8A stores a pronunciation learning support program 84A for causing the CPU 6 to execute the pronunciation learning support process described later (see FIGS. 6 and 14 to 16).

[Pronunciation learning support processing]
Next, the operation of the electronic dictionary device 1A will be described. FIGS. 6 and 14 to 16 are flowcharts for explaining the operation of the pronunciation learning support process in which the CPU 6 reads the pronunciation learning support program 84A from the flash ROM 8A and executes it. The pronunciation learning support process in this embodiment differs from that of the first embodiment only in that the following country-specific pronunciation evaluation process is performed after step S9 (see FIG. 6); a description of the overlapping processing is therefore omitted.

  First, when a predetermined user operation is performed in a state where the evaluation result is displayed on the display 10 in step S9, the CPU 6 performs a country-specific pronunciation evaluation process for evaluating a user voice with respect to a model pronunciation in each country.

  In this country-specific pronunciation evaluation process, as shown in FIG. 14, first, the CPU 6 performs a pronunciation evaluation process based on a British model voice (hereinafter referred to as a British pronunciation evaluation process) (step T1).

  Specifically, as shown in FIG. 15, the CPU 6 first reads phonetic symbol data from the British English dictionary database 85c (step U11), stores it in the phonetic symbol storage area 73, and generates model voice data based on this phonetic symbol data, which it stores in the model voice data storage area 74 (step U12). Then, in the same manner as steps S7 to S8 above, the CPU 6 frequency-analyzes the user voice and the model voice based on the user voice data and the model voice data of the model pronunciation corresponding to the pronunciation learning target country “UK”, compares and evaluates the analysis results (steps U13 to U14), and ends the British pronunciation evaluation process.

  Next, as shown in FIG. 14, after displaying the evaluation result on the display 10 (step T2), the CPU 6 performs a pronunciation evaluation process based on the Australian model voice (hereinafter, the Australian pronunciation evaluation process) (step T3).

  Specifically, as shown in FIG. 16, the CPU 6 first reads the phonetic symbol data of the learning target phrase from the Australian English dictionary database 85d (step U21), stores it in the phonetic symbol storage area 73 to update the phonetic symbol data, and then determines whether pronunciation correction information for the learning target phrase is stored in the Australian English pronunciation correction table 86a (step U22). If it determines that no pronunciation correction information is stored (step U22; No), the process proceeds to step U24 described later.

  On the other hand, if it determines in step U22 that pronunciation correction information is stored (step U22; Yes), the CPU 6 corrects the phonetic symbols in the phonetic symbol storage area 73, that is, the phonetic symbols of the model pronunciation in British English, based on the pronunciation correction information to generate the phonetic symbols of the model pronunciation in Australian English (step U23), and then stores the phonetic symbol data in the phonetic symbol storage area 73 to update it.

  Next, the CPU 6 generates model voice data based on the phonetic symbol data in the phonetic symbol storage area 73 and stores it in the model voice data storage area 74 (step U24). Then, in the same manner as steps S7 to S8, the CPU 6 frequency-analyzes the user voice and the model voice based on the user voice data and the model voice data of the model pronunciation corresponding to the pronunciation learning target country “Australia”, compares and evaluates the analysis results (steps U25 to U26), and ends the Australian pronunciation evaluation process.

  Next, as shown in FIG. 14, after displaying the evaluation result on the display 10 (step T4), the CPU 6 performs a pronunciation evaluation process based on the American model voice (hereinafter, the American pronunciation evaluation process) (step T5).

  Specifically, as shown in FIG. 15, the CPU 6 first reads phonetic symbol data from the American English dictionary databases 85a and 85b (step U31), stores it in the phonetic symbol storage area 73, and generates model voice data based on this phonetic symbol data, which it stores in the model voice data storage area 74 (step U32). Then, in the same manner as steps S7 to S8, the CPU 6 frequency-analyzes the user voice and the model voice based on the user voice data and the model voice data of the model pronunciation corresponding to the pronunciation learning target country “America”, compares and evaluates the analysis results (steps U33 to U34), and ends the American pronunciation evaluation process.

  Next, as shown in FIG. 14, after displaying the evaluation result on the display 10 (step T6), the CPU 6 performs a pronunciation evaluation process based on the Canadian model voice (hereinafter, the Canadian pronunciation evaluation process) (step T7).

  Specifically, as shown in FIG. 16, the CPU 6 first reads the phonetic symbol data of the learning target phrase from the Canadian English dictionary database 85e (step U41), stores it in the phonetic symbol storage area 73 to update the phonetic symbol data, and then determines whether pronunciation correction information for the learning target phrase is stored in the Canadian English pronunciation correction table 86b (step U42). If it determines that no pronunciation correction information is stored (step U42; No), the process proceeds to step U44 described later.

  On the other hand, if it determines in step U42 that pronunciation correction information is stored (step U42; Yes), the CPU 6 corrects the phonetic symbols in the phonetic symbol storage area 73, that is, the phonetic symbols of the model pronunciation in American English, based on the pronunciation correction information to generate the phonetic symbols of the model pronunciation in Canadian English (step U43), and then stores the phonetic symbol data in the phonetic symbol storage area 73 to update it.

  Next, the CPU 6 generates model voice data based on the phonetic symbol data in the phonetic symbol storage area 73 and stores it in the model voice data storage area 74 (step U44). Then, in the same manner as steps S7 to S8, the CPU 6 frequency-analyzes the user voice and the model voice based on the user voice data and the model voice data of the model pronunciation corresponding to the pronunciation learning target country “Canada”, compares and evaluates the analysis results (steps U45 to U46), and ends the Canadian pronunciation evaluation process.

  Then, as shown in FIG. 14, the CPU 6 displays the evaluation result on the display 10 (step T8), ends the country-specific pronunciation evaluation process, and then ends the pronunciation learning support process.
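  The flow of FIG. 14 (steps T1 to T8) amounts to repeating the single-country evaluation of the first embodiment for every stored area. A condensed Python sketch of that loop is shown below, treating the per-country dictionaries and correction tables as simple lookups; the names, the REGION_CONFIG table, and the helper callables are assumptions for illustration, not the patent's implementation.

```python
# Per-country configuration: which dictionary database supplies the base phonetic
# symbols and which correction table (if any) converts them to that country's model.
REGION_CONFIG = {
    "UK":        {"dictionary": "85c", "corrections": None},
    "Australia": {"dictionary": "85d", "corrections": "86a"},  # British base, corrected
    "America":   {"dictionary": "85a", "corrections": None},
    "Canada":    {"dictionary": "85e", "corrections": "86b"},  # American base, corrected
}

def evaluate_for_all_regions(phrase, user_voice, lookup_symbols, apply_corrections,
                             synthesize, evaluate):
    """Country-specific pronunciation evaluation (steps T1-T8, simplified):
    for each country, fetch the phonetic symbols, correct them if a correction table
    exists, synthesize the model voice, and evaluate the user voice against it."""
    results = {}
    for country, cfg in REGION_CONFIG.items():
        symbols = lookup_symbols(cfg["dictionary"], phrase)               # steps U11/U21/U31/U41
        if cfg["corrections"] is not None:
            symbols = apply_corrections(cfg["corrections"], symbols)      # steps U23/U43
        model_voice = synthesize(symbols)                                 # steps U12/U24/U32/U44
        results[country] = evaluate(country, user_voice, model_voice)     # steps U13-U14, etc.
    return results
```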

[Operation example]
Next, the pronunciation learning support process will be specifically described.

(Operation example (6))
First, as shown in FIGS. 17A to 17C, after the dictionary database 85c of the English-English dictionary “Ok Ford” for British English is designated by operating the dictionary selection key 13d or the like (step S1), when the learning target phrase “thursday” is input by operating the character keys 13c and the translation/decision key 13b is operated (step S2), the phonetic symbol data and explanation information data of the headword corresponding to the learning target phrase “thursday” are read from the designated dictionary database 85c (step S3), the headword, phonetic symbols, and explanation information are displayed on the display 10, and the phonetic symbol data is stored in the phonetic symbol storage area 73.

  Next, as shown in FIG. 17D, when the pronunciation learning key 13j is operated, the model voice data is generated for the learning target phrase “thursday” based on the phonetic symbol data in the phonetic symbol storage area 73. At the same time, user voice data is generated from the user voice input to the recording unit 4 (step S5).

  Then, as shown in FIG. 17 (e), the user voice and the model voice are subjected to frequency analysis based on the user voice data and the model voice data corresponding to the pronunciation learning target country “America” (step S7). After these analysis results are comparatively evaluated (step S8), the evaluation results are displayed on the display 10 (step S9).

  Then, as shown in FIG. 17 (f), when the country-specific pronunciation evaluation process is executed by a predetermined user operation, the user voice and the model voice are frequency-analyzed based on the user voice data and the model voice data corresponding to each of the countries “UK”, “Australia”, “USA”, and “Canada” (steps U13, U25, U33, and U45), the analysis results are compared and evaluated (steps U14, U26, U34, and U46), and the evaluation results are displayed on the display 10 (steps T2, T4, T6, and T8). In this operation example (6), the phonetic symbols of the model pronunciation of each country are displayed together with the evaluation result for that country.

  According to the electronic dictionary device 1A described above, the same effects as those of the electronic dictionary device 1 can be obtained. In addition, as shown in steps T1, T3, T5, and T7 of FIG. 14 and in FIG. 17 (e), the pronunciation of the user voice is evaluated for each country based on the phonetic symbols of the model pronunciation of each country corresponding to the learning target phrase, so the pronunciation used in a desired country can be learned even when pronunciation differs from country to country.

  The embodiments to which the present invention can be applied are not limited to the above-described embodiments, and can be appropriately changed without departing from the spirit of the present invention.

  For example, in the above-described embodiment, it has been described that the voice output unit 3 does not operate in the pronunciation learning support process. However, the voice output unit 3 may output the model voice of the learning target phrase. In this case, the pronunciation learning efficiency can be further increased.

  Further, although the model voice data is generated from the phonetic symbols in the above embodiments, the model voice data may instead be stored in advance in the flash ROM 8 or the like.

Brief description of the drawings
FIG. 1 shows the schematic structure of an electronic dictionary device to which the pronunciation learning support device according to the present invention is applied; (a) is an overall view and (b) is a partial plan view.
FIG. 2 is a block diagram showing the schematic structure of the electronic dictionary device to which the pronunciation learning support device according to the present invention is applied.
FIG. 3 is a diagram showing the data structure of a dictionary database.
FIG. 4 is a diagram showing the data structure of a pronunciation correction table.
FIG. 5 is a diagram showing the data structure of the pronunciation tendency storage table.
FIG. 6 is a flowchart showing the pronunciation learning support process.
FIG. 7 is a diagram for explaining the method of evaluating the user voice.
FIGS. 8 to 13 are diagrams showing display contents of the display.
FIG. 14 is a flowchart showing the country-specific pronunciation evaluation process.
FIG. 15 is a flowchart showing the British (American) pronunciation evaluation process.
FIG. 16 is a flowchart showing the Australian (Canadian) pronunciation evaluation process.
FIG. 17 is a diagram showing display contents of the display.

Explanation of symbols

1, 1A Electronic dictionary device (pronunciation learning support device)
2 Display unit (phonetic symbol display means)
3 Audio output unit (model voice output means)
4 Recording unit (user voice input means)
5 Input unit (phrase/area input means, phrase input means)
6 CPU (user voice evaluation means, weighting evaluation means)
84, 84A Pronunciation learning support program
85a to 85e Dictionary database (area-specific pronunciation information storage means)
85a, 85c Dictionary database (reference pronunciation information storage means)
86a, 86b Pronunciation correction table (correction information storage means)
87 Pronunciation tendency storage table (pronunciation tendency storage means)

Claims (8)

  1. A pronunciation learning support device comprising:
    area-specific pronunciation information storage means for storing phrases and model pronunciation information of the phrases in association with each area;
    phrase/area input means for inputting, based on a user operation, any phrase and area stored in the area-specific pronunciation information storage means as a specified phrase and a specified area;
    user voice input means for capturing a user voice for the specified phrase; and
    user voice evaluation means for evaluating the pronunciation of the user voice captured by the user voice input means based on the model pronunciation information corresponding to the specified phrase and the specified area.
  2. The pronunciation learning support device according to claim 1, further comprising model voice output means for outputting a model voice of the specified phrase based on the model pronunciation information corresponding to the specified phrase and the specified area.
  3. The pronunciation learning support device according to claim 1 or 2, further comprising pronunciation tendency storage means for storing pronunciation tendency information related to the pronunciation tendency of the area to which the user belongs,
    wherein the user voice evaluation means comprises weighting evaluation means for evaluating the pronunciation of the user voice while applying weights based on the pronunciation tendency information.
  4. The pronunciation learning support device according to any one of claims 1 to 3, further comprising phonetic symbol display means for displaying the phonetic symbols of the specified phrase,
    wherein the phonetic symbol display means displays the phonetic symbols in the same notation regardless of the specified area.
  5. A pronunciation learning support device comprising:
    area-specific pronunciation information storage means for storing phrases and model pronunciation information of the phrases in association with each area;
    phrase input means for inputting, based on a user operation, any phrase stored in the area-specific pronunciation information storage means as a specified phrase;
    user voice input means for capturing a user voice for the specified phrase; and
    user voice evaluation means for evaluating, for each area, the pronunciation of the user voice captured by the user voice input means based on the model pronunciation information of each area corresponding to the specified phrase.
  6. The pronunciation learning support device according to any one of claims 1 to 5, further comprising reference pronunciation information storage means for storing phrases and reference pronunciation information of the phrases in association with each other,
    wherein the area-specific pronunciation information storage means includes correction information storage means for storing correction information for the reference pronunciation information as model pronunciation information of a predetermined area.
  7. A pronunciation learning support program for causing a computer to realize:
    an area-specific pronunciation information storage function of storing phrases and model pronunciation information of the phrases in association with each area;
    a phrase/area input function of inputting, based on a user operation, any phrase and area stored by the area-specific pronunciation information storage function as a specified phrase and a specified area;
    a user voice input function of capturing a user voice for the specified phrase; and
    a user voice evaluation function of evaluating the pronunciation of the user voice captured by the user voice input function based on the model pronunciation information corresponding to the specified phrase and the specified area.
  8. On a computer,
    An area-specific pronunciation information storage function for storing a phrase and model pronunciation information of the phrase in association with each area;
    A phrase input function for inputting, based on a user operation, any of the phrases stored by the area-specific pronunciation information storage function as a specified phrase;
    A user voice input function for capturing a user voice for the specified phrase; and
    A user voice evaluation function for evaluating, for each area, the pronunciation of the user voice captured by the user voice input function, based on the model pronunciation information of each area corresponding to the specified phrase:
    A pronunciation learning support program characterized by causing the computer to realize the above functions.
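For readers who find the means-plus-function wording of claim 1 abstract, the following is a minimal Python sketch of the claimed arrangement: model pronunciation information held per phrase and area, a designated phrase and area, a captured user voice, and a score computed against the designated area's model. Every name, data value, and the placeholder distance metric is an illustrative assumption and not the device's actual implementation; a real device would record audio and analyse it rather than use canned vectors.

```python
# Illustrative sketch of the claim 1 arrangement (hypothetical names and data).
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class ModelPronunciation:
    phonetic_symbols: str   # notation shown to the learner (same for every area, cf. claim 4)
    spectrum: List[float]   # stand-in for stored model-voice analysis data


# Area-specific pronunciation information storage means (contents invented).
PRONUNCIATION_STORE: Dict[Tuple[str, str], ModelPronunciation] = {
    ("blessed", "USA"): ModelPronunciation("blesid", [0.82, 0.34, 0.11]),
    ("blessed", "UK"):  ModelPronunciation("blesid", [0.78, 0.40, 0.09]),
}


def capture_user_voice() -> List[float]:
    """User voice input means: a real device would record and analyse audio;
    here a canned analysis vector is returned instead."""
    return [0.80, 0.37, 0.15]


def evaluate(user: List[float], model: List[float]) -> float:
    """User voice evaluation means: 0-100 score from the distance between the
    user's and the model's analysis vectors (placeholder metric)."""
    dist = sum((u - m) ** 2 for u, m in zip(user, model)) ** 0.5
    return max(0.0, 100.0 - 100.0 * dist)


def pronunciation_lesson(phrase: str, area: str) -> float:
    # Phrase-and-area input means: look up the model for the designated pair.
    model = PRONUNCIATION_STORE[(phrase, area)]
    return evaluate(capture_user_voice(), model.spectrum)


if __name__ == "__main__":
    print(f"score: {pronunciation_lesson('blessed', 'USA'):.1f}")
```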
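Claim 3 adds weighting based on the pronunciation tendency of the area to which the user belongs. Below is a hedged sketch of one way such weighting could be applied; the tendency table, feature names, and weight values are invented purely for illustration and are not taken from the patent.

```python
# Sketch of the claim 3 weighting idea (all data hypothetical).
from typing import Dict

# Pronunciation tendency storage means: feature -> weight for users from a given area.
TENDENCY_WEIGHTS: Dict[str, Dict[str, float]] = {
    "Japan": {"r_l_contrast": 2.0, "th_sound": 1.5, "vowel_length": 1.0},
}


def weighted_score(per_feature_error: Dict[str, float], user_area: str) -> float:
    """Weighting evaluation means: combine per-feature errors using weights
    derived from the pronunciation tendency of the user's home area."""
    if not per_feature_error:
        return 100.0
    weights = TENDENCY_WEIGHTS.get(user_area, {})
    total = norm = 0.0
    for feature, error in per_feature_error.items():
        w = weights.get(feature, 1.0)   # unlisted features get neutral weight
        total += w * error
        norm += w
    return max(0.0, 100.0 - 100.0 * total / norm)


print(weighted_score({"r_l_contrast": 0.30, "th_sound": 0.10, "vowel_length": 0.05}, "Japan"))
```

Whether typical errors of the user's area are emphasised or forgiven is a design choice; the weights above simply emphasise them.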
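Claim 5 drops the area designation and instead scores the same utterance against every stored area's model pronunciation, so the learner can see which area's pronunciation the voice most resembles. The sketch below shows that per-area evaluation loop under the same assumed placeholder scoring as above; the area list and vectors are again fictitious.

```python
# Sketch of the claim 5 per-area evaluation (hypothetical data).
from typing import Dict, List

MODELS: Dict[str, List[float]] = {      # area -> stored model analysis data
    "USA": [0.82, 0.34, 0.11],
    "UK":  [0.78, 0.40, 0.09],
    "Australia": [0.75, 0.45, 0.12],
}


def score(user: List[float], model: List[float]) -> float:
    dist = sum((u - m) ** 2 for u, m in zip(user, model)) ** 0.5
    return max(0.0, 100.0 - 100.0 * dist)


def evaluate_per_area(user: List[float]) -> Dict[str, float]:
    """User voice evaluation means of claim 5: one score per stored area."""
    return {area: score(user, model) for area, model in MODELS.items()}


results = evaluate_per_area([0.80, 0.37, 0.15])
print(results, "closest to:", max(results, key=results.get))
```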
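Claim 6 describes storing a single reference pronunciation per phrase and, for each area, only correction information relative to that reference. The patent does not specify the format of the correction information, so the additive delta below is only one plausible reading, with invented numbers.

```python
# Sketch of the claim 6 storage layout (assumed additive correction, hypothetical values).
from typing import Dict, List

# Reference pronunciation information storage means.
REFERENCE: Dict[str, List[float]] = {"blessed": [0.80, 0.37, 0.10]}

# Correction information storage means: per-area deltas applied to the reference.
CORRECTIONS: Dict[str, Dict[str, List[float]]] = {
    "blessed": {"USA": [0.02, -0.03, 0.01], "UK": [-0.02, 0.03, -0.01]},
}


def model_pronunciation(phrase: str, area: str) -> List[float]:
    """Rebuild an area's model pronunciation from the reference plus its correction."""
    base = REFERENCE[phrase]
    delta = CORRECTIONS[phrase].get(area, [0.0] * len(base))
    return [b + d for b, d in zip(base, delta)]


print(model_pronunciation("blessed", "USA"))
```

One likely motivation for such a layout, though not stated in the claims, is that storing only deltas per area keeps area-specific storage small on a handheld dictionary.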
JP2006263945A 2006-09-28 2006-09-28 Pronunciation learning support device and pronunciation learning support program Active JP4840052B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006263945A (en) 2006-09-28 2006-09-28 Pronunciation learning support device and pronunciation learning support program

Publications (2)

Publication Number Publication Date
JP2008083446A (en) 2008-04-10
JP4840052B2 (en) 2011-12-21

Family

ID=39354382

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2006263945A Active JP4840052B2 (en) 2006-09-28 2006-09-28 Pronunciation learning support device and pronunciation learning support program

Country Status (1)

Country Link
JP (1) JP4840052B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101435015B1 (en) * 2013-06-10 2014-08-28 고려대학교 산학협력단 Device and method for information processing for analysis
JP2015191431A (en) * 2014-03-28 2015-11-02 株式会社ゼンリンデータコム Katakana expression of foreign language creation device, katakana expression of foreign language creation method and katakana expression of foreign language creation program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11202889A (en) * 1997-11-17 1999-07-30 Internatl Business Mach Corp <Ibm> Speech discriminating device, and device and method for pronunciation correction
JP2001159865A (en) * 1999-09-09 2001-06-12 Lucent Technol Inc Method and device for leading interactive language learning
JP2002023613A (en) * 2000-07-05 2002-01-23 Tomoe Bosai Tsushin Kk Language learning system
JP2004347786A (en) * 2003-05-21 2004-12-09 Casio Comput Co Ltd Speech display output controller, image display controller, and speech display output control processing program, image display control processing program
JP2005031604A (en) * 2003-07-12 2005-02-03 Ikuo Nishimoto English learning system

Also Published As

Publication number Publication date
JP4840052B2 (en) 2011-12-21

Similar Documents

Publication Publication Date Title
KR101670150B1 (en) Systems and methods for name pronunciation
JP6251958B2 (en) Utterance analysis device, voice dialogue control device, method, and program
US10037758B2 (en) Device and method for understanding user intent
TWI532035B (en) Method for building language model, speech recognition method and electronic apparatus
JP5327054B2 (en) Pronunciation variation rule extraction device, pronunciation variation rule extraction method, and pronunciation variation rule extraction program
US6778962B1 (en) Speech synthesis with prosodic model data and accent type
US7149690B2 (en) Method and apparatus for interactive language instruction
JP3944159B2 (en) Question answering system and program
US8504350B2 (en) User-interactive automatic translation device and method for mobile device
US6397185B1 (en) Language independent suprasegmental pronunciation tutoring system and methods
US8275618B2 (en) Mobile dictation correction user interface
US5787230A (en) System and method of intelligent Mandarin speech input for Chinese computers
CN101346758B (en) Emotion recognizer
US7124080B2 (en) Method and apparatus for adapting a class entity dictionary used with language models
CN1206620C (en) Transcription and display input speech
EP1143415B1 (en) Generation of multiple proper name pronunciations for speech recognition
EP1941344B1 (en) Combined speech and alternate input modality to a mobile device
JP5322655B2 (en) Speech recognition system with huge vocabulary
JP4757599B2 (en) Speech recognition system, speech recognition method and program
JP5517458B2 (en) Speech recognition in large lists using fragments
US8666743B2 (en) Speech recognition method for selecting a combination of list elements via a speech input
EP1473708B1 (en) Method for recognizing speech
KR100815115B1 (en) An Acoustic Model Adaptation Method Based on Pronunciation Variability Analysis for Foreign Speech Recognition and apparatus thereof
JP4038211B2 (en) Speech synthesis apparatus, speech synthesis method, and speech synthesis system
US20030144842A1 (en) Text to speech

Legal Events

Date Code Title Description
20090907  A621  Written request for application examination (JAPANESE INTERMEDIATE CODE: A621)
20110318  A977  Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007)
20110329  A131  Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131)
20110524  A521  Written amendment (JAPANESE INTERMEDIATE CODE: A523)
20110524  RD02  Notification of acceptance of power of attorney (JAPANESE INTERMEDIATE CODE: A7422)
          TRDD  Decision of grant or rejection written
20110906  A01   Written decision to grant a patent or to grant a registration (utility model) (JAPANESE INTERMEDIATE CODE: A01)
20110919  A61   First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61)
          R150  Certificate of patent or registration of utility model (Ref document number: 4840052; Country of ref document: JP; JAPANESE INTERMEDIATE CODE: R150)
          FPAY  Renewal fee payment (event date is renewal date of database) (PAYMENT UNTIL: 20141014; Year of fee payment: 3)