JP2004347786A - Speech display output controller, image display controller, speech display output control processing program, and image display control processing program


Info

Publication number
JP2004347786A
Authority
JP
Japan
Prior art keywords
image
pronunciation
accent
display
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2003143499A
Other languages
Japanese (ja)
Other versions
JP4370811B2 (en)
Inventor
Takashi Koshiro
Yoshiyuki Murata
嘉行 村田
孝 湖城
Original Assignee
Casio Comput Co Ltd
カシオ計算機株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Casio Comput Co Ltd, カシオ計算機株式会社 filed Critical Casio Comput Co Ltd
Priority to JP2003143499A
Publication of JP2004347786A
Application granted
Publication of JP4370811B2
Application status: Active
Anticipated expiration

Abstract

PROBLEM TO BE SOLVED: To make it possible to clearly express the timing of an accent in the display of an image synchronized with voice output, in a voice display output controller that outputs data such as voice, text, and images in synchronization.

SOLUTION: In synchronization with the pronunciation voice output of a search keyword "low", the identification display HL of the keyword "low" and of its phonetic symbols is performed sequentially in a window W1, and pronunciation mouth-shape images 12e (No36 → No9 → No8) corresponding to the respective phonetic symbols are sequentially switched, synthesized, and displayed in the mouth image area of a set character image 12d (No3) in a window W2. Furthermore, when the identification display HL and the mouth-shape image 12e (No9) are switched in, synthesized, and displayed in synchronization with the pronunciation voice output for the accent character "o", the synthesis-destination image 12d (No3) is changed to the face image 12d (No3') corresponding to the accent, which expresses strong pronunciation by, for example, sweating at the head and wrinkles at the mouth.

COPYRIGHT: (C)2005,JPO&NCIPI

Description

[0001]
TECHNICAL FIELD OF THE INVENTION
The present invention relates to a voice display output control device, an image display control device, a voice display output control processing program, and an image display control processing program for synchronously outputting data such as voice, text, and images.
[0002]
[Prior art]
Conventionally, as a language learning device, there is, for example, a device that outputs the pronunciation voice of a language and displays the corresponding mouth shape.
[0003]
In this language learning device, the voice information and mouth-shape image data of a native speaker are recorded in advance in a sample data memory by a microphone and a camera. Then, the learner's voice information and mouth-shape image data are recorded by the microphone and the camera, and the waveforms of the learner's voice information and of the native speaker's voice information previously recorded in the sample data memory, together with the corresponding mouth-shape image data, are compared and displayed in a chart format.
[0004]
In this way, it is intended to clearly analyze and display the difference in language pronunciation between the native language user and the learner (for example, see Patent Document 1).
[0005]
[Patent Document 1]
JP 2001-318592 A
[0006]
[Problems to be solved by the invention]
By using such a conventional language learning device, it is possible to know the model pronunciation voice of the native speaker and its mouth-shape image. However, an accent is expressed only by emphasis in the voice, and there is no clear difference in the mouth image itself. Therefore, there is a problem in that it is difficult to grasp the timing of the accent in each language being learned.
[0007]
The present invention has been made in view of the above-described problems, and an object of the present invention is to provide a voice display output control device, an image display control device, a voice display output control processing program, and an image display control processing program that can clearly express the timing of an accent in the display of an image synchronized with voice output.
[0008]
[Means for Solving the Problems]
In the voice display output control device according to claim 1 of the present invention, voice data is output by voice data output means, text is displayed in synchronization with the output of the voice data by text synchronous display control means, and an image including at least a mouth portion is displayed by image display control means. For the mouth portion included in the displayed image, mouth image display control means displays a mouth-shaped image corresponding to the voice data in synchronization with the voice data being output. Then, accent detection means detects the presence or absence of an accent in the voice data or the text, and image change display control means changes the mouth-shaped image displayed by the image display control means in response to the detection of an accent.
[0009]
According to this, in addition to displaying text and an image synchronized with the output of the voice data and displaying a mouth-shaped image corresponding to the voice data at the mouth portion of the image, the displayed mouth-shaped image can be changed upon detection of an accent in the voice data or text, so the timing of the accent can be clearly expressed.
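As a rough illustration of this flow, the following minimal Python sketch (hypothetical data and names, not the patented implementation) shows the essential idea: for each phonetic symbol, the matching mouth image is shown in step with the voice output, and an "accent" face image is swapped in while an accented symbol is pronounced.

```python
# Minimal sketch of the claim-1 flow (illustrative file names; "'" marks an accented symbol).

MOUTH_BY_SYMBOL = {"l": "mouth_36.png", "o'": "mouth_09.png", "u": "mouth_08.png"}
NORMAL_FACE, ACCENT_FACE = "face_no3.png", "face_no3_accent.png"

def play_word(symbols, play_voice, draw):
    """symbols: phonetic symbols in pronunciation order."""
    for sym in symbols:
        accented = sym.endswith("'")
        face = ACCENT_FACE if accented else NORMAL_FACE   # change the face on an accent
        draw(face=face, mouth=MOUTH_BY_SYMBOL[sym], highlight=sym)
        play_voice(sym)                                   # voice output stays in sync

play_word(["l", "o'", "u"],
          play_voice=lambda s: print("voice:", s),
          draw=lambda **kw: print("draw:", kw))
```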
[0010]
The voice display output control device according to claim 2 of the present invention is the device according to claim 1, in which dictionary search means searches for dictionary data corresponding to an input headword, and dictionary data display control means displays the dictionary data corresponding to the searched headword. The voice data is the pronunciation voice data of the headword searched by the dictionary search means, and the text is the text of that headword. The output of the headword pronunciation voice data by the voice data output means, the display of the headword text synchronized with the headword pronunciation voice data by the text synchronous display control means, and the display of the image by the image display control means are performed while the dictionary data corresponding to the headword is displayed by the dictionary data display control means.
[0011]
According to this, together with the search and display of dictionary data corresponding to an input headword, the headword pronunciation voice data can be output while the headword text, the image, and the mouth-shaped image are displayed in synchronization with it, and the timing of the headword's accent can be clearly expressed by changing the displayed image when the accent is detected.
[0012]
In the voice display output control device according to claim 3 of the present invention, word storage means stores a plurality of words, each in association with a correctly accented phonetic notation and an incorrectly accented phonetic notation. The voice data output means outputs either correct-accent pronunciation voice data or incorrect-accent pronunciation voice data for a stored word, and the text synchronous display control means displays the text of the word in synchronization with the pronunciation voice data being output. The image display control means displays an image including at least a mouth portion, in different display forms depending on whether the voice data output means is outputting correct-accent or incorrect-accent pronunciation voice data, and the mouth image display control means displays, for the mouth portion of the displayed image, a mouth-shaped image corresponding to the pronunciation voice data in synchronization with its output. Then, as the word text is synchronously displayed by the text synchronous display control means, accent detection means detects the accent of the word from the accented phonetic notation of the corresponding word stored in the word storage means, and image change display control means changes the image displayed by the image display control means in response to the accent detection.
[0013]
According to this, not only can the correct-accent and incorrect-accent pronunciation voice data of a word stored in the word storage means be output, but the word text can be displayed in synchronization with the pronunciation voice data, a mouth-shaped image corresponding to the pronunciation voice data can be displayed at the mouth portion of the displayed image, and the displayed image can be changed when the word's accent is detected, so correct and incorrect accents of a word can be learned easily and clearly.
[0014]
The voice display output control device according to claim 4 of the present invention is the device according to claim 3, in which correct/incorrect accent display control means displays, side by side for a stored word, its correctly accented phonetic notation and its incorrectly accented phonetic notation, and correct/incorrect accent selection means selects either of the displayed notations. The voice data output means then outputs the correct-accent pronunciation voice data or the incorrect-accent pronunciation voice data of the word in accordance with the selection made by the correct/incorrect accent selection means.
[0015]
According to this, the correctly accented or incorrectly accented phonetic notation of a word stored in the word storage means can be selected and the corresponding pronunciation voice data output; furthermore, the word text can be displayed in synchronization with the pronunciation voice data, a mouth-shaped image corresponding to the pronunciation voice data can be displayed at the mouth portion of the displayed image, and the displayed image can be changed when the word's accent is detected, so the correct and incorrect accents of a word and their timing can be learned easily and clearly.
[0016]
In the voice display output control device according to claim 5 of the present invention, storage means stores a plurality of headwords, each in association with pronunciation voice data for at least two regions, and designation means designates the pronunciation voice data of one of the two or more regions for a stored headword. The voice data output means outputs the pronunciation voice data of the designated region of the headword, the text synchronous display control means displays the text of the headword in synchronization with the pronunciation voice data of the designated region being output, the image display control means displays an image including at least a mouth portion in a display form that differs according to the designated region, and the mouth image display control means displays, for the mouth portion of the displayed image, a mouth-shaped image corresponding to the pronunciation voice data in synchronization with its output. Then, as the headword text is synchronously displayed, accent detection means detects the accent of the headword, and image change display control means changes the image displayed by the image display control means in response to the accent detection.
[0017]
According to this, pronunciation voice data of different regional variants of the same headword can be designated and output, the headword text and a mouth-shaped image at the mouth portion of the displayed image can be shown in synchronization with the output of the pronunciation voice data, the image can be displayed in a form that differs according to the designated region, and the image can be changed upon accent detection, so the pronunciation of the designated region and the timing of its accent can be learned easily and clearly.
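A small sketch of this region-keyed selection, under the assumption of a simple per-headword table (the file names and dictionary layout below are illustrative, not the stored format described later for the embodiment):

```python
# Each headword stores pronunciation voice data for two or more regions; the
# designated region selects which voice and which face-image variant are used.

PRONUNCIATIONS = {
    "lough": {
        "US": {"voice": "lough_us.wav", "face": "char1_US.png"},
        "UK": {"voice": "lough_uk.wav", "face": "char1_UK.png"},
    }
}

def play_region(headword, region, speak, show):
    entry = PRONUNCIATIONS[headword][region]   # designation means picks the region
    show(entry["face"])                        # display form differs by region
    speak(entry["voice"])

play_region("lough", "US", speak=print, show=print)
```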
[0018]
An image display control device according to claim 6 of the present invention changes and controls a face image having a mouth or a facial expression in accordance with the display, in pronunciation order, of a sequence of pronunciation target data such as a headword of a word. First storage means stores a plurality of sets of pronunciation target data in association with phonetic symbols including accented phonetic symbols, and second storage means stores a plurality of sets of phonetic symbols, including accented phonetic symbols, in association with the corresponding voice and face image. First control means reads the phonetic symbols corresponding to the pronunciation target data from the first storage means in accordance with the display of the sequence of pronunciation target data in pronunciation order, reads the voice and face image corresponding to the read phonetic symbols from the second storage means, outputs the read voice, and controls the read face image to be displayed. Second control means determines, when voice is output by the first control means, whether or not the read phonetic symbols include an accented phonetic symbol; when it is determined that an accented symbol is included, it reads the voice and face image corresponding to the accented phonetic symbol from the second storage means, outputs the read voice, and controls the read face image to be displayed.
[0019]
According to this, as pronunciation target data such as a headword is displayed in pronunciation order, the voice corresponding to its phonetic symbols can be output and the corresponding face image displayed, and at a portion containing an accent symbol the voice and face image corresponding to the accented phonetic symbol can be output and displayed. The pronunciation of a word, the facial expression accompanying that pronunciation, and the pronunciation and facial expression at the accented portion can therefore be learned easily and clearly.
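The two storage means of claim 6 can be pictured as two lookup tables, sketched below with hypothetical contents (the headword "low" and its accent on the second symbol follow the example used later in the embodiment; the file names are placeholders):

```python
# First storage: pronunciation target data -> phonetic symbols ("'" marks an accent).
FIRST_STORAGE = {"low": ["l", "o'", "u"]}

# Second storage: phonetic symbol -> (voice, face image); accented symbols map to
# a different voice/face pair than unaccented ones (cf. claim 7).
SECOND_STORAGE = {
    "l":  ("l.wav", "face_neutral.png"),
    "o'": ("o.wav", "face_accent.png"),
    "u":  ("u.wav", "face_neutral.png"),
}

def pronounce(word, speak, show):
    for sym in FIRST_STORAGE[word]:        # read symbols in pronunciation order
        voice, face = SECOND_STORAGE[sym]  # second-storage lookup
        speak(voice)
        show(face)

pronounce("low", speak=print, show=print)
```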
[0020]
In the image display control device according to claim 7 of the present invention, in the device according to claim 6, the phonetic symbols including accented phonetic symbols stored in the second storage means consist of phonetic symbols with accent marks and phonetic symbols without accent marks, and the voice and face image stored in association with the accented phonetic symbols differ from the voice and face image stored in association with the unaccented phonetic symbols.
[0021]
According to this, the difference between the pronunciation voice and facial expression at portions of the pronunciation target data without an accent mark and those at portions with an accent mark can be learned even more clearly.
[0022]
The image display control device according to claim 8 of the present invention likewise changes and controls a face image having a mouth or a facial expression in accordance with the display, in pronunciation order, of a sequence of pronunciation target data such as a headword of a word. Storage means stores a plurality of sets of pronunciation target data in association with its voice and face images, detection means detects, in the stored signal waveform of the voice, the peak portion of the waveform corresponding to the accent portion of the pronunciation target data, and display control means reads from the storage means the face image corresponding to the voice of the detected accent portion and controls it to be displayed in a display form different from the face image corresponding to the voice of the waveform portions other than the accent portion.
[0023]
According to this, as pronunciation target data such as a headword is displayed in pronunciation order, a face image corresponding to the pronunciation voice can be displayed, and the face image for the accent portion, detected from the peak of the voice signal waveform, can be displayed in a different display form, so the facial expression accompanying pronunciation at the accent portion can be learned more clearly.
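The following is an illustrative sketch of the peak-based accent detection of claim 8, under the simplifying assumption that the accent is taken to be the region around the maximum amplitude of the voice waveform (the waveform values and window size are made up for the example):

```python
def find_accent_region(samples, window=4):
    """Return (start, end) indices of the peak region of a voice waveform."""
    peak = max(range(len(samples)), key=lambda i: abs(samples[i]))
    return max(0, peak - window), min(len(samples), peak + window)

# Toy waveform: the louder (higher-amplitude) samples in the middle stand for the accent.
waveform = [0.1, 0.2, 0.3, 0.9, 1.0, 0.8, 0.3, 0.2, 0.1]
start, end = find_accent_region(waveform, window=1)
for i, s in enumerate(waveform):
    face = "accent face" if start <= i < end else "normal face"
    print(i, s, face)
```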
[0024]
In the image display control device according to claim 9 of the present invention, in the device according to claim 8, the display control means further includes text display control means for controlling the portion of the pronunciation target data corresponding to the accent portion detected by the detection means to be displayed in a display form different from the portions of the pronunciation target data corresponding to the waveform portions other than the accent portion.
[0025]
According to this, in addition to displaying the face image corresponding to the pronunciation voice of the pronunciation target data, the accent portion of the pronunciation target data is displayed in a form different from the rest of the data, so the accent portion of the pronunciation target data and the facial expression accompanying its pronunciation can be learned even more clearly.
[0026]
BEST MODE FOR CARRYING OUT THE INVENTION
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
[0027]
(1st Embodiment)
FIG. 1 is a block diagram showing a configuration of an electronic circuit of a portable device 10 according to an embodiment of a voice display output control device (image display control device) of the present invention.
[0028]
The portable device (PDA: personal digital assistant) 10 is configured as a computer that reads a program recorded on any of various recording media, or a program received via transmission, and whose operation is controlled by the read program; its electronic circuit includes a CPU (central processing unit) 11.
[0029]
The CPU 11 controls the operation of each circuit unit in accordance with a PDA control program stored in advance in the FLASH memory 12A within the memory 12, a PDA control program read into the memory 12 from an external recording medium 13 such as a ROM card via the recording medium reading unit 14, or a PDA control program read into the memory 12 from another computer terminal (30) on a communication network N such as the Internet via the transmission control unit 15. The stored PDA control program is activated in response to an input signal corresponding to a user operation from the input unit 17a, composed of switches and keys, or from the coordinate input device 17b, composed of a mouse or a tablet; in response to a communication signal from another computer terminal (30) on the communication network N received by the transmission control unit 15; or in response to a communication signal from an external communication device (PC: personal computer) 20 received via the communication unit 16 over a short-range wireless connection such as Bluetooth(R) or a wired connection.
[0030]
The CPU 11 is connected to the memory 12, the recording medium reading unit 14, the transmission control unit 15, the communication unit 16, the input unit 17a, and the coordinate input device 17b, and is also connected to the display unit 18 and to a stereo voice output unit 19b, which has left and right channel speakers L and R and outputs voice.
[0031]
The CPU 11 has a built-in timer for measuring the processing time.
[0032]
The memory 12 of the portable device 10 includes a flash memory (EEP-ROM) 12A and a RAM 12B.
[0033]
The FLASH memory (EEP-ROM) 12A stores a system program that controls the overall operation of the portable device 10, a network communication program for performing data communication with each computer terminal (30) on the communication network N via the transmission control unit 15, and an external device communication program for performing data communication with the external communication device (PC) 20 via the communication unit 16. It also stores various PDA control programs such as a schedule management program, an address management program, and the dictionary processing program 12a, which searches the dictionary for a headword, synchronously reproduces various data such as the voice, text, and face image (including the mouth-shape composite image) corresponding to the searched headword, sets the type of face image (character), and conducts tests on headword accents.
[0034]
The FLASH memory (EEP-ROM) 12A further stores a dictionary database 12b (see FIG. 2), dictionary voice data 12c, character image data 12d (see FIG. 3), voice-specific mouth (shape) image data 12e (see FIG. 4), and a dictionary time code file 12f (see FIGS. 5 and 6).
[0035]
The dictionary database 12b stores data for various dictionaries such as an English-Japanese dictionary, a Japanese-English dictionary, and a Japanese-language dictionary. As shown in FIG. 2, for each headword the following are stored in linked form: the number and storage address of the time code file used to easily perform synchronized playback of voice, text, and image; the number and storage address of the HTML file that sets up the image playback windows; the number and storage address of the text file; the number and storage address of the text-mouth synchronization file, which associates each character of the text with its phonetic symbol and mouth-shape number; the number and storage address of the sound file containing the voice data; and the data number and storage address of the dictionary contents.
[0036]
In each embodiment, for the phonetic symbols described in the specification, similar characters are substituted for the formal phonetic symbols because the latter are difficult to input; the formal phonetic symbols are shown in the drawings.
[0037]
FIG. 2 is a diagram showing the synchronous reproduction link data for one headword, "low", in the dictionary database 12b stored in the memory 12 of the portable device 10: FIG. 2(A) is a table showing the file numbers and storage addresses, FIG. 2(B) shows the text data "low" stored under the text file number, and FIG. 2(C) shows the text characters, phonetic symbols, and mouth-shape numbers stored under the text-mouth synchronization file number.
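As a concrete sketch of one such link record, the following Python fragment models the fields described above (the dataclass shape is illustrative; the file numbers 23, 3, and 4222 are the ones quoted later in the embodiment for "low", while the text-mouth synchronization file number and dictionary data number here are placeholders):

```python
from dataclasses import dataclass

@dataclass
class SyncLinkData:
    time_code_file_no: int         # time code file for synchronized voice/text/image playback
    html_file_no: int              # HTML file that sets up the playback windows W1/W2
    text_file_no: int              # headword text (with phonetic symbols)
    text_mouth_sync_file_no: int   # per-character phonetic symbol and mouth-shape number
    sound_file_no: int             # pronunciation voice data
    dictionary_data_no: int        # meanings and other dictionary contents

# Values for the headword "low" as used in the embodiment description.
LINK_DATA = {"low": SyncLinkData(23, 3, 4222, 4222, 4222, 0)}
```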
[0038]
As the dictionary voice data 12c, pronunciation voice data for each headword in the dictionary database 12b is stored in association with its sound file number and address.
[0039]
FIG. 3 is a diagram showing the character image data 12d, which is stored in the memory 12 of the portable device 10 and selectively used, according to a user setting, for synchronized display of the pronunciation mouth-shape image during a headword search.
[0040]
In the present embodiment, three types of character images (face images), No1 to No3, are prepared as the character image data 12d, and each character image No1, No2, No3 is stored in association with mouth image area data (X1, Y1, X2, Y2) that designates the rectangular synthesis area for the mouth-shaped image as the coordinates of two diagonal points.
[0041]
Note that for each of these three character images (face images) No1 to No3, an accent face image No1' to No3' for expressing pronunciation emphasis at the accent timing of the searched headword is also stored (see FIG. 12(C)(2) and FIG. 13(B)(2)). Furthermore, American-English character images No1US to No3US (see FIG. 15) and British-English character images No1UK to No3UK (see FIG. 16), used when American or British pronunciation is designated, are stored together with their accent face images No1US' to No3US' (see FIG. 15(B)(2)) and No1UK' to No3UK' (see FIG. 16(B)(2)).
[0042]
FIG. 4 is a diagram showing the voice-specific mouth image data 12e, which is stored in the memory 12 of the portable device 10 and is synthesized and displayed in the mouth image areas (X1, Y1; X2, Y2) of the character images (12d: No1 to No3) for synchronized display of pronunciation mouth-shape images during a headword search.
[0043]
The mouth-shape images 12e1, 12e2, ... are associated with the phonetic symbols required to pronounce all the headwords stored in the dictionary database 12b, and are stored under mouth numbers No. n.
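A possible way to picture the relationship between the character image data 12d and the voice-specific mouth image data 12e is sketched below (file names and coordinate values are placeholders; the mouth numbers 36, 9, 8 are the ones used for "low" in the embodiment):

```python
# 12d: each character (face) image carries a rectangle, given by two diagonal corner
# points (X1, Y1)-(X2, Y2), into which a mouth-shape image is composited.
CHARACTER_IMAGES = {
    "No1": {"file": "char1.png", "accent_file": "char1_accent.png",
            "mouth_area": (40, 70, 80, 100)},
    "No3": {"file": "char3.png", "accent_file": "char3_accent.png",
            "mouth_area": (32, 64, 72, 96)},
}

# 12e: mouth-shape images keyed by mouth number; the text-mouth synchronization file
# maps each character/phonetic symbol of a headword to one of these numbers.
MOUTH_IMAGES = {36: "mouth36.png", 9: "mouth09.png", 8: "mouth08.png"}

def compose(character, mouth_no):
    """Describe compositing a mouth image into a character image's mouth area."""
    char = CHARACTER_IMAGES[character]
    x1, y1, x2, y2 = char["mouth_area"]
    return f"paste {MOUTH_IMAGES[mouth_no]} onto {char['file']} at ({x1},{y1})-({x2},{y2})"

print(compose("No3", 36))
```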
[0044]
The dictionary time code file 12f stored in the memory 12 of the portable device 10 is used for synchronously reproducing the voice, text, and face image (including the mouth-shape composite image) corresponding to a searched headword. It is a command file (see FIG. 5) prepared not for every individual headword, but shared by groups of headwords that have the same number of characters and phonetic symbols and the same pronunciation timing, and it is compressed and encrypted by a predetermined algorithm.
[0045]
FIG. 5 is a diagram showing a time code file 12f23 (12i) of the file No. 23 associated with the headword "low" in the dictionary time code file 12f stored in the memory 12 of the portable device 10.
[0046]
The time code file 12fn consists of header information H, which describes a preset reference processing unit time (for example, 25 ms), followed by an array of time codes, each of which performs command processing at that fixed time interval to synchronously reproduce the various data (voice, text, and image). Each time code is a combination of a command code, which designates a command, and parameter data consisting of a reference number or designated numerical value that associates the command with the data contents (text file, sound file, image file, etc.) to which it applies.
[0047]
For example, when the preset reference processing unit time is 25 ms, the playback time of the time code file 12f23 for the headword "low" shown in FIG. 5, which consists of 40 steps of time codes, is 40 × 25 ms = 1 second.
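A compact sketch of this layout and the playback-time arithmetic follows; the command mnemonics are those of FIG. 6, but the step sequence shown is abbreviated and illustrative rather than the full 40-step file:

```python
HEADER = {"unit_ms": 25}                      # reference processing unit time H
TIME_CODES = [                                # (command code, parameter data)
    ("CS", 0x00), ("DH", 0x00), ("DI", 0x00), ("PS", 0x00),
    ("LT", 0x00), ("VD", 0x00), ("BL", 0x00), ("HL", 0x01),
    # ... further HL/NP steps ...
    ("FN", 0x00),
]

def playback_time_ms(n_steps, unit_ms):
    return n_steps * unit_ms

# With 40 steps at 25 ms each, the file for "low" plays back in 1000 ms = 1 second.
print(playback_time_ms(40, HEADER["unit_ms"]), "ms")
```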
[0048]
FIG. 6 is a diagram showing the command codes of various commands described in the dictionary time code file 12fn (see FIG. 5) of the portable device 10 in association with the command contents analyzed based on the parameter data.
[0049]
Commands used in the time code file 12fn include standard commands and extended commands. The standard commands are: LT (load i-th text), VD (display i-th text segment), BL (reset character counter / designate i-th phrase block), HN (no highlight, count up character counter), HL (highlight and count up character counter to the i-th character), LS (scroll one line / count up character counter), DH (display i-th HTML file), DI (display i-th image file), PS (play i-th sound file), CS (clear all files), PP (pause for i units of the basic time), FN (end processing), and NP (no operation).
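One way to picture how these command codes are interpreted is a dispatch table mapping each code to a handler that receives the parameter data i; the handler names below are hypothetical, not part of the device's program:

```python
def make_dispatch(player):
    """Map FIG. 6 standard command codes to handlers on a hypothetical player object."""
    return {
        "LT": player.load_text,            # load i-th text
        "VD": player.show_text_segment,    # display i-th text segment
        "BL": player.reset_char_counter,   # reset counter / select i-th phrase block
        "HN": player.advance_no_highlight,
        "HL": player.highlight_to_char,    # highlight up to the i-th character
        "LS": player.scroll_line,
        "DH": player.show_html,            # display i-th HTML file (window layout)
        "DI": player.show_image,           # display i-th image file (character image)
        "PS": player.play_sound,           # play i-th sound file (pronunciation voice)
        "CS": player.clear_all,
        "PP": player.pause,                # pause for i units of the basic time
        "NP": lambda i: None,              # no operation: keep the current output state
    }                                      # FN (end processing) is handled by the playback loop
```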
[0050]
The RAM 12B in the memory 12 is provided with: a search headword memory 12g, into which the headword found by the search processing of the dictionary database 12b is read and stored according to its headword number; a headword-corresponding dictionary data memory 12h, into which dictionary data such as the meanings corresponding to the searched headword are read from the dictionary database 12b and stored according to the dictionary data number; and a reproduction time code file memory 12i, into which the time code file 12fn (see FIG. 5) for synchronously reproducing the voice, text, and image corresponding to the searched headword is read from the dictionary time code file 12f according to the time code file number in the dictionary database 12b, expanded, decoded, and stored.
[0051]
The RAM 12B in the memory 12 is further provided with: a synchronization HTML file memory 12j, into which the HTML file for setting up the windows W1 and W2 (see FIGS. 12 and 13) used to synchronously reproduce text and images on the headword search screen G2 is read from the dictionary database 12b and stored according to its HTML file number; a synchronization text file memory 12k, into which the text data of the searched headword is read from the dictionary database 12b and stored according to its text file number; a synchronization sound file memory 12m, into which the pronunciation voice data of the searched headword is read from the dictionary voice data 12c according to the sound file number in the dictionary database 12b; a synchronization image file memory 12n, into which the character image set by the user for displaying the pronunciation image of the searched headword is read from the character image data 12d (see FIG. 3) and stored; a mouth image area memory 12p, which stores the mouth image area data (X1, Y1; X2, Y2) indicating the synthesis area for the mouth-shape image within the character image stored in the synchronization image file memory 12n; and an image expansion buffer 12q, in which the character image and mouth image to be reproduced in synchronization with the voice and text according to the time code file 12fn corresponding to the searched headword, stored in the time code file memory 12i, are expanded, combined, and stored.
[0052]
That is, when the dictionary processing program 12a stored in the FLASH memory 12A of the portable device (PDA) 10 is activated and the searched headword is "low", the time code file read from the dictionary time code file 12f and stored in the reproduction time code file memory 12i is, for example, the time code file 12f23 shown in FIG. 5. When the third command code "DI" and its parameter data "00" are read, since the command "DI" is the i-th image file display command, the character image 12dn stored in the synchronization image file memory 12n linked from the parameter data i = 00 is read and displayed.
[0053]
When the fourth command code "PS" and its parameter data "00" are read in accordance with the command processing performed every set processing unit time, since the command "PS" is the i-th sound file play command, the voice data 12cn stored in the synchronization sound file memory 12m linked from the parameter data i = 00 is read and output.
[0054]
When the sixth command code "VD" and its parameter data "00" are read in accordance with the command processing performed every set processing unit time, since the command "VD" is the i-th text segment display command, the text file of the 0th phrase (in this case, the text "low" of the searched headword stored in the synchronization text file memory 12k) is displayed in accordance with the parameter data i = 00.
[0055]
Further, when the ninth command code "NP" and its parameter data "00" are read in accordance with the command processing performed every set processing unit time, since the command "NP" is a no-operation command, the current file output state is maintained.
[0056]
The detailed operation of the synchronized playback of the pronunciation voice, text, and image (mouth image) corresponding to the searched headword based on the time code file 12f23 (12i), whose file contents are shown in FIG. 5, will be explained again later.
[0057]
Next, various operations performed by the portable device 10 having the above configuration will be described.
[0058]
FIG. 7 is a flowchart showing the main processing of the portable device 10 according to the dictionary processing program 12a.
[0059]
FIG. 8 is a flowchart showing headword synchronous reproduction processing accompanying the main processing of the portable device 10.
[0060]
FIG. 9 is a flowchart showing the text-corresponding mouth display process, which is executed by interrupt each time a headword character is highlighted during the headword synchronous reproduction process of the portable device 10.
[0061]
FIG. 10 is a diagram showing a setting display state of a character image for synchronous reproduction in a character setting process in the main process of the portable device 10.
[0062]
When the mode is switched to the character image setting mode by operating the "setting" key 17a1 and the cursor key 17a2 of the input unit 17a (steps S1 → S2), for example the three types of character image data 12d1 (No1), 12d2 (No2), and 12d3 (No3) stored in the FLASH memory 12A [see FIG. 3] are read out and displayed on the display unit 18 as a character image list selection screen G1, as shown in FIG. 10 (step S3).
[0063]
On the character image list selection screen G1, the selection frame X is moved among the character images by operating the cursor key 17a3 to select the character image desired by the user (for example, 12d3 (No3)). When selection of the character image is detected by a confirmation operation with the "translate/determine (voice)" key 17a4 (step S4), the detected character image 12dn is read and transferred to the synchronization image file memory 12n in the RAM 12B (step S5). Further, the mouth image area data (X1, Y1; X2, Y2) indicating the mouth-image synthesis area of the selected character image 12dn is also read and transferred to the mouth image area memory 12p in the RAM 12B (step S6).
[0064]
In this way, the character image that is to serve as the target of mouth-shape image synthesis, displayed in synchronization with the pronunciation voice of a headword when the headword is searched, is selected and set.
[0065]
FIG. 11 is a view showing a search entry display screen G2 associated with entry search processing in the main processing of the portable device 10.
[0066]
To perform a headword search based on the dictionary data of, for example, the English-Japanese dictionary stored in the dictionary database 12b, the English-Japanese dictionary search mode is set by operating the "English-Japanese" key 17a5 of the input unit 17a, and the headword to be searched is then input (for example, "low") (steps S7 → S8). A plurality of headwords that match the input headword, or whose initial characters match it, are then retrieved from the dictionary data of the English-Japanese dictionary, read out, and displayed on the display unit 18 as a list of search headwords (not shown) (step S9).
[0067]
On the search headword list screen, when the headword that matches the one entered by the user (in this case, "low") is designated with the cursor key and the "translate/determine (voice)" key 17a4 is operated (step S10), the selected headword "low" is stored in the search headword memory 12g in the RAM 12B, and the dictionary data such as its pronunciation, part of speech, and meanings are read out, stored in the headword-corresponding dictionary data memory 12h in the RAM 12B, and displayed on the display unit 18 as the search headword display screen G2 shown in FIG. 11 (step S11).
[0068]
Here, in order to output the pronunciation voice of the searched and displayed headword "low" and at the same time synchronously display its characters, phonetic symbols, and pronunciation mouth-shape images, the "translate/determine (voice)" key 17a4 is operated (step S12), and the process proceeds to the synchronous reproduction process in FIG. 8 (step SA).
[0069]
FIG. 12 shows the headword character display window W1 and the pronunciation mouth-shape display window W2 displayed on the search headword display screen G2, with the character image No3 set, during the synchronous reproduction process in the headword search process of the portable device 10: FIG. 12(A) shows the initial display state of the headword character display window W1 and the pronunciation mouth-shape display window W2 on the search headword display screen G2, FIG. 12(B) shows the change in the headword character display window W1 and the pronunciation mouth-shape display window W2 for non-accented portions, synchronized with the output of the pronunciation voice, and FIG. 12(C) shows the change in the headword character display window W1 and the pronunciation mouth-shape display window W2 for the accented portion, synchronized with the output of the pronunciation voice.
[0070]
When the synchronous reproduction process (step SA) in FIG. 8 is started in response to operation of the "translate/determine (voice)" key 17a4 while the search headword display screen G2 is displayed, initialization processing such as clearing each work area in the RAM 12B is performed (step A1). First, based on the synchronous reproduction link data (see FIG. 2) for the current search headword "low" stored in the dictionary database 12b, the HTML file for setting up the text and image synchronous reproduction windows W1 and W2 (see FIG. 12) on the headword search screen G2 is read according to HTML file No. 3 and written into the synchronization HTML file memory 12j. The text data of the search headword, "low (with phonetic symbols)", is read according to text file No. 4222 and written into the synchronization text file memory 12k. Further, the pronunciation voice data of the search headword is read according to sound file No. 4222 and written into the synchronization sound file memory 12m (step A2).
[0071]
Note that the character image set by the user for displaying the pronunciation image of the search headword (in this case 12d3 (No3)) has already been read from the character image data 12d (see FIG. 3) and written into the synchronization image file memory 12n in step S5 of the character setting process, and the mouth image area data (X1, Y1; X2, Y2), which defines the pronunciation mouth-shape image synthesis area in the character image 12d3 (No3), has likewise already been written into the mouth image area memory 12p in step S6 of the character setting process.
[0072]
Then, from among the time code files 12fn for synchronized playback of the encrypted voice, text, and images corresponding to the various headwords stored as the dictionary time code file 12f in the FLASH memory 12A, the time code file 12f23 (see FIG. 5) corresponding to the current search headword "low" is read according to time code file No. 23 described in the synchronous reproduction link data (see FIG. 2), decompressed and decrypted, and transferred to and stored in the reproduction time code file memory 12i (step A3).
[0073]
When the reading of the various files for synchronously reproducing the pronunciation voice, text, and pronunciation mouth-shape image corresponding to the search headword "low" into the RAM 12B and the transfer of the time code file 12f23 for their synchronous reproduction into the RAM 12B are complete in this way, the CPU 11 sets the processing unit time (for example, 25 ms) of the time code file (CAS file) 12f23 (see FIG. 5) stored in the time code file memory 12i from the header information H of the time code file 12f23 (step A4).
[0074]
A read pointer is set at the head of the time code file 12f23 stored in the time code file memory 12i, read pointers are set at the heads of the various files written into the synchronization file memories 12j, 12k, 12m, and 12n (step A5), and a timer for measuring the reproduction processing timing of each synchronization file is started (step A6).
[0075]
When the processing timer is started in step A6, the command code and parameter data of the time code file 12f23 (see FIG. 5) at the initial position of the read pointer set in step A5 are read for each processing unit time (25 ms) set in step A4 for the current time code file 12f23 (step A7).
[0076]
Then, it is determined whether or not the command code read from the time code file 12f23 (see FIG. 5) is "FN" (step A8); if it is determined to be "FN", a stop of the reproduction process is instructed at that point (steps A8 → A9).
[0077]
On the other hand, when it is determined that the command code read from the time code file 12f23 (see FIG. 5) is not “FN”, a process corresponding to the content of the command code (see FIG. 6) is executed. (Step A10).
[0078]
When it is determined that the time measured by the timer has reached the next processing unit time (25 ms), the read pointer for the time code file 12f23 (see FIG. 5) stored in the RAM 12B is moved to the next position (steps A11 → A12), and the processing from step A7, in which the command code and parameter data of the time code file 12f23 (see FIG. 5) at the read pointer position are read, is repeated (steps A12 → A7 to A10).
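The playback loop of steps A4 to A12 can be summarized by the following minimal sketch (helper names are hypothetical, and the hardware timer is simplified to a sleep rather than a timer interrupt):

```python
import time

def run_time_code_file(time_codes, unit_ms, dispatch):
    """Execute one command per processing unit time until an FN command is read."""
    pointer = 0                                   # step A5: read pointer at the head
    while True:
        code, param = time_codes[pointer]         # step A7: read command code + parameter
        if code == "FN":                          # steps A8 -> A9: end of reproduction
            break
        dispatch[code](param)                     # step A10: execute the command
        time.sleep(unit_ms / 1000.0)              # wait for the next unit time (step A11)
        pointer += 1                              # step A12: advance the read pointer

# "dispatch" maps command codes such as "DI", "PS", "VD" to handlers,
# as in the dispatch-table sketch following FIG. 6 above.
```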
[0079]
Here, the synchronized reproduction output operation of the pronunciation voice / text / pronunciation mouth type image file based on the time code file 12f23 of the search entry “low” shown in FIG. 5 will be described in detail.
[0080]
That is, in the time code file 12f23, command processing is executed for each (reference) processing unit time (for example, 25 ms) described and set in advance in the header H. First, when the first command code "CS" (clear all files) of the time code file 12f23 (see FIG. 5) and its parameter data "00" are read, an instruction to clear the output of all files is issued, and the output of the text, voice, and image files is cleared (step A10).
[0081]
When the second command code "DH" (display i-th HTML file) and its parameter data "00" are read, the headword text and image frame data of the HTML data are read from the synchronization HTML file memory 12j in the RAM 12B in accordance with the parameter data (i = 0) read with the command code DH, and the text and image synchronous reproduction windows W1 and W2 are set up on the headword search screen G2 on the display unit 18, as shown in FIG. 12(A) (step A10).
[0082]
When the third command code "DI" (display i-th image file) and its parameter data "00" are read, the character image 12d (in this case No3) set and stored in the character setting process (steps S2 to S6) is read from the synchronization image file memory 12n in the RAM 12B in accordance with the parameter data (i = 0) read with the command code DI, and is displayed, as shown in FIG. 12(A), in the synchronous reproduction window W2 set by the HTML file on the headword search screen G2 (step A10).
[0083]
When the fourth command code "PS" (play i-th sound file) and its parameter data "00" are read, the pronunciation voice data corresponding to the search headword "low", set and stored in step A2, is read from the synchronization sound file memory 12m in the RAM 12B in accordance with the parameter data (i = 0) read with the command code PS, and voice output from the stereo voice output unit 19b is started (step A10).
[0084]
When the fifth command code "LT" (load i-th text) and its parameter data "00" are read, one phrase of text data, "l", "o", "w" (including phonetic symbols), corresponding to the search headword "low" set and stored in step A2, is specified in the synchronization text file memory 12k in accordance with the parameter data (i = 0) read with the command code LT (step A10).
[0085]
When the sixth command code "VD" (display i-th text segment) and its parameter data "00" are read, the one phrase of text data "l", "o", "w" (including phonetic symbols) specified under the fifth command code "LT" is read in accordance with the parameter data (i = 0) read with the command code VD, and is displayed in the text synchronous reproduction window W1 as shown in FIG. 12(A) (step A10).
[0086]
When the seventh command code "BL" (reset character counter / designate i-th phrase block) and its parameter data "00" are read, the character counter for the search headword "low" being displayed in the text synchronous reproduction window W1 is reset (step A10).
[0087]
Then, when the eighth command code "HL" (highlight and count up character counter to the i-th character) and its parameter data "01" are read, in accordance with the parameter data (i = 1) read with the command code HL, the first character "l" of the search headword "low" (including phonetic symbols) displayed in the text synchronous reproduction window W1 and its corresponding phonetic symbol are given a highlight (identification) display HL by color change, reverse video, underlining, or the like, as shown in FIG. 12(A), and the character counter is counted up to the second character and its corresponding phonetic symbol (step A10).
[0088]
Each time a character of the search headword "low" and its corresponding phonetic symbol are highlighted (identified) by the time code file 12f23, the text-corresponding mouth display process in FIG. 9 is executed by interrupt.
[0089]
That is, when the character "l" of the search headword "low" given the highlight (identification) display HL is detected (step B1), the pronunciation mouth-shape image corresponding to the detected character "l" is read from the voice-specific mouth image data 12e (see FIG. 4) as the pronunciation mouth-shape image 12e2 (No36), according to the mouth number "36" associated with the text "l" in the text-mouth synchronization file (see FIG. 2(C)) of the dictionary database 12b (step B2). The pronunciation mouth-shape image 12e2 (No36) corresponding to the highlighted (identified) character "l" of the search headword "low" is then synthesized and displayed, as shown in FIG. 12(A) (and FIG. 12(B)(1)), in the mouth-image synthesis area of the character image 12d (No3) displayed in the image synchronous reproduction window W2 on the headword search screen G2, in accordance with the mouth image area (X1, Y1; X2, Y2) stored in the mouth image area memory 12p in the RAM 12B (step B3).
[0090]
Here, it is determined whether or not the phonetic symbol of the currently highlighted (identified) text "l", indicated by the text-mouth synchronization file (see FIG. 2(C)), has an accent mark (step B4). In the case of the phonetic symbol [l] of the highlighted (identified) text "l", it is determined that there is no accent mark, so the character image 12d (No3) continues to be displayed as the normal face image (steps B4 → B5).
[0091]
If it is determined that there is an accent mark, the character image 12d (No3) is changed to the accent face image No3' for expressing pronunciation emphasis and displayed (see FIG. 12(C)(2)) (steps B4 → B6).
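The essence of steps B1 to B6 can be sketched as follows, modeling the text-mouth synchronization file of FIG. 2(C) as a per-character record of phonetic symbol, mouth number, and accent flag (the record contents and file names are illustrative placeholders):

```python
TEXT_MOUTH_SYNC = {          # for the headword "low"
    "l": {"symbol": "l",  "mouth_no": 36, "accent": False},
    "o": {"symbol": "ou", "mouth_no": 9,  "accent": True},
    "w": {"symbol": "u",  "mouth_no": 8,  "accent": False},
}

def on_highlight(char, draw):
    rec = TEXT_MOUTH_SYNC[char]                       # B1/B2: look up the mouth number
    mouth = f"mouth{rec['mouth_no']:02d}.png"         # pronunciation mouth-shape image 12e
    if rec["accent"]:                                 # B4 -> B6: accent mark present
        draw(face="char3_accent.png", mouth=mouth)    # accent face image No3'
    else:                                             # B4 -> B5: keep the normal face
        draw(face="char3.png", mouth=mouth)

for c in "low":
    on_highlight(c, draw=lambda **kw: print(c, kw))
```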
[0092]
The output timing of the pronunciation voice data corresponding to the search headword "low", whose output from the stereo voice output unit 19b was started in response to the fourth command code "PS", corresponds to the identification display timing of each character of the search headword "low" determined by the processing unit time (25 ms) of the time code file 12f23, because the time code file 12f23 is created in advance to match the two. Consequently, when the first character "l" of the search headword "low" is identified and the pronunciation mouth-shape image 12e (No36) is synthesized and displayed, the pronunciation voice that reads out the corresponding phonetic symbol is output in synchronization.
[0093]
Thereby, the identification display of the first character "l" of the search headword "low", the synthesis and display of the pronunciation mouth-shape image 12e (No36) on the set character image 12d (No3), and the output of the pronunciation voice are performed in synchronization.
[0094]
When the ninth command code “NP” is read, the synchronized display screen of the character image and the text data corresponding to the current search entry “low” and the synchronized output state of the pronunciation voice data are maintained.
[0095]
Thereafter, in accordance with the twelfth command code "HL" and the thirty-fifth command code "HL", as shown in FIG. 12(C)(2) and FIG. 12(C)(3), the text data "low" of the search headword and its phonetic symbols are sequentially given the highlight (identification) display HL in the text synchronous reproduction window W1: the second character "o" and its phonetic symbol [o], then the third character "w" and its phonetic symbol [u] (step A10). At the same time, in the image synchronous reproduction window W2, in accordance with the text-corresponding mouth display process in FIG. 9 and the text-mouth synchronization file (see FIG. 2(C)), the pronunciation mouth-shape image 12e (No9) corresponding to mouth number 9 and the pronunciation mouth-shape image 12e (No8) corresponding to mouth number 8 are read in turn from the voice-specific mouth images 12e, sequentially synthesized into the mouth image area (X1, Y1; X2, Y2) of the set character image 12d (No3), and displayed in synchronization (steps B1 to B3).
[0096]
Further, the pronunciation voice data of the search headword "low" output from the stereo voice output unit 19b in response to the fourth command code "PS" is also output sequentially and in synchronization as the voice that reads out the highlighted (identified) portion of the text "low" and its phonetic symbols.
[0097]
In addition, during the composite switching display (steps B1 to B5) of each pronunciation mouth-shape image 12e (No36) → 12e (No9) → 12e (No8) onto the character image 12d (No3) by the text-corresponding mouth display process, synchronized with the highlight (identification) display HL of each character "l", "o", "w" of the search headword "low", when the pronunciation mouth-shape image 12e (No9) is synthesized and displayed together with the highlight (identification) display HL of the second character "o" and its phonetic symbol, it is determined that the phonetic symbol of the highlighted (identified) text "o" has an accent mark. Therefore, as shown in FIG. 12(C)(2), the character image 12d (No3) at this point is changed to the accent face image No3' for expressing pronunciation emphasis and displayed (steps B4 → B6).
[0098]
In other words, when the highlight (identification) display HL and the pronunciation mouth-shape image 12e (No9), synchronized with the output of the pronunciation voice for the accent character "o" of the search headword "low" shown in FIG. 12, are switched in and displayed, the normal set character (face) image 12d (No3) shown in FIG. 12(B)(2), which is the synthesis destination of the mouth image 12e (No9), is changed to the face image 12d (No3') corresponding to the accent shown in FIG. 12(C)(2), which expresses the state of strong pronunciation by, for example, sweating at the head and wrinkles at the mouth. The user can therefore easily learn, through their synchronized reproduction, the pronunciation voice of the search headword "low", its utterance timing, the corresponding characters "l", "o", "w" and their phonetic symbols, and each of the pronunciation mouth-shape images 12e (No36 → No9 → No8), and can moreover learn realistically the timing at which the voice is emphasized according to the accent.
[0099]
FIG. 13 shows the headword character display window W1 and the pronunciation mouth-shape display window W2 displayed on the search headword display screen G2, with the character image No1 set, during the synchronous reproduction process in the headword search process of the portable device 10: FIG. 13(A) shows the initial display state of the headword character display window W1 and the pronunciation mouth-shape display window W2 on the search headword display screen G2, and FIG. 13(B) shows the change in the headword character display window W1 and the pronunciation mouth-shape display window W2 synchronized with the output of the pronunciation voice.
[0100]
That is, when, in the character setting process of steps S1 to S6 in FIG. 7, the animation-style character image 12d (No1) has been selected and set from the three types of character image data 12d (No1), 12d (No2), and 12d (No3) (see FIG. 3) stored in advance, and the headword search process and the synchronous reproduction process for the searched headword "low", together with the text-corresponding mouth display process in FIG. 9, are performed as in steps S7 to SA, then, as shown in FIGS. 13(A) and 13(B), the highlight (identification) display HL of the search headword "low" and its phonetic symbols is performed sequentially in the headword character display window W1 on the search headword display screen G2, synchronized with the output of the pronunciation voice. Along with this, in the pronunciation mouth-shape display window W2, with the character image 12d (No1) set in the character setting process (steps S1 to S6) as the basic face image, the pronunciation mouth-shape images 12e (No36 → No9 → No8), synchronized with the voice output and with the highlight display HL of the text (including phonetic symbols), are sequentially switched in, synthesized, and displayed.
[0101]
Then, as shown in FIG. 13(B)(2), when the pronunciation mouth-shape image 12e (No9) is synthesized and displayed together with the highlight (identification) display HL of the second character "o" of the search headword "low" and its phonetic symbol, the phonetic symbol of the highlighted (identified) text "o" is determined to have an accent mark, so the animation-style character image 12d (No1) at this point is changed to the accent face image No1' for expressing pronunciation emphasis and displayed (steps B4 → B6).
[0102]
That is, even when the animation-style character image 12d (No1) shown in FIG. 13 is selected and set, at the moment the highlight (identification) display HL and the pronunciation mouth-shape image 12e (No9), synchronized with the output of the pronunciation voice for the accent character "o" of the search headword "low", are switched in and displayed, the normal animation-style character (face) image 12d (No1) onto which the mouth-shape image 12e (No9) is synthesized is changed to the face image 12d (No1') corresponding to the accent, which expresses the state of strong pronunciation by, for example, sweating at the head and movement of the body. The user can therefore easily learn, through their synchronized reproduction, the pronunciation voice of the search headword "low", its utterance timing, the corresponding characters "l", "o", "w" and their phonetic symbols, and each of the pronunciation mouth-shape images 12e (No36 → No9 → No8), and can moreover learn realistically the timing at which the voice is emphasized according to the accent.
[0103]
In the synchronous reproduction processing of text, pronunciation voice, and pronunciation mouth-shape images accompanying the headword search described with reference to FIGS. 11 to 13, the description has dealt with the case where the contents of the English-Japanese dictionary data stored in advance as the dictionary database 12b correspond only to the pronunciation of one country, the United States. However, as described below with reference to FIGS. 14 to 16, if the contents of the English-Japanese dictionary data stored in advance as the dictionary database 12b correspond to the pronunciations of two countries, the United States and the United Kingdom, the synchronous reproduction processing of text, pronunciation voice, and pronunciation mouth-shape images accompanying a headword search may be performed with the pronunciation form of either the United States or the United Kingdom designated.
[0104]
FIG. 14 is a diagram showing the search headword display screen G2 when an English-Japanese dictionary containing the pronunciation forms of two countries, the United States and the United Kingdom, is used in the headword search process in the main process of the portable device 10.
[0105]
To search for a headword based on the dictionary data of an English-Japanese dictionary containing, for example, the pronunciation forms of the two countries, the United States and the United Kingdom, stored in the dictionary database 12b, the user operates the "English-Japanese" key 17a5 of the input unit 17a to set the English-Japanese dictionary search mode. When a headword to be searched (e.g., "lough") is then input (step S7 → S8), a plurality of headwords whose leading characters match the input word, including an exact match, are searched and read out from the dictionary data of the English-Japanese dictionary and displayed on the display unit 18 as a search headword list (not shown) (step S9).
[0106]
When, on the search headword list screen, the headword that matches the word entered by the user (in this case, "lough") is designated with the cursor key and the "translation/decision (voice)" key 17a4 is operated (step S10), the selected and detected headword "lough" is stored in the headword memory 12g in the RAM 12B, and the dictionary data corresponding to the headword "lough", such as the pronunciations of the two countries (US/UK), part of speech, and meaning content, is read out and stored in the headword-corresponding dictionary data memory 12h in the RAM 12B, and is displayed on the display unit 18 as the search headword display screen G2 as shown in FIG. 14 (step S11).
[0107]
Here, in order to selectively output either the American pronunciation [laef] or the British pronunciation [la:f] for the searched and displayed headword "lough" and, at the same time, to synchronously display the headword characters, phonetic symbols, and pronunciation mouth images corresponding to that pronunciation, either of the US or UK pronunciation identifiers [US] or [English] displayed in the dictionary data on the search headword display screen G2 is designated (step S11a) and the "translation/decision (voice)" key 17a4 is operated (step S12); the processing then shifts to the synchronous reproduction process in FIG. 8 (step SA).
[0108]
FIG. 15 shows the headword character display window W1 and the pronunciation mouth type display window W2 that are window-displayed on the search headword display screen G2 when the American pronunciation [US] is designated along with the synchronous reproduction process in the headword search process of the portable device 10, in which (A) shows the setting display state of the headword character display window W1 and the pronunciation mouth type display window W2 on the search headword display screen G2, and (B) shows the change state of the headword character display window W1 and the pronunciation mouth type display window W2 synchronized with the output of the American pronunciation voice.
[0109]
That is, when either of the US or UK identifiers [US] or [English] displayed in the dictionary data on the search headword display screen G2 is designated and the process shifts to the synchronous reproduction process in FIG. 8, then in step A2 of the synchronous reproduction process, if, for example, the US identifier [US] is designated, the US-version character image 12d (No1US) corresponding to the animation-tone character image 12d (No1) preset in the character setting process (steps S2 to S6) is read out and transferred to the synchronization image file memory 12n in the RAM 12B. At the same time, based on the synchronous reproduction link data (see FIG. 2) for the current search headword "lough" stored in the dictionary database 12b, the HTML file for setting the text/image synchronous reproduction windows W1 and W2 (see FIG. 15) on the search headword display screen G2 is read out according to its HTML file No. and written into the synchronization HTML file memory 12j, the text data of the search headword, "lough (with US phonetic symbols)", is read out according to its text file No. and written into the synchronization text file memory 12k, and the US pronunciation voice data of the search headword is read out according to its sound file No. and written into the synchronization sound file memory 12m (step A2).
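For illustration only, the asset collection of step A2 can be sketched as follows (Python; the dictionary-based layout of the link data and the function load_sync_assets are assumptions for this sketch, not the device's actual file structure):

    # Illustrative sketch: gather the HTML layout, headword text, dialect-specific
    # voice data, and dialect-specific character image referenced by the link data.
    def load_sync_assets(link_data, dialect, files, character_images):
        """Collect the files referenced by a headword's link data for playback."""
        return {
            "html": files["html"][link_data["html_no"]],              # window layout (W1, W2)
            "text": files["text"][link_data["text_no"]],              # headword + phonetic symbols
            "sound": files["sound"][link_data["sound_no"][dialect]],  # US or UK voice data
            "face": character_images[(link_data["character"], dialect)],
        }

    link_data = {"html_no": 1, "text_no": 7, "sound_no": {"US": 70, "UK": 71}, "character": "No1"}
    files = {"html": {1: "<layout>"}, "text": {7: "lough [...]"}, "sound": {70: b"us", 71: b"uk"}}
    character_images = {("No1", "US"): "No1US", ("No1", "UK"): "No1UK"}
    print(load_sync_assets(link_data, "US", files, character_images))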
[0110]
Then, among the time code files 12fn for synchronous reproduction of encrypted voice/text/image corresponding to the various headwords stored as the dictionary time code file 12f in the FLASH memory 12A, the time code file 12fn (see FIG. 5) corresponding to the current search headword "lough" is decrypted and read out according to the time code file No. described in the synchronous reproduction link data (see FIG. 2), and is transferred to and stored in the time code file memory 12i (step A3).
[0111]
Then, when the synchronous reproduction process of the pronunciation voice, headword characters, and pronunciation mouth type images according to the time code file 12fn corresponding to the search headword "lough" is started, similarly to the case of the search headword "low" described above, by the reproduction processes according to the respective command codes in steps A7 to A12 and the text-corresponding mouth display process in FIG. 9, a US phonetic symbol is displayed together with the search headword "lough" in the text synchronous reproduction window W1 on the search headword display screen G2, and the US-version character image 12d (No1US), based on the set animation-tone character image and having, for example, a design with a US flag F, is displayed in the image synchronous reproduction window W2 as the target image for mouth type image synthesis.
[0112]
As a result, in synchronization with the output of the US pronunciation voice for the search headword "lough", as shown in (1) to (3) of FIG. 15(B), the highlight (identification) display HL of the word "lough" and its phonetic symbols proceeds sequentially from the first character in the text synchronous reproduction window W1, and in the image synchronous reproduction window W2, based on the US-version character image 12d (No1US), the pronunciation mouth type images 12e (Non1 → Non2 → Non3) corresponding to the mouth numbers of the respective phonetic symbols are read out from the voice-specific mouth image data 12e for the mouth image area (X1, Y1; X2, Y2) and are sequentially switched, synthesized, and displayed.
[0113]
Also in this case, in accordance with the same text-corresponding mouth display process as described above, when the highlight (identification) display HL and the pronunciation mouth type image 12e (Non2) synchronized with the output of the pronunciation voice for the accent character "au" of the search headword "lough" are switched and displayed, the US-version character (face) image 12d (No1US) serving as the synthesis destination of the mouth type image 12e (Non2) is changed to and displayed as the accent-corresponding face image 12d (No1US'), which expresses a strongly pronounced state by, for example, sweating of the head or movement of the body. Therefore, the user can easily learn the US pronunciation voice of the search headword "lough", its utterance timing, the corresponding portions of the characters "L", "au", "gh" and their phonetic symbols, and furthermore the pronunciation mouth type images 12e (Non1 → Non2 → Non3) through their synchronized reproduction, and can also learn the emphasized utterance timing according to the US accent in a realistic manner.
[0114]
FIG. 16 shows the headword character display window W1 and the pronunciation mouth type display window W2 that are window-displayed on the search headword display screen G2 when the British pronunciation [English] is designated along with the synchronous reproduction process in the headword search process of the portable device 10, in which (A) shows the setting display state of the headword character display window W1 and the pronunciation mouth type display window W2 on the search headword display screen G2, and (B) shows the change state of the headword character display window W1 and the pronunciation mouth type display window W2 synchronized with the output of the British pronunciation voice.
[0115]
That is, when, of the US or UK identifiers [US] or [English] displayed in the dictionary data on the search headword display screen G2 shown in FIG. 14, for example, the UK identifier [English] is designated (step S11a), the process shifts to the synchronous reproduction process in FIG. 8 (step SA). In step A2 of the synchronous reproduction process, the UK-version character image 12d (No1UK) corresponding to the animation-tone character image 12d (No1) preset in the character setting process (steps S2 to S6) is read out and transferred to the synchronization image file memory 12n in the RAM 12B. At the same time, based on the synchronous reproduction link data (see FIG. 2) for the current search headword "lough" stored in the dictionary database 12b, the HTML file for setting the text/image synchronous reproduction windows W1 and W2 (see FIG. 16) on the search headword display screen G2 is read out according to its HTML file No. and written into the synchronization HTML file memory 12j. Further, the text data of the search headword, "lough (with UK phonetic symbols)", is read out according to its text file No. and written into the synchronization text file memory 12k, and the UK pronunciation voice data of the search headword is read out according to its sound file No. and written into the synchronization sound file memory 12m (step A2).
[0116]
Then, among the time code files 12fn for synchronous reproduction of encrypted voice/text/image corresponding to the various headwords stored as the dictionary time code file 12f in the FLASH memory 12A, the time code file 12fn (see FIG. 5) corresponding to the current search headword "lough" is decrypted and read out according to the time code file No. described in the synchronous reproduction link data (see FIG. 2), and is transferred to and stored in the time code file memory 12i (step A3).
[0117]
Then, when the synchronous reproduction process of the pronunciation voice, headword characters, and pronunciation mouth type images according to the time code file 12fn corresponding to the search headword "lough" is started, similarly to the case of the search headword "low" described above, by the reproduction processes according to the respective command codes in steps A7 to A12 and the text-corresponding mouth display process in FIG. 9, a UK phonetic symbol is displayed together with the search headword "lough" in the text synchronous reproduction window W1 on the search headword display screen G2, and the UK-version character image 12d (No1UK), based on the set animation-tone character image and having, for example, a design with a British hat M1 and a stick M2, is displayed in the image synchronous reproduction window W2 as the target image for mouth type image synthesis.
[0118]
As a result, in synchronization with the output of the UK pronunciation voice for the search headword "lough", as shown in (1) to (3) of FIG. 16(B), the highlight (identification) display HL of the word "lough" and its phonetic symbols proceeds sequentially from the first character in the text synchronous reproduction window W1, and in the image synchronous reproduction window W2, based on the UK-version character image 12d (No1UK), the pronunciation mouth type images 12e (Non1 → Non2 → Non3) corresponding to the mouth numbers of the respective phonetic symbols are read out from the voice-specific mouth image data 12e for the mouth image area (X1, Y1; X2, Y2) and are sequentially switched, synthesized, and displayed.
[0119]
Also in this case, in accordance with the same text-corresponding mouth display process as described above, when the highlight (identification) display HL and the pronunciation mouth type image 12e (Non2) synchronized with the output of the pronunciation voice for the accent character "au" of the search headword "lough" are switched and displayed, the UK-version character (face) image 12d (No1UK) serving as the synthesis destination of the mouth type image 12e (Non2) is changed to and displayed as the accent-corresponding face image 12d (No1UK'), which expresses a strongly pronounced state by, for example, sweating of the head or movement of the body. Therefore, the user can easily learn the UK pronunciation voice of the search headword "lough", its utterance timing, the corresponding portions of the characters "L", "au", "gh" and their phonetic symbols, and furthermore the pronunciation mouth type images 12e (Non1 → Non2 → Non3) through their synchronized reproduction, and can also learn the emphasized utterance timing according to the UK accent in a realistic manner.
[0120]
Next, a description will be given of the accent test process, which allows a test for judging the correct or incorrect accent of, for example, an English word, performed along with the main process of the portable device 10 having the above configuration.
[0121]
FIG. 17 is a view showing the operation display state when an incorrect answer is selected in the accent test process of the portable device 10, in which (A) shows the accent test question display screen G3, (B) shows the setting display state of the headword character display window W1 and the pronunciation mouth type display window W2 on the headword display screen G2 corresponding to the question, and (C) shows the change state of the headword character display window W1 and the pronunciation mouth type display window W2 synchronized with the output of the pronunciation voice.
[0122]
FIG. 18 is a diagram showing the operation display state when a correct answer is selected in the accent test process of the portable device 10, in which (A) shows the accent test question display screen G3, (B) shows the setting display state of the headword character display window W1 and the pronunciation mouth type display window W2 on the headword display screen G2 corresponding to the question, and (C) shows the change state of the headword character display window W1 and the pronunciation mouth type display window W2 synchronized with the output of the pronunciation voice.
[0123]
That is, when the "accent test" key 17a6 of the input unit 17a is operated to set the accent test mode (step S13), a headword is randomly selected from dictionary data stored in advance in the dictionary database 12c. (Step S14), as shown in FIG. 17 (A), for the word “low” selected at random, the correct accented phonetic symbol with an accent on the “o” portion and the wrong accent with an accent on the “u” portion are included. An accent test question display screen G3 in which the phonetic symbol and the question mark are set as selection items Et / Ef is displayed on the display unit 18 (step S15).
[0124]
On the accent test question display screen G3, the selection frame X is moved by operating the cursor key 17a2. When, for example, the selection item Ef having the phonetic symbol with the incorrect accent is selected and detected (step S16), the character image previously selected and set in the character setting process (steps S2 to S6) as the synthesis destination of the pronunciation mouth type image and its related image (in this case, the animation-tone character image 12d (No1) and its accent-corresponding image (No1')) are changed from, for example, the normal yellow color to the blue character images (No1BL) (No1BL') (steps S17 → S18).
[0125]
At the same time, the pronunciation voice data read out from the dictionary voice data 12c in correspondence with the question word "low" is corrected to voice data corresponding to the erroneous accent symbol selected by the user (step S19).
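For illustration only, the question generation and the handling of an incorrect selection (steps S13 to S19) can be sketched as follows (Python; build_accent_question, apply_selection, and the file names are assumptions for this sketch):

    # Illustrative sketch: pick a question word, offer correct/incorrect accented
    # symbols, and switch to the blue-changed image and wrong-accent voice when the
    # incorrect symbol is selected.
    import random

    def build_accent_question(words):
        """Pick a word and pair its correct and incorrect accented symbols."""
        word, correct_sym, wrong_sym = random.choice(words)
        return word, [("Et", correct_sym, True), ("Ef", wrong_sym, False)]

    def apply_selection(is_correct, base_image, voices):
        """Return (character image, voice) reflecting the user's choice."""
        if is_correct:
            return base_image, voices["correct"]       # normal colour, correct-accent voice
        return base_image + "BL", voices["wrong"]      # blue-changed image, wrong-accent voice

    words = [("low", "lóu", "loú")]
    word, choices = build_accent_question(words)
    image, voice = apply_selection(False, "No1", {"correct": "low_ok.wav", "wrong": "low_ng.wav"})
    print(word, choices, image, voice)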
[0126]
Then, the question word "low" is stored in the headword memory 12g in the RAM 12B, the dictionary data corresponding to the headword "low", such as pronunciation, part of speech, and meaning content, is read out and stored in the headword-corresponding dictionary data memory 12h in the RAM 12B, and, as shown in FIG. 17(B), the search headword display screen G2 corresponding to the question word is displayed on the display unit 18 (step S20).
[0127]
Here, when the "translation/decision (voice)" key 17a4 is operated (step S21) in order to output the pronunciation voice for the accent word "low" selected by the user and, at the same time, to synchronously display the headword characters, phonetic symbols, and pronunciation mouth images, the processing shifts to the synchronous reproduction process in FIG. 8 (step SA).
[0128]
Then, in step A2 of the synchronous reproduction process, the animation-tone character image 12d (No1BL), changed to blue in response to the user's selection of the incorrect accent, is read out and transferred to the synchronization image file memory 12n in the RAM 12B. At the same time, based on the synchronous reproduction link data (see FIG. 2) for the current question word "low" stored in the dictionary database 12b, the HTML file for setting the text/image synchronous reproduction windows W1 and W2 (see FIG. 17(B)) on the search headword display screen G2 is read out according to its HTML file No. and written into the synchronization HTML file memory 12j. In addition, the text data of the question word, "low (with erroneous phonetic symbols)", is read out and written into the synchronization text file memory 12k, and the pronunciation voice data corrected according to the erroneous accent of the question word is read out and written into the synchronization sound file memory 12m (step A2).
[0129]
Then, among the time code files 12fn for synchronous reproduction of encrypted voice/text/image corresponding to the various headwords stored as the dictionary time code file 12f in the FLASH memory 12A, the time code file 12fn (see FIG. 5) corresponding to the current question word "low" is decrypted and read out according to the time code file No. described in the synchronous reproduction link data (see FIG. 2), and is transferred to and stored in the time code file memory 12i (step A3).
[0130]
Then, when the synchronous reproduction process of the erroneous-accent pronunciation voice, headword characters, and pronunciation mouth type images according to the time code file 12fn corresponding to the question word "low" is started, similarly to the case of the search headword "low" described above, by the reproduction processes according to the respective command codes in steps A7 to A12 and the text-corresponding mouth display process in FIG. 9, then, as shown in FIG. 17(B), the incorrect accented phonetic symbol selected by the user is displayed together with the question word "low" in the text synchronous reproduction window W1 (Ef) on the search headword display screen G2, and the animation-tone character image 12d (No1BL), changed to blue in accordance with the user's selection of the erroneous accent, is displayed in the image synchronous reproduction window W2 as the target image for mouth type image synthesis.
[0131]
As a result, in synchronization with the output of the pronunciation voice of the erroneous accent corresponding to the question word "low", as shown in (1) to (3) of FIG. 17(C), the highlight (identification) display HL of the question word "low" and its erroneous phonetic symbol proceeds sequentially from the first character in the text synchronous reproduction window W1 (Ef), and in the image synchronous reproduction window W2, based on the animation-tone character image 12d (No1BL) changed to blue owing to the selection of the wrong accent, the pronunciation mouth type images 12e (No36 → No9 → No8) corresponding to the mouth numbers of the respective phonetic symbols are read out from the voice-specific mouth image data 12e for the mouth image area (X1, Y1; X2, Y2) and are sequentially switched, synthesized, and displayed.
[0132]
Also in this case, in accordance with the same text-corresponding mouth display process as described above, when the highlight (identification) display HL and the pronunciation mouth type image 12e (No8) synchronized with the output of the pronunciation voice for the erroneous accent character "u" of the question word "low" are switched and displayed, the blue-changed animation-tone character (face) image 12d (No1BL) serving as the synthesis destination of the mouth type image 12e (No8) is changed to and displayed as the accent-corresponding blue face image 12d (No1BL'), which expresses a strongly pronounced state by, for example, sweating of the head or shaking of the body. Therefore, the user can clearly learn the erroneous accent pronunciation voice of the question word "low", its erroneous utterance timing, and the corresponding pronunciation mouth type images 12e (No36 → No9 → No8) as those of the wrong accent.
[0133]
On the other hand, as shown in FIG. 18(A), when the selection frame X is moved on the accent test question display screen G3 by operating the cursor key 17a2 and, for example, the selection item Et having the correct accented phonetic symbol is selected and detected (step S16), the process shifts to the synchronous reproduction process in FIG. 8 without performing the blue change process for the character image 12d (No1) (step S18) or the correction process of the pronunciation voice corresponding to the erroneous accent (step S19) (step S17 → SA).
[0134]
Then, in the same manner as the synchronous reproduction process of the pronunciation voice, text, and pronunciation mouth type images corresponding to the search headword "low" in the state where the animation-tone character image 12d (No1) is set, described above with reference to FIG. 12, as shown in FIG. 18(B), the correct accented phonetic symbol selected by the user is displayed together with the question word "low" in the text synchronous reproduction window W1 (Et) on the search headword display screen G2, and the normally colored animation-tone character image 12d (No1) as set in advance is displayed in the image synchronous reproduction window W2 as the target image for mouth type image synthesis.
[0135]
As a result, in synchronization with the output of the pronunciation voice of the correct accent corresponding to the question word "low", as shown in (1) to (3) of FIG. 18(C), the highlight (identification) display HL of the question word "low" and its correct phonetic symbol proceeds sequentially from the first character in the text synchronous reproduction window W1 (Et), and in the image synchronous reproduction window W2, based on the normally colored animation-tone character image 12d (No1) as set in advance, the pronunciation mouth type images 12e (No36 → No9 → No8) corresponding to the mouth numbers of the respective phonetic symbols are read out from the voice-specific mouth image data 12e for the mouth image area (X1, Y1; X2, Y2) and are sequentially switched, synthesized, and displayed.
[0136]
Also in this case, in accordance with the same text-corresponding mouth display process as described above, when the highlight (identification) display HL and the pronunciation mouth type image 12e (No9) synchronized with the output of the pronunciation voice for the correct accent character "o" of the question word "low" are switched, synthesized, and displayed, the animation-tone character (face) image 12d (No1) serving as the synthesis destination of the mouth type image 12e (No9) is changed to and displayed as the accent-corresponding face image 12d (No1'), which expresses a strongly pronounced state by, for example, sweating of the head or shaking of the body. Therefore, the user can clearly learn the correct accent pronunciation voice of the question word "low", its correct utterance timing, and the corresponding pronunciation mouth type images 12e (No36 → No9 → No8).
[0137]
Therefore, according to the synchronized playback function of the pronunciation voice, text, and pronunciation mouth type images accompanying the headword search by the portable device 10 of the first embodiment having the above-described configuration, when the headword "low" to be searched is input, the dictionary data corresponding to the search headword is searched and displayed as the search headword display screen G2, and the "translation/decision (voice)" key 17a4 is operated, then, in synchronization with the pronunciation voice output from the stereo sound output unit 19b in accordance with the time code file 12f23, the highlight (identification) display HL of the search headword "low" and its phonetic symbols is sequentially performed in the text synchronous reproduction window W1, and in the image synchronous reproduction window W2, based on the character image 12d (No3) set in advance, the pronunciation mouth type images corresponding to the respective phonetic symbols are read out from the voice-specific mouth image data 12e for the mouth image area (X1, Y1; X2, Y2) and are sequentially switched, synthesized, and displayed.
[0138]
In addition, when the highlight (identification) display HL and the pronunciation mouth type image 12e (No9) synchronized with the output of the pronunciation voice for the accent character "o" of the search headword "low" are synthesized and displayed, the character (face) image 12d (No3) serving as the synthesis destination of the mouth type image 12e (No9) is changed to and displayed as the accent-corresponding face image 12d (No3'), which expresses a strongly pronounced state by, for example, sweating of the head or shaking of the mouth. Therefore, the user can easily learn the pronunciation voice of the search headword "low", its utterance timing, the corresponding portions of the characters "L", "o", "w" and their phonetic symbols, and furthermore the pronunciation mouth type images 12e (No36 → No9 → No8) through their synchronized reproduction, and can also learn the timing of vocal emphasis according to the accent in a realistic manner.
[0139]
Further, according to the synchronized playback function of the pronunciation voice, text, and pronunciation mouth type images accompanying the headword search by the portable device 10 of the first embodiment having the above-described configuration, when a headword search is performed based on, for example, the dictionary database 12b having phonetic symbols of both US and UK pronunciations, then, as shown in FIG. 15 or FIG. 16, when the US pronunciation [US] or the UK pronunciation [UK] is designated and the "translation/decision (voice)" key 17a4 is operated, in synchronization with the designated US or UK pronunciation voice, the highlight (identification) display HL of the search headword "lough" and its US or UK phonetic symbols is sequentially performed in the text synchronous reproduction window W1, and in the image synchronous reproduction window W2, based on the character image for US representation (No1US) or UK representation (No1UK) derived from the preset character image 12d (No1), the pronunciation mouth type images 12e (Non1 → Non2 → Non3) corresponding to the mouth numbers of the respective US or UK phonetic symbols are read out from the voice-specific mouth image data 12e for the mouth image area (X1, Y1; X2, Y2) and are sequentially switched, synthesized, and displayed. Consequently, the US pronunciation voice, phonetic symbols, and pronunciation mouth types corresponding to the search headword and the UK pronunciation voice, phonetic symbols, and pronunciation mouth types can be learned clearly and distinctly from each other.
[0140]
Further, according to the synchronized playback function of the pronunciation voice, text, and pronunciation mouth type images accompanying the headword search by the portable device 10 of the first embodiment having the above-described configuration, for the headwords recorded in the dictionary database 12b, as shown in FIGS. 17 and 18, when the "accent test" key 17a6 is operated, the accent test question display screen G3 is displayed, on which the randomly selected headword "low" is presented together with its correct accented phonetic symbol and its incorrect accented phonetic symbol. When the correct accented phonetic symbol is selected, the pronunciation mouth type images 12e (No36 → No9 → No8) based on the normally set character image 12d (No1) are switched and displayed in synchronization with the correct pronunciation voice output; when the incorrect accented phonetic symbol is selected, the pronunciation mouth type images 12e (No36 → No9 → No8) based on the blue-changed character image 12d (No1BL) are switched and displayed in synchronization with the incorrect pronunciation voice output. In addition, during the synchronous reproduction of the correct or incorrect accent portion, the character image 12d (No1) or (No1BL) serving as the mouth type image synthesis base is changed to and displayed as the accent-corresponding character image 12d (No1') or (No1BL'). Thus, the pronunciation with the correct accent and the pronunciation with the wrong accent can each be clearly learned through the synchronous reproduction of the corresponding voice, text, and image.
[0141]
In the first embodiment, the synchronous reproduction of the pronunciation voice, text (with phonetic symbols), and pronunciation mouth type images corresponding to the search headword is configured such that the pronunciation voice is output by the synchronous reproduction process according to the time code file 12f, and the pronunciation mouth type image corresponding to the phonetic symbol of each identification-displayed character is switched and displayed by the text-corresponding mouth display process executed by interruption in accordance with the sequential identification display of the text characters synchronized with the voice. However, as described in the following second embodiment, a configuration may be adopted in which various phonetic symbols including accented phonetic symbols, their respective pronunciation voice data, and pronunciation face images are stored in advance as a plurality of associated sets, the characters of the headword to be reproduced are highlighted in order from the beginning, and the pronunciation voice data and face image data associated with the phonetic symbol of each highlighted character are output and displayed.
[0142]
(2nd Embodiment)
FIG. 19 is a flowchart showing headword synchronized playback processing of the portable device 10 according to the second embodiment.
[0143]
That is, in the portable device 10 of the second embodiment, various phonetic symbols including accented phonetic symbols, their respective pronunciation voice data, and pronunciation face images having different mouth shapes and facial expressions corresponding to the pronunciation voice data of the various phonetic symbols are stored in the memory 12 in advance as a plurality of associated sets.
[0144]
Then, for example, when an arbitrary headword "low" is input and searched in the English-Japanese dictionary stored in advance as the dictionary database 12b and is displayed as the search headword display screen G2 as shown in FIG. 11, and the "translation/decision (voice)" key 17a4 is operated in this state in order to perform the synchronous reproduction of the pronunciation voice and the pronunciation face image, the synchronous reproduction process of the second embodiment shown in FIG. 19 is started.
[0145]
When the synchronous reproduction process according to the second embodiment is started, as shown in FIG. 12 or FIG. 13, the text synchronous reproduction window W1 is first opened on the search headword display screen G2, and each character of the search headword "low" and its phonetic symbol are highlighted HL from the head in the order of their pronunciation (step C1). Then, the phonetic symbol of the headword character given the emphasized identification display HL is read out (step C2), and it is determined whether or not it carries an accent mark (step C3).
[0146]
Here, as shown by (1) in FIG. 12(B) or FIG. 13(B), when the phonetic symbol of the currently highlighted character "l" of the headword "low" has no accent mark, the accent-free pronunciation voice data associated with that phonetic symbol and stored in advance in the memory 12 is read out and output from the stereo voice output unit 19b (steps C3 → C4), and the associated accent-free pronunciation face image is read out and displayed in the image synchronous reproduction window W2 (step C5).
[0147]
Then, the character "o" next to the currently output character of the search headword "low" is read out (steps C6 → C7), the process returns to step C1, and, as shown by (2) in FIG. 12(B) or FIG. 13(B), the character is given the emphasized identification display HL together with its phonetic symbol (step C1).
[0148]
When it is determined that the phonetic symbol of the currently highlighted character "o" of the headword carries an accent mark (steps C2, C3), the accented pronunciation voice data associated with that phonetic symbol and stored in advance in the memory 12 is read out and output from the stereo voice output unit 19b (step C3 → C8), and, as shown by (2) in FIG. 12(C) or FIG. 13(B), the associated pronunciation face image with an accented expression, for example one showing sweating of the head or movement of the body, is read out and displayed in the image synchronous reproduction window W2 (step C9).
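For illustration only, the per-character branch of FIG. 19 (steps C1 to C9) can be sketched as follows (Python; the lookup tables stand in for the associated sets stored in the memory 12 and are assumptions for this sketch):

    # Illustrative sketch: for each highlighted character, branch on whether its
    # phonetic symbol carries an accent mark and pick the associated voice and face.
    def play_headword(chars_with_symbols, voice_by_symbol, face_by_symbol, accented_face_by_symbol):
        frames = []
        for char, symbol, accented in chars_with_symbols:
            # step C1: highlight char + symbol (represented here by recording it)
            if accented:                                   # steps C3 -> C8, C9
                voice = voice_by_symbol[(symbol, "accent")]
                face = accented_face_by_symbol[symbol]
            else:                                          # steps C3 -> C4, C5
                voice = voice_by_symbol[(symbol, "plain")]
                face = face_by_symbol[symbol]
            frames.append((char, voice, face))
        return frames

    data = [("l", "l", False), ("o", "ou", True), ("w", "w", False)]
    voices = {("l", "plain"): "l.wav", ("ou", "accent"): "ou_acc.wav", ("w", "plain"): "w.wav"}
    faces = {"l": "calm", "w": "calm"}
    accented_faces = {"ou": "sweating, mouth wide"}
    print(play_headword(data, voices, faces, accented_faces))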
[0149]
Therefore, also with the portable device 10 of the second embodiment, when the pronunciation voice is output and the pronunciation face image is displayed along with the highlight (identification) display HL of the accent character "o" of the search headword "low", the pronunciation face image is displayed, based on the accented phonetic symbol, as an accent-corresponding face image expressing a strongly pronounced state by, for example, sweating of the head or movement of the body. The user can therefore easily learn each character "L", "o", "w" of the search headword "low", the pronunciation voice, and furthermore each pronunciation face image through their corresponding output, and can also learn the emphasized utterance portion according to the accent in a realistic manner.
[0150]
In the second embodiment, among the various phonetic symbols including accented phonetic symbols stored in advance in the memory 12, their respective pronunciation voice data, and the pronunciation face images having different mouth shapes and facial expressions corresponding to the pronunciation voice data of the various phonetic symbols, the volume of the pronunciation voice associated with an accented phonetic symbol is set larger than that of the pronunciation voice associated with an accent-free phonetic symbol, the opening degree of the mouth portion of the pronunciation face image associated with an accented phonetic symbol is set larger than that of the pronunciation face image associated with an accent-free phonetic symbol, and the expression of the pronunciation face image associated with an accented phonetic symbol is set to be more emphasized than the expression of the pronunciation face image associated with an accent-free phonetic symbol.
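For illustration only, these three accent-dependent settings can be summarized as an assumed configuration table (the numerical values are illustrative only and are not specified by the embodiment):

    # Illustrative sketch: accented symbols get louder voice, a wider mouth, and a
    # more emphasized expression than accent-free symbols.
    ACCENT_STYLE = {
        "plain":    {"volume": 1.0, "mouth_opening": 0.4, "expression_strength": 0.3},
        "accented": {"volume": 1.4, "mouth_opening": 0.8, "expression_strength": 0.9},
    }

    def style_for(symbol_is_accented: bool) -> dict:
        return ACCENT_STYLE["accented" if symbol_is_accented else "plain"]

    print(style_for(True))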
[0151]
In the second embodiment, various phonetic symbols including accented phonetic symbols, their respective pronunciation voice data, and pronunciation face images consisting of different mouth shapes and facial expressions corresponding to the pronunciation voice data of the various phonetic symbols are stored in advance, each character of the search headword is highlighted in the order of its pronunciation, the pronunciation voice associated with the phonetic symbol is read out and output, and the associated pronunciation face image is read out and displayed. However, as described in the following third embodiment, a configuration may be adopted in which the pronunciation voice of each headword and its pronunciation face image are stored in advance in association with each headword in the dictionary database 12b, the pronunciation voice and the pronunciation face image are read out and output in accordance with the character display of the search headword, an accent portion is determined by detecting the peak level of the pronunciation voice signal at that time, and the pronunciation face image is change-controlled to a different display form in terms of mouth shape and facial expression.
[0152]
(Third embodiment)
FIG. 20 is a flowchart showing headword synchronized playback processing of the portable device 10 according to the third embodiment.
[0153]
That is, in the portable device 10 of the third embodiment, the pronunciation voice of the headword and the pronunciation face image are stored in advance in association with each headword in each dictionary data of the dictionary database 12b.
[0154]
Then, for example, when an arbitrary headword "low" is input and searched in the English-Japanese dictionary stored in advance as the dictionary database 12b and is displayed as the search headword display screen G2 as shown in FIG. 11, and the "translation/decision (voice)" key 17a4 is operated in this state in order to synchronously reproduce the pronunciation voice and the pronunciation face image, the synchronous reproduction process of the third embodiment shown in FIG. 20 is started.
[0155]
When the synchronous reproduction process according to the third embodiment is started, as shown in FIG. 12 or FIG. 13, the text synchronous reproduction window W1 is first opened on the search headword display screen G2, and each character of the search headword "low" is highlighted HL from the beginning in the order of its pronunciation (step D1). Then, the pronunciation voice data of the portion corresponding to the headword character given the emphasized identification display HL is read out (step D2) and output from the stereo voice output unit 19b (step D3).
[0156]
Here, it is determined whether or not the signal (waveform) level of the pronunciation voice data of the portion corresponding to, for example, the currently highlighted character "l" of the headword "low" is at or above a certain voice signal level, that is, whether it is an accent portion (step D4). If it is determined that the level is not at or above the certain voice signal level, that is, the portion is not an accent portion, the pronunciation face image stored in association with the search headword is read out and displayed as it is in the image synchronous reproduction window W2 (step D5).
[0157]
Then, the character "o" next to the currently output character of the search headword "low" is read out (steps D6 → D7), and the process returns to step D1 to highlight and display it HL (step D1).
[0158]
Then, the pronunciation voice data of the portion corresponding to the currently highlighted headword character "o" is read out (step D2) and output from the stereo voice output unit 19b (step D3), and it is determined whether or not the signal (waveform) level of the pronunciation voice data of the portion corresponding to the highlighted HL headword character "o" is at or above the certain voice signal level, that is, whether it is an accent portion (step D4).
[0159]
Here, when it is determined that the level is at or above the certain voice signal level, that is, the portion is an accent portion, the pronunciation face image stored in association with the search headword is read out, change-controlled to a face image in which the opening of the mouth portion is large and the expression is strong (for example, FIG. 12(B)(2) → FIG. 12(C)(2)), and displayed in the image synchronous reproduction window W2 (step D4 → D8).
[0160]
When the voice signal waveform level of the pronunciation voice is determined to be at or above the certain value and the portion is thus determined to be an accent portion, a configuration may further be adopted in which the corresponding characters of the highlighted search headword are change-controlled to a form indicating an accent portion and displayed, for example by changing or adding a display color or by changing the character font.
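For illustration only, the accent determination of FIG. 20 (step D4) and the resulting face selection (steps D5/D8) can be sketched as follows (Python; the threshold value and helper names are assumptions for this sketch):

    # Illustrative sketch: a segment is treated as an accent portion when the peak
    # amplitude of its voice data reaches a threshold, and the face image is then
    # switched to a wide-mouth, strong-expression variant.
    def is_accent_segment(samples, threshold=0.6):
        """Treat the segment as an accent portion if its peak amplitude is high."""
        peak = max(abs(s) for s in samples)
        return peak >= threshold

    def face_for_segment(samples, normal_face, accent_face):
        # step D4 -> D8: enlarge mouth / strengthen expression on an accent portion;
        # step D4 -> D5: otherwise display the stored face image as it is.
        return accent_face if is_accent_segment(samples) else normal_face

    quiet = [0.1, 0.2, 0.15]   # e.g. the "l" portion of "low"
    loud = [0.2, 0.85, 0.4]    # e.g. the accented "o" portion
    print(face_for_segment(quiet, "normal", "wide-mouth"),
          face_for_segment(loud, "normal", "wide-mouth"))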
[0161]
Therefore, also with the portable device 10 of the third embodiment, when the pronunciation voice is output and the pronunciation face image is displayed along with the highlight (identification) display HL of the accent character "o" of the search headword "low", the pronunciation face image is change-controlled to, for example, an accent-corresponding face image with a large mouth opening and a strong expression, based on the fact that the pronunciation voice signal level at that time is at or above the certain value. Therefore, the user can not only easily learn the characters "L", "o", "w" of the search headword "low", their pronunciation voices, and furthermore the pronunciation face images through their corresponding output, but can also learn the portion of vocal emphasis according to the accent in a realistic manner.
[0162]
Note that, in the description of the synchronous playback function of the characters (text), pronunciation voice, and pronunciation face images (including pronunciation mouth type images) of the search headword in each of the above embodiments, the headword has been assumed to have its accent in one place. However, if the search headword has two accents, a primary accent and a secondary accent, the accent-corresponding pronunciation face image (including the pronunciation mouth type image) displayed for each accent portion may be displayed in a different form, for example in the degree of mouth opening and the strength of the expression, depending on whether it is the primary accent or the secondary accent.
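For illustration only, such a grading of the accent-corresponding display by accent rank can be sketched as follows (Python; the scale values are assumptions for this sketch):

    # Illustrative sketch: map the accent rank (primary, secondary, or none) to the
    # degree of mouth opening and the strength of the expression.
    def accent_display(rank):
        """Map an accent rank to mouth opening and expression strength."""
        if rank == "primary":
            return {"mouth_opening": 0.9, "expression_strength": 1.0}
        if rank == "secondary":
            return {"mouth_opening": 0.6, "expression_strength": 0.6}
        return {"mouth_opening": 0.4, "expression_strength": 0.3}   # no accent

    for r in ("primary", "secondary", None):
        print(r, accent_display(r))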
[0163]
The methods of the respective processes performed by the portable device 10 described in each of the above embodiments, that is, the main process according to the dictionary processing program 12a in the first embodiment shown in the flowchart of FIG. 7, the headword synchronous reproduction process accompanying the main process shown in the flowchart of FIG. 8, the text-corresponding mouth display process executed by interruption in response to the highlight display of each headword character accompanying the headword synchronous reproduction process shown in the flowchart of FIG. 9, the headword synchronous reproduction process of the second embodiment shown in the flowchart of FIG. 19, and the headword synchronous reproduction process of the third embodiment shown in the flowchart of FIG. 20, can each be stored and distributed as a computer-executable program in the external recording medium 13, such as a memory card (ROM card, RAM card, DATA CARD, etc.), a magnetic disk (floppy disk, hard disk, etc.), an optical disk (CD-ROM, DVD, etc.), or a semiconductor memory. Various computer terminals having a communication function with the communication network (Internet) N read the program stored in the external recording medium 13 into the memory 12 by means of the recording medium reading unit 14, and their operation is controlled by the read program, whereby the synchronized reproduction function of the characters (text), pronunciation voice, and pronunciation face images (including pronunciation mouth type images) corresponding to the search headword described in each of the above embodiments is realized and the same processes by the above-described methods can be performed.
[0164]
Further, the data of the program for realizing each of the above methods can be transmitted over the communication network (Internet) N in the form of program code, and a computer terminal connected to the communication network (Internet) N can take in this program data to realize the above-described synchronized reproduction function of the characters (text), pronunciation voice, and pronunciation face images (including pronunciation mouth type images) corresponding to the search headword.
[0165]
It should be noted that the present invention is not limited to the above-described embodiments, and can be variously modified at the implementation stage without departing from the scope of the invention. Furthermore, the embodiments include inventions at various stages, and various inventions can be extracted by appropriately combining a plurality of the disclosed constituent elements. For example, even if some constituent elements are deleted from all the constituent elements shown in each embodiment or some constituent elements are combined, a configuration in which those constituent elements are deleted or combined can be extracted as an invention as long as the problem described in the section on the problem to be solved by the invention can be solved and the effects described in the section on the effects of the invention can be obtained.
[0166]
【The invention's effect】
As described above, according to the voice display output control device (voice display output control processing program) of claim 1 of the present invention, voice data is output by the voice data output means, text is displayed in synchronization with the output of the voice data by the text synchronous display control means, an image including at least a mouth portion is displayed by the image display control means, and, for the mouth portion included in the displayed image, a mouth type image corresponding to the voice data is displayed by the mouth image display control means in synchronization with the voice data being output. Then, an accent of the voice data or the text is detected by the accent detection means, and the image displayed by the image display control means is changed by the image change display control means in accordance with the detection of the accent. This makes it possible not only to display the text and image synchronized with the output of the voice data and to display the mouth type image corresponding to the voice data at the mouth portion included in the image, but also to change the image in accordance with the detection of the accent of the voice data or text, so that the timing of the accent can be clearly expressed.
[0167]
According to the voice display output control device of claim 2 of the present invention, in the voice display output control device of claim 1, dictionary data corresponding to an input headword is further searched by the dictionary search means, and the dictionary data corresponding to the headword searched by the dictionary search means is displayed by the dictionary data display control means. The voice data is the pronunciation voice data of the headword searched by the dictionary search means, the text is the text of the headword searched by the dictionary search means, and the output of the headword pronunciation voice data by the voice data output means, the display of the headword text synchronized with the headword pronunciation voice data by the text synchronous display control means, and the display of the image by the image display control means are performed in the display state of the dictionary data corresponding to the headword by the dictionary data display control means. Thereby, along with the search and display of the dictionary data corresponding to the input headword, the headword pronunciation voice data can be output, the headword text synchronized with it can be displayed, and the image and the mouth type image can be synchronously displayed, and furthermore the timing of the headword accent can be clearly expressed by changing the displayed image in accordance with the accent detection.
[0168]
According to the voice display output control device (voice display output control processing program) of claim 3 (claim 11) of the present invention, a plurality of words are stored by the word storage means in association with the correct accented phonetic symbol and the incorrect accented phonetic symbol of each word. Either the correct-accent pronunciation voice data or the erroneous-accent pronunciation voice data of a stored word is output by the voice data output means, the text of the word is displayed by the text synchronous display control means in synchronization with the pronunciation voice data of the word being output, and an image including at least a mouth portion is displayed by the image display control means in a display form that differs depending on whether the correct-accent pronunciation voice data or the erroneous-accent pronunciation voice data is output. Further, for the mouth portion included in the displayed image, a mouth type image corresponding to the pronunciation voice data is displayed by the mouth image display control means in synchronization with the pronunciation voice data output by the voice data output means. Then, along with the synchronous display of the word text by the text synchronous display control means, the accent of the word is detected by the accent detection means from the accented phonetic symbol of the corresponding word stored by the word storage means, and the image displayed by the image display control means is changed by the image change display control means in accordance with the accent detection. Thereby, not only can the correct-accent pronunciation voice data and the erroneous-accent pronunciation voice data be output for a word stored by the word storage means, but also the word text synchronized with the pronunciation voice data and the mouth type image corresponding to the pronunciation voice data at the mouth portion of the displayed image can be displayed, and the displayed image can be changed in accordance with the detection of the word accent, so that correct and incorrect accents of words can be learned easily and with clear timing.
[0169]
According to the voice display output control device of claim 4 of the present invention, in the voice display output control device of claim 3, the correct accented phonetic symbol and the incorrect accented phonetic symbol associated with a stored word are displayed side by side by the correct/incorrect accent display control means, and either the correct accented phonetic symbol or the incorrect accented phonetic symbol of the word displayed side by side is selected by the correct/incorrect accent selection means. Then, the voice data output means outputs the correct-accent pronunciation voice data or the erroneous-accent pronunciation voice data of the corresponding word in accordance with the correct/incorrect selection of the word accent by the correct/incorrect accent selection means. Thereby, it is further possible to select the correct accented phonetic symbol or the incorrect accented phonetic symbol for a word stored by the word storage means and to output the corresponding pronunciation voice data, and furthermore to display the word text synchronized with the pronunciation voice data and the mouth type image corresponding to the pronunciation voice data at the mouth portion of the displayed image and to change the displayed image in accordance with the detection of the word accent, so that correct and incorrect accents of words can be learned selectively and with clear timing.
[0170]
According to the voice display output control device of claim 5 of the present invention, a plurality of headwords and the pronunciation voice data of at least two or more regions for each headword are stored in association with each other by the storage means, and any one of the pronunciation voice data of the two or more regions of a stored headword is designated by the region designation means. Then, the voice data output means outputs the pronunciation voice data of the designated region of the headword in accordance with the region designation, the text synchronous display control means displays the text of the headword in synchronization with the pronunciation voice data of the designated region being output, the image display control means displays an image including at least a mouth portion in a display form that differs according to the designated region, and the mouth image display control means displays, for the mouth portion included in the displayed image, a mouth type image corresponding to the pronunciation voice data in synchronization with the pronunciation voice data being output. Then, along with the synchronous display of the headword text, the accent of the headword is detected by the accent detection means, and the image displayed by the image display control means is changed by the image change display control means in accordance with the detection of the accent. Thereby, pronunciation voice data of different regional dialects for the same headword can be designated and output, the headword text and the mouth type image at the mouth portion of the displayed image can be displayed in synchronization with the output of the pronunciation voice data, images in different display forms can be displayed according to the designated region, and the change of the image can be displayed upon detection of the accent, so that the pronunciation voice data of the designated region and the timing of its accent can be learned easily and clearly.
[0171]
According to the image display control device (image display control processing program) of claim 6 (claim 12) of the present invention, in an image display control device which change-controls a face image having a mouth or facial expression in accordance with the display, in pronunciation order, of a series of pronunciation target data including a headword of a word, a plurality of sets of the pronunciation target data and phonetic symbols including accented phonetic symbols are stored in association with each other in the first storage means, and a plurality of sets of phonetic symbols including accented phonetic symbols and their voices and face images are stored in association with each other in the second storage means. By the first control means, in accordance with the display of the series of pronunciation target data in pronunciation order, the phonetic symbols corresponding to the pronunciation target data are read out from the first storage means, the voice and face image corresponding to the read phonetic symbols are read out from the second storage means, the read voice is output to the outside, and the read face image is controlled so as to be displayed. When the voice is output to the outside by the first control means, it is determined whether or not the read phonetic symbols include an accented phonetic symbol, and when it is determined that an accented phonetic symbol is included, the voice and face image corresponding to the accented phonetic symbol are read out from the second storage means, the read voice is output to the outside, and the read face image is controlled so as to be displayed. Thereby, along with the display, in pronunciation order, of pronunciation target data such as a headword of a word, voice output and face image display corresponding to the phonetic symbols of the pronunciation target data, including the accented phonetic symbols, can be performed, so that the pronunciation voice of the word, the facial expression accompanying this pronunciation, the pronunciation voice at the accent portion, and the facial expression accompanying the pronunciation of this accent portion can be learned easily and clearly.
[0172]
According to the image display control device of claim 7 of the present invention, in the image display control device of claim 6, the phonetic symbols including accented phonetic symbols stored in the second storage means consist of phonetic symbols with accent marks and phonetic symbols without accent marks, and the voice and face image stored in association with a phonetic symbol with an accent mark differ from the voice and face image stored in association with a phonetic symbol without an accent mark. As a result, the difference between the pronunciation voice and facial expression at the portions of the pronunciation target data, such as a headword of a word, without accent marks and the pronunciation voice and facial expression at the portions with accent marks can be learned more clearly.
[0173]
According to the image display control device of the eighth aspect of the present invention, the device changes and controls a face image having a mouth or a facial expression in accordance with the display, in pronunciation order, of a series of pronunciation target data including a headword of a word. The storage means stores a plurality of sets of the pronunciation target data in association with its voice and face image, the detection means detects the peak portion of the stored voice signal waveform that corresponds to the accent portion, and the display control means reads from the storage means the face image corresponding to the voice of the detected accent portion and controls it to be displayed in a display form different from the face images corresponding to the voice of the signal waveform portions other than the accent portion. As a result, a face image corresponding to the pronunciation voice can be displayed in accordance with the pronunciation order of pronunciation target data such as a headword of a word, and for the accent portion detected from the peak of the voice signal waveform a face image in a different display form can be shown, so that the facial expression accompanying pronunciation at the accent portion can be learned more clearly.
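The waveform-peak idea described above can be sketched as follows in Python, assuming the pronunciation voice is available as a NumPy sample array; the frame length, file names, and the simple maximum-amplitude criterion are assumptions rather than the device's actual signal processing:

# Minimal sketch: treat the frame with the largest amplitude of the voice signal
# waveform as the accent portion and switch to an accent face image for that frame.
import numpy as np

def accent_frame(samples: np.ndarray, frame_len: int = 1024) -> int:
    """Return the index of the frame containing the waveform peak (accent part)."""
    n_frames = len(samples) // frame_len
    peaks = [np.abs(samples[i * frame_len:(i + 1) * frame_len]).max()
             for i in range(n_frames)]
    return int(np.argmax(peaks))

def display_faces(samples: np.ndarray, frame_len: int = 1024) -> None:
    accent = accent_frame(samples, frame_len)
    for i in range(len(samples) // frame_len):
        face = "face_accent.png" if i == accent else "face_normal.png"
        print(f"frame {i}: display {face}")

# Synthetic example: the louder second half plays the role of the accent portion.
t = np.linspace(0.0, 1.0, 8192, endpoint=False)
demo = np.sin(2 * np.pi * 440 * t) * np.concatenate([np.full(4096, 0.3), np.full(4096, 1.0)])
display_faces(demo, frame_len=2048)

In the actual device the accent could equally be located from the time code file or from accented phonetic symbols; the peak search above only illustrates the waveform-based detection of this aspect.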
[0174]
According to the image display control device of the ninth aspect of the present invention, in the image display control device of the eighth aspect, the display control means includes text display control means for controlling the portion of the pronunciation target data corresponding to the accent portion detected by the detection means to be displayed in a display form different from the display of the portions of the pronunciation target data corresponding to the signal waveform portions other than the accent portion. Thereby, in addition to the display of the face image corresponding to the pronunciation voice of the pronunciation target data, the accent portion of the pronunciation target data can be displayed in a display form different from the rest of the data, so that the accent portion of the pronunciation target data and the facial expression accompanying its pronunciation can be learned more clearly.
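A correspondingly small sketch of the text display control described above marks the character at the detected accent position in a different form (plain brackets stand in here for the highlight display HL); how the accent position is passed in is an assumption:

# Minimal sketch: render the headword text with the accent character displayed
# in a different form from the rest (brackets stand in for the highlight HL).
def render_with_accent(word: str, accent_index: int) -> str:
    return "".join(f"[{ch}]" if i == accent_index else ch
                   for i, ch in enumerate(word))

print(render_with_accent("low", 1))   # -> l[o]w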
[0175]
Therefore, according to the present invention, it is possible to provide a voice display output control device, an image display control device, a voice display output control processing program, and an image display control processing program capable of clearly expressing the timing of an accent in the display of an image synchronized with voice output.
[Brief description of the drawings]
FIG. 1 is a block diagram showing a configuration of an electronic circuit of a portable device 10 according to an embodiment of a sound display output control device (image display control device) of the present invention.
FIG. 2 is a view showing link data for synchronous reproduction of one headword "low" in the dictionary database 12b stored in the memory 12 of the portable device 10, in which (B) shows the text data "low" stored in accordance with the text file No., and (C) shows the text characters, phonetic symbols, and mouth shape numbers stored in accordance with the text-mouth synchronization file No.
FIG. 3 is a diagram showing character image data 12d stored in a memory 12 of the portable device 10 and selectively used by a user setting for synchronous display of a pronunciation mouth image in a dictionary entry search.
FIG. 4 is a view showing the voice-classified mouth image data 12e that is synthesized and displayed in the mouth image area (X1, Y1; X2, Y2) of a character image (12d: No. 1 to No. 3) stored in the memory 12 of the portable device 10 and synchronously displayed as a pronunciation mouth image in a dictionary headword search.
FIG. 5 is a view showing a time code file 12f23 (12i) of a file No. 23 associated with a headword "low" in a dictionary time code file 12f stored in a memory 12 of the portable device 10.
FIG. 6 is a view showing the command codes of various commands described in the dictionary time code file 12fn (see FIG. 5) of the portable device 10 in association with command contents analyzed based on the parameter data.
FIG. 7 is a flowchart showing main processing according to the dictionary processing program 12a of the portable device 10.
FIG. 8 is a flowchart showing headword synchronous reproduction processing accompanying the main processing of the portable device 10;
FIG. 9 is a flowchart showing a text-corresponding mouth display process that is executed by interruption in response to the highlight display of each headword character accompanying the headword synchronous reproduction process of the portable device 10;
FIG. 10 is a view showing the setting display state of a synchronous reproduction character image in the character setting processing in the main processing of the portable device 10.
FIG. 11 is a view showing a search entry display screen G2 associated with the entry search processing in the main processing of the portable device 10.
FIG. 12 is a view showing the display state of the headword character display window W1 and the pronunciation mouth type display window W2 window-displayed on the search headword display screen G2 in the setting state of character image No. 3 in accordance with the synchronous playback processing in the headword search processing of the portable device 10, in which (A) shows the setting display state of the headword character display window W1 and the pronunciation mouth type display window W2 with respect to the search headword display screen G2, (B) shows the change state of the headword character display window W1 and the accent-incompatible pronunciation mouth type display window W2 synchronized with the output of the pronunciation voice, and (C) shows the change state of the headword character display window W1 and the accent-compatible pronunciation mouth type display window W2.
FIG. 13 is a view showing the display state of the headword character display window W1 and the pronunciation mouth type display window W2 window-displayed on the search headword display screen G2 in the setting state of character image No. 1 in accordance with the synchronous playback processing in the headword search processing of the portable device 10, in which (A) shows the setting display state of the headword character display window W1 and the pronunciation mouth type display window W2 with respect to the search headword display screen G2, and (B) shows the change state of the headword character display window W1 and the pronunciation mouth type display window W2 synchronized with the output of the pronunciation voice.
FIG. 14 is a view showing the search headword display screen G2 when an English-Japanese dictionary containing the pronunciations of two regions, the United States and the United Kingdom, is used in the headword search processing in the main processing of the portable device 10.
FIG. 15 is a view showing the display state of the headword character display window W1 and the pronunciation mouth type display window W2 window-displayed on the search headword display screen G2 when the American pronunciation [US] is designated in accordance with the synchronous playback processing in the headword search processing of the portable device 10, in which (A) shows the setting display state of the headword character display window W1 and the pronunciation mouth type display window W2 with respect to the search headword display screen G2, and (B) shows the change state of the headword character display window W1 and the pronunciation mouth type display window W2 synchronized with the output of the American pronunciation voice.
FIG. 16 is a view showing the display state of the headword character display window W1 and the pronunciation mouth type display window W2 window-displayed on the search headword display screen G2 when the English pronunciation [English] is designated in accordance with the synchronous playback processing in the headword search processing of the portable device 10, in which (A) shows the setting display state of the headword character display window W1 and the pronunciation mouth type display window W2 with respect to the search headword display screen G2, and (B) shows the change state of the headword character display window W1 and the pronunciation mouth type display window W2 synchronized with the output of the English pronunciation voice.
FIG. 17 is a view showing the operation display state when an incorrect accent is selected in the accent test processing of the portable device 10, in which (A) shows the accent test question display screen G3, (B) shows the setting display state of the headword character display window W1 and the pronunciation mouth type display window W2 with respect to the headword display screen G2, and (C) shows the change state of the headword character display window W1 and the pronunciation mouth type display window W2.
FIG. 18 is a view showing the operation display state when a correct accent is selected in the accent test processing of the portable device 10, in which (A) shows the accent test question display screen G3, (B) shows the setting display state of the headword character display window W1 and the pronunciation mouth type display window W2 with respect to the headword display screen G2, and (C) shows the change state of the headword character display window W1 and the pronunciation mouth type display window W2.
FIG. 19 is a flowchart showing headword synchronized playback processing of the portable device 10 according to the second embodiment.
FIG. 20 is a flowchart showing a headword synchronous reproduction process of the portable device 10 according to the third embodiment.
[Explanation of symbols]
10… Portable equipment
11 ... CPU
12 ... memory
12A: FLASH memory
12B ... RAM
12a: Dictionary processing program
12b… Dictionary database
12c: Dictionary audio data
12d: Character image data
12d (No. n): Setting character image
12d (No. n ') ... face image with accent
12d (No. nUS): US character set character image
12d (No. nUS '): American accent-compatible face image
12d (No. nUK): Character image set for English
12d (No. nUK '): English accent-compatible face image
12d (No. nBL): Blue change setting character image
12d (No. nBL '): Accent-compatible blue face image
12e: Sound-based mouth image data
12f: Dictionary time code file
12g ... entry word data memory
12h ... entry data dictionary data memory
12i: Time code file No23
12j: HTML file memory for synchronization
12k: Text file memory for synchronization
12m: Sound file memory for synchronization
12n: Image file memory for synchronization
12p: Mouth image area memory
12q ... Image expansion buffer
13. External recording medium
14: Recording medium reading unit
15 ... Transmission control unit
16… Communication unit
17a ... input section
17b Coordinate input device
18 Display part
19a: Voice input unit
19b: Stereo audio output unit
20… Communication equipment (home PC)
30… Web server
N: Communication network (Internet)
X ... selected frame
H: Time code table header information
G1 ... Character image list selection screen
G2… entry search screen
G3: Accent test question display screen
W1 ... entry word character display window (text synchronous playback window)
W2… sounding mouth type display window (window for synchronized playback)
HL: Highlight (identification) display
Et ... correct answer selection item
Ef: Error accent selection item

Claims (12)

  1. Audio data output means for outputting audio data;
    Text synchronous display control means for displaying text in synchronization with the audio data output by the audio data output means,
    Image display control means for displaying an image including at least a mouth portion,
    Mouth image display control means for displaying a mouth-shaped image corresponding to the voice data, in synchronization with the voice data output by the voice data output means, for the mouth portion included in the image displayed by the image display control means;
    Accent detection means for detecting the presence or absence of an accent in the audio data or the text;
    Image change display control means for changing a mouth-shaped image displayed by the image display control means according to detection of presence of an accent by the accent detection means;
    A voice display output control device comprising:
  2. Further,
    Dictionary search means for searching dictionary data corresponding to the input headword;
    Dictionary data display control means for displaying dictionary data corresponding to the entry word searched by the dictionary search means,
    The voice data is pronunciation sound data of the headword searched by the dictionary search unit, and the text is a text of the headword searched by the dictionary search unit,
    Output of headword pronunciation voice data by the voice data output means, display of headword text synchronized with the headword pronunciation voice data by the text synchronous display control means, and display of an image by the image display control means are performed in a display state of dictionary data corresponding to a search headword by the dictionary data display control means,
    The voice display output control device according to claim 1.
  3. Word storage means for storing a plurality of words and correct accented pronunciation symbols and error accented pronunciation symbols for each of the words in association with each other;
    Sound data output means for outputting correct accent pronunciation sound data or incorrect accent pronunciation sound data of the word stored by the word storage means;
    Text synchronous display control means for displaying the text of the word in synchronization with the pronunciation voice data of the word output by the voice data output means,
    Image display control means for displaying an image including at least a mouth portion in a display form that differs between when the correct-accent pronunciation voice data is output by the voice data output means and when the incorrect-accent pronunciation voice data is output;
    Mouth image display control means for displaying a mouth-shaped image corresponding to the pronunciation voice data, in synchronization with the pronunciation voice data output by the voice data output means, for the mouth portion included in the image displayed by the image display control means;
    Along with the synchronous display of the word text by the text synchronous display control means, accent detection means for detecting the accent of the word from the accented phonetic symbol of the corresponding word stored by the word storage means,
    Image change display control means for changing an image displayed by the image display control means in accordance with detection of an accent by the accent detection means;
    A voice display output control device comprising:
  4. Further,
    Correct / false accent display control means for displaying a word stored by the word storage means and a correct accented pronunciation symbol and an incorrect accented pronunciation symbol associated with the word side by side,
    A correct / false accent selecting means for selecting either a correct accented phonetic symbol or an incorrect accented phonetic symbol of the word displayed by the correct / false accent display control means;
    The audio data output unit outputs pronunciation audio data of a correct accent or pronunciation audio data of an erroneous accent of the word in accordance with the correct / incorrect selection of a word accent by the correct / false accent selection unit.
    The voice display output control device according to claim 3.
  5. Storage means for storing a plurality of headwords and pronunciation sound data of at least two or more areas of each headword in association with each other;
    Area designating means for designating any of the pronunciation sound data of two or more areas of the headword stored by the storage means;
    Voice data output means for outputting the pronunciation voice data of the designated area of the headword in accordance with the area designation of the pronunciation voice data by the area designation means;
    Text synchronous display control means for displaying the text of the headword in synchronization with the pronunciation voice data of the specified area of the headword output by the voice data output means,
    Image display control means for displaying an image including at least a mouth portion in a different display form according to the designated area of the pronunciation voice data by the area designating means,
    Mouth image display control means for displaying a mouth-shaped image corresponding to the pronunciation voice data, in synchronization with the pronunciation voice data output by the voice data output means, for the mouth portion included in the image displayed by the image display control means;
    With the synchronous display of the headword text by the text synchronous display control means, accent detection means for detecting the accent of the headword,
    Image change display control means for changing an image displayed by the image display control means in accordance with detection of an accent by the accent detection means;
    A voice display output control device comprising:
  6. An image display control apparatus that changes and controls a face image having a mouth or a facial expression according to a display of a sequence of pronunciation target data including a headword of a word,
    First storage means for storing a plurality of sets of the pronunciation target data and pronunciation symbols including accented pronunciation symbols in association with each other;
    Second storage means for storing a plurality of pairs of phonetic symbols including accented phonetic symbols and their voices and face images in association with each other;
    First control means for reading, with the display of the pronunciation order of the series of pronunciation target data, phonetic symbols corresponding to the pronunciation target data from the first storage means, reading the voice and face image corresponding to the read phonetic symbols from the second storage means, outputting the read voice to the outside, and controlling the read face image to be displayed;
    Second control means for determining, when a voice is output to the outside under the control of the first control means, whether or not the read phonetic symbols include an accented phonetic symbol, and, when it is determined that an accented phonetic symbol is included, reading the voice and face image corresponding to the accented phonetic symbol from the second storage means, outputting the read voice to the outside, and controlling the read face image to be displayed;
    An image display control device comprising:
  7. In the image display control device according to claim 6,
    An image display control device, wherein the phonetic symbols including accented phonetic symbols stored in the second storage means consist of phonetic symbols with accent symbols and phonetic symbols without accent symbols, and the voice and face image stored in association with a phonetic symbol with an accent symbol are different from the voice and face image stored in association with a phonetic symbol without an accent symbol.
  8. An image display control apparatus that changes and controls a face image having a mouth or a facial expression according to a display of a sequence of pronunciation target data including a headword of a word,
    Storage means for storing a plurality of pairs of the sound target data and its voice and face image in association with each other,
    Detecting means for detecting a peak part of a signal waveform corresponding to an accent part of the sounding target data among signal waveforms of voices stored in the storage means;
    Display control means for reading, from the storage means, a face image corresponding to the voice of the accent part detected by the detection means, and controlling the read face image to be displayed in a display form different from the face images corresponding to the voice of the signal waveform parts other than the accent part;
    An image display control device comprising:
  9. In the image display control device according to claim 8,
    An image display control device, wherein the display control means comprises text display control means for controlling the portion of the pronunciation target data corresponding to the accent part detected by the detection means to be displayed in a display form different from the display of the portions of the pronunciation target data corresponding to the signal waveform parts other than the accent part.
  10. An audio display output control processing program for controlling a computer of an electronic device to synchronously reproduce audio data, text, and images,
    Said computer,
    Audio data output means for outputting audio data,
    Text synchronous display control means for displaying text in synchronization with the audio data output by the audio data output means,
    Image display control means for displaying an image including at least a mouth portion,
    Mouth image display control means for displaying a mouth-shaped image corresponding to the voice data, in synchronization with the voice data output by the voice data output means, for the mouth portion included in the image displayed by the image display control means,
    Accent detection means for detecting an accent of the voice data or the text;
    Image change display control means for changing an image displayed by the image display control means in accordance with detection of an accent by the accent detection means;
    A computer-readable audio display output control processing program for causing the computer to function as the above means.
  11. An audio display output control processing program for controlling a computer of an electronic device to synchronously reproduce audio data, text, and images,
    Said computer,
    Word storage means for storing a plurality of words and a correct accented pronunciation symbol and an incorrect accented pronunciation symbol for each word in association with each other;
    Voice data output means for outputting correct accent pronunciation voice data or incorrect accent pronunciation voice data of the word stored by the word storage means;
    Text synchronous display control means for displaying the text of the word in synchronization with the pronunciation voice data of the word output by the voice data output means,
    Image display control means for displaying an image including at least a mouth portion in a display form that differs between when the correct-accent pronunciation voice data is output by the voice data output means and when the incorrect-accent pronunciation voice data is output,
    Mouth image display control means for displaying a mouth-shaped image corresponding to the pronunciation voice data, in synchronization with the pronunciation voice data output by the voice data output means, for the mouth portion included in the image displayed by the image display control means,
    Along with the synchronous display of the word text by the text synchronous display control means, accent detection means for detecting the accent of the word from accented phonetic symbols of the word stored by the word storage means,
    Image change display control means for changing an image displayed by the image display control means in accordance with detection of an accent by the accent detection means;
    A computer-readable audio display output control processing program for causing the computer to function as the above means.
  12. An image display control processing program for controlling a computer of an electronic device to change and control a face image having a mouth or a facial expression in accordance with the display of the pronunciation order of a series of pronunciation target data including a headword of a word,
    Said computer,
    First storage means for storing a plurality of sets of the pronunciation target data and phonetic symbols including accented phonetic symbols in association with each other;
    Second storage means for storing a plurality of pairs of phonetic symbols including accented phonetic symbols and their voices and face images in association with each other;
    First control means for reading, along with the display of the pronunciation order of the series of pronunciation target data, phonetic symbols corresponding to the pronunciation target data from the phonetic symbols stored by the first storage means, reading the voice and face image corresponding to the read phonetic symbols from the voices and face images stored by the second storage means, outputting the read voice to the outside, and controlling the read face image to be displayed,
    Second control means for determining, when a voice is output to the outside under the control of the first control means, whether or not the read phonetic symbols include an accented phonetic symbol, and, when it is determined that an accented phonetic symbol is included, reading the voice and face image corresponding to the accented phonetic symbol from the voices and face images stored by the second storage means, outputting the read voice to the outside, and controlling the read face image to be displayed,
    A computer-readable image display control processing program for causing the computer to function as the above means.
JP2003143499A 2003-05-21 2003-05-21 Voice display output control device and voice display output control processing program Active JP4370811B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2003143499A JP4370811B2 (en) 2003-05-21 2003-05-21 Voice display output control device and voice display output control processing program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2003143499A JP4370811B2 (en) 2003-05-21 2003-05-21 Voice display output control device and voice display output control processing program

Publications (2)

Publication Number Publication Date
JP2004347786A true JP2004347786A (en) 2004-12-09
JP4370811B2 JP4370811B2 (en) 2009-11-25

Family

ID=33531274

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2003143499A Active JP4370811B2 (en) 2003-05-21 2003-05-21 Voice display output control device and voice display output control processing program

Country Status (1)

Country Link
JP (1) JP4370811B2 (en)

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100593757B1 (en) 2005-02-14 2006-06-20 유철민 Foreign language studying device for improving foreign language studying efficiency, and on-line foreign language studying system using the same
JP2006195093A (en) * 2005-01-12 2006-07-27 Yamaha Corp Pronunciation evaluation device
WO2006085719A1 (en) * 2005-02-14 2006-08-17 Hay Kyung Yoo Foreign language studying device for improving foreign language studying efficiency, and on-line foreign language studying system using the same
JP2006251744A (en) * 2005-03-09 2006-09-21 Makoto Goto Pronunciation learning system and pronunciation learning program
JP2006301063A (en) * 2005-04-18 2006-11-02 Yamaha Corp Content provision system, content provision device, and terminal device
KR100816378B1 (en) 2006-11-15 2008-03-25 주식회사 에듀왕 Method for studying english pronunciation using basic word pronunciation
JP2008083446A (en) * 2006-09-28 2008-04-10 Casio Comput Co Ltd Pronunciation learning support device and pronunciation learning support program
WO2009066963A2 (en) * 2007-11-22 2009-05-28 Intelab Co., Ltd. Apparatus and method for indicating a pronunciation information
WO2010045757A1 (en) * 2008-10-24 2010-04-29 无敌科技(西安)有限公司 Emulated video and audio synchronous display device and mrthod
KR101153736B1 (en) 2010-05-31 2012-06-05 봉래 박 Apparatus and method for generating the vocal organs animation
JP2015520861A (en) * 2012-03-06 2015-07-23 アップル インコーポレイテッド Multilingual content speech synthesis processing
US9668024B2 (en) 2014-06-30 2017-05-30 Apple Inc. Intelligent automated assistant for TV user interactions
US9865248B2 (en) 2008-04-05 2018-01-09 Apple Inc. Intelligent text-to-speech conversion
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9966060B2 (en) 2013-06-07 2018-05-08 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US9986419B2 (en) 2014-09-30 2018-05-29 Apple Inc. Social reminders
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10049675B2 (en) 2010-02-25 2018-08-14 Apple Inc. User profiling for voice input processing
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
CN109147430A (en) * 2018-10-19 2019-01-04 渭南师范学院 A kind of teleeducation system based on cloud platform
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10521466B2 (en) 2016-06-11 2019-12-31 Apple Inc. Data driven natural language event detection and classification

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006195093A (en) * 2005-01-12 2006-07-27 Yamaha Corp Pronunciation evaluation device
JP4626310B2 (en) * 2005-01-12 2011-02-09 ヤマハ株式会社 Pronunciation evaluation device
WO2006085719A1 (en) * 2005-02-14 2006-08-17 Hay Kyung Yoo Foreign language studying device for improving foreign language studying efficiency, and on-line foreign language studying system using the same
KR100593757B1 (en) 2005-02-14 2006-06-20 유철민 Foreign language studying device for improving foreign language studying efficiency, and on-line foreign language studying system using the same
JP4678672B2 (en) * 2005-03-09 2011-04-27 誠 後藤 Pronunciation learning device and pronunciation learning program
JP2006251744A (en) * 2005-03-09 2006-09-21 Makoto Goto Pronunciation learning system and pronunciation learning program
JP2006301063A (en) * 2005-04-18 2006-11-02 Yamaha Corp Content provision system, content provision device, and terminal device
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
JP2008083446A (en) * 2006-09-28 2008-04-10 Casio Comput Co Ltd Pronunciation learning support device and pronunciation learning support program
KR100816378B1 (en) 2006-11-15 2008-03-25 주식회사 에듀왕 Method for studying english pronunciation using basic word pronunciation
WO2009066963A2 (en) * 2007-11-22 2009-05-28 Intelab Co., Ltd. Apparatus and method for indicating a pronunciation information
WO2009066963A3 (en) * 2007-11-22 2009-07-30 Intelab Co Ltd Apparatus and method for indicating a pronunciation information
US9865248B2 (en) 2008-04-05 2018-01-09 Apple Inc. Intelligent text-to-speech conversion
WO2010045757A1 (en) * 2008-10-24 2010-04-29 无敌科技(西安)有限公司 Emulated video and audio synchronous display device and mrthod
US10049675B2 (en) 2010-02-25 2018-08-14 Apple Inc. User profiling for voice input processing
KR101153736B1 (en) 2010-05-31 2012-06-05 봉래 박 Apparatus and method for generating the vocal organs animation
JP2015520861A (en) * 2012-03-06 2015-07-23 アップル インコーポレイテッド Multilingual content speech synthesis processing
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US9966060B2 (en) 2013-06-07 2018-05-08 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9668024B2 (en) 2014-06-30 2017-05-30 Apple Inc. Intelligent automated assistant for TV user interactions
US9986419B2 (en) 2014-09-30 2018-05-29 Apple Inc. Social reminders
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US10521466B2 (en) 2016-06-11 2019-12-31 Apple Inc. Data driven natural language event detection and classification
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
CN109147430A (en) * 2018-10-19 2019-01-04 渭南师范学院 A kind of teleeducation system based on cloud platform

Also Published As

Publication number Publication date
JP4370811B2 (en) 2009-11-25

Similar Documents

Publication Publication Date Title
Jenks Transcribing talk and interaction: Issues in the representation of communication data
KR101674851B1 (en) Automatically creating a mapping between text data and audio data
US8036894B2 (en) Multi-unit approach to text-to-speech synthesis
TW550539B (en) Synchronizing text/visual information with audio playback
JP4237915B2 (en) A method performed on a computer to allow a user to set the pronunciation of a string
US6985864B2 (en) Electronic document processing apparatus and method for forming summary text and speech read-out
US8712776B2 (en) Systems and methods for selective text to speech synthesis
CA1259410A (en) Apparatus for making and editing dictionary entries in a text-to-speech conversion system
US8355919B2 (en) Systems and methods for text normalization for text to speech synthesis
US8583418B2 (en) Systems and methods of detecting language and natural language strings for text to speech synthesis
CN1206620C (en) Transcription and display input speech
US20100082346A1 (en) Systems and methods for text to speech synthesis
US8352268B2 (en) Systems and methods for selective rate of speech and speech preferences for text to speech synthesis
US8396714B2 (en) Systems and methods for concatenation of words in text to speech synthesis
US20100082328A1 (en) Systems and methods for speech preprocessing in text to speech synthesis
US7149690B2 (en) Method and apparatus for interactive language instruction
KR100305455B1 (en) Apparatus and method for automatically generating punctuation marks in continuous speech recognition
US20100082327A1 (en) Systems and methods for mapping phonemes for text to speech synthesis
TWI488174B (en) Automatically creating a mapping between text data and audio data
US6689946B2 (en) Aid for composing words of song
US20030191645A1 (en) Statistical pronunciation model for text to speech
US8027837B2 (en) Using non-speech sounds during text-to-speech synthesis
JP2005189454A (en) Text synchronous speech reproduction controller and program
EP1168298A2 (en) Method of assembling messages for speech synthesis
US6963841B2 (en) Speech training method with alternative proper pronunciation database

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20060502

A977 Report on retrieval

Effective date: 20090402

Free format text: JAPANESE INTERMEDIATE CODE: A971007

A131 Notification of reasons for refusal

Effective date: 20090421

Free format text: JAPANESE INTERMEDIATE CODE: A131

A521 Written amendment

Effective date: 20090612

Free format text: JAPANESE INTERMEDIATE CODE: A523

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20090811

A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20090824

R150 Certificate of patent (=grant) or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (prs date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120911

Year of fee payment: 3

FPAY Renewal fee payment (prs date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130911

Year of fee payment: 4