IL128295A - Computerized translator displaying visual mouth articulation - Google Patents

Computerized translator displaying visual mouth articulation

Info

Publication number
IL128295A
Authority
IL
Israel
Prior art keywords
pictorial
images
word
database file
words
Prior art date
Application number
IL12829599A
Other versions
IL128295A0 (en)
Original Assignee
Jacob Frommer
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jacob Frommer filed Critical Jacob Frommer
Priority to IL12829599A priority Critical patent/IL128295A/en
Publication of IL128295A0 publication Critical patent/IL128295A0/en
Priority to AU21270/00A priority patent/AU2127000A/en
Priority to EP00901314A priority patent/EP1149349A2/en
Priority to CN00803306.4A priority patent/CN1339133A/en
Priority to JP2000596476A priority patent/JP2002536720A/en
Priority to PCT/IL2000/000060 priority patent/WO2000045288A2/en
Publication of IL128295A publication Critical patent/IL128295A/en

Links

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00Teaching not covered by other main groups of this subclass
    • G09B19/06Foreign languages

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Machine Translation (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

A computerized dictionary apparatus providing animation of translated word pronunciations, the apparatus comprising: a keyboard (B) for entering words in at least a first language and at least a second, target language; means (42) for displaying animated speech organs; a first database file (24) of words in the first language and the corresponding translated words in the second language; a second database file (30) of textual pronunciation keys of the basic phonetic syllables of all words of the second language; a third database file (32) of primary pictorial mouth articulation images of the basic phonetic syllables; a fourth database file (34) of keypoint sets outlining every one of the pictorial mouth articulation images of the basic phonetic syllables; an image morphing generator (36) for creating transient pictorial mouth articulation images, wherein the transient images are inserted to fill the intervals between the pictorial mouth articulation images of the basic phonetic syllables, resulting in a smooth animated presentation; and means (28) for animating the respective series of pictorial mouth articulation images relating to any translated word in the second language.

Description

COMPUTERIZED TRANSLATOR

BACKGROUND OF THE INVENTION

The present invention generally relates to computerized translators.
In the art of computerized self-teaching apparatus there are presently available devices known as "electronic dictionaries". These devices comprise a keyboard for entering a word in the user's language, or from the source-language document subject to translation, and present on the display screen the word translated into the selected foreign or user's language, as in any conventional dictionary.
More advanced apparatus further produces, through suitable electronic circuitry and speakers, the spoken sound of the translated word.
However, this important aid is insufficient, in the sense that it still lacks guidance as to exactly how to pronounce the word, namely a visual presentation of the mouth and related speech organs (lips and tongue).
It is thus an object of the present invention to provide a computerized translator capable of visually displaying the pronunciation of translated words.
It is a further object of the present invention to utilize a database of basic phonetic syllables of which all words in the translated language are composed.
It is a still further object of the present invention to display the pronunciation of the translated words in a dynamic, streamlined fashion, using the morphing technique.

SUMMARY OF THE INVENTION

Thus provided according to the present invention is a computerized dictionary apparatus, comprising: a first database file of words in a given source language; a second database file of the textual word pronunciations of the said first database file of words; a third database file of the basic phonetic syllables of any said source-language word; a fourth database file of sequential pictorial mouth articulation images for each of the said basic phonetic syllables; means for the suitable selection of the source and target languages; means for inputting a selected word included in the first file; means for displaying the selected word in textual form; means for locating the selected word in the first database file of words in a given source language; means for locating the textual word pronunciation in the second database file; means for locating the basic phonetic syllables in the third database file; means for selecting the sequential pictorial mouth articulation images relating to the phonetic syllables of the textual word pronunciation; and means for displaying the sequential pictorial mouth articulation images of the selected word in differential and combined succession.
BRIEF DESCRIPTION OF THE DRAWINGS

These and additional objects, advantages and constructional features of the present invention will be more clearly understood in the light of the ensuing description of preferred embodiments of the invention, given by way of example only with reference to the accompanying drawings, wherein:

Fig. 1 illustrates the general design of an apparatus featuring the characteristics of the present invention;

Fig. 2 is a block diagram of the apparatus sub-systems;

Fig. 3 is a block diagram of database files used in the apparatus;

Fig. 4 is an example of use of the method as applied to a specific word; and

Fig. 5 illustrates the morphing process.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

As schematically shown in Fig. 1, the proposed apparatus, generally designated A, is similar to known electronic translators. It necessarily comprises a keyboard B for entering the word in the language of origin, and/or mouse pointer arrow keys for choosing from an indexed library database; a source language selector and/or other functional control buttons C and D, e.g. a target language selector; a first LCD or other digital imaging screen E for displaying the entered and translated words; and, as required for the implementation of the present invention, a second screen G on which the movements of the speech organs are displayed, as will be described in greater detail herein below, together with a microphone and speaker block F enabling audio playback of the above operations.
Obviously, the design of the apparatus may vary in many respects, as in fact there are many types of electronic dictionaries available on the market.
It should be further mentioned that the audio function, represented by the speaker block F is not essential to the implementation of the present invention, although the incorporation thereof seems natural and preferable in the context.
On the software side, the apparatus according to the present invention may include any useful combination or all of the following components for the implementation of the preferred embodiments, as shown in the block diagram of Fig. 2.
User interface units for receiving data from the user: data input unit 10 for converting the input entered by keyboard B into textual form, with spelling checkup, and database file selector 12 for interpreting the signals from buttons C and D to select the source or target language from the vocabulary database files 30.
Data processing for database file management: searching a database file by index or file ID-field name using data search engine unit 14, or matching the records of any two database files by index or common ID-fields using data integrator 16.
Image morphing generator 18 for creating transient images.
Output interface for presenting the translation results for the user.
Displaying the textual pronunciation of the selected word using text display 20, with spelling checkup on input errors, and presenting a visual display of mouth articulations using visual display 22.
The function and mode of operation of the above components will be explained in greater detail below.
Vocabulary database files 30 contain the vocabulary of words in a given language, available for selection by index or in alphabetical order; the index of each database is linked to the other databases, to create mutual matching. Word pronunciation database files 32 contain the records of textual phonetic pronunciations composed of the basic syllables of each word. Images of mouth articulation database files 34 contain the records of the sequential image sets of all basic phonetic syllables, searchable by index or by basic syllable. Keypoint sets of mouth images database files 36 contain the records of all keypoint sets of mouth images, outlining the first and last image of each syllable; this file can be searched by index or by basic syllable.
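The four linked database files described above can be sketched as follows. This is an illustrative model only, not the patent's implementation: the dictionary names, indexes, file names and coordinates are all hypothetical, and only the stated linking principle (a shared index creating mutual matching between files) is taken from the description.

```python
# Hypothetical sketch of database files 30-36 as index-linked tables.

# Vocabulary database files (30): one table per language, keyed by a
# shared word index, so that matching records carry the same index.
vocabulary = {
    "french":  {101: "BONJOUR"},
    "english": {101: "HELLO"},   # same index => mutual matching
}

# Word pronunciation database files (32): textual phonetic pronunciation
# composed of basic syllables, keyed by the same word index.
pronunciation = {101: ["he", "lo"]}

# Images of mouth articulation database files (34): sequential image
# sets per basic syllable (file names are placeholders).
articulation_images = {
    "he": ["he_01.png", "he_02.png", "he_03.png"],
    "lo": ["lo_01.png", "lo_02.png", "lo_03.png"],
}

# Keypoint sets of mouth images database files (36): keypoints outlining
# the first and last image of each syllable (coordinates invented).
keypoints = {
    "he": {"first": [(10, 20), (30, 20)], "last": [(12, 22), (28, 18)]},
    "lo": {"first": [(14, 24), (26, 16)], "last": [(11, 21), (29, 19)]},
}
```

Keying every file on the same index is what lets the data integrator 16 match records across any two files without string comparison.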
As already mentioned, the general object of the invention is to enable the user to input and/or select a word from one language's vocabulary database file, with spelling checkup in case of user error, and to display a visual presentation of the word's translation into a second language, with mouth articulation and, optionally, audio pronunciation from the word database files.
The method of operation of the apparatus according to the present invention is schematically exemplified in Fig. 3.
Using button C, the user selects, out of the available languages list, any language that he aims to use as the source language; unit 12 then selects the matching source vocabulary database file from database files 30. Let us assume, for illustration purposes, that the source language is French.
Data input unit 10 enables input of a word, e.g. by means of keyboard B or by mouse pointer selection - in the present example, "BONJOUR". The new input data are transformed into textual form.
The selected word is displayed on screen E using text display unit 20, and is located by the data search engine 14 in the selected source-language vocabulary database files 30. The user selects the target language for translation out of the available languages list - in the present example, English; unit 12 selects the matching target-language vocabulary database file from database files 30.
The selected word is translated into the corresponding word "HELLO" in the selected target language of the vocabulary database files 30, using the data integrator unit 16 to match between the source and target vocabulary database files.
Using unit 16, the translated word is matched with the relevant textual pronunciation stored in database files 32.
The textual pronunciation is separated into its component basic syllables. Fig. 4 illustrates the separation of the word into its two basic phonetic syllables: "he" and "lō". Each basic syllable is matched with the relevant set of mouth articulation images in the mouth articulation database files 34.
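The lookup chain of Figs. 3-4 - source word, shared index, target word, syllable pronunciation, per-syllable image set - can be sketched as below. All table contents, file names and the function name are illustrative assumptions; only the sequence of lookups follows the description.

```python
# Hypothetical sketch of the translation-and-articulation lookup chain.

VOCAB = {"french": {101: "BONJOUR"}, "english": {101: "HELLO"}}
PRONUNCIATION = {101: ["he", "lo"]}
IMAGES = {"he": ["he_1.png", "he_2.png"], "lo": ["lo_1.png", "lo_2.png"]}

def translate_with_articulation(word, src, dst):
    # Data search engine (14): locate the word's index in the source file.
    index = next(i for i, w in VOCAB[src].items() if w == word)
    # Data integrator (16): the shared index matches the target record.
    target_word = VOCAB[dst][index]
    # Pronunciation file (32), then per-syllable image sets (34).
    syllables = PRONUNCIATION[index]
    image_sets = [IMAGES[s] for s in syllables]
    return target_word, syllables, image_sets

word, syllables, frames = translate_with_articulation(
    "BONJOUR", "french", "english")
# word is "HELLO"; syllables is ["he", "lo"]; frames holds one image
# list per syllable, ready for sequential display on screen G.
```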
Displaying on screen G the relevant data set of mouth articulation images, using visual display unit 22, gives the user a visual presentation of the word's pronunciation. It is preferable to control the duration of each image's appearance, and possibly to generate repetitions of each syllable separately, as many times as desired.
However, the sequential presentation of the mouth articulation image sets may not yield a smooth pictorial presentation, but only a step-wise demonstration.
Therefore, according to an additional feature of the present invention, the apparatus is provided with an image morphing generator 18, whose function enables a full, smooth and realistic demonstration of the word's pronunciation. By this image morphing function (which is known per se and need not be described in greater detail), transient, dynamically changing images are created. These transient images are dynamically inserted in between the image sets of two successive syllables, to make a smoother pictorial demonstration.
The image morphing function makes it possible to create a sequential series of images, from the last image of the first syllable to the first image of the second syllable.
A simple example of the use of the morphing process is schematically depicted in Fig. 5. The new images are stored in buffer 26.
Fig. 5a is the last mouth image of the first syllable, outlined by selected keypoints A1 - A6, whereas points A'''1 - A'''6 in Fig. 5d outline the first mouth image of the second syllable. The selected keypoints are retrieved from the keypoint sets of mouth images database file 36.
The product images of the morphing process are shown in Figs. 5b and 5c, each being shifted slightly at every step toward the position of the target image in Fig. 5d.
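The stepwise shift of Fig. 5 can be sketched as linear interpolation of the keypoint sets. This is a minimal geometric illustration only: real image morphing also warps the pixels between keypoints, and the coordinates below are invented, not taken from the drawings.

```python
# Sketch of the Fig. 5 morphing step: keypoints of the last image of one
# syllable (5a) are shifted linearly toward the keypoints of the target
# image (5d), producing the transient frames (5b, 5c).

def transient_keypoints(start, end, steps):
    """Linearly interpolate each keypoint from `start` to `end`,
    returning the intermediate frames (both endpoints excluded)."""
    frames = []
    for k in range(1, steps):
        t = k / steps
        frames.append([
            (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
            for (x0, y0), (x1, y1) in zip(start, end)
        ])
    return frames

# Hypothetical keypoints A1-A6 of Fig. 5a and their targets in Fig. 5d.
a = [(10, 30), (20, 35), (30, 35), (40, 30), (30, 25), (20, 25)]
d = [(10, 28), (20, 31), (30, 31), (40, 28), (30, 25), (20, 25)]

# Two transient frames, corresponding to Figs. 5b and 5c.
b, c = transient_keypoints(a, d, steps=3)
```

Each transient frame's keypoints would then drive the generation of an in-between mouth image, which is stored in buffer 26 and displayed between the syllable images.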
Using visual display unit 22 to display on screen G the new transient images in between the syllable images gives the user a vivid visual presentation of the word's pronunciation.
It will be readily understood that the apparatus may be designed to comprise a ready-made database file of transient images matching all possibilities of filling in between any pair of basic syllables. This database file would replace the morphing function; however, this option would require a very large database file.
In yet another option, the apparatus may comprise a database file of sequential image sets demonstrating the pronunciation of every complete word belonging to the selected language. This again would dictate an unreasonably large database and is therefore not preferable compared with the other possibilities described in detail hereinabove.

While the above description contains many specificities, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of the preferred embodiments. Those skilled in the art will envision other possible variations that are within its scope. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their legal equivalents.

Claims (17)

WHAT IS CLAIMED IS:
1. A computerized dictionary apparatus, comprising:
- a first database file of words in a given source language;
- a second database file of the textual word pronunciations of the said first database file of words;
- a third database file of the basic phonetic syllables of any said source-language word;
- a fourth database file of sequential pictorial mouth articulation images for each of the said basic phonetic syllables;
- means for the suitable selection of the source and target languages;
- means for inputting a selected word included in the first file;
- means for displaying the selected word in textual form;
- means for locating the selected word in the first database file of words in a given source language;
- means for locating the textual word pronunciation in the second database file;
- means for locating the basic phonetic syllables in the third database file;
- means for selecting the sequential pictorial mouth articulation images relating to the phonetic syllables of the textual word pronunciation; and
- means for displaying the sequential pictorial mouth articulation images of the selected word in differential and combined succession.
2. The apparatus as claimed in Claim 1, further comprising:
- a fifth database file of selected keypoint sets outlining the said visual images for the first and last sequential pictorial articulations of each of the phonetic syllables;
- means for creating the transient pictorial articulation images between two phonetic syllables, using the keypoint set of the last pictorial articulation of the first phonetic syllable and the keypoint set of the first pictorial articulation of the second phonetic syllable; and
- means for synchronously displaying the transient pictorial articulations in between the respective pictorial articulations of the phonetic syllables.
3. The apparatus as claimed in Claim 1, further comprising:
- a sixth database file of pictorial descriptions of mouth organs such as the tongue, lips and teeth;
- means for selecting the respective pictorial mouth organ images relating to the phonetic syllables and/or the combined word pronunciation; and
- means for the convenient hearing of the said textual word pronunciation and the said basic phonetic syllable pronunciations, in simultaneous synchronous coordination with the visual display of the said words and/or basic phonetic syllables, with the morphing process of activated image processing added.
4. The apparatus as claimed in Claim 1, further comprising:
- a seventh database file of pictorial images of the airflow path(s) for the phonetic syllables, in differential form and in the combined word form of a selected word's pronunciation;
- means for selecting the respective pictorial airflow path images relating to the phonetic syllables, for any selected sequential syllable and for the combined pronunciation of the whole selected word; and
- means for the simultaneous audio playback of the said pictorial images of the airflow path(s), textual word pronunciation and corresponding syllable pronunciation.
5. The apparatus as claimed in Claim 1, wherein the pictorial mouth articulations are animated drawings of lip images.

6. The apparatus as claimed in Claim 1, wherein the pictorial mouth articulations are photo snapshots of the lips, with or without the denoted keypoint sets.

7. The apparatus as claimed in Claim 1, wherein the pictorial mouth articulations are video clips of the lips, with or without denoted keypoint sets.

8. The apparatus as claimed in Claim 1, wherein the data inputting means comprise a keyboard.

9. The apparatus as claimed in Claim 1, wherein the data inputting means comprise a trackball or mouse pointer selector, and the said word selection is effected by the selection of an item from the indexed library of the first database file of words in a given source language.

10. The apparatus as claimed in Claim 1, wherein the data inputting means comprise a scanner and an OCR software package.

11. The apparatus as claimed in Claim 1, wherein the inputting means comprise a touch-activated screen and a handwriting recognition utility.

12. The apparatus as claimed in Claim 1, wherein the data inputting means comprise means for receiving audio data and converting same into text data.

13. The apparatus as claimed in Claim 1, comprising means for the audio pronouncing of the selected word and basic phonetic syllables, the said audio images of words or syllables being processed simultaneously with the corresponding visual images.
14. The apparatus as claimed in any of Claims 1-11, further comprising an eighth database file of the said words in a second target language, wherein inputting the selected word in the first source language becomes associated with the corresponding word in the given second target language by means of the data search engine.
15. The apparatus as claimed in any of Claims 1-11, further comprising:
- a ninth database file of the said words in a third target language, etc.;
- a tenth database file of the word pronunciations of the said words in the second target language;
- an eleventh database file of the word pronunciations of the said words in the third target language;
- means for selecting the first source language; and
- means for selecting the second, third, etc. target languages.
16. The apparatus as claimed in Claim 1, further comprising:
- a twelfth database file of the transient pictorial articulations necessary for filling in the intervals between successive pictorial articulations of the basic phonetic syllables;
- means for locating the transient pictorial articulations;
- means for synchronously displaying the transient pictorial articulations in between the respective pictorial mouth articulation images of the phonetic syllables; and
- means for combining the aforesaid pictorial mouth articulation images with the corresponding audio files, and for their controllable, simultaneous, differential and/or combined synchronous playback.
17. The apparatus as claimed in any of Claims 1-16, further comprising means for the automatic repetition of the differential and combined visual mouth articulation images, and for the differential and combined audio playback of any input word from the first source-language database file and/or the second, third, etc. target-language database files.

For the Applicant
Daniel FREIMANN, Adv.
IL12829599A 1999-01-31 1999-01-31 Computerized translator displaying visual mouth articulation IL128295A (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
IL12829599A IL128295A (en) 1999-01-31 1999-01-31 Computerized translator displaying visual mouth articulation
AU21270/00A AU2127000A (en) 1999-01-31 2000-01-30 Computerized translating apparatus
EP00901314A EP1149349A2 (en) 1999-01-31 2000-01-30 Computerized translating apparatus
CN00803306.4A CN1339133A (en) 1999-01-31 2000-01-30 Computerized translating apparatus
JP2000596476A JP2002536720A (en) 1999-01-31 2000-01-30 Electronic translation device
PCT/IL2000/000060 WO2000045288A2 (en) 1999-01-31 2000-01-30 Computerized translating apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
IL12829599A IL128295A (en) 1999-01-31 1999-01-31 Computerized translator displaying visual mouth articulation

Publications (2)

Publication Number Publication Date
IL128295A0 IL128295A0 (en) 1999-11-30
IL128295A true IL128295A (en) 2004-03-28

Family

ID=11072437

Family Applications (1)

Application Number Title Priority Date Filing Date
IL12829599A IL128295A (en) 1999-01-31 1999-01-31 Computerized translator displaying visual mouth articulation

Country Status (6)

Country Link
EP (1) EP1149349A2 (en)
JP (1) JP2002536720A (en)
CN (1) CN1339133A (en)
AU (1) AU2127000A (en)
IL (1) IL128295A (en)
WO (1) WO2000045288A2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006108236A1 (en) * 2005-04-14 2006-10-19 Bryson Investments Pty Ltd Animation apparatus and method
JP4591481B2 (en) * 2007-07-27 2010-12-01 カシオ計算機株式会社 Display control apparatus and display control processing program
US20140006004A1 (en) * 2012-07-02 2014-01-02 Microsoft Corporation Generating localized user interfaces

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4884972A (en) * 1986-11-26 1989-12-05 Bright Star Technology, Inc. Speech synchronized animation
US5697789A (en) * 1994-11-22 1997-12-16 Softrade International, Inc. Method and system for aiding foreign language instruction

Also Published As

Publication number Publication date
CN1339133A (en) 2002-03-06
WO2000045288A2 (en) 2000-08-03
AU2127000A (en) 2000-08-18
JP2002536720A (en) 2002-10-29
IL128295A0 (en) 1999-11-30
WO2000045288A3 (en) 2000-12-07
EP1149349A2 (en) 2001-10-31

Similar Documents

Publication Publication Date Title
US7512537B2 (en) NLP tool to dynamically create movies/animated scenes
JP6355757B2 (en) English learning system using word order map of English
US6148286A (en) Method and apparatus for database search with spoken output, for user with limited language skills
US5799267A (en) Phonic engine
Sjölander et al. Wavesurfer-an open source speech tool
US6022222A (en) Icon language teaching system
Knoblauch et al. Video analysis
US6116907A (en) System and method for encoding and retrieving visual signals
US8793133B2 (en) Systems and methods document narration
US20060194181A1 (en) Method and apparatus for electronic books with enhanced educational features
US20060134585A1 (en) Interactive animation system for sign language
JP2001525078A (en) A method of producing an audiovisual work having a sequence of visual word symbols ordered with spoken word pronunciations, a system implementing the method and the audiovisual work
US7827034B1 (en) Text-derived speech animation tool
Punchimudiyanse et al. Animation of fingerspelled words and number signs of the Sinhala sign language
KR20160118542A (en) Hangul input method, hangul input apparatus and hangul education system based on hangul vowel character-generative principle
KR102645880B1 (en) Method and device for providing english self-directed learning contents
US20050137872A1 (en) System and method for voice synthesis using an annotation system
JPH08263681A (en) Device and method for generating animation
Kroskrity et al. On using multimedia in language renewal: Observations from making the CD-ROM Taitaduhaan
Shevtsova “Music, singing, word, action”: the Opera-Dramatic Studio 1935–1938
IL128295A (en) Computerized translator displaying visual mouth articulation
KR20030079497A (en) service method of language study
CN111681467B (en) Vocabulary learning method, electronic equipment and storage medium
Baehaqi et al. Morphological analysis of speech translation into Indonesian sign language system (SIBI) on android platform
TWI230910B (en) Device and method for combining language syllable pronunciation with its lip-shape pictures to play in synchronization

Legal Events

Date Code Title Description
KB Patent renewed
MM9K Patent not in force due to non-payment of renewal fees