CN1339133A - Computerized translating apparatus - Google Patents

Computerized translating apparatus

Info

Publication number
CN1339133A
CN1339133A (application CN00803306.4A)
Authority
CN
China
Prior art keywords
pronunciation
image
words
diagram
equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN00803306.4A
Other languages
Chinese (zh)
Inventor
雅各布·弗罗默
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CN1339133A publication Critical patent/CN1339133A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B — EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 — Teaching not covered by other main groups of this subclass
    • G09B19/06 — Foreign languages

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Machine Translation (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

A computerized dictionary/translating apparatus is disclosed, comprising an additional screen (G) and suitable software for displaying, in addition to the textual form of the retrieved/translated word, the mouth articulation of the word as it is verbally pronounced.

Description

Computerized translating apparatus
Background of the Invention
The present invention relates generally to a computerized translating apparatus. Among the computerized self-learning devices currently available is a class of devices known as "electronic dictionaries". These devices comprise a keyboard and a display screen. Depending on the programming of the device, a word entered in a designated source language is displayed on the screen translated into another language.
More advanced devices also include an audio component, which produces the pronunciation of the translated word by means of suitable electronic circuitry and a built-in loudspeaker.
For the user, this important auxiliary function is insufficient, in the sense that it gives no guidance on the correct articulation of the word; that is, it provides no visual display of the mouth cavity and the relevant speech organs (lips and tongue).
It is therefore an object of the present invention to provide a computerized translating apparatus capable of displaying the actual articulation of the translated word.
Another object of the invention is to use, for this purpose, a database of basic speech syllables from which all the words of the translated language are composed.
A further object of the invention is to use morphing techniques to display the pronunciation of the translated word in a dynamic, smooth form.
Summary of the Invention
Accordingly, the present invention provides a computerized dictionary apparatus comprising: a first database file composed of words in designated source and target languages; and a second database file composed of textual pronunciation keys of basic speech syllables, the basic speech syllables being associated with the words of said first database file; characterized by: a third database file composed of sequential images illustrating the mouth articulation of each of said basic speech syllables; means for selecting the source and target languages; means for entering a selected word contained in the first database file; means for displaying the selected word in textual form; means for locating the selected word in the first database file of words of the designated source language; means for selecting the sequential mouth-articulation images corresponding to the syllables of the textual pronunciation key; and means for displaying the sequential mouth-articulation images of the selected word in different orders and combinations.
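The database organization just summarized can be sketched as plain records. A minimal illustration in Python follows; every class and field name here is an assumption for illustration, since the patent specifies no concrete schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LexiconEntry:
    """First database file (24): a word of one language, keyed by a
    shared record ID that links source- and target-language entries."""
    word_id: int
    language: str
    text: str

@dataclass
class PronunciationKey:
    """Second database file (30): the ordered basic speech syllables
    composing one word."""
    word_id: int
    syllables: List[str]

@dataclass
class ArticulationImageSet:
    """Third database file (32): the sequential mouth-articulation
    images (represented here by frame labels) for one basic syllable."""
    syllable: str
    frames: List[str]
```

On this reading, the claimed "means for locating" and "means for selecting" reduce to lookups over such records, matched by the shared ID or syllable key.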
Brief Description of the Drawings
These and other objects, advantages and features of the present invention will be better understood from the following description of preferred embodiments, given by way of example only, with reference to the accompanying drawings, in which:
Fig. 1 is a general view of an apparatus embodying features of the present invention;
Fig. 2 is a block diagram of the subsystems of the apparatus;
Fig. 3 is a flow chart of the operation of the apparatus;
Fig. 4 is an example of the application of the method to a particular word; and
Fig. 5 illustrates the morphing process.
Detailed Description of the Preferred Embodiments
As shown in Fig. 1, the apparatus as a whole, denoted A, resembles any known electronic translator; it comprises a keyboard B for entering words in the source language; keys C for selecting the appropriate index database, source language, etc.; further function keys D, for example a target-language selector; a first liquid crystal (LCD) screen E for displaying the entered and translated words; and, in accordance with the present invention, a second LCD screen G and a loudspeaker F. Screen G displays the movements of the speech organs, as described in more detail below.
Obviously, as with the many electronic dictionaries actually available on the market, the design of the apparatus may vary in many respects.
It should further be noted that the audio function performed by loudspeaker F is not essential to the realization of the present invention, although adding it, as assumed throughout this description, is desirable.
As shown in Fig. 2, the apparatus comprises the following main software applications: a user interface 10, a data processor 12, a ROM (read-only) memory 14 and an output interface 16.
More particularly, the user interface comprises a data input unit 20 and a database file selector 22. The data input unit 20 converts keyboard B input into text format and may also include a spell-checker function; the database file selector 22 interprets the instructions entered through the language selection keys C and D (see Fig. 1) so as to select the source or target language from the vocabulary database files 24 of the ROM storage application 14.
The data processor 12 comprises a data search engine 26, operable to search the database files by index or by file ID field name, and a data integrator 28, operable to match the records of any two or more database files by index or by a common ID field.
The ROM storage application 14 comprises the aforementioned vocabulary database files 24. Each contains the vocabulary of words of a designated language, selectable by index or in alphabetical order; the indices of the databases are linked so that the files match one another.
A word pronunciation database file 30 contains records of textual pronunciation keys, composed of the basic syllables of each word in the vocabulary.
A mouth articulation image database file 32 contains records of the sequential image sets of all the basic speech syllables.
A mouth-image key-point database 34 contains records of the key-point sets of the mouth images, used to draw the first and last image outline of each syllable.
Also included is a morphing generator 36 for generating transition images, preferably through a buffer 38 (see below).
The output interface application 16 presents the translation result in visual form and, optionally, in audio form (speaker driver 40); it comprises a text display driver 44 and mouth articulation display driver software 42.
The flow chart of Fig. 3 shows the operation of the apparatus according to the invention.
The user's first step is to select, from among all the languages programmed into device A, one language as the source language, together with the desired target language. The database file selector 22 selects the matching source vocabulary database file from the database files 24.
Suppose the selected source language is French, and the word "BONJOUR" is to be translated into English.
The word is entered via keyboard B, using the data input unit 20.
The selected word is displayed on screen E by the text display driver 44, and is located in the selected source vocabulary database file 24 by the data search engine 26. The user selects English as the target language. Suppose the word "HELLO" is retrieved as the translation of "BONJOUR". The database file selector 22 selects the English vocabulary database file from the vocabulary database files 24, and the data integrator 28 matches the source vocabulary database file against the target vocabulary database file.
Through the data integrator 28, the translated word is immediately matched with the corresponding textual pronunciation key stored in database file 30, which contains the basic speech syllables composing each word. Thus the word "HELLO" is composed of two basic speech syllables, as shown in Fig. 4.
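The matching chain just described — source word, shared index, target word, pronunciation key — can be sketched with dictionaries standing in for the database files. The data, field names, and the "he"/"lo" syllable split are illustrative assumptions:

```python
# Toy stand-ins for the vocabulary files (24) and pronunciation file (30).
source_lexicon = {"BONJOUR": 17}              # French word -> shared record ID
target_lexicon = {17: "HELLO"}                # shared record ID -> English word
pronunciation_keys = {"HELLO": ["he", "lo"]}  # word -> basic speech syllables

def translate_and_decompose(word):
    """Locate the source word, match it to the target word by the shared
    ID (the data integrator's job), then fetch its syllable key."""
    record_id = source_lexicon[word]
    target = target_lexicon[record_id]
    return target, pronunciation_keys[target]

word, syllables = translate_and_decompose("BONJOUR")  # -> "HELLO", ["he", "lo"]
```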
As the flow chart of Fig. 3 further shows, each basic syllable is matched with its associated set of mouth articulation images, as stored in the mouth articulation image database file 32 (represented schematically as a series of ellipses).
The visual display of the pronunciation of the word is now achieved by showing the relevant set of mouth articulation images on screen G (see Fig. 1), using the visual display driver 42; the sound of the word may be produced at the same time.
Preferably, the duration for which each image is shown is controllable, and each syllable can be repeated independently as required.
It should be noted, however, that the sequential display of the mouth articulation image sets does not necessarily produce a smooth picture; a stepped, jerky animation effect may appear.
To correct this effect, according to a further feature of the present invention, the apparatus is provided with a morphing generator 36, whose function is to display the pronunciation of the word completely, smoothly and realistically. The morphing function (which is known per se and need not be described in detail) produces transition images, i.e. dynamic intermediate changes; these transition images are inserted between two consecutive syllable image sets to achieve a smooth display.
The morphing function produces a continuous sequence of transition images, from the last image of the first syllable to the first image of the second syllable.
A simple example of the application of this morphing method is illustrated in Fig. 5. The new images are stored in the buffer 38.
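The generator's splicing role — inserting morphed frames between the last frame of one syllable and the first frame of the next — can be sketched as follows. Here `morph` stands for any function returning intermediate frames, and the scalar "frames" are placeholders for images:

```python
def with_transitions(syllable_frame_sets, morph, steps=3):
    """Concatenate the image sets of consecutive syllables, inserting
    `steps` morphed transition frames between the last frame of one
    syllable and the first frame of the next (morphing generator 36)."""
    frames = list(syllable_frame_sets[0])
    for next_set in syllable_frame_sets[1:]:
        frames.extend(morph(frames[-1], next_set[0], steps))
        frames.extend(next_set)
    return frames

# Toy morph over scalar "frames": evenly spaced intermediate values.
def linear_morph(a, b, steps):
    return [a + (b - a) * k / (steps + 1) for k in range(1, steps + 1)]

seq = with_transitions([[0.0, 1.0], [5.0, 6.0]], linear_morph, steps=3)
# Bridges 1.0 -> 5.0 with three intermediate frames: 2.0, 3.0, 4.0
```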
Fig. 5a schematically shows the last mouth image of the first syllable, its outline defined by the selected key points A1-A6; the points A1-A6 of Fig. 5d define the outline of the first mouth image of the second (and here last) syllable. The selected key points are retrieved from the key-point sets of the mouth-image key-point database file 34.
The images produced by the morphing process, shown in Figs. 5b and 5c, move each point slightly at each step towards its position in the target image of Fig. 5d.
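The stepwise movement of the contour points shown in Figs. 5b-5c amounts to interpolating each key point between its start and target positions. A linear sketch follows; the point coordinates and step count are assumptions for illustration:

```python
def morph_outline(start_points, end_points, steps):
    """Move each mouth-contour key point (e.g. A1..A6) a small step per
    frame from its start position toward its target position."""
    frames = []
    for k in range(1, steps + 1):
        t = k / (steps + 1)  # fraction of the way to the target outline
        frames.append([(x0 + t * (x1 - x0), y0 + t * (y1 - y0))
                       for (x0, y0), (x1, y1) in zip(start_points, end_points)])
    return frames

# Three intermediate outlines between a 2-point start contour and the
# target contour, loosely analogous to the steps of Figs. 5b-5c.
mid = morph_outline([(0.0, 0.0), (8.0, 0.0)], [(0.0, 4.0), (8.0, 4.0)], 3)
```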
The flow chart of Fig. 3 includes the application of the morphing process (last two rows).
The new transition images interposed between the syllable images are displayed on screen G by the visual display driver 42, giving the user a lifelike visual impression of the pronunciation of the word.
It will readily be understood that the apparatus could instead be designed with a pre-computed transition image database file containing all the possible transition images to be inserted between any two basic syllables; such a database file would replace the morphing function. This alternative, however, would require a very large database file and is therefore not adopted, for practical reasons.
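The impracticality of that alternative follows from simple counting: with S basic syllables, one stored image sequence is needed per ordered syllable pair, so storage grows as S². A quick illustration, where both figures are assumed values rather than numbers from the patent:

```python
S = 400                      # assumed size of the basic-syllable inventory
FRAMES_PER_TRANSITION = 3    # assumed morph frames per transition

stored_sequences = S * S     # one sequence per ordered syllable pair
stored_frames = stored_sequences * FRAMES_PER_TRANSITION
# 160,000 sequences / 480,000 frames, versus a single morphing routine
```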
According to another alternative, the apparatus could comprise a database file of sequential image sets showing the pronunciation of every complete word of the selected language. This too would require an excessively large database, and the scheme is therefore at a disadvantage compared with the alternatives detailed above.
Although the above description contains many specifics, these should not be construed as limiting the scope of the invention, but merely as exemplifying preferred embodiments. Those skilled in the art will envision other possible variations within this scope. Accordingly, the scope of the invention is determined not by the embodiments described, but by the appended claims and their legal equivalents.

Claims (7)

1. A computerized dictionary apparatus comprising a first database file (24) and a second database file (30), the first database file (24) being composed of words in designated source and target languages, and the second database file (30) being composed of textual pronunciation keys of basic speech syllables, the speech syllables being associated with the words of said first database file (24), characterized by:
- a third database file (32) composed of sequential images illustrating the mouth articulation of each of said basic speech syllables;
- means (22) for selecting the source and target languages;
- means (20) for entering a selected word contained in the first database file (24);
- means (44) for displaying the selected word in textual form;
- means (26) for locating the selected word in the first database file (24) of words of the designated source language;
- means (32) for selecting the sequential mouth articulation images associated with the syllables of the textual pronunciation key; and
- means (42) for displaying the sequential mouth articulation images of the selected word in different orders and combinations.
2. The apparatus of claim 1, further characterized by:
- a fourth database (34) composed of selected key-point sets describing the first and last of said sequential images of each speech syllable;
- means (36) for producing transition articulation images between two speech syllables, using the key-point set of the last illustrated articulation of the first speech syllable and the key-point set of the first illustrated articulation of the second speech syllable, and so on; and
- means for displaying the transition articulation images in synchronization, interposed between the articulation images of the speech syllables.
3. The apparatus of claim 1, further comprising means (40) for reproducing the sound of the pronunciation of said words and said basic speech syllables, the audio reproduction being synchronized with the visual display of said basic speech syllables, including synchronization with the morphing process.
4. The apparatus of claim 1, wherein the data input means (20) comprises a keyboard (B).
5. The apparatus of claim 1, wherein the selected word of the first, source language is associated with the corresponding word in the designated second, target language by the data search engine (26).
6. The apparatus of claim 1, further characterized by:
- a database of transition articulation images, as required for interposition between the sequential articulation images of the basic speech syllables;
- means for locating the transition articulation images;
- means for displaying the transition articulation images in synchronization, interposed between the mouth articulation images of the speech syllables; and
- means for combining the synchronized reproduction of said mouth articulation images with the corresponding audio files, in controlled, synchronized differentiation and/or combination thereof.
7. The apparatus of any one of claims 1 to 4, characterized by means for automatic reproduction, which reproduces the visual mouth articulation images, differentiated and combined, together with any sound of the entered word in differentiated and combined reproduction, the word originating from the first, source language database file and/or the second, third, etc. target language database files.
CN00803306.4A 1999-01-31 2000-01-30 Computerized translating apparatus Pending CN1339133A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IL128295 1999-01-31
IL12829599A IL128295A (en) 1999-01-31 1999-01-31 Computerized translator displaying visual mouth articulation

Publications (1)

Publication Number Publication Date
CN1339133A true CN1339133A (en) 2002-03-06

Family

Family ID=11072437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN00803306.4A Pending CN1339133A (en) 1999-01-31 2000-01-30 Computerized translating apparatus

Country Status (6)

Country Link
EP (1) EP1149349A2 (en)
JP (1) JP2002536720A (en)
CN (1) CN1339133A (en)
AU (1) AU2127000A (en)
IL (1) IL128295A (en)
WO (1) WO2000045288A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104412256A (en) * 2012-07-02 2015-03-11 微软公司 Generating localized user interfaces

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006108236A1 (en) * 2005-04-14 2006-10-19 Bryson Investments Pty Ltd Animation apparatus and method
JP4591481B2 (en) * 2007-07-27 2010-12-01 カシオ計算機株式会社 Display control apparatus and display control processing program

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4884972A (en) * 1986-11-26 1989-12-05 Bright Star Technology, Inc. Speech synchronized animation
US5697789A (en) * 1994-11-22 1997-12-16 Softrade International, Inc. Method and system for aiding foreign language instruction

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104412256A (en) * 2012-07-02 2015-03-11 微软公司 Generating localized user interfaces
CN104412256B (en) * 2012-07-02 2017-08-04 微软技术许可有限责任公司 Generate localised users interface

Also Published As

Publication number Publication date
IL128295A (en) 2004-03-28
WO2000045288A3 (en) 2000-12-07
EP1149349A2 (en) 2001-10-31
AU2127000A (en) 2000-08-18
IL128295A0 (en) 1999-11-30
JP2002536720A (en) 2002-10-29
WO2000045288A2 (en) 2000-08-03

Similar Documents

Publication Publication Date Title
US20140039871A1 (en) Synchronous Texts
US7149690B2 (en) Method and apparatus for interactive language instruction
Kennaway et al. Providing signed content on the Internet by synthesized animation
US20060194181A1 (en) Method and apparatus for electronic books with enhanced educational features
CN101425054A (en) Chinese learning system
CN1808519A (en) Apparatus and method of synchronously playing syllabic pronunciation and mouth shape picture
JP6976996B2 (en) Dynamic story-oriented digital language education methods and systems
da Rocha Costa et al. SignWriting and SWML: Paving the way to sign language processing
CN1339133A (en) Computerized translating apparatus
US20040102973A1 (en) Process, apparatus, and system for phonetic dictation and instruction
Freitas et al. Development of accessibility resources for teaching and learning of Science, Technology, Engineering and Mathematics
CN1521657A (en) Computer aided language teaching method and apparatus
TW200926085A (en) Intelligent conversion method with system for Chinese and the international phonetic alphabet (IPA)
CN1474300A (en) Method for teaching Chinese in computer writing mode
Tomuro et al. An alternative method for building a database for American sign language
CN1167999C (en) Method for converting super medium document into speech sound
CN1054932C (en) Sentence pattern training instrument for language teaching
Soiffer et al. Mathematics and statistics
Foelsche Hypertext/hypermedia-like environments and language learning
Harrison Teaching Japanese using Apple Macintosh and Hypercard
CN1474368A (en) Sound spelling system and its method
Heiland Voices of Notators: Approaches to Writing a Score--Special Issue
Luque et al. Teaching Phonetics Online: Lessons From Before and During the Pandemic
Ghosh et al. Augmented Reality in Elementary Education: System Architecture for Implementing an Interactive and Immersive E-Learning Application.
CN1215407C (en) Computer executable phonetic symbol spelling system and its method

Legal Events

Date Code Title Description
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication