USH2098H1 - Multilingual communications device - Google Patents

Multilingual communications device

Info

Publication number
USH2098H1
USH2098H1 (application US08/200,049)
Authority
US
United States
Prior art keywords
phrases
language
discrete
phrase
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US08/200,049
Other versions
US20030036911A1 (en)
Inventor
Lee M. E. Morin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
US Department of Navy
Original Assignee
US Department of Navy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by US Department of Navy filed Critical US Department of Navy
Priority to US08/200,049 priority Critical patent/USH2098H1/en
Assigned to UNITED STATES OF AMERICA, THE, AS REPRESENTED BY THE SECRETARY OF THE NAVY reassignment UNITED STATES OF AMERICA, THE, AS REPRESENTED BY THE SECRETARY OF THE NAVY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MORIN, LEE M. E.
Publication of US20030036911A1 publication Critical patent/US20030036911A1/en
Application granted granted Critical
Publication of USH2098H1 publication Critical patent/USH2098H1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/40Processing or translation of natural language
    • G06F40/42Data-driven translation
    • G06F40/47Machine-assisted translation, e.g. using translation memory



Abstract

A computer-based device for providing spoken translations of a predetermined set of medical questions as individual questions are selected. Translations are prerecorded in a number of languages, and the physician user, in cooperation with the patient, chooses the language into which translation is to be made. The physician then selects, in his own language, the questions to be asked, indicates each choice to the device, and the device speaks the corresponding question in the language of the respondent.

Description

GOVERNMENT INTEREST
The invention described herein may be manufactured, used, and/or licensed by or for the United States Government for governmental purposes without the payment of any royalties thereon.
BACKGROUND OF THE INVENTION
Field of the Invention
This invention provides a method and apparatus for providing interpretation into a chosen one of a plurality of languages for a structured interview, especially the type of interview done by a medical professional (hereafter called the physician, the operator, or the user) with a patient who does not share a common language, without the necessity of a human interpreter, and without the necessity of the person being interviewed (hereafter called the patient or the respondent) being able to read or write in any language. The terms translation and interpretation are used interchangeably herein.
Medical history taking, physical examination, diagnostic procedures, and treatment all involve verbal communication to some degree. With rapid world-wide travel now being common, patients who do not share a common language with the physician are often presented for care. While it is in this context that the inventor approached the problem, the invention could also be used between confessor and penitent, waiter and customer, hotel desk clerk and international customers, or in other situations where multiple unknown languages must be dealt with.
The use of a human interpreter is a good solution to the physician/patient interview, but it has drawbacks. An interpreter may not be available. It may not even be initially clear what language the patient speaks. Interpreters often interfere with the interview process. They may inject their usually poor medical judgment into the interview, or they may be embarrassed by or embarrass the patient with probing personal questions. If the translator is a relative of the patient, embarrassment or outright fabrication of answers may result.
Description of the Prior Art
In the prior art, phrase books have been used, and a large set of these for many different languages have been compiled by the United States Department of Defense. These have their drawbacks. Where they are written for the physician to attempt to pronounce a transliteration into a language that the physician is not familiar with, they frequently result in lack of understanding. Pointing to a written phrase in a phrase book requires that the patient be literate, and it is often slow.
In U.S. Pat. No. 4,428,733 to Kumar-Misir, a series of question and answer sheets is provided in two languages, with answers given in one language being generally understandable by reference to sheets in the second language. This would be slow, would require a literate patient, and would not allow the physician to choose the next question based upon the response to the previous question.
There have been efforts, such as represented in U.S. Pat. No. 4,984,117 to Rondel et al, to provide a number of phrases and sentences in a single foreign language, with provision for the user to attempt in his own language to select one or more of those phrases, and if his selection is recognized as possible, to play out a recorded foreign language version of what the user selected. In Rondel et al, this selection is made by training the device to recognize the user's voice as a means for making the selection in his own language. This device can operate in only one foreign language unless restructured, and provides no means for questioning a respondent to determine what foreign language would be suitable for an interview. It is also structured to operate only with user voices that it recognizes, making it time consuming at best for a new user to begin using the translator on short notice.
SUMMARY OF THE INVENTION
The invention provides a translating machine to enable an operator who is fluent in one language to interview a respondent using a predetermined list of available sentences, which may include questions. This assumes that the respondent speaks any one of a plurality of available languages other than the language in which the operator is fluent, and also assumes that the respondent need not be literate in any language. Translations into each of the available languages of each of the available sentences are stored in advance in a digital form which is convertible into an audio waveform. The available language to be used with a particular respondent is chosen. The user selects individual desired sentences from an alphanumerically stored list which is visually presented to the user. Then, as selected by the user, translations of the chosen sentences are played out in audio form to the respondent.
These translations into individual foreign languages were obtained and stored in advance from speakers who were fluent in the individual languages. One of the available sentences is visually presented to the speaker for translation and his spoken translation is recorded. It is then played back for the speaker's approval, and if approved is accepted for long-term storage. If not approved, the speaker is given additional opportunities for recording his spoken translation until he is satisfied.
When the device is to be used to interview a potential respondent, if the language spoken by that respondent is uncertain, the user plays samples of seemingly probable languages to the respondent to determine which language the respondent chooses. The user then can limit future translations to a given respondent to a language which the given respondent has chosen from the samples. In general, digital audio sentences sufficient to conduct a medical interview in a large number of languages, approximating 25 or 30, can be stored on one CD-ROM disk of the size currently in wide use.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic block diagram of a translating machine in accordance with the present invention.
FIG. 2 is a schematic block diagram of a machine for recording a series of translations into a given foreign language.
FIG. 3 is a schematic block diagram of an element for use with the device of FIG. 1 for selecting which of a plurality of foreign languages a given respondent is familiar with.
FIG. 4 is a schematic block diagram indicating that a plurality of foreign languages can be stored on and played back from a single CD-ROM.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
When a physician wishes to interview a patient, as in an initial examination, there is a standard list of questions, almost a script, that covers most of what has to be asked. Lists of these phrases have long been available in Department of Defense phrase books referred to above. Other than “yes” or “no” answers in a foreign language, the physician will generally have difficulty understanding responses in the foreign language and must depend upon pointing, holding up a proper number of fingers for the answer, and other non-verbal responses.
Referring to FIG. 1, which is a schematic block diagram of a translating machine in accordance with the present invention, a storage unit 2 stores an alphabetical list of available phrases in the operator/user's language, and it is possible to move about the available list through the use of a manual selector 4 which can choose among the various available phrases. The phrases available to choose from are displayed to the operator on a visual display of available phrases 6.
The precise method of manually selecting from the available phrases can be chosen from several. It is possible to do a word search by typing in a word such as “appendicitis” and have all available phrases using that word appear on the visual display in order to allow selection of a desired phrase. It is possible to choose, with a mouse or otherwise, from the available phrases being displayed on the visual display in order to select the desired phrase. It is possible to have a script containing a plurality of questions to be asked in sequence (or skipped) as desired for a particular procedure or interview, and to go down that script in order to select the desired phrase.
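The word-search style of selection described above can be sketched as a simple filter over the stored phrase list. This is a minimal illustration only; the sample phrases and the function name are invented, not taken from the patent's Visual Basic programs:

```python
# Hypothetical phrase store: an ordered (alphabetical) list of phrases
# in the operator's language, standing in for storage unit 2 of FIG. 1.
PHRASES = [
    "Do you have abdominal pain?",
    "Does your appendicitis scar hurt?",
    "Have you ever had appendicitis?",
    "Point to where it hurts.",
]

def search_phrases(keyword):
    """Return every available phrase containing the typed keyword,
    mimicking the word-search selection method (e.g. 'appendicitis')."""
    kw = keyword.lower()
    return [p for p in PHRASES if kw in p.lower()]
```

The same list could equally drive the mouse-selection or scripted-sequence methods; only the way the operator narrows the list differs.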
For the purposes of FIG. 1, it is assumed that, by this time, the foreign language to be used has been selected by the operator, using a foreign language selector 8. This can also be operated from a keyboard or with a mouse. Selector 8 operates a logical switch 10, which chooses whether to take the stored spoken foreign language from a storage 12 for a first spoken foreign language, or a storage 14 for a second spoken foreign language.
The choice from the available phrases by the operator from selector 4 goes to a selector 16 for corresponding foreign language phrases. This selector, in connection with logical switch 10, chooses a recorded spoken phrase in the chosen foreign language (the first spoken foreign language with the switch as illustrated) and passes that recorded phrase to an audio playout device 18, where it is played out to be listened to by the respondent/patient.
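Taken together, selector 16, language selector 8, and logical switch 10 amount to a two-key lookup: the operator's phrase choice and the previously chosen language together address one stored recording. A sketch under that reading, with invented table contents and file names:

```python
# Invented example data: recordings indexed by (language, phrase number),
# standing in for storages 12 and 14 of FIG. 1.
RECORDINGS = {
    ("vietnamese", 0): "vn_0000.wav",
    ("vietnamese", 1): "vn_0001.wav",
    ("thai", 0): "th_0000.wav",
}

def select_recording(language, phrase_index):
    """Act as logical switch 10 plus selector 16: route the operator's
    phrase choice to the stored recording in the currently chosen
    language, ready for audio playout device 18."""
    try:
        return RECORDINGS[(language, phrase_index)]
    except KeyError:
        raise KeyError(f"no recording of phrase {phrase_index} in {language!r}")
```

Adding a further language, as the patent contemplates, is then only a matter of adding entries keyed by that language.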
Referring to FIG. 2, which is a schematic block diagram of a machine for recording translations of a series of phrases into a given foreign language, a storage unit 2 is provided for alphanumeric storage of available phrases in the operator's language. The phrases to be translated are presented to the person/speaker who will speak and record the translations on a visual display 6. This speaker is, of course, necessarily knowledgeable in the foreign language to be recorded, unlike the physician/user who is to be the ultimate user of the machine.
When a phrase is presented for translation on display 6, the speaker speaks the translation into microphone 30, from which it is taken and temporarily stored in a temporary storage unit 32 for equivalent spoken foreign language phrases. The recorded phrase is then played back on an audio playout device 34 for the approval of the speaker. The speaker indicates whether or not he approves the translation as played back on manual approval indicator 36. If he does not approve, a re-record control 38 causes the system to accept a new recording of the phrase from the speaker until he gets one he approves. If he does approve of the translation, a transfer control unit 40 causes the temporarily stored phrase from storage unit 32 to be transferred to long-term storage unit 42 for storage as an approved equivalent spoken foreign language phrase.
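The record-and-approve cycle of FIG. 2 can be expressed as a short control loop. In this sketch the microphone and the approval indicator are abstracted into callables, since the actual hardware (microphone 30, playout 34, indicator 36) is outside the code; the function name is invented:

```python
def record_until_approved(record, approve, max_takes=10):
    """Keep accepting new takes of a spoken translation (re-record
    control 38) until the speaker approves one (manual approval
    indicator 36).  The approved take is what transfer control 40
    would move from temporary storage 32 to long-term storage 42."""
    for _ in range(max_takes):
        take = record()        # capture into temporary storage 32
        if approve(take):      # speaker listens via playout device 34
            return take
    raise RuntimeError("no take approved within the allowed attempts")

# Stand-in usage: the second take is the one the speaker accepts.
takes = iter(["take 1, mumbled", "take 2, clear"])
approved = record_until_approved(lambda: next(takes), lambda t: "clear" in t)
```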
Referring to FIG. 3, which is a schematic block diagram of an element for use with the device of FIG. 1 for selecting which of a plurality of recorded foreign languages a given respondent/patient is familiar with, foreign language selector 8 is shown in more detail.

When a respondent/patient is first presented for interview, if it is not clear what language the respondent understands, manual control 50 is operated to cause a selector 52 to make an initial selection of samples from a plurality of foreign languages. If, for example, a Navy ship picks up a person from a raft in the ocean off Southeast Asia, the operator might choose a series of languages such as Vietnamese, Laotian, Thai, Burmese, etc., to use in the first attempt to find the language of the respondent. In each language in sequence, selector 52 might ask, in that language, "Do you understand this language? If so, say yes." These questions would be played out to the respondent from the audio playout device 18 of FIG. 1. When a satisfactory language is arrived at, manual control 50 can be used to operate limiter 54 to limit future translations to the one selected foreign language found satisfactory.

While switch 10 is shown as a logical switch connected to sources for two foreign languages, many more foreign languages could be connected. When the foreign languages are stored on CD-ROM, as indicated in FIG. 4, phrases and sentences sufficient to conduct a medical interview in up to twenty-five or thirty different foreign languages can be stored on one CD-ROM disk 60, and, of course, a plurality of such disks can be used interchangeably.
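The language-finding procedure of FIG. 3 is, in essence, a sequential probe: play the "Do you understand this language?" sample in each candidate language until the respondent assents, then lock the device to that language. A sketch with the respondent's reaction abstracted into a callback (the function name and candidate list are illustrative):

```python
def find_language(candidates, understands):
    """Play the probe question in each candidate language in turn
    (selector 52); return the first language the respondent affirms,
    which limiter 54 would then fix for the rest of the interview.
    Returns None if no sample is understood."""
    for lang in candidates:
        if understands(lang):   # probe played out via audio device 18
            return lang
    return None

# Stand-in usage: the respondent answers "yes" only to Thai.
chosen = find_language(
    ["vietnamese", "laotian", "thai", "burmese"],
    lambda lang: lang == "thai",
)
```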
It is perfectly possible to construct a special-purpose device containing all of the digital logic to carry out the functions of this invention. However, from the standpoint of economy and ease of operation, the preferred embodiment of the invention uses a personal computer to carry out the functions. The system used by the inventor is configured as follows:
An Austin 433VLI Winstation 486 computer with 20 megabytes of RAM; two Maxtor hard disk drives respectively holding 130 megabytes and 220 megabytes; a CD drive and sound board provided by a Sound Blaster Pro multimedia kit; a Colorado Mountain Jumbo tape backup unit; an SVGA monitor; a Diamond Stealth video board with 1 megabyte of RAM; DOS version 5.0; Windows version 3.1; Norton Desktop version 2.0; WavaWav (Wave after Wave) version 1.5, a shareware utility allowing sequential audio playback without using Windows, available from Ben Salido, 660 West Oak St., Hurst, Tex. 76053-5526; WAVE EDITOR version 1.03, a shareware utility for wave editing which displays the waveform, allows blocking of the part of a waveform to be retained (thereby reducing required memory), and allows amplitude adjustment, available from Keith W. Boone, 114 Broward St., Tallahassee, Fla. 32301; Sony SRS 27 speakers; an ACE CAT 5-inch tablet for a mouse; and Microsoft Visual Basic version 3.0.
Many variations on this configuration would be possible, but this is the configuration used by the inventor, which is known to be operable. The inventor uses computer programs in Visual Basic, operated under Windows, to run the system. Although these programs are made a part of the file of this application as originally filed, they are not considered to be essential to the invention per se. It is within the skill of those skilled in the art to write such programs as needed, and the programs themselves are not intended for printing with a patent resulting from this application.
When the foreign-language speaker is recording the initial translations, the newly recorded material is originally recorded in RAM, then, after approval by the speaker, is transferred to a hard disk. When the complete set of phrases for a given language has been successfully recorded, the phrases are "harvested" from the hard disk and combined with sets of phrases from other languages for permanent recording on a CD-ROM disk. Eventually as many different CD-ROM disks as are needed can be used.
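The "harvesting" step described above amounts to merging the completed per-language phrase sets into one master layout keyed by language, ready to be mastered onto a CD-ROM. A sketch under that reading, with invented data structures and file names:

```python
def harvest(language_sets):
    """Combine completed per-language recordings (each a mapping of
    phrase number -> audio file gathered from the hard disk) into a
    single master layout for one CD-ROM disk, keyed by
    (language, phrase number)."""
    master = {}
    for language, phrases in language_sets.items():
        for index, wav in phrases.items():
            master[(language, index)] = wav
    return master

# Stand-in usage: two completed language sets become one disk image.
disk_image = harvest({
    "thai":    {0: "th_0000.wav", 1: "th_0001.wav"},
    "laotian": {0: "lo_0000.wav", 1: "lo_0001.wav"},
})
```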
It may be advisable to record all the sample questions needed to find the language spoken by the respondent on one disk for all available languages, to reduce the need for frequent switching of disks as the language is located. It is also possible, when operating in an environment where perhaps five or fewer foreign languages will cover all of the potential respondents, to download those languages from a CD-ROM disk to a hard disk of perhaps 80 megabyte capacity, to avoid the necessity of carrying a CD-ROM drive in a portable computer.
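Downloading a small working set of languages from CD-ROM to a hard disk, as suggested above, is a filtering copy over the stored recordings. A minimal sketch with invented names, reusing the (language, phrase number) keying assumed earlier:

```python
def download_subset(cdrom_master, wanted_languages):
    """Copy only the recordings for the few languages expected
    locally (perhaps five or fewer), so a portable computer can run
    from its hard disk without carrying a CD-ROM drive."""
    return {
        key: wav
        for key, wav in cdrom_master.items()
        if key[0] in wanted_languages   # key is (language, phrase number)
    }
```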
It is also desirable to provide the ability to keep a medical history by recording and later printing out a record of the questions asked and the physician's contemporaneous recording of the patient's responses to those questions. The system also allows recording a series of phrases as used with one patient, then subsequently editing the phrases in the physician's language to derive a suitable set of phrases for use with later similar patients in any available language. This edited version can include comments which were later added by the editing physician to assist later users. Editing can be done by using the Windows integrated utility Notepad, or by using other word processors, or by using the program which has been written in Visual Basic.
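The medical-history facility described above is, at bottom, an append-only log of (question, contemporaneous note) pairs, from which a reusable script for later similar patients can be derived. A minimal sketch; the class and method names are illustrative, not the patent's program:

```python
class InterviewLog:
    """Record each question asked together with the physician's note
    on the patient's response, for later printout or editing."""

    def __init__(self):
        self.entries = []

    def record(self, question, response_note):
        """Append one asked question and its contemporaneous note."""
        self.entries.append((question, response_note))

    def as_script(self):
        """Derive a reusable question list, as when an interview is
        edited (in Notepad or otherwise) for reuse with later,
        similar patients in any available language."""
        return [question for question, _ in self.entries]
```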

Claims (17)

What is claimed is:
1. A computer-based communications device for aiding communication between a user and a respondent, wherein the user and respondent do not speak a common language, the device comprising:
(a) a primary storage unit which stores an ordered list comprising discrete phrases in a language understood by the user;
(b) at least one secondary storage unit which stores digital pre-recorded audio translations to the discrete phrases stored in the primary storage unit in at least one language not fluently spoken by the user, said digital pre-recorded audio translations having been produced by translating the discrete phrases into the at least one language not spoken by the user to form translated discrete phrases, speaking each translated discrete phrase to form spoken translated discrete phrases, and recording each spoken translated discrete phrase;
(c) a visual display unit which displays a plurality of phrases from the ordered list;
(d) a phrase selector which allows the user to select a discrete phrase from the plurality of displayed phrases, wherein the phrase selector allows the user to scroll through the plurality of displayed phrases to select available phrases, retrieves a set of discrete phrases from said primary storage unit in response to a keyword input from said user, retrieves a script comprising a plurality of discrete phrases making up a structured interview in response to a script topic selection from said user, or a combination thereof, said phrase selector including an input device which controls movement of an on-screen indicator, said input device further including an actuator which, when activated while the on-screen indicator is at a position corresponding to that of a specific displayed, discrete phrase, selects the specific displayed, discrete phrase;
(e) a foreign language selector which allows the user to select a language for translation of the selected, displayed, discrete phrase;
(f) a software interface which allows the user to interact with various components of the device;
(g) an audio unit which plays out, from the digital pre-recorded audio translations stored on the at least one secondary storage unit, a translation in the user-selected foreign language of the selected, displayed, discrete phrase;
wherein said ordered list comprises predetermined phrases which solicit a universally comprehensible response from the respondent; and wherein the respondent need not be literate in the language of the pre-recorded audio translations to comprehend and respond to the discrete phrases.
2. The device of claim 1 which is a personal computer system.
3. The device of claim 2 which is portable.
4. The device of claim 2 wherein the primary storage unit is at least one hard disk.
5. The device of claim 2 wherein the at least one secondary storage unit is at least one CD-ROM.
6. The device of claim 2 wherein the software interface comprises a program written in Visual Basic.
7. The device of claim 2 wherein the visual display unit is a video monitor.
8. The device of claim 2 wherein the audio unit comprises a sound card and at least one speaker.
9. The device of claim 2 wherein the foreign language selector and the phrase selector comprise a keyboard or a peripheral device that, when activated by the user, interacts, through said software interface, with said visual display to select the language for translation or the discrete phrase.
10. The device of claim 1 wherein the ordered list comprises discrete phrases ordered alphabetically, according to category, or a combination thereof.
11. The device of claim 1 wherein the at least one language is a plurality of languages.
12. The device of claim 1 wherein the primary storage unit and the at least one secondary storage unit comprise the same hardware.
13. The device of claim 1, wherein said ordered list of discrete phrases comprises predetermined phrases which solicit a yes response, a no response, a non-verbal response, or a non-verbal gesture from the respondent.
14. The device of claim 1 wherein the phrase selector retrieves a script comprising a plurality of discrete phrases forming a structured medical interview in response to a script topic selection from the user.
15. The device of claim 14 wherein the structured interview is a standard medical interview.
16. The device of claim 15 wherein the standard medical interview is about at least one specific medical condition or ailment.
17. A method for a medical practitioner to interview a patient using the computer-based communications device of claim 1, wherein the patient does not understand any language spoken fluently by the practitioner; the method comprising the steps of:
(a) selecting a phrase;
(b) selecting a foreign language to audibly present a translation of the phrase selected for the patient;
(c) determining whether the patient understood the translation;
(d) repeating steps (b) and (c) until a foreign language which the patient understands is found; and
(e) selecting a script comprising a structured interview appropriate to medically interview the patient.
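The method's language-finding loop can be sketched as follows. The function name, the probe-playback callback, and the comprehension check are assumptions for illustration only:

```python
def find_patient_language(languages, play_translation, patient_understood):
    """Cycle through candidate languages, playing a probe phrase in each,
    until the patient signals comprehension; return that language or None."""
    for language in languages:
        play_translation(language)        # present the translated probe phrase
        if patient_understood(language):  # check for a comprehension signal
            return language               # found a workable language
    return None  # no mutually intelligible language among the candidates
```

Once a language is found, the practitioner would proceed to a structured interview script in that language.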
US08/200,049 1994-02-22 1994-02-22 Multilingual communications device Abandoned USH2098H1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/200,049 USH2098H1 (en) 1994-02-22 1994-02-22 Multilingual communications device

Publications (2)

Publication Number Publication Date
US20030036911A1 US20030036911A1 (en) 2003-02-20
USH2098H1 true USH2098H1 (en) 2004-03-02

Family

ID=22740110


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8935312B2 (en) * 2009-09-24 2015-01-13 Avaya Inc. Aggregation of multiple information flows with index processing
WO2013134090A1 (en) * 2012-03-07 2013-09-12 Ortsbo Inc. Method for providing translations to an e-reader and system thereof
CN107526742B (en) 2016-06-21 2021-10-08 伊姆西Ip控股有限责任公司 Method and apparatus for processing multilingual text
CN110170081B (en) * 2019-05-14 2021-09-07 广州医软智能科技有限公司 ICU instrument alarm processing method and system

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4428733A (en) 1981-07-13 1984-01-31 Kumar Misir Victor Information gathering system
US4493050A (en) * 1980-07-31 1985-01-08 Sharp Kabushiki Kaisha Electronic translator having removable voice data memory connectable to any one of terminals
US4593356A (en) * 1980-07-23 1986-06-03 Sharp Kabushiki Kaisha Electronic translator for specifying a sentence with at least one key word
US4613944A (en) * 1980-08-29 1986-09-23 Sharp Kabushiki Kaisha Electronic translator having removable data memory and controller both connectable to any one of terminals
US4843589A (en) 1979-03-30 1989-06-27 Sharp Kabushiki Kaisha Word storage device for use in language interpreter
US4882681A (en) * 1987-09-02 1989-11-21 Brotz Gregory R Remote language translating device
US4984177A (en) 1988-02-05 1991-01-08 Advanced Products And Technologies, Inc. Voice language translator
US5010495A (en) * 1989-02-02 1991-04-23 American Language Academy Interactive language learning system
US5056145A (en) 1987-06-03 1991-10-08 Kabushiki Kaisha Toshiba Digital sound data storing device
US5063534A (en) * 1980-09-08 1991-11-05 Canon Kabushiki Kaisha Electronic translator capable of producing a sentence by using an entered word as a key word
US5065317A (en) * 1989-06-02 1991-11-12 Sony Corporation Language laboratory systems
US5091876A (en) 1985-08-22 1992-02-25 Kabushiki Kaisha Toshiba Machine translation system
US5341291A (en) * 1987-12-09 1994-08-23 Arch Development Corporation Portable medical interactive test selector having plug-in replaceable memory
US5375164A (en) * 1992-05-26 1994-12-20 At&T Corp. Multiple language capability in an interactive system
US5384701A (en) * 1986-10-03 1995-01-24 British Telecommunications Public Limited Company Language translation system
US5523946A (en) * 1992-02-11 1996-06-04 Xerox Corporation Compact encoding of multi-lingual translation dictionaries

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Cowart, R., "Mastering Windows 3.1," pp. 516-518, Sybex Inc., 1993.*
Operator's Guide-Morin Multimedia Medical Translator Release 2.0 (1993) (by the inventor).
Wurst, Brooke E., "PC Interpreter topple the tower of babble (Evaluation)," Computer Shopper, v. 12, n. 11, p. 950(2), Nov. 1992.*

Cited By (95)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020111791A1 (en) * 2001-02-15 2002-08-15 Sony Corporation And Sony Electronics Inc. Method and apparatus for communicating with people who speak a foreign language
US20030065504A1 (en) * 2001-10-02 2003-04-03 Jessica Kraemer Instant verbal translator
US20100228536A1 (en) * 2001-10-11 2010-09-09 Steve Grove System and method to facilitate translation of communications between entities over a network
US8639829B2 (en) * 2001-10-11 2014-01-28 Ebay Inc. System and method to facilitate translation of communications between entities over a network
US20030146926A1 (en) * 2002-01-22 2003-08-07 Wesley Valdes Communication system
US20030200088A1 (en) * 2002-04-18 2003-10-23 Intecs International, Inc. Electronic bookmark dictionary
US20040172236A1 (en) * 2003-02-27 2004-09-02 Fraser Grant E. Multi-language communication system
US8442331B2 (en) 2004-02-15 2013-05-14 Google Inc. Capturing text from rendered documents using supplemental information
US8005720B2 (en) 2004-02-15 2011-08-23 Google Inc. Applying scanned information to identify content
US7702624B2 (en) 2004-02-15 2010-04-20 Exbiblio, B.V. Processing techniques for visual capture data from a rendered document
US7706611B2 (en) 2004-02-15 2010-04-27 Exbiblio B.V. Method and system for character recognition
US7707039B2 (en) 2004-02-15 2010-04-27 Exbiblio B.V. Automatic modification of web pages
US7742953B2 (en) 2004-02-15 2010-06-22 Exbiblio B.V. Adding information or functionality to a rendered document via association with an electronic counterpart
US8515816B2 (en) 2004-02-15 2013-08-20 Google Inc. Aggregate analysis of text captures performed by multiple users from rendered documents
US8214387B2 (en) 2004-02-15 2012-07-03 Google Inc. Document enhancement system and method
US7818215B2 (en) 2004-02-15 2010-10-19 Exbiblio, B.V. Processing techniques for text capture from a rendered document
US7831912B2 (en) 2004-02-15 2010-11-09 Exbiblio B. V. Publishing techniques for adding value to a rendered document
US9268852B2 (en) 2004-02-15 2016-02-23 Google Inc. Search engines and systems with handheld document data capture devices
US8019648B2 (en) 2004-02-15 2011-09-13 Google Inc. Search engines and systems with handheld document data capture devices
US8505090B2 (en) 2004-04-01 2013-08-06 Google Inc. Archive of text captures from rendered documents
US9143638B2 (en) 2004-04-01 2015-09-22 Google Inc. Data capture from rendered documents using handheld device
US9008447B2 (en) 2004-04-01 2015-04-14 Google Inc. Method and system for character recognition
US9116890B2 (en) 2004-04-01 2015-08-25 Google Inc. Triggering actions in response to optically or acoustically capturing keywords from a rendered document
US9633013B2 (en) 2004-04-01 2017-04-25 Google Inc. Triggering actions in response to optically or acoustically capturing keywords from a rendered document
US9514134B2 (en) 2004-04-01 2016-12-06 Google Inc. Triggering actions in response to optically or acoustically capturing keywords from a rendered document
US7812860B2 (en) 2004-04-01 2010-10-12 Exbiblio B.V. Handheld device for capturing text from both a document printed on paper and a document displayed on a dynamic display device
US8713418B2 (en) 2004-04-12 2014-04-29 Google Inc. Adding value to a rendered document
US8261094B2 (en) 2004-04-19 2012-09-04 Google Inc. Secure data gathering from rendered documents
US9030699B2 (en) 2004-04-19 2015-05-12 Google Inc. Association of a portable scanner with input/output and storage devices
US8489624B2 (en) 2004-05-17 2013-07-16 Google, Inc. Processing techniques for text capture from a rendered document
US8799099B2 (en) 2004-05-17 2014-08-05 Google Inc. Processing techniques for text capture from a rendered document
US8346620B2 (en) 2004-07-19 2013-01-01 Google Inc. Automatic modification of web pages
US9275051B2 (en) 2004-07-19 2016-03-01 Google Inc. Automatic modification of web pages
US8179563B2 (en) 2004-08-23 2012-05-15 Google Inc. Portable scanning device
US8953886B2 (en) 2004-12-03 2015-02-10 Google Inc. Method and system for character recognition
US8081849B2 (en) 2004-12-03 2011-12-20 Google Inc. Portable scanning and memory device
US8874504B2 (en) 2004-12-03 2014-10-28 Google Inc. Processing techniques for visual capture data from a rendered document
US7990556B2 (en) 2004-12-03 2011-08-02 Google Inc. Association of a portable scanner with input/output and storage devices
US8620083B2 (en) 2004-12-03 2013-12-31 Google Inc. Method and system for character recognition
US8244222B2 (en) 2005-05-02 2012-08-14 Stephen William Anthony Sanders Professional translation and interpretation facilitator system and method
US8655668B2 (en) 2006-03-14 2014-02-18 A-Life Medical, Llc Automated interpretation and/or translation of clinical encounters with cultural cues
US20080208596A1 (en) * 2006-03-14 2008-08-28 A-Life Medical, Inc. Automated interpretation of clinical encounters with cultural cues
US8423370B2 (en) 2006-03-14 2013-04-16 A-Life Medical, Inc. Automated interpretation of clinical encounters with cultural cues
US20110196665A1 (en) * 2006-03-14 2011-08-11 Heinze Daniel T Automated Interpretation of Clinical Encounters with Cultural Cues
US7949538B2 (en) * 2006-03-14 2011-05-24 A-Life Medical, Inc. Automated interpretation of clinical encounters with cultural cues
US20070226211A1 (en) * 2006-03-27 2007-09-27 Heinze Daniel T Auditing the Coding and Abstracting of Documents
US8731954B2 (en) 2006-03-27 2014-05-20 A-Life Medical, Llc Auditing the coding and abstracting of documents
US10832811B2 (en) 2006-03-27 2020-11-10 Optum360, Llc Auditing the coding and abstracting of documents
US10216901B2 (en) 2006-03-27 2019-02-26 A-Life Medical, Llc Auditing the coding and abstracting of documents
US8600196B2 (en) 2006-09-08 2013-12-03 Google Inc. Optical scanners, such as hand-held optical scanners
US10061764B2 (en) 2007-04-13 2018-08-28 A-Life Medical, Llc Mere-parsing with boundary and semantic driven scoping
US10354005B2 (en) 2007-04-13 2019-07-16 Optum360, Llc Mere-parsing with boundary and semantic driven scoping
US9063924B2 (en) 2007-04-13 2015-06-23 A-Life Medical, Llc Mere-parsing with boundary and semantic driven scoping
US10019261B2 (en) 2007-04-13 2018-07-10 A-Life Medical, Llc Multi-magnitudinal vectors with resolution based on source vector features
US20080256329A1 (en) * 2007-04-13 2008-10-16 Heinze Daniel T Multi-Magnitudinal Vectors with Resolution Based on Source Vector Features
US11966695B2 (en) 2007-04-13 2024-04-23 Optum360, Llc Mere-parsing with boundary and semantic driven scoping
US8682823B2 (en) 2007-04-13 2014-03-25 A-Life Medical, Llc Multi-magnitudinal vectors with resolution based on source vector features
US10839152B2 (en) 2007-04-13 2020-11-17 Optum360, Llc Mere-parsing with boundary and semantic driven scoping
US11237830B2 (en) 2007-04-13 2022-02-01 Optum360, Llc Multi-magnitudinal vectors with resolution based on source vector features
US11581068B2 (en) 2007-08-03 2023-02-14 Optum360, Llc Visualizing the documentation and coding of surgical procedures
US9946846B2 (en) 2007-08-03 2018-04-17 A-Life Medical, Llc Visualizing the documentation and coding of surgical procedures
US20090070140A1 (en) * 2007-08-03 2009-03-12 A-Life Medical, Inc. Visualizing the Documentation and Coding of Surgical Procedures
US9418062B2 (en) * 2008-01-17 2016-08-16 Geacom, Inc. Method and system for situational language interpretation
US20110246174A1 (en) * 2008-01-17 2011-10-06 Geacom, Inc. Method and system for situational language interpretation
US8418055B2 (en) 2009-02-18 2013-04-09 Google Inc. Identifying a document by performing spectral analysis on the contents of the document
US8638363B2 (en) 2009-02-18 2014-01-28 Google Inc. Automatically capturing information, such as capturing information using a document-aware device
US8447066B2 (en) 2009-03-12 2013-05-21 Google Inc. Performing actions based on capturing information from rendered documents, such as documents under copyright
US8990235B2 (en) 2009-03-12 2015-03-24 Google Inc. Automatically providing content associated with captured information, such as information captured in real-time
US9075779B2 (en) 2009-03-12 2015-07-07 Google Inc. Performing actions based on capturing information from rendered documents, such as documents under copyright
US9081799B2 (en) 2009-12-04 2015-07-14 Google Inc. Using gestalt information to identify locations in printed information
US9323784B2 (en) 2009-12-09 2016-04-26 Google Inc. Image search using text-based elements within the contents of images
US10146884B2 (en) 2010-07-13 2018-12-04 Motionpoint Corporation Dynamic language translation of web site content
US10977329B2 (en) 2010-07-13 2021-04-13 Motionpoint Corporation Dynamic language translation of web site content
US10073917B2 (en) 2010-07-13 2018-09-11 Motionpoint Corporation Dynamic language translation of web site content
US9411793B2 (en) 2010-07-13 2016-08-09 Motionpoint Corporation Dynamic language translation of web site content
US10296651B2 (en) 2010-07-13 2019-05-21 Motionpoint Corporation Dynamic language translation of web site content
US9864809B2 (en) * 2010-07-13 2018-01-09 Motionpoint Corporation Dynamic language translation of web site content
US10387517B2 (en) 2010-07-13 2019-08-20 Motionpoint Corporation Dynamic language translation of web site content
US9858347B2 (en) 2010-07-13 2018-01-02 Motionpoint Corporation Dynamic language translation of web site content
US20120017146A1 (en) * 2010-07-13 2012-01-19 Enrique Travieso Dynamic language translation of web site content
US10922373B2 (en) 2010-07-13 2021-02-16 Motionpoint Corporation Dynamic language translation of web site content
US10936690B2 (en) 2010-07-13 2021-03-02 Motionpoint Corporation Dynamic language translation of web site content
US10089400B2 (en) 2010-07-13 2018-10-02 Motionpoint Corporation Dynamic language translation of web site content
US11030267B2 (en) 2010-07-13 2021-06-08 Motionpoint Corporation Dynamic language translation of web site content
US11157581B2 (en) 2010-07-13 2021-10-26 Motionpoint Corporation Dynamic language translation of web site content
US11481463B2 (en) 2010-07-13 2022-10-25 Motionpoint Corporation Dynamic language translation of web site content
US9465782B2 (en) 2010-07-13 2016-10-11 Motionpoint Corporation Dynamic language translation of web site content
US11409828B2 (en) 2010-07-13 2022-08-09 Motionpoint Corporation Dynamic language translation of web site content
US8914395B2 (en) 2013-01-03 2014-12-16 Uptodate, Inc. Database query translation system
US9953630B1 (en) * 2013-05-31 2018-04-24 Amazon Technologies, Inc. Language recognition for device settings
US11562813B2 (en) 2013-09-05 2023-01-24 Optum360, Llc Automated clinical indicator recognition with natural language processing
US11288455B2 (en) 2013-10-01 2022-03-29 Optum360, Llc Ontologically driven procedure coding
US11200379B2 (en) 2013-10-01 2021-12-14 Optum360, Llc Ontologically driven procedure coding
US12045575B2 (en) 2013-10-01 2024-07-23 Optum360, Llc Ontologically driven procedure coding
US12124519B2 (en) 2020-10-20 2024-10-22 Optum360, Llc Auditing the coding and abstracting of documents

Similar Documents

Publication Publication Date Title
USH2098H1 (en) Multilingual communications device
US5393236A (en) Interactive speech pronunciation apparatus and method
Cowan et al. Cross-modal, auditory-visual Stroop interference and possible implications for speech memory
Arnold et al. The old and thee, uh, new: Disfluency and reference resolution
Allen The location of rhythmic stress beats in English: An experimental study I
US4985697A (en) Electronic book educational publishing method using buried reference materials and alternate learning levels
Eckman et al. Some principles of second language phonology
US5065345A (en) Interactive audiovisual control mechanism
US20060115800A1 (en) System and method for improving reading skills of a student
JPH0883041A (en) Method and system for education based on interactive visual and hearing presentation system
Price et al. Incorporating computer-aided language sample analysis into clinical practice
US20060216685A1 (en) Interactive speech enabled flash card method and system
US20020143549A1 (en) Method and apparatus for displaying and manipulating account information using the human voice
Benati The effects of structured input and traditional instruction on the acquisition of the English causative passive forms: An eye-tracking study measuring accuracy in responses and processing patterns
WO1990005350A1 (en) Interactive audiovisual control mechanism
Diehl Listen and learn? A software review of Earobics®
Patterson Predicting second language listening functor comprehension probability with usage-based and embodiment approaches
Smith et al. Computer-generated speech and man-computer interaction
Shao et al. How a question context aids word production: Evidence from the picture–word interference paradigm
Wang Beyond segments: Towards a lexical model for tonal bilinguals
Thomas et al. Human factors and synthetic speech
JP2008525883A (en) Multicultural and multimedia data collection and documentation computer system, apparatus and method
WO2004061794A2 (en) System and method for an audio guidebook
Edwards et al. A multimodal interface for blind mathematics students
JPH10312151A (en) Learning support device for english word, etc., and recording medium recording learning support program of english word, etc.

Legal Events

Date Code Title Description
AS Assignment

Owner name: UNITED STATES OF AMERICA, THE, AS REPRESENTED BY T

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MORIN, LEE M. E.;REEL/FRAME:006878/0620

Effective date: 19940217

STCF Information on status: patent grant

Free format text: PATENTED CASE