US20020072039A1 - Method and apparatus for fluency language training - Google Patents

Method and apparatus for fluency language training

Info

Publication number
US20020072039A1
US20020072039A1 (application US09/942,529)
Authority
US
United States
Prior art keywords
user
message
language
file
learner
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/942,529
Inventor
Dimitry Rtischev
Philip Hubbard
Leonardo Neumeyer
Kaori Shibatani
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MINDSTECH INTERNATIONAL Inc
Original Assignee
Minds and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US16443399P
Priority to US09/473,550 (patent US6302695B1)
Application filed by Minds and Technology
Priority to US09/942,529 (publication US20020072039A1)
Publication of US20020072039A1
Assigned to MINDSTECH INTERNATIONAL INC. reassignment MINDSTECH INTERNATIONAL INC. ADDRESS CHANGE/CHANGE OF NAME Assignors: MINDS & TECHNOLOGIES, INC.

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/06 Foreign languages
    • G09B19/08 Printed or written appliances, e.g. text books, bilingual letter assemblies, charts

Abstract

A method for language fluency training on a computer system having an audio output device includes invoking a web browser program, receiving a pre-recorded file including a message in a spoken language from a conversation partner, and playing the message on the audio output device, from within the web browser program, to a user seeking fluency training in that language. Asynchronously with playing the message, the method records, from within the web browser program, a user file including the user's spoken reply, and outputs the user file to both the conversation partner and a language instructor. The method then receives an instruction file including an instruction message in the spoken language from the language instructor in response to the user message, and plays the instruction message to the user from within the web browser program on the audio output device.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present invention disclosure claims priority to Provisional U.S. Patent Application No.______, Attorney Docket Number 020038-000100US, filed Nov. 9, 1999, entitled METHOD AND APPARATUS FOR FOREIGN LANGUAGE FLUENCY. This application is hereby incorporated herein by reference for all purposes. [0001]
  • NOTICE REGARDING COPYRIGHTED MATERIAL
  • A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office file or records, but otherwise reserves all copyright rights whatsoever. [0002]
  • BACKGROUND OF THE INVENTION
  • The present invention relates to language training. More specifically, the present invention relates to improved methods and apparatus for fluency training in spoken human languages. [0003]
  • Prior art solutions for fluency training in a spoken language have been via synchronous conversations between language teachers and students. Such conversations are considered synchronous because of the real-time interaction between the language teachers and students. Such training has been typically carried out by face to face classroom situations or other types of meetings such as over the telephone, via a video link, and the like. [0004]
  • One drawback of synchronous conversation is that the language teacher and the student must coordinate their schedules so they can practice. This drawback is especially acute where the teacher and the student are widely separated geographically. As an example, it would be very difficult for a language teacher in New York to continuously communicate with a student in Tokyo. [0005]
  • Another drawback is the difficulty for students in a given geographic area to find a large number of language teachers in a desired language in that same area. For example, the number of teachers of Ethiopian languages in Waco, Tex. is not believed to be very high. [0006]
  • Another drawback is that synchronous conversations have been found to place students under a high amount of stress. Accordingly, in such situations, students tend to use easier and simpler phrases while speaking, and thus do not develop or practice more complex phrases. As a result, students do not readily achieve the fluency in the language they truly desire. [0007]
  • Another drawback is that synchronous conversation is more expensive, because the teacher must be paid while the student is thinking, speaking, repeating, and so on. [0008]
  • Another solution for fluency training has been through the use of audio tapes. In particular, language teachers and students record messages on audio tapes and pass the tapes to the other party. For example, a language teacher will record a message on a tape and pass the tape to the student. The student in turn picks up the tape, listens to it, records a message for the language teacher, and passes the tape back to the language teacher. [0009]
  • This solution shares a drawback with the others. In particular, it requires the language teacher and the student to be in roughly the same geographic area so that many messages can be exchanged. If the language teacher and the student are too far apart, the round-trip time between taped messages would be quite long, and fluency would be obtained very slowly, if at all. [0010]
  • Other drawbacks to prior art solutions have been that the language teacher must perform two different roles. In particular, the language teacher must be both a conversation partner and a language teacher. As a teacher, the language teacher must instruct the student as to proper use of the language, and as a conversation partner, the language teacher must provide conversation that is entertaining and interesting to the student. However, most language teachers often find the former role is much easier to play than the latter. For the students, finding a language teacher who is both a good teacher and an interesting conversation partner greatly restricts the number of qualified language teachers to choose from. [0011]
  • Another drawback to the two different roles the language teacher must play is that paying a language teacher to play a mere conversation partner is not cost effective. For example, students can usually find conversation partners among their friends, and the like. Yet another drawback is that when students converse with teachers, it often alters the students' psychology of the interaction because the students know the teachers will correct the students' mistakes. As a result, the conversations tend to be simpler and less complex, and again fluency is difficult to achieve. [0012]
  • Thus, what is required are improved techniques and systems that enable language fluency training. [0013]
  • SUMMARY OF THE INVENTION
  • The present invention relates to language training. More specifically, the present invention relates to methods and apparatus for language fluency training. [0014]
  • The embodiments of the present invention enable an economical and useful division of labor, such that the student (or learner) obtains enjoyable, realistic conversations from a conversation partner and separately obtains useful feedback on those conversations from a teacher. [0015]
  • The invention includes a client-server computer system coupled via a network, such as the Internet. This enables asynchronous spoken message exchange between geographically disperse groups of learners, conversation partners, and teachers for the purpose of training the learners to become fluent in a desired spoken language. [0016]
  • The embodiments of the present invention incorporate non-obvious, novel, and useful elements. [0017]
  • According to an embodiment of the present invention, a method for language fluency training on a computer system having an audio output device includes invoking a web browser program, receiving a pre-recorded file including a message in a spoken language from a conversation partner, and playing the message to a user seeking fluency training in the spoken language from within the web browser program on the audio output device. The method also includes asynchronously with playing the message, recording a user file including a message in the spoken language from the user in response to the message from within the web browser program, and outputting the user file to the conversation partner and to a language instructor. Receiving an instruction file including an instruction message in the spoken language from the language instructor in response to the user message, and playing the instruction message to the user from within the web browser program on the audio output device are also performed. [0018]
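The claimed sequence of steps can be illustrated with a minimal sketch. The class, method, and file names below are assumptions made purely for illustration; they do not appear in the patent, and a real client would pass actual audio files over a network rather than strings in memory.

```python
# Hypothetical sketch of the claimed method steps.  Names are
# illustrative assumptions, not taken from the patent.

class FluencySession:
    """Models the asynchronous exchange among learner, partner, and teacher."""

    def __init__(self):
        self.outbox = []   # files the learner's client sends out
        self.played = []   # messages played on the audio output device

    def play(self, message):
        # Step: play a received message on the audio output device,
        # from within the web browser program.
        self.played.append(message)

    def receive_partner_message(self, recorded_file):
        # Step: receive a pre-recorded file from the conversation partner.
        self.play(recorded_file)

    def record_reply(self, user_file):
        # Step: asynchronously record the user's spoken reply and output
        # it to BOTH the conversation partner and the language instructor.
        self.outbox.append(("partner", user_file))
        self.outbox.append(("teacher", user_file))

    def receive_instruction(self, instruction_file):
        # Step: receive and play the teacher's instruction message.
        self.play(instruction_file)


session = FluencySession()
session.receive_partner_message("partner_hello.wav")
session.record_reply("learner_reply.wav")
session.receive_instruction("teacher_feedback.wav")
print(session.outbox)
print(session.played)
```

The key structural point the sketch captures is the fan-out: a single learner recording goes to two recipients with different roles.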
  • According to another embodiment of the present invention, a computer program product for a computer system including a processor, and an audio output device, for language fluency training is disclosed. The computer program product includes code that directs the processor to receive a recorded message in a spoken human language from a conversation partner, and code that directs the processor to play the recorded message with the audio output device to a user who is not fluent in the spoken human language. The computer program product also includes code that directs the processor to record a user message in the spoken human language from the user after the recorded message is played, and code that directs the processor to send the user message to the conversation partner and to a language instructor. Code that directs the processor to receive an instruction message from the language instructor, the instruction message responsive to the user message, and code that directs the processor to play the instruction message with the audio output device to the user are also contemplated. The codes reside on a tangible media. [0019]
  • According to yet another embodiment, a computer system for language fluency training includes a processor, an audio output device coupled to the processor, and a readable memory coupled to the processor. The readable memory includes code that implements a web browser, code that directs the processor to store a recorded file comprising speech in a spoken language from a conversation partner, and code that directs the processor to play the recorded file to a user desiring to be fluent in the spoken language with the audio output device. The readable memory also includes code that directs the processor to record a user file comprising speech from the user in the spoken language, the user file formed after the recorded file has been played, and code that directs the processor to send the user file to the conversation partner and to a language instructor. Code that directs the processor to store an instruction file from the language instructor, the instruction file formed in response to the user file, and code that directs the processor to play the instruction file to the user with the audio output device are also provided.[0020]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In order to more fully understand the present invention, reference is made to the accompanying drawings. Understanding that these drawings are not to be considered limitations in the scope of the invention, the presently described embodiments and the presently understood best mode of the invention are described with additional detail through use of the accompanying drawings in which: [0021]
  • FIG. 1 is a simplified overview diagram of a language fluency system according to an embodiment of the present invention; [0022]
  • FIG. 2 is a block diagram of a typical computer network device according to an embodiment of the present invention; [0023]
  • FIG. 3 is a more detailed description of the architecture of the system; [0024]
  • FIG. 4 illustrates interactions according to one embodiment of the present invention; [0025]
  • FIGS. 5A-5C illustrate a flow diagram according to an embodiment of the present invention; [0026]
  • FIGS. 6A and 6B illustrate a flow diagram according to an embodiment of the present invention; [0027]
  • FIGS. 7A and 7B illustrate a flow diagram according to an embodiment of the present invention; [0028]
  • FIG. 8 illustrates a flow diagram according to an embodiment of the present invention; [0029]
  • FIG. 9 illustrates a typical graphical user interface (GUI) according to an embodiment of the present invention; [0030]
  • FIGS. 10 and 11 illustrate typical graphical user interfaces (GUIs) according to an embodiment of the present invention; [0031]
  • FIGS. 13-16 illustrate typical graphical user interfaces (GUIs) according to an embodiment of the present invention. [0032]
  • DESCRIPTION OF THE SPECIFIC EMBODIMENTS
  • FIG. 1 is a simplified overview diagram of a language fluency system according to an embodiment of the present invention. This diagram is merely an illustration which should not limit the scope of the claims herein. One of ordinary skill in the art would recognize many other variations, modifications, and alternatives. [0033]
  • FIG. 1 illustrates a computer network [0034] 10 coupled to a computer server 20 and to computer network devices 30, 32, and 40, among other devices. Computer network devices 30, 32, and 40 typically include software therein enabling users to receive data from and transmit data to computer network 10. Not shown is an administrator that can control computer server 20.
  • In the present embodiment, computer network [0035] 10 is the Internet. In alternative embodiments of the present invention, computer network 10 may be any computer network, such as an intranet, a wide area network, a local area network, an internet, and the like. Computer network 10 provides data communication among computer network devices 30, 32, 40 and computer server 20. As will be described further below, data communication may include transfer of HTML based data, textual data, plug-in programs or viewers, applets, audio data, video data, and the like. Although computer network 10 is illustrated as a single entity, it should be understood that computer network 10 may actually be a network of individual computers and servers.
  • Computer server [0036] 20 is embodied as a web server on computer network 10. In the present embodiment, computer server 20 provides HTML-based data such as web pages, plug-in programs or viewers, applets, or servlets, such as Java and ActiveX applets, audio/visual/textual data, as will be described below, and the like. Many other types of data may also be provided, for example, database entries, and the like. These data are provided to other computers such as network devices 30, 32, and 40. Computer server 20 is also configured to receive submissions, such as text data files, audio data files, video data files, form-based submissions, and the like from computer network devices 30, 32, and 40.
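The described server behavior, serving pages and applets while accepting file submissions, can be sketched with a simple path-based dispatch. This is an illustrative assumption only, not the actual server implementation; all paths and names below are invented.

```python
# Minimal sketch of the server-side behavior described above: a web
# server returns pages/applets for GET requests and accepts audio or
# text file submissions via POST.  All names are illustrative.

CONTENT = {
    "/index.html": "<html>home page</html>",
    "/applet/recorder.jar": "...applet bytes...",
}

SUBMISSIONS = {}  # files submitted by network devices 30, 32, and 40

def handle_request(method, path, body=None):
    if method == "GET":
        # Serve HTML pages, applets, audio/visual data, and the like.
        return CONTENT.get(path, "404 not found")
    if method == "POST":
        # Store a submission, e.g. an uploaded audio data file.
        SUBMISSIONS[path] = body
        return "200 stored"
    return "405 method not allowed"

print(handle_request("GET", "/index.html"))
print(handle_request("POST", "/upload/learner1.wav", b"RIFF..."))
```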
  • In the present embodiment, computer network devices [0037] 30, 32, and 40 are typically coupled directly to computer network 10, or indirectly to computer network 10 via an Internet service provider (ISP), or the like. For example, a computer network device may be a computer coupled through a corporate firewall to computer network 10, a set-top box coupled via a modem and an ISP to computer network 10, a network computer coupled through a router to computer network 10, a personal computer coupled via a digital subscriber line, cable modem, and the like to computer network 10, and the like.
  • A computer network device implementing an embodiment of the present invention typically includes web browser software operating thereon, as will be described below. The web browser software typically provides a graphical user interface allowing users to easily interact with data available on computers such as computer server [0038] 20, on computer network 10. For example, the web browser software allows the user to view web pages from computer server 20, provides execution of plug-in programs (viewers) or applets, from computer server 20, enables the user to submit form data, hear audio playback of data files, view video playback of video files, and other types of data from computer server 20, and the like. Many other operations are provided by web browser software, and are known to one of ordinary skill in the art.
  • FIG. 2 is a block diagram of a typical computer network device 30 according to an embodiment of the present invention. Computer network device 30 typically includes a monitor 35, a computer 45, a keyboard 50, a graphical input device 60, a network interface 70, and audio input/output devices 80. [0039]
  • Computer [0040] 45 includes familiar computer components such as a processor 90, and memory storage devices, such as a random access memory (RAM) 100, a disk drive 110, and a system bus 85 interconnecting the above components.
  • Graphical input device [0041] 60 is typically embodied as a computer mouse, a trackball, a track pad, wireless remote, and the like. Graphical input device 60 typically allows the users to graphically select objects, icons, text and the like output on monitor 35 in combination with a cursor. For example, users graphically select buttons or icons on monitor 35 as illustrated in the examples below to hear and record audio and textual messages.
  • Embodiments of network interface 70 include an Ethernet card, a modem (telephone, satellite, cable, ISDN), asymmetric digital subscriber line (DSL) units, and the like. Network interface 70 is coupled to network 120. [0042]
  • RAM [0043] 100 and disk drive 110 are examples of tangible media for storage of data, audio message files, computer programs, browser software, embodiments of the herein described applets, applet interpreters or compilers, virtual machines, and the like.
  • Other types of tangible media include floppy disks, removable hard disks, optical storage media such as CD-ROMs and bar codes, semiconductor memories such as flash memories, read-only memories (ROMs), battery-backed volatile memories, and the like. In embodiments of the present invention such as set-top boxes, mass storage such as disk drive 110 may be dispensed with. [0044]
  • Audio input/output devices 80 are typically embodied as microphones, speakers, headphones, and the like, in conjunction with an analog/digital sound board. As will be described in embodiments below, a user typically records spoken messages into the computer in digital form via a microphone and a sound board. Further, the user typically listens to spoken messages in analog form via a sound board and speakers. [0045]
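The digital recording path above (microphone samples digitized by a sound board and stored as a message file) can be illustrated with Python's standard-library `wave` module. The synthesized tone below merely stands in for microphone input; the patent does not specify sample rates or formats, so the 8 kHz, 16-bit mono parameters are assumptions.

```python
# Sketch of storing a spoken message as a WAV "message file" using
# only the Python standard library.  A real client would capture
# samples from a microphone/sound board; here a tone stands in for
# spoken input purely to illustrate the digital-audio round trip.
import io
import math
import struct
import wave

def record_message(samples, rate=8000):
    """Write 16-bit mono PCM samples to an in-memory WAV message file."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", s) for s in samples))
    return buf.getvalue()

# A 0.1 s, 440 Hz tone standing in for microphone input.
tone = [int(10000 * math.sin(2 * math.pi * 440 * t / 8000))
        for t in range(800)]
message = record_message(tone)
print(len(message))
```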
  • In one embodiment, computer network device 30 includes a PC-compatible computer having an x86-based microprocessor, such as an Athlon™ microprocessor from Advanced Micro Devices, Inc. Further, in the present embodiment, computer network device 30 typically includes a Windows™ (Windows 95™, Windows 98™, Windows NT™) operating system from Microsoft Corporation. [0046]
  • In the present embodiment, web browser software is typically an application program separate from the operating system. The web browser software may be embodied as Netscape Navigator™ 4.x, Microsoft Internet Explorer™ 5.x, or the like. In the present embodiments, web browser software includes virtual machines that enable interpreting of applets downloaded from computer network 10. For example, one virtual machine is the Java virtual machine, version 1.1 or later; in another example, the virtual machine is an ActiveX virtual machine; and the like. In alternative embodiments, just-in-time compilers may also be used to enable execution of downloaded applets. [0047]
  • FIG. 2 is representative of but one type of system for embodying the present invention. It will be readily apparent to one of ordinary skill in the art that many other hardware and software configurations are suitable for use with the present invention. For example, other types of processors are contemplated, such as Pentium™-class or Celeron™-class microprocessors from Intel Corporation, K6-x-class microprocessors from Advanced Micro Devices, PowerPC G3 and G4 microprocessors from Motorola, Inc., and the like. Further, other types of operating systems are contemplated, such as LINUX, UNIX, Mac OS 9 from Apple Computer Corporation, BeOS, and the like. [0048]
  • Set top boxes, such as those provided by WebTV Networks, Incorporated, and the like, may also be used in embodiments of the present invention. Set top boxes may include cable reception boxes, satellite reception boxes, network computers, game consoles, and other types of units incorporating processors, microcontrollers, ASICs, and the like. [0049]
  • FIG. 3 is a more detailed description of the architecture of the system. FIG. 3 includes a server computer [0050] 200 logically coupled via a computer network 210 to a client computer 220. In this embodiment, computer network 210 may be any computer network, such as the Internet.
  • In FIG. 3, client computer [0051] 220 is typically embodied as described above in FIG. 2. Logically, client computer 220 includes a web browser 230, a cache 240, client software 250 and 260, and audio drivers 270. Web browser 230 typically includes Hypertext transfer protocol (HTTP) client software 280.
  • As described above, web browser 230, excluding client software 250 and 260, is typically pre-installed on client computer 220. Audio drivers 270 are typically system-level drivers provided by the operating system. [0052]
  • In the present embodiment, client software [0053] 250 and 260 are downloaded into client computer 220, typically from server computer 200. In alternative embodiments, client software 250 and 260 may be provided by other sources such as from download sites such as: www.zdnet.com, www.downloads.com, or the like. Further, client software 250 and 260 may be loaded from a disk, such as a CD, a floppy disk, a network drive, and the like.
  • Client software [0054] 250 and 260 are typically termed “plug-in” programs for web browser 230. In one example, client software 250 and 260 are typically applets written in Microsoft's ActiveX. In alternative embodiments of the present invention, client software 250 and 260 may be Java-based applets, or others. In the present embodiment, client software 250 and 260 are typically on the order of a 60 Kbyte download. In alternative embodiments, with optimizations and/or expansions in functionality, the size of client software 250 and 260 may be smaller or larger than 60 Kbytes.
  • Client software [0055] 250 and 260 may be downloaded and installed into client computer 220 prior to coupling client computer 220 to computer network 210, or may be downloaded and installed into client computer 220, on the fly. As an example of the latter, client software 250 and 260 may be downloaded onto client computer 220 every time client computer 220 is coupled to server computer 200. In other embodiments of the present invention, client software 260 is initially downloaded and installed into client computer 220 as a web browser plug-in application, and client software 250 is dynamically downloaded as an applet when client computer 220 is coupled to server computer 200.
  • Cache [0056] 240 represents a memory that typically stores audio data downloaded from server computer 200. The audio data is typically automatically downloaded from server computer 200 when client computer 220 begins a session with server computer 200. Alternatively, the audio data is downloaded as required or in batches, as required by the application. Cache 240 also typically includes audio data to be uploaded to server 200, as will be described below. The memory for cache 240 may be random access memory, virtual memory, disk-based memory, and the like.
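The cache behavior described above can be sketched as a small data structure: batch prefetch of downloaded audio at session start, lookup by message name, and a queue of recordings awaiting upload. The class and method names are assumptions for illustration only.

```python
# Illustrative sketch of the client-side audio cache (cache 240):
# messages are downloaded in a batch when a session begins, and
# recorded messages are queued for upload to the server.

class AudioCache:
    def __init__(self):
        self.downloaded = {}    # message name -> audio bytes from server
        self.upload_queue = []  # recorded messages awaiting upload

    def prefetch(self, server_files):
        # Download the session's audio data in a batch at session start.
        self.downloaded.update(server_files)

    def get(self, name):
        # Look up audio data for playback; None if not cached.
        return self.downloaded.get(name)

    def queue_upload(self, name, data):
        # Hold a recorded message until it can be uploaded to server 200.
        self.upload_queue.append((name, data))

cache = AudioCache()
cache.prefetch({"partner_msg1.wav": b"\x00\x01"})
cache.queue_upload("learner_reply.wav", b"\x02\x03")
print(cache.get("partner_msg1.wav"))
```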
  • In the present embodiment, cache [0057] 240 typically stores only audio data. In alternative embodiments of the present invention, cache 240 may store audio and video data, and the like. Further, client software 250 and 260 may also be embodied to play and record audio and video data, and the like.
  • In the present embodiment, client software [0058] 250 typically provides the user with navigational tools to play-back audio data from cache 240 from within the web browser environment. For example, playing back spoken messages, playing back portions of messages, reviewing saved or previous messages, and the like. Client software 250 also typically provides the user with tools for recording, reviewing and editing of spoken messages. As will be described further below, the user may record messages, may record pronunciation exercises, may record grammar exercises, and the like.
  • Client software 260, typically a web browser plug-in application, in this embodiment provides for the outputting of audio messages from cache 240 to audio drivers 270. Further, client software 260 also receives audio messages from audio drivers 270 and stores them in cache 240. In this embodiment, HTTP client software 280 outputs data in the manner specified by an HTTP server, for example, text, audio, layout, and the like. Client software 250 and 260 are embodied as an Achronos™ applet and an Achronos™ plug-in program, respectively, provided by Minds and Technology, Inc., the current assignee of the present application. [0059]
  • In FIG. 3, server computer [0060] 200 may be embodied as described above in FIG. 2, and may include other server specific components. For example, in this example, server computer 200 logically includes a database 290, an HTTP server 300, a server software 310, and the like.
  • In this example, database 290 may be any conventional database application from Oracle Corporation, or the like. Further, HTTP server 300 may be any well-known web server based upon LINUX, Solaris 7 (Sun Microsystems), Windows NT™, or the like. Server software 310 is provided in this embodiment to handle the passing of data between database 290 and client software 250 and 260. Server software 310 is written by Minds and Technology, Inc., the current assignee of the present application. [0061]
  • FIG. 4 illustrates a diagram according to the present invention. In particular, FIG. 4 illustrates a conceptual block diagram of interaction between users. FIG. 4 illustrates three types of users: a learner [0062] 400, a conversation partner 410 (a speaking partner), and a teacher 420.
  • In the present embodiment, learner [0063] 400 is a person seeking to become more fluent in speaking a particular human language, such as English, Chinese, Russian, and the like. For example, the learner could be a native from Japan desiring to speak English more fluently and more like native English speakers. As another example, the learner could be a person seeking to improve her native language speaking skills. Skills and fluency may include pronunciation, grammar, inflection, use of idioms and slang, tone, and the like. Other types of general training may include speech making, customer service skills, counseling skills, and the like.
  • Conversation partner [0064] 410 is typically a native speaker of the language. As will be described below, conversation partner 410 engages in conversations with learner 400. In particular, conversation partner 410 records and exchanges audio messages to and from learner 400.
  • In the current embodiment, teacher [0065] 420 is typically a skilled instructor in the particular language. It is contemplated that teacher 420 is prequalified to participate in the embodiment of the present invention.
  • In operation, learner 400 and conversation partner 410 exchange a series of messages, as indicated by conversation 430. As will be described further below, the messages are passed asynchronously between learner 400 and conversation partner 410. The asynchronous nature of these conversations allows learner 400 to practice speaking a message before delivering it to the conversation partner, among other features. [0066]
  • The message recorded by learner 400 is sent to conversation partner 410 and also to teacher 420. In response to the message, teacher 420 can critique the speech of learner 400 and provide feedback to learner 400. In the present embodiment, the feedback is termed a workbook 440. Workbook 440 may include a series of tips or advice on correct or proper language usage, pronunciation, and the like; exercises for learner 400 to practice her newly learned skills; hints for learner 400 for self-assessment and self-critique; and the like. Further, workbook 440 may include graded exercises. Further examples of such feedback are described below. [0067]
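The workbook 440 described above can be sketched as a simple feedback object containing tips, exercises, self-assessment hints, and optional grades. The field names are illustrative assumptions; the patent does not define a workbook schema.

```python
# Hedged sketch of the teacher's "workbook" feedback package.
# Field names are assumptions for illustration only.

def build_workbook(tips, exercises, hints, grades=None):
    """Assemble the feedback the teacher returns to the learner."""
    return {
        "tips": tips,            # advice on usage, pronunciation, etc.
        "exercises": exercises,  # practice items for newly learned skills
        "hints": hints,          # prompts for self-assessment/self-critique
        "grades": grades or {},  # optional graded exercises
    }

wb = build_workbook(
    tips=["Use the past tense: 'I went', not 'I go'."],
    exercises=["Describe what you did yesterday."],
    hints=["Listen to your recording and mark hesitations."],
)
print(sorted(wb))
```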
  • FIG. 4 also includes a coordination loop [0068] 450 between conversation partner 410 and teacher 420. In one embodiment of the present invention, coordination loop 450 allows teacher 420 to instruct conversation partner 410 as to particular areas of speech learner 400 should work on. For example, if teacher 420 determines that learner 400 needs more practice with past verb tenses, teacher 420 may prompt conversation partner 410 to ask learner 400 questions about things that happened yesterday, or the like.
  • FIGS. [0069] 5A-5C illustrate a flow diagram according to an embodiment of the present invention. FIGS. 5A-5C are described using reference numerals found in FIG. 3.
  • Initially, the user's (learner's) computer (client computer 220) is coupled to server computer 200 via computer network 210 and implements embodiments of the present invention, step 500. In the present example, server computer 200 is provided by Minds and Technology, the present assignee of this patent application. The web address of an embodiment of the present invention is www.sophopia.com; however, it is not yet available to the public. In response, HTTP server 300 retrieves the home page of the web site and provides the page to HTTP client software 280 on client computer 220. The home page is then displayed on the user's computer (client computer 220), step 510. The home page typically offers a number of actions the learner can perform, such as obtaining information regarding the services provided, viewing opportunities available, logging in, and the like. [0070]
  • If the learner already has an account, the learner typically types in a user name and a password, step 520. In the present embodiment, to register, data entered by the learner is gathered by HTTP client software 280 and sent to HTTP server 300 via computer network 210, step 525. Data may include a name, address, hobbies, interests, goals, the language in which fluency is desired, payment information, and the like. [0071]
  • Next, typically if the learner is new, the HTTP server [0072] 300 along with server software 310 provides client software 250 and 260, described above, for downloading and/or invoking on to the learner's computer (client computer 220), step 530. Operation of client software 250 and 260 is described further below.
  • In the present embodiment, after registering, the learner is typically presented with a web page listing the languages available for fluency training, step [0073] 540. This page is provided by HTTP server 300 and handled by HTTP client software 280, as described above. In response, the learner typically graphically selects a language that she desires to be more fluent in, step 550. This selection is sent back to HTTP server 300, as described above, and stored with the learner information in database 290.
  • In response to the learner language selection, database [0074] 290 typically returns a list of conversation partners available in the selected language, step 560. In this embodiment, this list of conversation partners is provided to the learner in the form of another web page. In one embodiment of the present invention, the list may include data such as the name of the conversation partner, biographic data, a list of personal interests, and the like. In other embodiments, other data may also include a rating of the conversation partner by other learners, availability, personal contact information, photograph, pre-recorded audio/video self-introduction, and the like.
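The partner lookup described above might be sketched as follows. This is an illustrative sketch only; the `Partner` record and `find_partners` function are assumptions for illustration, not part of the disclosed embodiment, which may filter and rank partners differently.

```python
from dataclasses import dataclass

@dataclass
class Partner:
    name: str
    language: str
    native: bool
    rating: float  # average rating given by other learners

def find_partners(partners, language, min_rating=0.0):
    """Return partners offering `language`, best-rated first."""
    matches = [p for p in partners if p.language == language
               and p.rating >= min_rating]
    return sorted(matches, key=lambda p: p.rating, reverse=True)

partners = [
    Partner("Aiko", "English", False, 4.5),
    Partner("Beth", "English", True, 4.8),
    Partner("Chen", "French", True, 4.9),
]
english = find_partners(partners, "English")
```

A web page listing the partners, as in step 560, could then be generated from the returned records.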
  • The learner typically reviews the list of conversation partners including the additional data and selects one or several conversation partners, step [0075] 570. This selection is sent back to HTTP server 300, as described above, and is stored with the learner information in database 290.
  • In the present embodiment, conversation partners are typically native speakers in the selected language. In alternative embodiments, conversation partners may simply be fluent in the selected language, but not necessarily native speakers of the selected language. For example, a conversation partner for a Japanese native seeking to become fluent in English may be another Japanese native who has lived in the United States for a long period of time (e.g. 10 to 20 years). [0076]
  • In this embodiment, conversation partners are typically pre-qualified before being placed on the list of conversation partners. In the present example, Minds and Technology may review a resume or equivalent of a potential conversation partner, before placing her name on the conversation partner list. In alternative embodiments, an operator of the web site may have different conversation partner qualification criteria. Typical criteria for conversation partners in a native language may include speaking ability, education level, grammar usage, and the like. In some embodiments of the present invention, conversation partners may be tested or interviewed, and compensated for their services. In other embodiments, conversation partners can be friends or associates of the learner, or even any native speaker. [0077]
  • The conversation partner selected by the learner is typically notified of her selection via electronic mail (e-mail), step [0078] 575. In alternative embodiments, the selected conversation partner is notified of her selection after she has logged into server computer 200, or the like. If the conversation partner does not agree to provide such services, step 580, the next time the learner logs into the server computer, the learner is prompted for a new conversation partner, step 585.
  • In response to the learner language selection, database [0079] 290 also typically returns a list of teachers available in the selected language, step 590. In this embodiment, the list of teachers is provided to the learner in the form of another web page. In one embodiment of the present invention, the list may include data such as the name of the teacher, biographic data, education level, teaching credentials, and the like. In other embodiments, other data may also include a rating of the teacher by other learners, availability, personal contact information, and the like.
  • In the present embodiment, it is envisioned that the list of teachers for any particular language will be very large, because geographic limitations discussed in the background of the invention are eliminated. In another embodiment, the teacher is assigned automatically on a first available basis, or the like. [0080]
  • The learner typically reviews the list of teachers including the additional data and typically selects one teacher, step [0081] 595. This selection is sent back to HTTP server 300, as described above, and stored with the learner information in database 290.
  • In the present embodiment, similar to conversation partners, teachers are typically native speakers in the selected language. In alternative embodiments, teachers may be educated in the selected language, but not necessarily be native speakers of the selected language. [0082]
  • In this embodiment, teachers are pre-qualified before being placed on the list of teachers. In the present example, Minds and Technology may review a resume or equivalent of a potential teacher before placing her name on the teachers list. In alternative embodiments, an operator of the web site may have different teacher qualification criteria. Typical criteria for teachers may include speaking ability, education level, grammar usage, teaching experience, and the like. In some embodiments of the present invention, teachers may be interviewed, tested, and/or compensated for their services. [0083]
  • The selected teacher is also typically notified of her selection via e-mail, voice mail, or the like, step [0084] 600. In alternative embodiments, the selected teacher is notified of her selection after she has logged into server computer 200. If the teacher does not agree to provide such services, step 605, the next time the learner logs into the server computer, the learner is prompted for a new teacher selection, step 607.
  • When a conversation partner and a teacher have agreed to provide such services, typically the next step is for the conversation partner to submit a message to the learner. In the present embodiment, the message is in the form of a spoken message in the selected language. In alternative embodiments, the message may also include video images, static images, textual data, file attachments, and the like. The message for the learner is typically stored in database [0085] 290. Further detail regarding actions taken by a conversation partner are discussed in FIG. 6, below.
  • Because the actions of the conversation partner and the learner are asynchronous in nature, the learner need not be online at the same time as the conversation partner or the teacher. [0086]
  • In step [0087] 620, after logging in, data related to the learner is retrieved from database 290. In particular, server software 310 may access database 290 to retrieve data associated with the learner. Such data may include lists of on-going conversations, new and old messages from conversation partners and teachers; messages recorded by the learner; exercises and other feedback from teachers, and the like. Not all data need be retrieved from the database at the same time. In one embodiment, a textual history of when messages have been sent back and forth between the learner and a conversation partner is retrieved. Further, a file, or multiple files that include the audio messages that have been sent between the learner and conversation partner and/or teacher is also retrieved.
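The retrieval of step 620 might be sketched as two database queries, one for the textual history and one for the audio files. This is a minimal sketch only; the table and column names are assumptions, and database 290 is stood in for by an in-memory SQLite database.

```python
import sqlite3

def load_learner_data(db, learner_id):
    """Fetch the message history and audio file list for one learner."""
    history = db.execute(
        "SELECT sent_at, topic FROM messages WHERE learner_id = ? "
        "ORDER BY sent_at", (learner_id,)).fetchall()
    audio_files = db.execute(
        "SELECT filename FROM audio WHERE learner_id = ?",
        (learner_id,)).fetchall()
    return {"history": history, "audio": [row[0] for row in audio_files]}

# Minimal in-memory database standing in for database 290.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (learner_id, sent_at, topic)")
db.execute("CREATE TABLE audio (learner_id, filename)")
db.execute("INSERT INTO messages VALUES (1, '2001-08-01', 'travel')")
db.execute("INSERT INTO audio VALUES (1, 'msg001.wav')")
data = load_learner_data(db, 1)
```

Consistent with the passage above, not all data need be fetched at once; each query could be issued only when the corresponding display is requested.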
  • In the present embodiment, the textual history and the file or multiple files are then downloaded to the learner's computer, step [0088] 630. The textual history may include the dates when conversations occurred, topics, and the like; the history is typically displayed to the learner via client software 280. Further, the file or multiple files including audio messages are typically stored in cache 240 in the learner's computer.
  • In the present embodiment, client software [0089] 280 typically lists actions available to the learner, for example, to review previous messages sent and received to and from a conversation partner, to record a new message, and the like, step 640. The learner typically selects an action via clicking upon an icon, or the like on the display, step 650. In response, an action associated with the icon may be performed.
  • In one case, when the learner wishes to review previous audio messages, step [0090] 660, the learner typically clicks upon a particular text phrase, icon or the like on the display. In response, client software 250 and 260 are activated to retrieve appropriate audio data that was downloaded and stored in cache 240 for playback via audio drivers 270, step 670. In alternative embodiments, image data, video data, and the like may also be reviewed and output to the learner.
  • In another case, when the learner wishes to record a new audio message, step [0091] 680, the learner typically clicks upon a particular text phrase, icon or the like on the display. In response, client software 250 and 260 are activated to store audio data received from audio drivers 270 into cache 240, step 690. The learner may speak into a microphone coupled to client computer 220 to enter the speech data. In other embodiments of the present invention, the learner may also record video images, attach files, and the like for the conversation partner, or the teacher.
  • After the learner has finished recording the message, she may review the message, step [0092] 695. When the learner is satisfied with the message, step 700, a file is sent to server computer 200 that includes the recorded message, step 710. By allowing the learner to relax and record and re-record her message, it is believed that the learner will achieve a higher comfort level in speaking the language, as well as a higher degree of fluency in the language. Further, by allowing the learner to record, listen, and re-record her message asynchronously from the conversation partner and teacher, it is possible to economize on labor costs. In the present embodiment, client software 250 and 260 are typically used to send the appropriate audio data as a file to server software 310.
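The record, review, and re-record loop of steps 680 through 710 might be sketched as follows. The `record`, `satisfied`, and `upload` callbacks are hypothetical stand-ins for the microphone capture, the learner's review, and the file transfer performed by client software 250 and 260.

```python
def record_until_satisfied(record, satisfied, upload):
    """Let the learner re-record until she approves, then send the file."""
    while True:
        message = record()         # capture audio from the microphone
        if satisfied(message):     # learner reviews the recording
            return upload(message) # send the file to the server

# Toy stand-ins: the learner accepts only her third take.
takes = iter(["take1", "take2", "take3"])
sent = []
result = record_until_satisfied(
    record=lambda: next(takes),
    satisfied=lambda m: m == "take3",
    upload=lambda m: (sent.append(m), m)[1])
```

Only the approved take reaches the server, which is what lets the learner practice at leisure without consuming conversation partner or teacher time.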
  • Next, server software [0093] 310 typically receives the audio file and stores the file into database 290, step 720. Server software 310 also typically notifies the conversation partner, step 730, and the teacher, step 735, that a new message has been received. In one embodiment of the present invention, the conversation partner may be notified by e-mail, voice mail, or the like. Alternatively, the conversation partner may see a notice of a new message once she logs into the system. In the present embodiment, server software 310 also notifies a teacher, in possibly the same manner as above, that a new message has been received.
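The server-side handling of steps 720 through 735 might be sketched as a store followed by two notifications. This is an illustrative sketch only; the dictionary database and notification list are stand-ins for database 290 and the e-mail or voice mail delivery described above.

```python
def store_and_notify(database, notifications, learner, audio,
                     partner, teacher):
    """Store the uploaded audio, then notify both recipients (step 730/735)."""
    database.setdefault(learner, []).append(audio)
    for recipient in (partner, teacher):
        notifications.append((recipient, f"New message from {learner}"))

database, notifications = {}, []
store_and_notify(database, notifications, "learner1", b"audio-bytes",
                 "partner1", "teacher1")
```

Because the notification is decoupled from delivery, the same record could drive an e-mail, a voice mail, or an in-system notice shown at the recipient's next login.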
  • In response to the message from the learner, the teacher typically critiques the message and gives feedback to the learner, step [0094] 740. Further detail regarding actions taken by the teacher are discussed in FIG. 7, below.
  • In the present embodiment, feedback from the teacher may include a series of practice exercises. For example, using tools provided by client software [0095] 250 and 260, and others, the teacher may isolate snippets of the learner's speech from her message and include those snippets into exercises. Types of feedback may include notice of grammatical problems, pronunciation problems, or the like.
  • Using tools provided by client software [0096] 250 and 260, the learner typically reviews the teacher's feedback and practices the advice given by downloading exercises, step 760, and doing the exercises, step 770. Continuing the example immediately above, the learner plays snippets identified by the teacher to hear herself speak, and in response attempts to say the snippet in the correct way, an alternative way, or the like. A more complete list of options available to the learner is found in the attached appendix.
  • The learner's attempts in these exercises may be sent back to the teacher in step [0097] 780. In response to the learner's attempts, the teacher may grade the learner and/or give further advice and feedback, step 790.
  • FIG. 6 illustrates a flow diagram according to an embodiment of the present invention. In particular, FIG. 6 illustrates actions performed by the conversation partner. In the present embodiment, the conversation partner's computer may be configured as illustrated in FIG. 3. [0098]
  • Initially, the conversation partner's computer (client computer [0099] 220) is coupled to server computer 200 via computer network 210 implementing embodiments of the present invention, step 900. In the present example, server computer 200 is also provided by Minds and Technology, the present assignee of this patent application. The web address of an embodiment of the present invention is www.sophopia.com. In response, HTTP server 300 retrieves the home page of the web site and provides the page to the HTTP client software 280 on client computer 220. The home page is then displayed on the conversation partner's computer, step 910. The home page typically has a number of actions the conversation partner can perform, such as obtaining information regarding the services provided, providing a list of learners looking for a conversation partner, logging in as a conversation partner, and the like.
  • The home page typically prompts conversation partners who are already qualified for a user name and a password. If the conversation partner has already been qualified, the conversation partner typically types in a user name and a password, step [0100] 920. In the present embodiment, the data entered by the conversation partner is gathered by HTTP client software 280 and sent to HTTP server 300 via computer network 210.
  • The home page also typically prompts potential conversation partners to submit applications to become conversation partners. In one embodiment, the types of data requested for potential conversation partners may include demographic data, educational background, language experience, and the like. In response to the prompts, the potential conversation partner enters the requested data, step [0101] 940. Further, telephonic and/or face to face interviews may be required. In the present embodiment, the data entered by the potential conversation partner is gathered by HTTP client software 280 and sent to HTTP server 300 via computer network 210, step 950.
  • If the potential conversation partner is qualified, step [0102] 960, the person is typically given a login name and a password, step 970. If the person is not qualified, the person is typically sent a polite rejection note, step 980. In the present embodiment, the potential conversation partner is typically notified by e-mail, telephone, or the like. In an alternative embodiment, the qualification process may occur in real-time, e.g. while waiting. When the person is qualified, steps 900-940 may be repeated.
  • In step [0103] 990, after the conversation partner has logged in, data related to the conversation partner is retrieved from database 290. In particular, server software 310 may access database 290 to retrieve data associated with the conversation partner. Such data may include lists of on-going conversations, new and old messages from learners, messages recorded by the conversation partner, and the like. Not all data need be retrieved from the database at the same time. In one embodiment, only a list of current learners is first sent to the conversation partner, step 1000.
  • In response to the list of learners, the conversation partner may select a particular learner via graphical user interface. This selection is then sent to server computer [0104] 200, step 1010. In response to the selection, server computer 200 typically retrieves “learner data” related to the learner and the conversation partner from database 290, step 1020.
  • The learner data typically includes a textual history of when messages have been sent back and forth between the learner and the conversation partner. Further, the learner data typically includes a file, or multiple files that include the audio messages that have been sent between the learner and conversation partner. In the present embodiment, the textual history and the file or multiple files are downloaded to the conversation partner's computer, step [0105] 1030. The textual history is typically displayed to the conversation partner via client software 280. Further, the file or multiple files including audio messages are typically stored in cache 240.
  • In the present embodiment, client software [0106] 280 typically then lists actions available to the conversation partner, for example, to review previous messages, to record a new message, and the like, step 1040. The conversation partner typically selects an action via clicking upon an icon, or the like on the display, step 1050. In response, an action associated with the icon may be performed.
  • In one case, when the conversation partner wishes to review previous audio messages, step [0107] 1060, the conversation partner may click upon a particular text phrase, icon or the like representing the message of interest. In response, client software 250 and 260 are activated to retrieve appropriate audio data from cache 240 for playback via audio drivers 270, step 1070. In alternative embodiments, image data, video data, and the like may also be reviewed.
  • In another case, when the conversation partner wishes to record a new audio message, step [0108] 1080, the conversation partner may click upon a particular text phrase, icon or the like, representing that function. In response, client software 250 and 260 are activated to store audio data received from audio drivers 270 into cache 240, step 1090. For example, the conversation partner may speak into a microphone to enter audio data. In other embodiments, the conversation partner may also record video images, attach files, and the like.
  • In the present embodiment, after the conversation partner has finished recording the message, she may review the recorded message, step [0109] 1095. If the conversation partner is satisfied with the message, step 1100, she may send a file or other data mechanism (data stream), to server computer 200 that includes the recorded message, step 1110. Otherwise, the conversation partner may re-record the message. In the present embodiment, client software 250 and 260 are activated to send the appropriate audio data as a file to server software 310.
  • The server software [0110] 310 typically receives the file and stores the file into database 290, step 1120. Further, server software 310 typically notifies the learner that a new message has been received, step 1130. In one embodiment of the present invention, the learner may be notified by e-mail, voice mail, or the like. Alternatively, the learner may see a notice of a new message once she logs into the system.
  • In light of the present disclosure, many other actions are available to the conversation partner and are contemplated in alternative embodiments of the present invention. For example, the conversation partner may receive text messages, audio messages, and the like not only from learners, but also teachers, other conversation partners, and the like. As illustrated in FIG. 4, in one example, teachers and conversation partners may coordinate their actions and messages to provide an enhanced learning environment for the learner. Similarly, the conversation partner may record messages in various forms for different teachers, conversation partners, and the like. A more complete list of options available to the conversation partner is found in the attached appendix. [0111]
  • FIG. 7 illustrates a flow diagram according to an embodiment of the present invention. In particular, FIG. 7 illustrates actions performed by the teacher. In the present embodiment, the teacher's computer may also be configured as illustrated in FIG. 3. [0112]
  • As above, initially, the teacher's computer is coupled to server computer [0113] 200 via computer network 210 implementing embodiments of the present invention, step 1200. In response, HTTP server 300 retrieves the home page of the web site and provides the page to the HTTP client software 280 on the teacher's computer 220. The home page is then displayed on the teacher's computer, step 1210. The home page typically has a number of actions the teacher can perform, such as obtaining information regarding the services provided, providing a list of learners looking for teachers, logging in as a teacher, and the like.
  • The home page typically prompts teachers who are already qualified for a user name and a password. If the teacher has already been qualified, the teacher typically types in a user name and a password, step [0114] 1220. Typically, as described above, the data entered by the teacher is gathered by HTTP client software 280 and sent to HTTP server 300 via computer network 210.
  • The home page also typically prompts potential teachers to submit applications to become teachers. In one embodiment, the types of data requested for potential teachers may include demographic data, educational background, language training experience, and the like. In response to the prompts, the potential teacher may enter the requested data, step [0115] 1240. Again, the data entered by the potential teacher is gathered by HTTP client software 280 and sent to HTTP server 300 via computer network 210, step 1250. In addition to the above, telephonic or face to face interviews with the potential teacher may be performed, references of the teachers may be checked, tests may be given, and the like.
  • If the teacher is qualified, step [0116] 1260, the person is typically given a login name and a password, step 1270. Further, the teacher is typically given technical as well as other training as needed. If the person is not qualified, the person is typically sent a polite note, step 1280. In the present embodiment, the potential teacher is typically notified by e-mail, telephone, or the like. In an alternative embodiment, the qualification process may occur in real-time, e.g. within five or ten minutes. When the person is qualified, steps 1200-1240 may be repeated.
  • In step [0117] 1290, after the teacher has logged into the system, data related to the teacher is typically retrieved from database 290. For example, server software 310 accesses database 290 to retrieve data associated with the teacher. Such data typically includes lists of on-going conversations between learners and conversation partners, new and old messages from learners and/or conversations partners, messages recorded by the teachers, and the like. Other data includes exercises assigned by the teacher, responses to exercises, performance evaluations, and the like. Not all data need be retrieved from the database at the same time. In one embodiment, only a list of current learners is first sent to the teacher, step 1300.
  • In response to the list of learners, the teacher typically selects one learner via graphical user interface. This selection is then sent to server computer [0118] 200, step 1310. In response to the selection, server computer 200 typically retrieves “learner data” related to the learner and to the teacher from database 290, step 1320.
  • The learner data typically includes a textual history of when messages have been sent back and forth among the learner, the conversation partner, and the teacher, exercises assigned to the learner, the learner's responses to exercises, evaluations of the learner's performance, and the like. Further, the learner data typically includes a file, or multiple files that include the audio messages that have been sent between the learner and conversation partner. In the present embodiment, the textual history and the file or multiple files are downloaded to the teacher's computer, step [0119] 1330. The textual history is typically displayed to the teacher via client software 280. Further, the file or multiple files including audio messages are typically stored in cache 240.
  • In the present embodiment, client software [0120] 280 typically lists actions available to the teacher, for example, to review previous messages among the parties, to record a new message, to create exercises for the learner, to grade learner's performance of exercises, and the like, step 1340. The teacher typically selects an action via clicking upon an icon, or the like on the display, step 1350. In response, an action associated with the icon may be performed.
  • As illustrated in FIG. 7B and above for the conversation partner, the teacher may review conversations between the learner and the conversation partner, may create exercises for the learner, may grade the learner's performance, may record messages for either party, and the like. A more complete list of options available to the teacher is found in the attached appendix. [0121]
  • FIG. 8 illustrates a flow diagram according to an embodiment of the present invention. In particular, FIG. 8 illustrates the asynchronous nature of the conversations and workbooks discussed in FIG. 4, above. [0122]
  • In this example, on the first Monday, learner [0123] 400 selects conversation (or speaking) partner 410 and a topic for discussion. On Tuesday, the chosen conversation partner 410 leaves a message for learner 400 about the topic. On Wednesday, learner 400 listens to the spoken message from conversation partner 410 and generates a message in response. As mentioned above, because learner 400 can record and re-record a responsive message at her own leisure, learner 400 can practice responding to the message until she is satisfied with her performance without having to pay for as much conversation partner or teacher time as in synchronous interactions.
  • Next, the learner message is forwarded to both conversation partner [0124] 410 and teacher 420, as described above. On Thursday, conversation partner 410 listens to the learner's message and responds to the message with another message. Also on Thursday, teacher 420 listens to the learner's message and creates exercises based upon problems with the learner's message. The exercises, hints, and advice (together workbook) are then forwarded back to learner 400.
  • On Friday, learner [0125] 400 does the exercises given to her by teacher 420 in view of teacher's advice and other feedback. The exercise results are forwarded back to teacher 420 on Friday. Further, on Friday, learner 400 listens to the latest spoken message from conversation partner 410 and generates a new message in response. On Saturday, teacher 420 grades the exercise results from learner 400.
  • As can be seen, the rates at which conversation partner [0126] 410 and teacher 420 respond to learner 400 are mostly independent. For example, conversation partner 410 may respond to learner 400 in two days, whereas teacher 420 responds to learner 400 on the same day learner 400 posts a message. As another example, learner 400 and conversation partner 410 may send messages to each other once a day, whereas teacher 420 responds to a message from learner 400 two days after learner 400 had sent the message.
  • In other embodiments of the present invention, communication between conversation partner [0127] 410 and teacher 420 may also be provided, although not explicitly shown in FIG. 8.
  • FIG. 9 illustrates a typical graphical user interface (GUI) according to an embodiment of the present invention. FIG. 9 includes a typical web browser interface [0128] 1400 including navigation portions 1410 and a display portion 1420 presented to a learner or a conversation partner.
  • In this embodiment, display portion [0129] 1420 includes a typical GUI presented to a learner. The GUI includes a conversation partner portion 1430, a learner portion 1440, and a workbook icon 1450. As is illustrated, conversation partner portion 1430 and a learner portion 1440 include a series of icons 1460-1480.
  • In the present embodiment, when the user selects icon [0130] 1460, the spoken message by the respective person is played out from the computer. When the user selects icon 1470, the message playback pauses. Further, when the user selects icon 1480, the message playback stops. For example, if the user selects icon 1460 in learner portion 1440, the user will hear the audio message left by the learner.
  • When a learner selects workbook icon [0131] 1450, in this embodiment, the learner retrieves exercises, hints, and the like from the teacher, as is seen in the next two figures. In other embodiments, other types of functions may be made available via the GUI, for example, composing a textual message for a teacher, a conversation partner, a learner, and the like.
  • FIGS. 10 and 11 illustrate typical graphical user interfaces (GUIs) according to an embodiment of the present invention. FIGS. 10 and 11 include a typical web browser interface [0132] 1500 including navigation portions 1510 and a display portion 1520 presented to a learner. As illustrated in this example, display portion 1520 points out to the learner that the learner has made a pronunciation error in her message to the conversation partner.
  • In this embodiment, display portion [0133] 1520 includes a typical GUI presented to a learner. The GUI includes a snippet portion 1530 and an exercise portion 1540. As is illustrated, snippet portion 1530 includes a series of icons 1550-1570.
  • In the present embodiment, when the learner selects icon [0134] 1550, the snippet of the learner's spoken message is played out from the computer. As an example, if during the learner's message to her conversation partner she says “vege TA ble,” the teacher may copy that portion of the message and store that message snippet. When icon 1550 is selected, that snippet is typically played back to the learner. When the user selects icon 1560, the playback pauses. Further, when the user selects icon 1570, the playback stops.
  • In the example, exercise portion 1540 includes icons [0135] 1580, 1590, and 1595. When the learner selects icon 1580, the learner can record her attempt to correct the problem pointed out by the teacher. For example, in this case, the learner will attempt to pronounce the word “vegetable” differently. Next, by selecting icon 1590, the learner can play back her attempt to correct the problem. In one embodiment of the present invention, the learner is given a set number of attempts to correct the problem on her own until the correct solution is presented to the learner. It is believed that such self-analysis improves language fluency.
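The attempt-limiting behavior described above might be sketched as follows. The attempt count and the callbacks are assumptions for illustration; the embodiment shown in FIG. 10 uses two tries, so that default is used here.

```python
def run_exercise(attempt_fn, is_correct, max_attempts=2):
    """Return (solved, attempts_used, solution_shown)."""
    for n in range(1, max_attempts + 1):
        if is_correct(attempt_fn()):
            return True, n, False
    # Attempts exhausted: reveal the correct solution to the learner.
    return False, max_attempts, True

# Toy stand-in: the learner repeats the mispronunciation both times.
attempts = iter(["vege TA ble", "vege TA ble"])
solved, used, shown = run_exercise(
    attempt_fn=lambda: next(attempts),
    is_correct=lambda a: a == "VEGE ta ble")
```

After the limit is reached, the solution portion (such as portion 1620 in the next figure) would be displayed to the learner.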
  • In the present embodiment, upon selection of icon [0136] 1595, the learner's recorded speech is sent to the server. In other embodiments, other types of functions may be made available via the GUI, for example, composing a textual message for a teacher, a conversation partner, a learner, and the like.
  • In the example illustrated in FIG. 10, a GUI of the exercise solutions is presented to the learner after two tries, as shown. In alternative embodiments, greater or lesser numbers of attempts or tries may be provided to the learner depending on the specific application. In this example, the learner may be presented with portions [0137] 1600 and 1610 associated with the learner's attempts to correct the problem.
  • With this GUI, the learner is also presented with a solution portion [0138] 1620.
  • In this example, solution portion 1620 allows the learner to hear how a native speaker of the language would say the snippet, and the like. Continuing the example above, selection of play icon [0139] 1650 results in the playing of the word “VEGE ta ble.”
  • This embodiment also includes a portion [0140] 1630 that allows the learner to record and listen to her attempts to repeat the “solution” to the problem. A learner clicks on icon 1655 to record her attempt. In the present embodiment, the learner's attempts in portions 1600 and 1610, as well as in 1630 are forwarded to the teacher for the teacher's analysis. In alternative embodiments, only the recording made in portion 1630 is sent to the teacher.
  • In one embodiment of the present invention, a text portion 1640 is provided to explain to the learner in more detail the nature of the problem in the learner's speech, and the like. Further, text portion 1640 may include a transcript of the snippet. For example, if the problem is related to verb tenses, the teacher may transcribe the words in the snippet and explain in detail what is wrong and how to correct such errors. [0141]
  • In other embodiments, other types of functions may be made available via the GUI, for example, composing a text or spoken message for a teacher, a conversation partner, a learner, and the like. Further, fewer functions may also be offered to the learner in more compact embodiments of the present invention. [0142]
  • In the present embodiment, the following actions are performed by the teacher in response to the spoken message from the learner. Initially, the teacher may listen to the entire recording produced by the learner. During this process, the teacher may note, via a mouse click or the like, each point at which a “mistake” is heard. This marking may be done in conjunction with the teacher graphical user interface illustrated below. [0143]
  • In this embodiment, when the teacher marks or notes a mistake, a segment of speech immediately preceding the mark is noted. The segment of speech may be of any length; in this embodiment, the segment of speech represents five seconds. The marked segments are further processed below. [0144]
  • After the teacher has reviewed the entire spoken message, the teacher may give an overall grade. Next, the teacher may select marked segments graphically. Upon selection of a marked segment, the speech corresponding to the mark is played back to the teacher. The beginning and ending points of each marked segment may be adjusted by the teacher to capture the entire problem speech. For example, the teacher may stretch the time from 5 seconds to 10 seconds, with 7 seconds preceding the original mark and 3 seconds following the original mark. In this embodiment, the defined segment is the “snippet” described in the embodiment above. [0145]
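The marking scheme above can be illustrated with a minimal sketch: a mark made at time t initially captures the five seconds preceding t, and the teacher may later stretch the window (e.g., 7 seconds before and 3 seconds after the mark). The names (`Snippet`, `extract_samples`) and the sample rate are illustrative assumptions, not taken from the specification.

```python
SAMPLE_RATE = 16_000  # samples per second; an assumed value for illustration

class Snippet:
    """A teacher's mark plus an adjustable window around it."""

    def __init__(self, mark_time, before=5.0, after=0.0):
        self.mark_time = mark_time  # seconds into the learner's message
        self.before = before        # seconds captured before the mark
        self.after = after          # seconds captured after the mark

    def adjust(self, before, after):
        """Teacher moves the end points to capture the entire problem speech."""
        self.before = before
        self.after = after

    def bounds(self, total_duration):
        """Clamp the window to the recording's actual duration."""
        start = max(0.0, self.mark_time - self.before)
        end = min(total_duration, self.mark_time + self.after)
        return start, end

def extract_samples(audio, snippet):
    """Return the slice of PCM samples covered by the snippet."""
    total = len(audio) / SAMPLE_RATE
    start, end = snippet.bounds(total)
    return audio[int(start * SAMPLE_RATE):int(end * SAMPLE_RATE)]
```

For example, a mark at 20 seconds with the default window covers 15-20 s; after the teacher stretches it to 7 s before and 3 s after, it covers 13-23 s.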
  • In the present embodiment, for each segment, the teacher categorizes the type of error in language production. Typical categories may include word choice, grammar, pronunciation, vocabulary, and the like. Next, the teacher records the speech in the segment the way the teacher would say it. The teacher may also include further text description of the problem, if desired. [0146]
  • Once the errors are marked and processed by the teacher, the teacher is finished. In one embodiment of the present invention, the teacher marks only a limited number of mistakes per learner message, for example, three. In other embodiments, the number of mistakes may be definable by the particular application, by the learner, or the like. The resulting exercise is then forwarded to the learner for her practice, as described above. [0147]
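The exercise-assembly step above can be sketched as a simple data structure: each marked segment carries its error category and the teacher's corrected recording, and the per-message cap (three, in the embodiment described) is enforced when segments are added. The `Exercise` and `MarkedSegment` structures are illustrative assumptions, not the specification's data model.

```python
from dataclasses import dataclass, field

# Typical error categories per the description; the set is not exhaustive.
CATEGORIES = {"word choice", "grammar", "pronunciation", "vocabulary"}
MAX_MISTAKES_PER_MESSAGE = 3  # definable per application or per learner

@dataclass
class MarkedSegment:
    start: float              # seconds into the learner's message
    end: float
    category: str             # one of CATEGORIES
    teacher_recording: bytes  # the teacher saying the segment correctly
    note: str = ""            # optional text description of the problem

@dataclass
class Exercise:
    learner_id: str
    segments: list = field(default_factory=list)

    def add_segment(self, seg):
        """Add a marked segment, refusing once the per-message cap is hit."""
        if seg.category not in CATEGORIES:
            raise ValueError(f"unknown category: {seg.category}")
        if len(self.segments) >= MAX_MISTAKES_PER_MESSAGE:
            return False  # cap reached; remaining mistakes are not flagged
        self.segments.append(seg)
        return True
```

Capping at add-time mirrors the rationale in the description: the teacher flags only the most important errors, and anything beyond the cap is simply not included in the exercise.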
  • The learner's results are returned to the teacher, who then may grade the learner's performance, or the like. [0148]
  • FIGS. 12-16 illustrate graphical user interfaces according to an embodiment of the present invention. FIG. 12 illustrates the GUI provided to the teacher for overall grading of the learner's performance. FIG. 13 illustrates the GUI provided to the teacher for marking segments of the learner's message for later review. FIG. 14 illustrates the GUI provided to the teacher for reviewing and moving the end points of the segments of the learner's message. FIG. 15 illustrates the GUI provided to the teacher for recording the speech the way the teacher would say it. FIG. 16 illustrates the GUI provided to the teacher for noting the type of problem in the speech segment (snippet) as well as providing textual instruction, exercises, feedback, and the like. [0149]
  • Although not explicitly shown, the teacher typically may point out other types of errors in each learner's message. For example, the teacher may point out only three errors in each message. In such a case, it is believed that the teacher will flag only the most important speaking errors to the learner. Pointing out substantially more than three errors per message is not only burdensome for teachers, but may be discouraging to learners attempting language fluency. As a result, indication of a smaller number of errors per message is used in the current embodiments of the present invention. [0150]
  • CONCLUSION
  • In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. Many changes or modifications are readily envisioned. For example, the computer network 210 may be a LAN, or other type of network. [0151]
  • In light of the present patent application, the addition of other functionality in alternative embodiments is envisioned. For example, on the server-side, storage of different information in the database and database queries may be provided to enhance language teaching methods. For example, the feedback from teachers to learners may be maintained and classified in any number of “common” error categories. As another example, the entire stream of messages between the learners, conversation partners, and teachers may be stored in the database. Such data may be useful for evaluating the performance of the teacher, the conversation partner, and the like. [0152]
  • In another embodiment, the server-side may include speech processing algorithms to convert speech to text. Such an embodiment would reduce the workload of the teacher identifying segments of speech to cut and paste into the feedback to the learner. The teachers may also have additional client software downloaded onto their computers that enable them to easily listen to the learner's message, isolate and manipulate portions of the learner's message, and use these portions (snippets) to put together exercises for the learner. [0153]
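If the server-side speech-to-text processing described above produced a word-level transcript with timestamps, the teacher could select words by index instead of marking raw audio, and the snippet boundaries would follow automatically. The sketch below assumes such a transcript already exists (the recognition engine itself is outside its scope); the function name and padding value are illustrative assumptions.

```python
def words_to_segment(words, first, last, pad=0.25):
    """Build a snippet from a word-level transcript.

    words: list of (text, start_s, end_s) tuples as a recognizer might
    emit them; first/last: indices of the words the teacher selected.
    Returns (text, start, end) covering words[first:last + 1], padded by
    `pad` seconds on each side and clamped at the recording's start.
    """
    chosen = words[first:last + 1]
    text = " ".join(w[0] for w in chosen)
    start = max(0.0, chosen[0][1] - pad)
    end = chosen[-1][2] + pad
    return text, start, end
```

For instance, selecting the single mispronounced word in a transcript yields both the text to paste into the feedback and the audio segment boundaries, without the teacher having to scrub through the recording.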
  • In other embodiments of the present invention, streaming media technologies may also be used to pass speech data from the database to the learner, the teacher, and/or the conversation partner. Examples of streaming media include those available from RealNetworks, Inc., and the like. In such examples, the role of cache 240 may be reduced. [0154]
  • In an alternative embodiment of the present invention, the web browser application program is not required. Instead, any type of client application may be developed for use on the Teacher's, Conversation Partner's, Learner's, or Administrator's computers. The client application may provide all the functionality described above. [0155]
  • In light of the present patent disclosure, other applications of the above technology are envisioned. For example, various aspects of “language production” may be analyzed and critiqued, such as speech rate, grammar, word choice, intonation, vocabulary, pronunciation, and the like. Further, the above may be applied to music training, art training, collaborative learning, customer support, training, counseling, and the like. [0156]
  • The block diagrams of the architecture and flow charts are grouped for ease of understanding. However, it should be understood that combinations of blocks, additions of new blocks, re-arrangement of blocks, and the like are contemplated in alternative embodiments of the present invention. [0157]
  • The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims. [0158]
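The asynchronous exchange at the heart of the claimed method can be sketched end to end: the learner plays a conversation partner's pre-recorded message, records a reply at her own pace, and the reply is sent to both the conversation partner and the language instructor, whose spoken feedback is then played back. Transport and audio I/O are stubbed out as callables so only the flow is shown; none of these names come from the specification.

```python
def fluency_exchange(receive, play, record, send):
    """One round of the learner-side message flow described in claim 1.

    receive(who) -> message bytes; play(msg) plays audio; record() returns
    the learner's recorded reply; send(who, msg) delivers a message.
    """
    partner_msg = receive("partner")     # pre-recorded spoken message
    play(partner_msg)
    reply = record()                     # asynchronous: recorded at the
                                         # learner's own pace, not live
    send("partner", reply)               # reply goes to BOTH recipients
    send("instructor", reply)
    instruction = receive("instructor")  # instructor's responsive feedback
    play(instruction)
    return reply, instruction
```

Because each step is an ordinary function call, the same flow works whether the transport is a web browser posting files to a server or a dedicated client application, matching the alternative embodiments described above.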

Claims (25)

What is claimed is:
1. A method for language fluency training on a computer system having an audio output device comprises:
invoking an application program;
receiving a pre-recorded file including a message in a spoken language from a conversation partner;
playing the message to a user seeking fluency training in the spoken language from within the application program on the audio output device;
asynchronously with playing the message, recording a user file including a message in the spoken language from the user in response to the message from within the application program;
outputting the user file to the conversation partner and to a language instructor;
receiving an instruction file including an instruction message in the spoken language from the language instructor in response to the user message; and
playing the instruction message to the user from within the application program on the audio output device.
2. The method of claim 1 wherein the user file comprises audio data.
3. The method of claim 1 wherein the application program is a web browser.
4. The method of claim 2 wherein the instruction file includes portions of the audio data from the user message.
5. The method of claim 1 wherein the instruction message includes critiques of language production of the user.
6. The method of claim 1 wherein the instruction message includes critiques of language production of the user selected from the class consisting of grammar, pronunciation, and word choice.
7. The method of claim 1 wherein the instruction file from the language instructor comprises audio and textual data.
8. The method of claim 1 wherein the instruction message from the language instructor comprises grammatical exercises for the user.
9. A computer program product for a computer system including a processor, and an audio output device, for language fluency training, the computer program product comprising:
code that directs the processor to receive a recorded message in a spoken human language from a conversation partner;
code that directs the processor to play the recorded message with the audio output device to a user who is not fluent in the spoken human language;
code that directs the processor to record a user message in the spoken human language from the user after the recorded message is played;
code that directs the processor to send the user message to the conversation partner and to a language instructor;
code that directs the processor to receive an instruction message from the language instructor, the instruction message responsive to the user message; and
code that directs the processor to play the instruction message with the audio output device to the user;
wherein the codes reside in a tangible media.
10. The computer program product of claim 9 wherein the recorded message comprises audio data.
11. The computer program product of claim 9 wherein the recorded message comprises audio and video data.
12. The computer program product of claim 10 wherein the instruction message includes portions of the user message.
13. The computer program product of claim 10 wherein the instruction message includes grammatical feedback to the user.
14. The computer program product of claim 10 wherein the instruction message from the language instructor comprises grammatical exercises for the user.
15. The computer program product of claim 10 wherein the instruction message from the language instructor comprises audio and textual data.
16. The computer program product of claim 10 wherein the instruction message from the language instructor comprises pronunciation exercises for the user.
17. A computer system for language fluency training comprises:
a processor;
an audio output device coupled to the processor; and
a readable memory coupled to the processor, the readable memory comprising:
code that implements a web browser;
code that directs the processor to store a recorded file comprising speech in a spoken language from a conversation partner;
code that directs the processor to play the recorded file to a user desiring to be fluent in the spoken language with the audio output device;
code that directs the processor to record a user file comprising speech from the user in the spoken language, the user file formed after the recorded file has been played;
code that directs the processor to send the user file to the conversation partner and to a language instructor;
code that directs the processor to store an instruction file from the language instructor, the instruction file formed in response to the user file; and
code that directs the processor to play the instruction file to the user with the audio output device.
18. The computer system of claim 17 wherein the user file comprises audio data.
19. The computer system of claim 17 wherein the user file comprises audio and video data.
20. The computer system of claim 18 wherein the instruction file includes portions of the audio data from the user file.
21. The computer system of claim 17 wherein the instruction file includes grammatical feedback to the user.
22. The computer system of claim 17 wherein the instruction file includes pronunciation feedback to the user.
23. The computer system of claim 17 wherein the instruction file from the language instructor comprises audio and textual data.
24. The computer system of claim 17 wherein the instruction file from the language instructor comprises pronunciation exercises for the user.
25. The method of claim 1 wherein the conversation partner is also the language instructor.
US09/942,529 1999-11-09 2001-08-29 Method and apparatus for fluency language training Abandoned US20020072039A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US16443399P true 1999-11-09 1999-11-09
US09/473,550 US6302695B1 (en) 1999-11-09 1999-12-28 Method and apparatus for language training
US09/942,529 US20020072039A1 (en) 1999-11-09 2001-08-29 Method and apparatus for fluency language training

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/942,529 US20020072039A1 (en) 1999-11-09 2001-08-29 Method and apparatus for fluency language training

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/473,550 Continuation US6302695B1 (en) 1999-11-09 1999-12-28 Method and apparatus for language training

Publications (1)

Publication Number Publication Date
US20020072039A1 true US20020072039A1 (en) 2002-06-13

Family

ID=26860555

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/473,550 Expired - Fee Related US6302695B1 (en) 1999-11-09 1999-12-28 Method and apparatus for language training
US09/942,529 Abandoned US20020072039A1 (en) 1999-11-09 2001-08-29 Method and apparatus for fluency language training

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/473,550 Expired - Fee Related US6302695B1 (en) 1999-11-09 1999-12-28 Method and apparatus for language training

Country Status (4)

Country Link
US (2) US6302695B1 (en)
JP (1) JP2003514257A (en)
AU (1) AU1477001A (en)
WO (1) WO2001035378A1 (en)

Families Citing this family (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8342854B2 (en) * 1996-09-25 2013-01-01 Educate Online Technology, Llc Language-based computer generated instructional material
US6898411B2 (en) * 2000-02-10 2005-05-24 Educational Testing Service Method and system for online teaching using web pages
EP1358594A1 (en) 2000-03-13 2003-11-05 Volt Information Sciences, Inc. System and method for internet based procurement of goods and services
WO2001091028A1 (en) * 2000-05-20 2001-11-29 Leem Young Hie On demand contents providing method and system
US6705869B2 (en) * 2000-06-02 2004-03-16 Darren Schwartz Method and system for interactive communication skill training
KR100355072B1 (en) * 2000-07-06 2002-10-05 한상종 Devided multimedia page and method and system for studying language using the page
US20020115044A1 (en) * 2001-01-10 2002-08-22 Zeev Shpiro System and method for computer-assisted language instruction
US20020058234A1 (en) * 2001-01-11 2002-05-16 West Stephen G. System and method for teaching a language with interactive digital televison
GB2393013B (en) * 2001-02-08 2005-05-04 Kim Ji-Tae The method of education and scholastic management for cyber education system utilizing internet
US20100055658A1 (en) * 2002-02-26 2010-03-04 Spherical Dynamics Inc System for and method for psychological assessment
WO2003067552A1 (en) * 2002-02-08 2003-08-14 Geoffrey Alan Mead Language learning method and system
US20030170596A1 (en) * 2002-03-07 2003-09-11 Blank Marion S. Literacy system
US8210850B2 (en) 2002-03-07 2012-07-03 Blank Marion S Literacy education system for students with autistic spectrum disorders (ASD)
US20030200168A1 (en) * 2002-04-10 2003-10-23 Cullen Andrew A. Computer system and method for facilitating and managing the project bid and requisition process
US7925568B2 (en) * 2002-04-10 2011-04-12 Volt Information Sciences, Inc. Computer system and method for producing analytical data related to the project bid and requisition process
US7558745B2 (en) 2002-09-30 2009-07-07 Volt Information Sciences, Inc. Method of and system for enabling and managing sub-contracting entities
US7698146B2 (en) 2002-04-24 2010-04-13 Volt Information Sciences Inc. System and method for collecting and providing resource rate information using resource profiling
US20030212604A1 (en) 2002-05-09 2003-11-13 Cullen Andrew A. System and method for enabling and maintaining vendor qualification
US20040152055A1 (en) * 2003-01-30 2004-08-05 Gliessner Michael J.G. Video based language learning system
US7524191B2 (en) 2003-09-02 2009-04-28 Rosetta Stone Ltd. System and method for language instruction
US20060121422A1 (en) * 2004-12-06 2006-06-08 Kaufmann Steve J System and method of providing a virtual foreign language learning community
US20050053900A1 (en) * 2003-09-05 2005-03-10 Steven Kaufmann Method of teaching a foreign language to a student providing measurement in a context based learning system
US20050181336A1 (en) * 2004-02-17 2005-08-18 Bakalian Kalust C. System and method for learning letters and numbers of a foreign language
EP1730657A4 (en) 2004-03-02 2008-04-23 Volt Inf Sciences Inc Method of and system for consultant re-seller business informatiojn transfer
US8109765B2 (en) * 2004-09-10 2012-02-07 Scientific Learning Corporation Intelligent tutoring feedback
JP2006133672A (en) * 2004-11-09 2006-05-25 Matsushita Electric Ind Co Ltd Portable device and system for learning language
WO2006086690A2 (en) * 2005-02-11 2006-08-17 Volt Information Sciences Inc. Project work change in plan/scope administrative and business information synergy system and method
US20060188860A1 (en) * 2005-02-24 2006-08-24 Altis Avante, Inc. On-task learning system and method
US8764455B1 (en) 2005-05-09 2014-07-01 Altis Avante Corp. Comprehension instruction system and method
US20060194184A1 (en) * 2005-02-25 2006-08-31 Wagner Geum S Foreign language instruction over the internet
US20090239201A1 (en) * 2005-07-15 2009-09-24 Richard A Moe Phonetic pronunciation training device, phonetic pronunciation training method and phonetic pronunciation training program
EP1915732A4 (en) * 2005-08-01 2010-07-14 Volt Inf Sciences Inc Outsourced service level agreement provisioning management system and method
US7657221B2 (en) * 2005-09-12 2010-02-02 Northwest Educational Software, Inc. Virtual oral recitation examination apparatus, system and method
CN1963887A (en) * 2005-11-11 2007-05-16 王薇茜 Self-help language study system comply to speech sense
JP5426066B2 (en) * 2006-04-12 2014-02-26 任天堂株式会社 Display update program and display update apparatus
US8219402B2 (en) * 2007-01-03 2012-07-10 International Business Machines Corporation Asynchronous receipt of information from a user
US20090148823A1 (en) * 2007-12-05 2009-06-11 Wall Street Institute, Kft System, method, and computer program product for providing distributed learning content
US8340968B1 (en) * 2008-01-09 2012-12-25 Lockheed Martin Corporation System and method for training diction
US8652202B2 (en) 2008-08-22 2014-02-18 Edwards Lifesciences Corporation Prosthetic heart valve and delivery apparatus
WO2011019934A1 (en) 2009-08-12 2011-02-17 Volt Information Sciences, Inc System and method for productizing human capital labor employment positions/jobs
US20120189989A1 (en) * 2011-01-24 2012-07-26 Ann Margaret Mowery Assisted Leveled Reading System & Method
US20130316314A1 (en) * 2011-03-17 2013-11-28 Dah-Torng Ling Process for producing perfect-content-validity tests
US9111457B2 (en) * 2011-09-20 2015-08-18 International Business Machines Corporation Voice pronunciation for text communication
US20130115586A1 (en) * 2011-11-07 2013-05-09 Shawn R. Cornally Feedback methods and systems
US20180151087A1 (en) * 2016-11-25 2018-05-31 Daniel Wise Computer based method for learning a language

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2777901A (en) * 1951-11-07 1957-01-15 Leon E Dostert Binaural apparatus for teaching languages
GB8817705D0 (en) * 1988-07-25 1988-09-01 British Telecomm Optical communications system
US5810599A (en) * 1994-01-26 1998-09-22 E-Systems, Inc. Interactive audio-visual foreign language skills maintenance system and method
US5697789A (en) * 1994-11-22 1997-12-16 Softrade International, Inc. Method and system for aiding foreign language instruction
IL120622A (en) * 1996-04-09 2000-02-17 Raytheon Co System and method for multimodal interactive speech and language training
US5766015A (en) * 1996-07-11 1998-06-16 Digispeech (Israel) Ltd. Apparatus for interactive language training
US6077085A (en) * 1998-05-19 2000-06-20 Intellectual Reserve, Inc. Technology assisted learning

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040003048A1 (en) * 2002-03-20 2004-01-01 Bellsouth Intellectual Property Corporation Outbound notification using customer profile information
US7996481B2 (en) * 2002-03-20 2011-08-09 At&T Intellectual Property I, L.P. Outbound notification using customer profile information
US7720910B2 (en) 2002-07-26 2010-05-18 International Business Machines Corporation Interactive filtering electronic messages received from a publication/subscription service
US20040117444A1 (en) * 2002-07-26 2004-06-17 International Business Machines Corporation Instant message response message with user information incorporated therein
US20040122906A1 (en) * 2002-07-26 2004-06-24 International Business Machines Corporation Authorizing message publication to a group of subscribing clients via a publish/subscribe service
US20040019637A1 (en) * 2002-07-26 2004-01-29 International Business Machines Corporaion Interactive one to many communication in a cooperating community of users
US20050267896A1 (en) * 2002-07-26 2005-12-01 International Business Machines Corporation Performing an operation on a message received from a publish/subscribe service
US20050273499A1 (en) * 2002-07-26 2005-12-08 International Business Machines Corporation GUI interface for subscribers to subscribe to topics of messages published by a Pub/Sub service
US20060020658A1 (en) * 2002-07-26 2006-01-26 International Business Machines Corporation Saving information related to a concluding electronic conversation
US20060031295A1 (en) * 2002-07-26 2006-02-09 International Business Machines Corporation Querying a dynamic database with a message directed to anonymous subscribers of a pub/sub service
US20060031533A1 (en) * 2002-07-26 2006-02-09 International Business Machines Corporation Throttling response message traffic presented to a user
US20060036679A1 (en) * 2002-07-26 2006-02-16 International Business Machines Corporation Pub/sub message invoking a subscribers client application program
US9124447B2 (en) 2002-07-26 2015-09-01 International Business Machines Corporation Interactive client computer communication
US9100219B2 (en) 2002-07-26 2015-08-04 International Business Machines Corporation Instant message response message
US8849893B2 (en) 2002-07-26 2014-09-30 International Business Machines Corporation Querying a dynamic database with an electronic message directed to subscribers of a publish/subscribe computer service
US8301701B2 (en) 2002-07-26 2012-10-30 International Business Machines Corporation Creating dynamic interactive alert messages based on extensible document definitions
US20040019645A1 (en) * 2002-07-26 2004-01-29 International Business Machines Corporation Interactive filtering electronic messages received from a publication/subscription service
US7941488B2 (en) * 2002-07-26 2011-05-10 International Business Machines Corporation Authorizing message publication to a group of subscribing clients via a publish/subscribe service
US7720914B2 (en) * 2002-07-26 2010-05-18 International Business Machines Corporation Performing an operation on a message received from a publish/subscribe service
US7831670B2 (en) 2002-07-26 2010-11-09 International Business Machines Corporation GUI interface for subscribers to subscribe to topics of messages published by a Pub/Sub service
US7734709B2 (en) 2002-07-26 2010-06-08 International Business Machines Corporation Controlling computer response message traffic
US7890572B2 (en) * 2002-07-26 2011-02-15 International Business Machines Corporation Pub/sub message invoking a subscribers client application program
US20040128353A1 (en) * 2002-07-26 2004-07-01 Goodman Brian D. Creating dynamic interactive alert messages based on extensible document definitions
US20120179978A1 (en) * 2003-12-01 2012-07-12 Research In Motion Limited Previewing a new event on a small screen device
US8631353B2 (en) * 2003-12-01 2014-01-14 Blackberry Limited Previewing a new event on a small screen device
US9830045B2 (en) 2003-12-01 2017-11-28 Blackberry Limited Previewing a new event on a small screen device
US20100211888A1 (en) * 2004-08-03 2010-08-19 Research In Motion Limited Method and apparatus for providing minimal status display
US8595630B2 (en) 2004-08-03 2013-11-26 Blackberry Limited Method and apparatus for providing minimal status display
US20060111902A1 (en) * 2004-11-22 2006-05-25 Bravobrava L.L.C. System and method for assisting language learning
US8272874B2 (en) * 2004-11-22 2012-09-25 Bravobrava L.L.C. System and method for assisting language learning
US20070026958A1 (en) * 2005-07-26 2007-02-01 Barasch Michael A Method and system for providing web based interactive lessons
US20080318200A1 (en) * 2005-10-13 2008-12-25 Kit King Kitty Hau Computer-Aided Method and System for Guided Teaching and Learning
US20070166685A1 (en) * 2005-12-22 2007-07-19 David Gilbert Automated skills assessment
US20080118905A1 (en) * 2006-11-16 2008-05-22 Oki Electric Industry Co., Ltd. Interactive lecture support system
US20080256563A1 (en) * 2007-04-13 2008-10-16 Cheng Han Systems and methods for using a lodestone in application windows to insert media content
US20080299523A1 (en) * 2007-05-31 2008-12-04 National Taiwan University Teaching material generation methods and systems
US8758017B2 (en) * 2007-05-31 2014-06-24 National Taiwan University Teaching material generation methods and systems
US20080306738A1 (en) * 2007-06-11 2008-12-11 National Taiwan University Voice processing methods and systems
US8543400B2 (en) * 2007-06-11 2013-09-24 National Taiwan University Voice processing methods and systems
US8740621B1 (en) * 2007-07-17 2014-06-03 Samuel Gordon Breidner Apparatus and system for learning a foreign language
US20090083288A1 (en) * 2007-09-21 2009-03-26 Neurolanguage Corporation Community Based Internet Language Training Providing Flexible Content Delivery
US20100120002A1 (en) * 2008-11-13 2010-05-13 Chieh-Chih Chang System And Method For Conversation Practice In Simulated Situations
US20100323332A1 (en) * 2009-06-22 2010-12-23 Gregory Keim Method and Apparatus for Improving Language Communication
US8840400B2 (en) * 2009-06-22 2014-09-23 Rosetta Stone, Ltd. Method and apparatus for improving language communication
US20110143323A1 (en) * 2009-12-14 2011-06-16 Cohen Robert A Language training method and system
EP2783358A1 (en) * 2011-11-21 2014-10-01 Age of Learning, Inc. Language teaching system that facilitates mentor involvement
EP2783358A4 (en) * 2011-11-21 2015-04-22 Age Of Learning Inc Language teaching system that facilitates mentor involvement

Also Published As

Publication number Publication date
US6302695B1 (en) 2001-10-16
AU1477001A (en) 2001-06-06
JP2003514257A (en) 2003-04-15
WO2001035378A1 (en) 2001-05-17

Similar Documents

Publication Publication Date Title
US9536441B2 (en) Organizing online test taker icons
US6513042B1 (en) Internet test-making method
US6705869B2 (en) Method and system for interactive communication skill training
Hsu et al. Using audioblogs to assist English-language learning: An investigation into student perception
US8385525B2 (en) Internet accessed text-to-speech reading assistant
Mark et al. Reducing the effects of isolation and promoting inclusivity for distance learners through podcasting
US6751439B2 (en) Method and system for teaching music
US6909874B2 (en) Interactive tutorial method, system, and computer program product for real time media production
Kemp et al. Clinical language sampling practices: Results of a survey of speech-language pathologists in the United States
US7058354B2 (en) Learning activity platform and method for teaching a foreign language over a network
US20060286527A1 (en) Interactive teaching web application
US7031651B2 (en) System and method of matching teachers with students to facilitate conducting online private instruction over a global network
US20020085029A1 (en) Computer based interactive collaboration system architecture
US20050233293A1 (en) Computer system configured to store questions, answers, and keywords in a database that is utilized to provide training to users
US20020087592A1 (en) Presentation file conversion system for interactive collaboration
US6527556B1 (en) Method and system for creating an integrated learning environment with a pattern-generator and course-outlining tool for content authoring, an interactive learning tool, and related administrative tools
US20020085030A1 (en) Graphical user interface for an interactive collaboration system
US20040191744A1 (en) Electronic training systems and methods
US20020172931A1 (en) Apparatus, system and method for remote monitoring of testing environments
TWI263950B (en) System and method for providing educational service via communication network
US20050053900A1 (en) Method of teaching a foreign language to a student providing measurement in a context based learning system
US8023636B2 (en) Interactive dialog-based training method
AU2006301793B2 (en) Computer-aided method and system for guided teaching and learning
US20030039948A1 (en) Voice enabled tutorial system and method
CN1333611A (en) Interactive multimedium virtual classroom needing small online network band width

Legal Events

Date Code Title Description
AS Assignment

Owner name: MINDSTECH INTERNATIONAL INC., CALIFORNIA

Free format text: ADDRESS CHANGE/CHANGE OF NAME;ASSIGNOR:MINDS & TECHNOLOGIES, INC.;REEL/FRAME:013433/0646

Effective date: 20020927

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION