WO2008027033A1 - A system and method to enhance human associative memory - Google Patents

A system and method to enhance human associative memory Download PDF

Info

Publication number
WO2008027033A1
WO2008027033A1 (PCT/US2006/033670, US2006033670W)
Authority
WO
WIPO (PCT)
Prior art keywords
user
associative
memory
pair
review
Prior art date
Application number
PCT/US2006/033670
Other languages
French (fr)
Inventor
Yang Wei
Steve Chen
Shuanhu Wang
Qi Yu
Original Assignee
Init Technology Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Init Technology Inc. filed Critical Init Technology Inc.
Priority to PCT/US2006/033670 priority Critical patent/WO2008027033A1/en
Publication of WO2008027033A1 publication Critical patent/WO2008027033A1/en

Links

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00: Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/06: Electrically-operated teaching apparatus or devices working with questions and answers of the multiple-choice answer-type, i.e. where a given question is provided with a series of answers and a choice has to be made from the answers
    • G09B19/00: Teaching not covered by other main groups of this subclass
    • G09B5/00: Electrically-operated educational appliances

Definitions

  • This invention relates generally to a computer-facilitated system to exercise the memory mechanism of a user's brain and to enhance his associative memory.
  • the system contains a memory engine to optimize the user-computer interaction so as to optimally exercise the memory mechanism of the user's brain and to generate a superior long-term user memory.
  • Associative memory, the memory that A goes with or equals B, is one of the fundamental components of human intelligence. It should also be noted that associative memory is not limited to human beings and can be observed in many other living beings as well. Here, A and B form an associative pair, and each could be a concept, word, symbol or sensation presented in visual, acoustic or other multimedia formats, such as tactile and olfactory sensations, to stimulate a human. The process of forming an associative memory is called associative learning, which is illustrated by the following examples:
  • Training an appropriate behavior: forming an association between an occasion or environment setting and the appropriate behavior.
  • An example of an associative pair (A, B) is: A is the English word "apple" and B is the corresponding Chinese word "PingGuo".
  • the term "golden sequence” is defined as the temporal pattern of the simple repetitions of learning a specific associative pair (A, B) that generates the strongest long-term memory in a specific user.
  • the learning or exercise task can be expressed as forming associations between a set of associative pairs like (A1, B1), (A2, B2), ..., (Ai, Bi), ..., (AN, BN). Therefore, to achieve the best learning result, it is desirable to arrange the overall learning sequence for the set of associative pairs so that the actual temporal pattern for the repetitions of each associative pair is, or is close to, its golden sequence.
  • the review time intervals of the golden sequence for a user with a better memory are longer than those for a user with a poorer memory
  • the review time intervals of the golden sequence of a word with an abstract meaning (e.g., EFFECT) are shorter than those of a word with a concrete meaning (e.g., APPLE)
  • a computer facilitated system is proposed to efficiently enhance a user's associative memory.
  • the system includes: a user interface device for presenting information to the user and receiving user responses; a processor for processing the above information and user responses; a contents database for storing subject matter to be presented to the user in the form of associative pairs;
  • a memory engine; an operating program; and a user interface design that allows the user to self-evaluate his answer.
  • First, the operating program presents the user, through the user interface device, with associative pairs retrieved from the contents database. Second, the operating program creates a trial and memory history in the memory engine based upon the user responses to the associative pairs. Third, the memory engine determines an optimal real-time sequence and order based upon the trial and memory history. Fourth, the operating program presents the user with the associative pairs again, along with a set of new associative pairs.
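As a rough illustration of this four-step loop, the sketch below pairs a toy memory engine with a presentation loop. All class and function names are hypothetical, and the fewest-correct-recalls ordering is only a placeholder for the engine's actual scheduling:

```python
class MemoryEngine:
    """Toy stand-in for the memory engine: keeps a trial history and a
    naive schedule (pairs with the fewest correct recalls come first)."""
    def __init__(self):
        self.history = {}  # pair -> list of bool trial results

    def next_pair(self, contents):
        # Placeholder scheduling: fewest correct recalls first;
        # unseen pairs count as zero and are served early.
        return min(contents, key=lambda p: sum(self.history.get(p, [])))

    def record_trial(self, pair, correct):
        self.history.setdefault(pair, []).append(correct)

def learning_loop(contents, engine, answer_fn, n_trials):
    """Four-step loop: retrieve a pair, present it, record the trial,
    and let the engine re-sequence before the next presentation."""
    shown = []
    for _ in range(n_trials):
        pair = engine.next_pair(contents)   # steps 1 and 4: presentation
        correct = answer_fn(pair)           # step 2: user response
        engine.record_trial(pair, correct)  # step 2: trial history
        shown.append(pair)                  # step 3 happens inside next_pair
    return shown
```

With two pairs and a user who always answers correctly, the placeholder schedule simply alternates them, which is enough to show the loop structure.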
  • the user response to an associative pair can further include: a user response content; and a user response time equal to the time interval between a first instant when the associative pair is presented to the user and a second instant when the user response content is executed by the user.
  • the user response time can optionally be subject to a pre-determined maximum, called the maximum response time, beyond which the user response content will be defaulted into a category of no response by the operating program.
  • the user response content can differ for different forms of presentations. It can be as simple as an answer such as "A", “B”, “C” or “D” in multiple choice questions. It can further include an initial response and a confirmation response where the former captures the initial response from the user, and the latter allows the user to evaluate whether his initial response is correct after seeing the right answer.
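The response-content and response-time notions above can be captured in a small record type. The field names and the 60-second maximum are illustrative assumptions, the latter chosen from within the 0.1 second to 10 minute range the text mentions for typical applications:

```python
from dataclasses import dataclass
from typing import Optional

MAX_RESPONSE_TIME = 60.0  # seconds; an assumed value, not specified by the patent

@dataclass
class UserResponse:
    """A user response: its content plus the response time measured from the
    instant the pair is presented to the instant the answer is executed."""
    content: Optional[str]  # e.g. "A".."D", or "Know"/"Not sure"/"Don't know"
    response_time: float    # seconds

    def effective_content(self):
        # Beyond the maximum response time, the content is defaulted
        # into the "no response" category (represented here as None).
        if self.response_time > MAX_RESPONSE_TIME:
            return None
        return self.content
```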
  • Each of the associative pairs can be, but is not limited to, a pairing of: a displayed language word and a definition; a spoken language word and a spelling; a displayed language word and a picture; a word in Braille and a pronunciation for the case of a blind user; a spoken language word and a choice amongst multiple text descriptions; or a question and multiple answers.
  • the associative pairs can further be embedded into a game scenario or into a user's interactive reading session with the system.
  • the contents database can be a database in a conventional sense, or it can be implemented on a unit removable from the system with the subject matter stored on a mobile memory device.
  • the memory engine can further include: a user history database for recording a user profile, a usage chronology having chronologically recorded results of the user recalling each associative pair, and the trial and memory history; a review interval optimizer for processing the trial and memory history and determining a best review interval for each associative pair; a sequences database for storing the best review intervals for the numerous associative pairs presented to the user; and a process optimizer having a set of scheduling algorithms for retrieving data from the sequences database and for determining the next associative pair and its schedule to be presented to the user, for achieving an enhanced long-term memory of the user.
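A minimal sketch of how these four memory engine components might cooperate, assuming invented placeholder policies (a doubling review interval and a most-overdue-first picker); the patent does not specify these particular rules:

```python
class UserHistoryDB:
    """Records the trial and memory history per associative pair."""
    def __init__(self):
        self.trials = {}  # pair -> list of (timestamp, correct)

    def record(self, pair, t, correct):
        self.trials.setdefault(pair, []).append((t, correct))

class ReviewIntervalOptimizer:
    """Turns a pair's trial history into a best review interval.
    A naive doubling schedule stands in for the dynamic memory model."""
    def best_interval(self, history):
        streak = 0
        for _, ok in reversed(history):
            if not ok:
                break
            streak += 1
        return 60.0 * (2 ** streak)  # seconds; illustrative only

class SequencesDB(dict):
    """Maps pair -> best review interval (a plain dict here)."""

class ProcessOptimizer:
    """Picks the next pair to present: the one most overdue for review."""
    def next_pair(self, now, last_seen, sequences):
        overdue = {p: now - last_seen[p] - sequences[p] for p in sequences}
        return max(overdue, key=overdue.get)
```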
  • the process optimizer can also incorporate a user-defined study schedule into determining a real-time sequence for presenting the associative pairs to the user.
  • the computer facilitated system can be implemented in the form of a web-based service/application, a desktop application, a program/game running on mobile devices, such as a cellular phone, a personal digital assistant or even a toy (watch, pen, etc.).
  • the memory engine can be further equipped to drive, via the operating program, the user interface device to display information indicating the user's real-time progress statistics and trial and memory history, calculated from the user history database.
  • the review interval optimizer further includes a neurophysiologically rooted dynamic memory model based upon which the review interval optimizer determines, for each user and for each associative pair, a golden sequence defined as the best review time intervals of the repetitions of user trials of the associative pair for generating the strongest long-term memory for the user, and sends the golden sequence to the sequences database.
  • the dynamic memory model includes the following functions: a short-term memory activation trace MT(t) that is a decreasing function of time describing the short-term decay of an association intensity of the associative pair initiated in the user's brain due to each presentation of the associative pair to the user and his subsequent response.
  • a long-term memory association MA(t) that is a function of time describing the long-term course of an association intensity of the associative pair formed in the user's brain as a combination of: a) a time-consolidation from each short-term memory activation trace, i.e., a time-integration of MT(t); and b) a long-term decay of MA(t) itself that is a decreasing function of time.
  • the lifetimes of the short-term decay and the long-term decay are both user and associative pair dependent, while the long-term lifetime is much longer than the short-term lifetime.
  • the review interval optimizer further sets each of the best review time intervals sufficiently long such that the majority of the time-consolidation of the corresponding short-term memory activation trace is complete, so as to maximize the long-term memory association MA(t).
  • the review interval optimizer sets each of the best review time intervals sufficiently long such that from about 60% to about 90% of the time-consolidation from each short-term memory activation trace is complete.
  • τs is the lifetime of the short-term decay of MT(t).
  • An example of the functional form of MA(t) is:
  • parameter A can be further adjusted by the review interval optimizer for the strongest buildup of long-term memory association.
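A numerical sketch of the dynamic memory model, assuming simple exponential forms for the short-term trace MT(t) and its consolidation. The specific functional forms, lifetimes and amplitudes are illustrative assumptions, not the patent's stated equations:

```python
import math

TAU_S = 600.0        # assumed short-term lifetime tau_s, in seconds
TAU_L = 7 * 86400.0  # assumed long-term lifetime, much longer than tau_s

def short_term_trace(dt, amplitude=1.0):
    """MT(t): decaying short-term activation, exponential by assumption."""
    return amplitude * math.exp(-dt / TAU_S)

def consolidation_fraction(dt):
    """Fraction of the time-consolidation (the time-integral of MT)
    completed dt seconds after a presentation, for the exponential form."""
    return 1.0 - math.exp(-dt / TAU_S)

def best_review_interval(fraction=0.8):
    """Interval after which the given fraction (60% to 90% per the text)
    of consolidation is complete: t = -tau_s * ln(1 - f)."""
    return -TAU_S * math.log(1.0 - fraction)
```

Under this assumed exponential form, waiting for 80% consolidation means reviewing roughly 1.6 short-term lifetimes after the presentation.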
  • the review interval optimizer can set the golden sequence for each associative pair to a predetermined pair-specific default golden sequence.
  • the default golden sequence can be set to correspond to the best review time intervals for each associative pair learned by a user with average memory power.
  • the review interval optimizer can set the golden sequence for all associative pairs to a predetermined default golden sequence.
  • the default golden sequence can be set to correspond to the best review time intervals for an associative pair with average difficulty and learned by a user with average memory power.
  • the review interval optimizer can further regularly adjust the associated golden sequence in real-time using numerous results of the user recalling each specific associative pair and the trial and memory history from the user history database.
  • the review interval optimizer can further compare a prediction of the dynamic memory model with an actual result from the user recalling an associative pair and adjust the dynamic memory model accordingly.
  • the review interval optimizer can further track an error rate computed from numerous trial results of the user from the user history database and adjust the dynamic memory model accordingly to maintain the error rate within a predetermined range.
  • the advantages include reduced user frustration from too high an error rate and reduced disruption of the ongoing long-term memory consolidation process due to redundant reviews of too many associative pairs with too low an error rate.
  • the predetermined range is from about 5 percent to about 10 percent.
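One hedged way to realize the error-rate regulation described above is to scale the review intervals so that the observed error rate drifts back into the 5 to 10 percent band; the multiplicative step size below is an invented illustration:

```python
def adjust_interval_scale(scale, error_rate, low=0.05, high=0.10, step=1.1):
    """Keep the observed error rate inside [low, high] by scaling review
    intervals: too few errors means reviews are redundant, so lengthen the
    intervals; too many errors frustrates the user, so shorten them.
    The multiplicative step of 1.1 is an assumed tuning constant."""
    if error_rate < low:
        return scale * step   # intervals grow: fewer, later reviews
    if error_rate > high:
        return scale / step   # intervals shrink: more frequent reviews
    return scale              # within band: leave the model alone
```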
  • the process optimizer can further include a learning mode for determining whether to present a past associative pair or a new associative pair to the user under a set of learning scenarios, such as: when no past associative pair is due for review; when only one past associative pair is due for review; and when a plurality of past associative pairs are due for review.
  • the learning mode updates the trial and memory history and review interval for each of the above scenarios after the user is presented with and responds to a past associative pair or a new associative pair.
  • the process optimizer can further include a review only mode for presenting the user with only past associative pairs thus blocking any new associative pair from being presented.
  • the review only mode then updates the trial and memory history and review interval after the user is presented with and responds to a past associative pair.
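The learning-mode and review-only-mode decision can be sketched as a single selection function. The earliest-due-first tie-break among due pairs is an assumption, since the text only enumerates the scenarios:

```python
def pick_next(due_pairs, new_pairs, review_only=False):
    """Mode logic from the text:
    - review-only mode: never introduce a new pair;
    - learning mode: serve a due past pair if any (earliest due first,
      an assumed tie-break), otherwise introduce a new pair."""
    if due_pairs:
        # due_pairs is a list of (pair, due_time) tuples
        return min(due_pairs, key=lambda p: p[1])[0]
    if review_only or not new_pairs:
        return None  # nothing due and new pairs are blocked or exhausted
    return new_pairs[0]
```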
  • Fig. 1 is a schematic block diagram of one embodiment of the overall structure of the present invention as a dynamic system to optimize human associative learning and memory exercise;
  • Fig. 2 shows a flowchart illustrating one embodiment of the workflow during a user's learning process under the present invention
  • Fig. 3 illustrates a screenshot of the user interface showing step 208 of the flowchart of Fig. 2 wherein R1, R2 and R3 represent three different user responses according to an embodiment of the present invention
  • Fig. 4 illustrates a screenshot of the user interface following step 216 of the flowchart of Fig. 2 wherein R4 and R5 are two different user responses according to an embodiment of the present invention
  • Fig. 5 illustrates a screenshot of the user interface following step 214 of the flowchart of Fig. 2 according to an embodiment of the present invention
  • Fig. 6 illustrates some testing results of the embodiment illustrated in Fig. 1 through Fig. 5;
  • Fig. 7 illustrates the workflow of a spelling practice task according to an embodiment of the present invention
  • Fig. 8 illustrates the workflow of a listening-comprehension task according to an embodiment of the present invention
  • Fig. 9 illustrates the workflow and user interfaces for a multiple- choice task according to an embodiment of the present invention
  • Fig. 10 illustrates the workflow and associated user interface for a multiple choice task according to an embodiment of the present invention.
  • FIG. 1 is a schematic block diagram illustrating an embodiment of the overall structure of the present invention as a dynamic system to optimize human associative learning and memory exercise.
  • each user's learning data, called the trial and memory history, is individually processed, stored and analyzed by the system.
  • when a returning user logs in to the system, it can recognize the user, retrieve the user's detailed trial and memory history and optimize his/her learning process accordingly.
  • the user interface 104 component of the system includes an output unit and an input unit as signified by the left and right pointing arrows next to the user interface 104.
  • the output unit may include a visual display (e.g., monitor screen, LCD display) or an auditory device (e.g., speaker) of information.
  • the user interface 104 may display a subject matter to be presented to the user for learning, information related to the user's learning progress and trial and memory history related information to inform the user 102 of his/her learning progress.
  • the input unit is used to receive the user's responses to the presented subject matter.
  • the input unit may include a mouse, keyboard, keypad, joystick, microphone, or other similar devices.
  • the user interface 104 may even include devices for handling Braille images and characters.
  • the user interface 104 is capable of presenting multi-media information to the user 102 and receiving his/her responses.
  • a computer processor 106 is coupled to the user interface 104. Inter alia, the computer processor 106 takes user responses from the user interface 104, processes them and delivers subject matter for learning back to the user interface 104.
  • a contents database 108 is coupled to the computer processor 106. The contents database 108 stores the subject matter to be presented to the user 102 for learning or for memory exercise. The subject matter is usually stored in the contents database 108 in the form of numerous associative pairs although others forms are possible. For example, in a case of multiple choices Q&A (question and answer), a subject matter entry can include a Q-portion that is a question, followed by an A-portion that itself is a list of four candidate answers with one of the four answers being the correct answer to the Q-portion.
  • Some examples of the subject matter are: alphabet learning, phonics learning, word or concept learning (including pronunciation, meaning, or its foreign language representation), sentence grammar learning, anatomy terminology learning, behavioral training (forming an association between an occasion or environment and an appropriate behavior), training question in various fields or tests such as SAT, LSAT, etc.
  • the subject matter can be built into the present invention system, stored in a database external to the present invention system, or provided on mobile memory devices removable from the present invention system, such as CDs, floppy disks or memory cards.
  • the present invention system includes a memory engine 120, a crucial component coupled to the computer processor 106, for tracking the real-time memory status of each user 102 and optimizing the learning sequence of the subject matter accordingly to ensure that every entry of the subject matter gets reviewed at the right moment to achieve a superior long-term memory in the user 102.
  • the memory engine 120 further includes a user history database 122, a review interval optimizer 124, a sequences database 126 and a process optimizer 128.
  • the contents database 108, the user history database 122 and the sequences database 126 can be grouped into a larger database.
  • the present invention system further includes an operating program working with the computer processor 106, the user interface 104, the contents database 108 and the various components of the memory engine 120, as signified by the various arrows communicating therebetween. More details of the operating program will be presently described. A further remark is that, depending upon the desired performance of the present invention and the number of users simultaneously served by it, the computer processor 106 can be embodied as a number of computer processors as necessary.
  • the user history database 122 records and stores a user profile, a usage chronology and the trial and memory history.
  • the user profile includes detailed personal information for identifying and characterizing a user, such as user name, account name, account number, password, demographic data, memory grade level versus subject matter, etc.
  • the usage chronology is a set of chronologically recorded results of the user recalling each associative pair under each subject matter.
  • the trial and memory history contains individualized evaluation results based upon user responses to the associative pairs. Additionally, the user history database 122 also sends individualized progress related information to the user interface 104.
  • the review interval optimizer 124 takes and processes the trial and memory history from the user history database 122 to simulate the user's temporal dynamic memory process for each learned associative pair and determines, based upon a dynamic memory model, its golden sequence, that is, its best review intervals. The review interval optimizer 124 then sends these best review intervals to the sequences database 126 for storage.
  • each of the user responses can include both a user response content and a user response time.
  • An example of the user response content is, following a multiple choice form (choices "A"/"B"/"C"/"D"/"E") of an associative pair presentation to the user 102, a choice "E" entered by the user 102.
  • the user response time for an associative pair is defined as the time interval between a first instant when the associative pair is presented to the user 102 and a second instant when the user response content is executed by the user 102. While the user response content is clearly an important indicator of the user's memory status of the associative pair, the user response time can also be important in determining his/her memory status: for example, given the same correct user response content, a shorter user response time usually indicates a stronger user memory status of the subject matter. For those skilled in the art, the user response time can be implemented with an event-triggered timer in hardware, software or a combination thereof.
  • In another example of training a fast response, forming an association between the onset of a particular computer display and the user 102 pushing any key of the keyboard, the user response time even becomes the dominant indicator of the user's memory status. Under this situation, a predetermined maximum range, called the maximum response time, can be implemented beyond which the user response content will be defaulted into a category of no response by the operating program.
  • the maximum response time should be appropriately set and is in general dependent upon the class of subject matter being learned. Considering most typical applications, a maximum response time range of from about 0.1 second to about 10 minutes should be sufficient.
  • the user response time should be measured with a resolution of from about 10 millisecond to about 500 millisecond.
  • the review interval optimizer 124 is a computer- facilitated review time sequence optimizer based on both the user response content and the user response time.
  • an un-optimized review time sequence typically results from numerous possible scenarios.
  • a user does not use a computer and repeatedly reviews a set of associative pairs randomly.
  • a user does not use a computer and repeatedly reviews the set of associative pairs sequentially without any optimization.
  • a user strives to review the set of associative pairs efficiently without the help of a computer or other automated devices.
  • the process optimizer 128 retrieves the best review intervals from the sequences database 126 and determines a next associative pair and its schedule to be presented to the user 102.
  • the process optimizer 128 contains a set of scheduling algorithms to optimize the learning process.
  • the set of scheduling algorithms are adopted to generate the best long-term memory results in the user 102 based upon the golden sequences stored in the sequences database 126.
  • the best review intervals of different associative pairs within a subject matter can conflict with one another due to coincidence. The situation of a single user concurrently learning multiple subject matters would in general result in more conflicts.
  • Weekends, holidays, vacations and other engagements and commitments of the user 102 can also cause the user 102 to miss some best review intervals.
  • the user's time schedule can either be known already, as in the case of conflicts among different associative pairs within a subject matter, be imported from another user schedule program into the operating program, or be manually inputted by the user 102 through the user interface 104.
  • the process optimizer 128 can now take into account the golden sequences of a user and the user's time schedule as constraints and determines a next associative pair and its schedule for an optimized learning process.
  • a specific example is presented here to illustrate the scheduling algorithm.
  • a user is learning ten (10) words, say W1, W2, ..., W10.
  • an embodiment of the present invention does not force the user 102 to study at a specified time. Instead, the process optimizer 128 derives an optimized actual learning schedule to fit the user's schedule.
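One plausible way the process optimizer could fit golden-sequence review times to the user's available study slots is a greedy nearest-slot assignment. This concrete algorithm is an illustration, not the patent's disclosed method:

```python
def schedule_reviews(ideal_times, available_slots):
    """Map each pair's ideal (golden-sequence) review time onto the nearest
    available study slot, sketching how an optimized actual learning schedule
    could be fitted to the user's own schedule.
    Greedy nearest-slot assignment; at most one review per slot."""
    assignment = {}
    free = sorted(available_slots)
    # Serve pairs in order of their ideal review times.
    for pair, t in sorted(ideal_times.items(), key=lambda kv: kv[1]):
        if not free:
            break  # no slots left; remaining reviews are deferred
        slot = min(free, key=lambda s: abs(s - t))
        assignment[pair] = slot
        free.remove(slot)
    return assignment
```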
  • the operating program first presents the user 102 with associative pairs retrieved from the contents database 108.
  • the operating program then creates a trial and memory history in the memory engine 120 based upon a user response time and user response content to the associative pairs.
  • the memory engine 120 determines an optimal real-time sequence and order based upon the trial and memory history.
  • the operating program uses the optimal real-time sequence and order and presents the user 102 with the associative pairs again during an ongoing learning process.
  • the present invention system can be implemented as Internet web applications on a web browser, as desktop PC applications, or as applications on various mobile handheld devices such as a personal digital assistant or a cellular phone.
  • the present invention system can be implemented to serve multiple users in the case of Internet web or PC applications, or only one or a small number of users in the case of personalized handheld devices. More details of the operation mechanism of the memory engine 120 will be revealed in the following description of a typical workflow.
  • FIG. 2 shows a flowchart illustrating one embodiment of the workflow during a user's learning process under the present invention.
  • the work flow can be viewed as a simplified representation of the next level details of the operating program.
  • the illustrated workflow is for a common associative learning task, a recall task 210 for associative pair contents, directed at learning English vocabulary.
  • Other forms of associative learning tasks will also be described herein.
  • Fig. 3 illustrates a screenshot of the user interface showing a step of the flowchart of Fig. 2.
  • Fig. 2 illustrates a typical workflow of a user: login, build vocabulary, and logout.
  • at user login 200, the user is identified by the system.
  • the user can then choose a subject matter with specific contents to study with customize content 202.
  • the system can be programmed to allow the user to further customize the learning contents, such as adding or deleting some contents, or reordering the contents by alphabetical or semantic criteria.
  • the memory engine 120 starts to run and determines which word is to be presented in the next trial in order to achieve the best long-term memory results. This step corresponds to determine word for next trial 204. As an example, the memory engine 120 selects the word INQUISITIVE for the next trial, and it is presented. The user is asked to recall its meaning in present and recall meaning 208. Three possible user response contents of the recall follow:
  • FIG. 3 illustrates the corresponding screenshot of the user interface at step 208.
  • Fig. 4 illustrates a screenshot of the user interface following step 216 of the flowchart of Fig. 2 wherein R4 and R5 are two different user responses. Here, the user indicates his user response content, i.e., whether his answer is correct, by clicking one of the two buttons R4 YES 217 or R5 NO 219.
  • Fig. 5 illustrates a screenshot of the user interface following step 214 of the flowchart of Fig. 2.
  • the user response content is simply a click on the continue button (>) R6 CONTINUE 215 to move on to the word for the next trial.
  • the simple term "user response content” actually includes a sequence of logically connected responses from the user.
  • the user response content includes two sequential user responses: a first initial response, and a second confirmation response.
  • the initial response is illustrated in Fig. 3 and can have three values indicating (Know, Not sure, Don't know).
  • the confirmation response is illustrated in Fig. 4 and can have two values indicating (Right, Wrong), and applies only when the initial response indicates Know or Not sure.
  • the confirmation response becomes moot when the initial response indicates Don't know.
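The two-stage response content (initial response plus confirmation response) can be reduced to a single trial outcome as follows. The outcome labels are illustrative, and the rule that Don't know counts as a wrong recall is an assumption consistent with the confirmation being moot in that case:

```python
def grade_trial(initial, confirmation=None):
    """Combine the initial response (Know / Not sure / Don't know) with the
    confirmation response (Right / Wrong) into a trial outcome.
    Per the text, the confirmation is moot when the initial is Don't know;
    here we assume that case counts as a wrong recall."""
    if initial == "Don't know":
        return "wrong"
    if confirmation == "Right":
        return "right"
    if confirmation == "Wrong":
        return "wrong"
    raise ValueError("confirmation required when initial is Know or Not sure")
```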
  • the memory engine 120 saves the associated trial data for the current trial in the user history database 122. Based on the trial data, the memory engine 120 then updates the trial and memory history of the word just processed and, with DETERMINE NEXT REVIEW INTERVAL 222, determines a best review interval after which the same word should be reviewed again. This best review interval is stored in the golden sequences database 126.
  • the system will then UPDATE PROGRESS INFORMATION 224 and send it out through the user interface 104.
  • the progress information can display the user's real-time progress statistics and trial and memory history calculated from the user history database 122.
  • Examples of the progress statistics are current session progress 240 and overall progress 242. Other examples are total number of words in the selected subject matter to study and its percentage already tried.
  • Examples of the trial and memory history include a table sequentially listing each word tried versus its cumulative trial number, number of "Know", number of "Not sure", number of "Don't know", number of "Right", and number of "Wrong", etc. After each trial or a certain number of trials, the system will EVALUATE MODEL 228 and ADJUST MODEL 226 if needed.
  • the review interval optimizer 124 can be further improved to regularly adjust the associated golden sequence in real-time using the results of the user recalling the associative pair and the trial and memory history from the user history database 122.
  • the review interval optimizer 124 can compare a prediction of the dynamic memory model with an actual result from the user recalling an associative pair and adjusts the dynamic memory model accordingly for a strongest buildup of long-term memory. More details of the dynamic memory model adjustment will be described.
  • the user interface 104 can also be driven by the operating program to provide additional memory cues to supplement the associative learning.
  • the memory cues can be visually, acoustically, or semantically related words; the prefix, root and appendix of a compound word; the picture for a visible concept, the sound for an acoustical concept; or a short story, or even a short movie to name a few.
  • FIG. 3 includes an alphabetic pronunciation cue 244 for the word “inquisitive” and a clickable speaker icon 246 producing an audible pronunciation of the word “inquisitive” upon activation by the user.
  • an example sentence 248 containing the word "inquisitive” is presented to help the memory.
  • the above-described system can be simplified to fit into easy-to-carry handheld devices such as personal digital assistants and cell phones. As most handheld devices have quite limited processing power, memory and peripheral devices, the memory engine can be simplified to handle only one user. As illustrations for implementation on a small display with low image resolution, the corresponding screenshots can be reduced into handheld screenshot 250, handheld screenshot 252 and handheld screenshot 254. To reduce memory and processing power requirements, the steps 202, 224, 226, 228 in Fig. 2 are optional and may be left out for simplicity.

THE MEMORY ENGINE
  • the main function of the memory engine 120 is to track, in real time, the memory status of each user, determine the golden sequence for each word learned, and accordingly optimize the review time intervals to ensure that each word will be reviewed at time intervals that closely correspond to the golden sequence for the word.
  • the memory engine drives the learning process according to the memory status of the learned materials in the user's brain so that the user-computer interaction is resonant. This type of resonant interaction achieves superior memory results that are far beyond those obtainable through presently available regular human learning such as learning without the aid of a computer or through conventional computer systems which do not have the memory engine technology according to the embodiments of the present invention.
  • Another advantage of learning with the memory engine is that the process is not only much faster but also easy and full of fun. This is because the computer tracks the user's memory status of each associative pair in great detail and automatically delivers the right material for review at the right time. Learning with the memory engine can make even the most tedious learning process, such as building vocabulary, easy and fun. This in turn can change the psychology of language learning users and build their learning interest and confidence.

(058) Yet another advantage is that, when the human memory mechanism is optimally stimulated, the memory mechanism becomes more robust. Like building muscles in a gymnasium under the instruction of a personal trainer, the trainee's muscles get much stronger than when he exercises by himself. Learning with the memory engine enhances memory efficiently. Thus, after weeks of regularly using the embodiment of the present system, many users feel they can remember things like phone numbers and addresses much better.
  • the system can alternatively be implemented as an exercise machine for the memory mechanism of the user's brain.
  • Embodiments of the present invention can have the following applications:
  • Another object of the embodiment of the present invention is that the user does not forget any material learned with the system if used regularly.
  • the memory engine will detect any learned material that a user is about to forget and presents it to the user for review.
  • the learning is focused on difficult contents. As difficult contents are more quickly forgotten, the memory engine will arrange the material for the user to review the difficult contents more frequently.
  • the review interval optimizer 124 is one of the core components of the memory engine 120 according to an embodiment of the present invention.
  • the review interval optimizer 124 retrieves data from the user history database 122 to obtain the user's current memory status of the present word under trial; the review interval optimizer 124 further takes into account the current learning trial results to update the user's memory status and then determines the best review interval for the next review.
  • This review interval is sent to the sequence database 126, which in turn is used by the process optimizer 128 to arrange an actual learning sequence.
  • the review interval optimizer 124 generates the intervals of the golden sequence for each word and for each user.
  • the golden sequences are the scientific base the process optimizer 128 relies upon to arrange the actual learning sequence for achieving the best long-term memory results.
  • the system presents its meaning B with PRESENT WORD EXPLANATION 214.
  • SMT short-term memory activation trace
  • the co-activation of A and B in the user's brain initiates a short-term association between A and B, MT(t).
  • MT(t) short-term association between A and B
  • the user's SMT is denoted by the function of time MT(t).
  • a higher MT(t) value represents a stronger SMT.
  • In the absence of any future stimuli affecting the associative pair [A, B], the SMT generally decays and tapers off toward zero (0) with time (t).
  • the time course of the persistence of the short-term memory activation trace can be modeled as: MT(t) = Exp(-t/Ts) (1), where:
  • MT(t) is the level of the short-term memory activation trace at time t.
  • Exp is the exponential function that is the inverse function of natural logarithm.
  • Ts is the lifetime of the short-term memory activation trace decay. In this example and for convenience, it is the time it takes for the SMT to decay from its initial full level of 1 to 1/e, where "e" is approximately 2.718.
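As a minimal sketch, the short-term decay just described (full strength 1 falling to 1/e after the lifetime Ts) can be written as follows; the time unit and default Ts value are illustrative assumptions:

```python
import math

def short_term_trace(t, Ts=1.0):
    """Function (1): level of the short-term memory activation trace
    at time t after a trial, starting at full strength 1 and decaying
    so that after the lifetime Ts it has fallen to 1/e."""
    return math.exp(-t / Ts)
```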
  • MA(t) is the strength of the long-term memory association at time t.
  • Tc is the time constant of the consolidation.
  • a review trial is a repeat trial in which the user sees the associative pair [A, B] again.
  • One crucial difference between a review trial and an initial trial is that, by the time of the repetition, neither the LMA MA(t) nor the SMT MT(t) between A and B is 0, due to the residual effect of the initial learning.
  • the residual MT(t) gets recharged from its current level to full strength 1.
  • the decay lifetime of this short-term activation τs is determined by the current strength of MA(t):
  • Function (4) implies that the lifetime of the activation of an association increases with the LMA MA(t). Note that while illustrated using an exponential function, this memory model is not restricted to that specific functional form. For example, functions (1) and (4) can be expressed as a power function, a polynomial function, a hyperbolic function of limited range or even a trigonometric function of limited range, etc. Under the premise that all of these functional forms should approximately describe the general time course of the user's memory, systematic experimental efforts are nevertheless required to identify which one works better than the others. In any case, the short-term activation now continues to consolidate into the existing LMA MA(t). Thus:
  • MA(t) is the strength of the long-term association at time t
  • Tl is the lifetime of the long-term connection, with Tl ≫ τs.
  • the long-term association MA(t) traces a time course that is a combination of a time-consolidation from each of the short-term activations MT(t) and the long-term decay of MA(t) itself. Therefore, taking the long-term memory decay into consideration, for the best results in long-term memory it is desirable to repeat an associative pair before the decay of its long-term memory sets in.
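The combined time course just described (consolidation of each short-term trace into MA(t) plus the slow decay of MA(t) itself) can be illustrated with a minimal discrete-time simulation. The simple Euler integration and all parameter values here are illustrative assumptions, not the patent's implementation:

```python
def simulate_association(review_times, horizon, dt=0.01,
                         Ts=1.0, Tc=1.0, Tl=100.0):
    """Minimal discrete-time sketch of the dynamic memory model:
    each review trial recharges the short-term trace MT to full
    strength 1; MT decays with lifetime Ts and consolidates into the
    long-term association MA with time constant Tc, while MA itself
    decays with the much longer lifetime Tl (assumed values)."""
    MT, MA = 0.0, 0.0
    reviews = sorted(review_times)
    i, t = 0, 0.0
    while t < horizon:
        if i < len(reviews) and t >= reviews[i]:
            MT = 1.0                       # trial recharges the trace
            i += 1
        MA += (MT / Tc - MA / Tl) * dt     # consolidation minus long-term decay
        MT -= (MT / Ts) * dt               # short-term decay
        t += dt
    return MA
```

As expected under the model, a second review after the first trace has largely consolidated yields a stronger long-term association than a single trial.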
  • An important remark is that both τs and Tl are user and associative pair dependent.
  • the description below describes how to determine the best time to review a word according to the above dynamic memory model as an embodiment of the present invention.
  • the golden sequence is the sequence of time intervals successively summing up to the best future times to review a word after its initial learning: the 1st repetition, 2nd repetition, and so on.
  • the dynamic memory model provides a picture of the temporal pattern of long-term memory change. Following a learning trial, the trial-produced stimulus is consolidated into the long-term memory with time constant τs. After the majority of the consolidation is accomplished, the long-term memory decay gradually starts to dominate with time constant Tl. If a repetition occurs before the completion of the consolidation of the previous short-term memory activation trace, the previous trace will be recharged to full strength before a full consolidation. Thus, it is desirable to repeat a trial after the consolidation of the previous activation trace is complete but before the long-term decay starts to set in.
  • sequence (8) is not and should not be the only way of mathematically expressing the golden sequence.
  • the Exp function can be substituted by other forms of mathematical functions such as a power function, a polynomial function, a hyperbolic function of limited range or even a trigonometric function of limited range, etc.
  • sequence (8) is used as the golden sequence format.
  • the crucial parameter components of the memory model that determine the golden sequence are the parameter A and the Exp function. While the golden sequence is expected to be associative pair specific and user specific, in practice the present invention system would not know such an individualized golden sequence for a new user and for a new word a priori. Hence, when a new user begins his learning process, the user can start with a default golden sequence before the system has a chance to detect the user's individual memory power and the learning difficulty of each individual word for the user. As an embodiment, the default golden sequence is the set of best review intervals for a word with average difficulty learned by a user with average memory power. The default golden sequence can be experimentally and statistically determined from a representative user population before system deployment.
  • a pair-specific default golden sequence, defined as the best review time intervals for each specific associative pair learned by a user with average memory power, can also be implemented if so desired.
  • set A = 1 second
  • the default golden sequence becomes: [1, e, e^2, e^3, e^4, ...] (9)
  • the system can simply assign the next review time interval to be e^(n-1) seconds, where n is the trial sequence number.
  • n is the trial sequence number.
  • when n = 1, the associative pair is presented for the first time; when n > 1, the corresponding trials are sequential repetitions.
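Under these defaults, the interval computation for the default golden sequence of form (9) can be sketched as:

```python
import math

def next_review_interval(n, A=1.0):
    """Review time interval, in seconds, before trial n of the
    default golden sequence of form (9): A * e**(n - 1). Trial n = 1
    is the initial presentation; n > 1 are sequential repetitions."""
    if n < 1:
        raise ValueError("trial sequence number n starts at 1")
    return A * math.exp(n - 1)

# With A = 1 second this reproduces the default golden sequence (9):
default_sequence = [next_review_interval(n) for n in range(1, 6)]
```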
  • the user has memory power that is either above or below the population average.
  • the first event is testing, in which a user's memory status of a word is tested. This will provide the system with information about the user's long-term memory status of the presented word, based upon which the system will update the user's memory status after a second learning trial. Concurrently, the system will also compare the user's actual performance with the prediction of the memory model so the memory model can be adjusted if the user's actual performance is too high or too low.
  • the second event is the learning trial, in which the correct answer is presented for the user to learn. Through the learning trial, the short-term memory activation trace is triggered and the short- term memory activation trace further consolidates and contributes to the long-term memory association in time.
  • URT is the user response time in seconds.
  • ΔMA(0) denotes the regular increment for MA(0) without considering the user response time. min(URT, 10) is a function whose value is the minimum of "URT" and "10".
  • max(URT, 1) is a function whose value is the maximum of "URT" and "1".
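Taken together, the response-time clamping could enter the memory-status update as in this hypothetical sketch; the exact weighting used by the system is not reproduced here, only the roles of min(URT, 10) and max(URT, 1):

```python
def ma_increment(delta_ma0, urt):
    """Hypothetical sketch: scale the regular increment delta_ma0 by
    the user response time URT (seconds). The clamping mirrors
    min(URT, 10) and max(URT, 1) from the text: a fast recall (URT of
    1 second or less) earns the full increment, while slower recalls
    earn proportionally less, down to one tenth at 10 seconds or more.
    The patent's exact functional form is an assumption here."""
    clamped = max(min(urt, 10.0), 1.0)  # combines min(URT, 10) and max(URT, 1)
    return delta_ma0 / clamped
```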
  • the present invention recognizes that both the user response content and the user response time can be important indicators of the user's memory status of the subject matter.
  • numerous embodiments using other functional forms can instead be employed to simultaneously model the user response content and user response time and these embodiments are still considered to be within the scope of the present invention.
  • a pre-determined maximum response time can be employed to restrict the user response time beyond which the user response content will be defaulted into a category of no response by the operating program. This no response can be equated to NO IDEA.
  • the maximum response time can be further made specific to each associative pair and can even be dynamically adjusted afterwards.
  • the parameter A of A*Exp(n) may not be accurately determined.
  • the system does not restrict the user to review each word at exactly the next review time as determined by the golden sequence. Instead, it permits the user to follow his own study schedule; the system will then arrange the learning sequence accordingly to achieve the best result.
  • words are overdue when being reviewed. That is, the words can actually be reviewed very late compared to their best review time. Consequently, a word's long-term memory status at its review time may not be MA(n-1) as predicted by the model.
  • the best estimation is initially set forth based on prior testing experience. Afterwards, the system allows an adjusting mechanism, as illustrated below, to constantly evaluate the actual performance of the memory model and recalibrate the MA(t) value accordingly when the model does not predict the user's performance well. In a repeated trial, MA(t) can be recalibrated by the following rules:
  • MA(t) … Rule 2: R2 (RECALLED BUT UNSURE 211) followed by R4 (YES 217):
  • MA(t) is incremented by 1 due to the major consolidation of the short-term memory activation trace accompanying a trial.
  • the review interval optimizer can compare a prediction of the dynamic memory model with actual results from the user recalling an associative pair and adjust the dynamic memory model accordingly.
  • the review interval optimizer first tracks an error rate computed from the numerous trial results of the user from the user history database.
  • the review interval optimizer can then adjust the dynamic memory model accordingly to maintain the error rate within a predetermined range, say from about 5% to about 10%.
  • Too high an error rate indicates that the memory model overestimated the memory power of the user, with the user frequently forgetting the subject matter being learned. This leads to repetitive relearning of previously learned yet forgotten subject matter and can cause user frustration.
  • too low an error rate indicates that the memory model underestimated the user's memory power, with the likelihood that many subject matters are redundantly reviewed. This can cause frequent disruption of the user's ongoing long-term memory consolidation process.
  • the system can evaluate the error rate and adjust the A-value accordingly after a certain number of trials, e.g. between 10 and 100 trials, by the following exemplary rule:
  • the functional form Exp can also be replaced with other forms such as a power function to afford more degrees of freedom in the adjustment hence more accurate results.
  • the memory model can be adjusted at a global level thus tailoring to each individual user.
  • the memory model can be adjusted for each word as well. For example, the error rate for reviewing each specific word can be calculated and the A-value for that word is then accordingly adjusted, etc.
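An error-rate-driven adjustment of the A-value, at the global or per-word level, might be sketched as follows. The 5%-10% target band follows the text, but the multiplicative step size is an assumption and the patent's exemplary rule is not reproduced here:

```python
def adjust_A(A, error_rate, low=0.05, high=0.10, step=0.1):
    """Hypothetical adjustment rule: a too-high error rate means the
    model overestimated the user's memory power, so shorten the review
    intervals by reducing A; a too-low error rate means the memory
    power was underestimated, so lengthen them by increasing A.
    Within the target band, A is left unchanged."""
    if error_rate > high:
        return A * (1.0 - step)
    if error_rate < low:
        return A * (1.0 + step)
    return A
```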
  • the system does not restrict the user to review each word at exactly the next review time as determined by the golden sequence. Instead, it permits the user to define his own study schedule; the process optimizer 128 will then arrange the real-time learning sequence based upon the user-defined schedule and his memory status of each word to achieve the best result.
  • the process optimizer 128 accesses data from the golden sequences database 126 to know the best review times of each word.
  • the process optimizer 128 determines which word should be presented in the next trial to achieve the best overall long-term memory results.
  • the process optimizer 128 includes a set of algorithms to arrange the real-time trial sequence.
  • the process optimizer 128 works in two modes, a learning mode and a review only mode.
  • the main difference between these two modes is that in the review only mode, any new associative pairs are blocked from being presented, and the user only reviews the past associative pairs that were learned before.
  • the system updates the trial and memory history and review interval after the user is presented with and responds to each past associative pair. This can be desirable when the user wants a break for more than a few days wherein he can focus on the past associative pairs learned to avoid massive forgetting during the break.
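The mode-dependent selection of the next trial described above might look like the following sketch; the pair representation and the tie-breaking by due time are assumptions, not the patent's scheduling algorithms:

```python
def pick_next_pair(now, pairs, review_only=False):
    """Hypothetical scheduling sketch. Each pair is a dict with
    'word', 'due' (its next best review time, None if never learned)
    and 'new' (True if not yet learned). In review-only mode new
    pairs are blocked; otherwise the most overdue learned pair is
    presented first, falling back to a new pair when nothing is due."""
    due = [p for p in pairs if not p["new"] and p["due"] <= now]
    if due:
        return min(due, key=lambda p: p["due"])  # most overdue first
    if not review_only:
        new = [p for p in pairs if p["new"]]
        if new:
            return new[0]
    return None
```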
  • Fig. 6 shows some testing results obtained at Northwestern Polytechnic University with 40 adult Chinese students studying in the ESL (English as a Second Language) program.
  • the English words used in the test were most frequently tested TOEFL (Test Of English as a Foreign Language) words and were mostly new to these students. Most students spent about half an hour per day throughout this test.
  • TOEFL Test Of English as a Foreign Language
  • Each curve terminated with a dot represents the progress of a student.
  • the horizontal- axis represents the total training time in hours whereas the vertical- axis represents the number of new words remembered by a student.
  • the dashed straight line represents the average speed of progress amongst the 40 students, 50 new words per hour.
  • the learning speed ranged from 30 words per hour to 100 words per hour with an average speed of 50 words per hour.
  • the learning progress is substantially linear. That is, if a student acquired sixty words per hour, the student is likely to maintain this speed up to hundreds of hours of learning.
  • the recall task can be substituted with other associative learning tasks as well in either learning mode or review only mode to offer a rich learning experience for the users and to provide broad training on various aspects of the associative learning.
  • the learning contents are not limited to vocabulary building.
  • the learning contents for instance, can be any of the associative pair contents described before.
  • various embodiments of the present invention may be presented with a combination of any of the associative learning tasks and any of the associative pair contents.
  • Reverse recall task Present B and ask the user to recall A.
  • the user interface and workflow are the same as the recall task already illustrated in Fig. 2 through Fig. 6 as embodiments of the present invention.
  • Fig. 7 illustrates a simple workflow for this task according to an embodiment of the present invention.
  • Fig. 8 illustrates a simple workflow for this task according to an embodiment of the present invention.
  • FIG. 9 illustrates a simple user interface to implement this task according to an embodiment of the present invention.
  • Multiple-choice fill in task Presenting contexts B for target A and leaving a space for the user to fill in A with a correct one of multiple-choices.
  • Fig. 10 illustrates a workflow and simple user interface to implement this task according to an embodiment of the present invention.
  • a plurality of associative pairs [A1, B1], [A2, B2], . . . , [An, Bn] can be embedded into computer-game-like scenarios to make the associative learning more fun according to an embodiment of the present invention.
  • a general way of extending the system is to separate the learning trials from the reviewing trials, and to change the learning trials from a recall task to a more natural learning task of reading. Specifically, whenever the user encounters a word whose meaning the user does not know, a word that the user is not sure how to pronounce, or a sentence that the user cannot understand, the user can simply indicate so through the system interface. The system then stores these user indications and the corresponding associated contents as associative pairs for the content in building the user's associative memory.
  • learning in such a general reading system can be embodied to have two main working modules: one module facilitates reading the learning materials with assistance for in-process difficulties; the other is the associative memory building module with which the user systematically builds and consolidates his associative memory for the subject matters found difficult during the reading.
  • the system can further recommend which working module the user should be in according to a number of preset criteria. For example, when the amount of overdue associative pairs for review by a user is above a certain threshold level, the system will recommend building the associative memory. Otherwise, the system will recommend that the user continue his reading to learn new materials.

(100) While the description above contains many specificities, these specificities should not be construed as limiting the scope of the present invention but as merely providing illustrations of numerous presently preferred embodiments of this invention. For example, the present invention system can be integrated with a user's reading activities to systematically handle his vocabulary building during a normal reading process. As another example, using properly simplified subject matter, the user can even be an animal instead of a human.


Abstract

A system and method to optimize human learning process to achieve superior long-term memory results and to exercise the brain to enhance human memory. The system comprises a user interface device for presenting stimuli in visual/acoustical form, displaying progress information, and receiving a user's responses, and a database containing contents for learning or stimuli for memory exercising. Another embodiment presents a review interval optimizer to track the temporal dynamics of human associative memory, and a process optimizer to optimize the real-time learning sequence to achieve superior and robust long-term memory results. The invention can be implemented as a web application, a PC application, a game device, and on a wireless device such as a cell phone or PDA. Importantly, the system optimizes and automates the learning/exercising process for the user.

Description

A System and Method to Enhance Human Associative
Memory
CROSS REFERENCE TO RELATED APPLICATIONS
FIELD OF INVENTION
(001) This invention relates generally to a computer-facilitated system to exercise the memory mechanism of a user's brain and to enhance his associative memory. The system contains a memory engine to optimize the user-computer interaction so as to optimally exercise the memory mechanism of the user's brain and to generate a superior long-term user memory.
BACKGROUND OF THE INVENTION
(002) Associative memory, the memory that A goes with or equals B, is one of the fundamental components of human intelligence. It should also be noted that associative memory is not limited to human beings and can be observed in many other living beings as well. Here, A and B form an associative pair, and each could be concepts, words, symbols or sensations presented in visual, acoustic or other multi-media formats, such as tactile and olfactory sensations, to stimulate a human. The process of forming an associative memory is called associative learning, which is illustrated by the following examples:
■ Learning an alphabet: forming an association between a written alphabet and its corresponding pronunciation;
■ Learning a new word or concept: forming an association between a written word or concept and its meaning, pronunciation, or its counterpart expressed in a different language;
■ Learning a new word or concept with a question of multiple choices. For example, a student is presented with a question followed by four (4) answers, with one of them being the correct answer. In this case, the associative pair consists of the question and the correct answer whereas the wrong answers are also present intentionally to interfere with the student's thinking;
■ Learning a new sentence in a foreign language: forming an association between the foreign sentence and its meaning;
■ Mastering an anatomical term: forming an association between an anatomical part and its corresponding name;
■ Training an appropriate behavior: forming an association between an occasion or environment setting and an appropriate behavior.
(003) The formation and consolidation of a long-term associative memory in the human memory system is a dynamic process and usually requires repeated learning trials of the same pair of stimuli (A, B). As the process of forming long-term memory involves several stages and each stage involves a specific time course, the final strength of the thus formed long-term memory is highly dependent upon a temporal pattern, that is, a specific time sequence of the learning repetitions. This is like pushing a ball pendulum that has an intrinsic cycle time determined by its own physical properties. To efficiently make the pendulum swing with a wide range, one briefly pushes it at the right moment with a time interval equal to its intrinsic cycle time. Otherwise, no matter how frequently one pushes it, the ball just swings within a small range.
(004) An example of an associative pair (A, B) is: A is the English word "apple" and B is the corresponding Chinese word "PingGuo". During associative learning of the pair (A, B) to form a long-term memory, one needs to repeat an initial trial after a few seconds before one forgets it, then a few minutes, a few hours, a few days, and so on. From the viewpoint of neurophysiology, there is a best temporal pattern, or time sequence, of the trial repetitions to form a permanent memory of the pair (Apple, PingGuo) most efficiently. Accordingly, the term "golden sequence" is defined as the temporal pattern of the simple repetitions of learning a specific associative pair (A, B) that generates the strongest long-term memory in a specific user.
(005) Under normal learning scenarios for regular learning or clinical training to improve memory, the learning or exercise task can be expressed as forming associations between a set of associative pairs like (A1, B1), (A2, B2), ..., (Ai, Bi), ..., (AN, BN). Therefore, to achieve the best learning result, it is desirable to arrange the overall learning sequence for the set of associative pairs so that the actual temporal pattern for the repetitions of each associative pair is close to its golden sequence. For more information on the research into human memory, reference is made to the book entitled: Models of Human Memory, by Donald A. Norman, Academic Press, New York, 1970. This book provides a broad and authoritative perspective on the various models of human memory.
(006) Under a conventional learning practice, the user controls the learning or exercising process. While there are numerous computer hardware and software tools available to facilitate the retrieval and presentation of the learning material to the user, no utility exists to intelligently optimize the learning time sequence according to the intrinsic temporal dynamics of memory formation in the user's brain. The lack of such a utility is possibly due to the difficulty of determining the golden sequence for the human learning process. This is more challenging than it appears, as the golden sequence varies across individuals and also across different associative pairs. For example, the review time intervals of the golden sequence for a user with a better memory are longer than those for a user with a poorer memory, and the review time intervals of the golden sequence of a word with an abstract meaning (e.g., EFFECT) are shorter than those of a word with a concrete meaning (e.g., APPLE).
(007) In view of the above discussion, human associative learning and memory exercising for forming long-term memory, both highly important aspects of human life, remain controlled by human beings in a casual or intuitive manner, and thus the corresponding learning process is far from efficient. Hence, there exists a need to scientifically optimize the associative learning process to gain superior learning speed and to enhance human associative memory.
SUMMARY OF THE INVENTION
(008) A computer facilitated system is proposed to efficiently enhance a user's associative memory. The system includes: a user interface device for presenting information to the user and receiving user responses; a processor for processing the above information and user responses; a contents database for storing subject matter to be presented to the user in the form of an associative pair; a memory engine; an operating program; and a user interface design that allows the user to self-evaluate his answer. During operation, firstly the operating program presents the user, through the user interface device, with associative pairs retrieved from the contents database; secondly, the operating program creates a trial and memory history in the memory engine based upon a user response to the associative pairs; thirdly, the memory engine determines an optimal real-time sequence and order based upon the trial and memory history; and fourthly, the operating program presents the user with the associative pairs again and a set of new associative pairs.
(009) The user response to an associative pair can further include: a user response content; and a user response time equal to the time interval between a first instant when the associative pair is presented to the user and a second instant when the user response content is executed by the user.
Where the user response time can optionally include a pre-determined maximum range, called maximum response time, beyond which the user response content will be defaulted into a category of no response by the operating program.
(010) The user response content can differ for different forms of presentations. It can be as simple as an answer such as "A", "B", "C" or "D" in multiple choice questions. It can further include an initial response and a confirmation response, where the former captures the initial response from the user, and the latter allows the user to evaluate whether his initial response is correct after seeing the right answer.
(011) Each of the associative pairs can be, but is not limited to, a pairing of: a displayed language word and a definition; a spoken language word and a spelling; a displayed language word and a picture; a word in Braille and a pronunciation for the case of a blind user; a spoken language word and a choice amongst multiple text descriptions; or a question and multiple answers.
(012) The associative pairs can further be embedded into a game scenario or into a user's interactive reading session with the system.
(013) The contents database can be a database in a conventional sense, or it can be implemented on a unit removable from the system with the subject matter stored on a mobile memory device.
(014) The memory engine can further include: a user history database for recording a user profile, a usage chronology having chronologically recorded results of the user recalling each associative pair, and the trial and memory history; a review interval optimizer for processing the trial and memory history and determining a best review interval for each associative pair; a sequences database for storing the best review intervals for numerous associative pairs presented to the user; and a process optimizer having a set of scheduling algorithms for retrieving data from the sequences database and for determining a next associative pair and its schedule to be presented to the user for achieving an enhanced long-term memory of the user.
(015) For added flexibility, the process optimizer can also incorporate a user-defined study schedule into determining a real-time sequence for presenting the associative pairs to the user.
(016) The computer facilitated system can be implemented in the form of a web-based service/application, a desktop application, a program/game running on mobile devices, such as a cellular phone, a personal digital assistant or even a toy (watch, pen, etc.).
(017) The memory engine can be further equipped to drive, via the operating program, the user interface device to display information indicating the user's real-time progress statistics and trial and memory history, calculated from the user history database.
(018) The review interval optimizer further includes a neurophysiologically rooted dynamic memory model based upon which the review interval optimizer determines, for each user and for each associative pair, a golden sequence defined as the best review time intervals of the repetitions of user trials of the associative pair for generating the strongest long-term memory for the user, and sends the golden sequence to the sequences database.
(019) As one exemplary embodiment based upon neurophysiology, the dynamic memory model includes the following functions: a short-term memory activation trace MT(t) that is a decreasing function of time describing the short-term decay of an association intensity of the associative pair initiated in the user's brain by each presentation of the associative pair to the user and his subsequent response; and a long-term memory association MA(t) that is a function of time describing the long-term course of an association intensity of the associative pair formed in the user's brain as a combination of: a) a time-consolidation of each short-term memory activation trace, i.e., a time-integration of MT(t); and b) a long-term decay of MA(t) itself that is a decreasing function of time.
In the above, the lifetimes of both the short-term decay and the long-term decay are user and associative pair dependent, while the long-term lifetime is much longer than the short-term lifetime.
(020) As the long-term lifetime is much longer than the short-term lifetime, the review interval optimizer further sets each of the best review time intervals sufficiently long such that the majority of the time-consolidation of the corresponding short-term memory activation trace is complete, so as to maximize the long-term memory association MA(t).
(021) In practice, the review interval optimizer sets each of the best review time intervals sufficiently long such that from about 60% to about 90% of the time-consolidation of each short-term memory activation trace is complete.
(022) An example of the functional form of MT(t) is: MT(t) = Exp(-t/τs)
Where τs is the lifetime of the short-term decay of MT(t). An example of the functional form of MA(t) is:
MA(t) = Exp(-t/τl)
Where τl is the lifetime of the long-term decay of MA(t), with τl » τs. The corresponding example of the golden sequence is approximately:
A*Exp(0), A*Exp(1), A*Exp(2), ..., A*Exp(n), ...
Where the parameter A can be further adjusted by the review interval optimizer for a strongest buildup of the long-term memory association MA(t).
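The model of paragraphs (019) through (022) can be read in code roughly as follows. This is an illustrative sketch only: the time constants TAU_S and TAU_L, the 80% consolidation target and all function names are assumptions chosen for demonstration, not values taken from the specification.

```python
import math

# Illustrative time constants (assumed, not from the specification);
# the long-term lifetime must greatly exceed the short-term lifetime.
TAU_S = 600.0        # short-term lifetime in seconds (assumed)
TAU_L = 7 * 86400.0  # long-term lifetime in seconds (assumed), TAU_L >> TAU_S

def short_term_trace(t: float) -> float:
    """MT(t) = exp(-t / tau_s): short-term activation after a presentation."""
    return math.exp(-t / TAU_S)

def consolidated_fraction(t: float) -> float:
    """Fraction of the time-integral of MT(t) complete by time t.
    The integral of exp(-t/tau) from 0 to t is tau * (1 - exp(-t/tau))."""
    return 1.0 - math.exp(-t / TAU_S)

def next_review_interval(target: float = 0.8) -> float:
    """Shortest interval after which `target` (about 60%-90%) of the
    consolidation is complete: solve 1 - exp(-t/tau_s) = target for t."""
    return -TAU_S * math.log(1.0 - target)

def golden_sequence(a: float, n: int) -> list:
    """Exponentially growing review intervals A*exp(0), A*exp(1), ..."""
    return [a * math.exp(k) for k in range(n)]
```

Note that the exponential golden sequence follows directly from the two exponential decays: each review is deferred until most of the short-term consolidation is complete, and the intervals stretch as the long-term association strengthens.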
(023) For a new user of the computer facilitated system, the review interval optimizer can set the golden sequence for each associative pair to a predetermined pair-specific default golden sequence. The default golden sequence can be set to correspond to the best review time intervals for each associative pair learned by a user with average memory power.
(024) As a simplification for a new user of the computer facilitated system, the review interval optimizer can set the golden sequence for all associative pairs to a predetermined default golden sequence. The default golden sequence can be set to correspond to the best review time intervals for an associative pair with average difficulty and learned by a user with average memory power.
(025) During an actual learning process, for each user and for each associative pair the review interval optimizer can further regularly adjust the associated golden sequence in real-time using numerous results of the user recalling each specific associative pair and the trial and memory history from the user history database.
(026) During an actual learning process, the review interval optimizer can further compare a prediction of the dynamic memory model with an actual result from the user recalling an associative pair and adjust the dynamic memory model accordingly.
(027) During an actual learning process, the review interval optimizer can further track an error rate computed from numerous trial results of the user from the user history database and adjust the dynamic memory model accordingly to maintain the error rate within a predetermined range. The advantages include reduced user frustration from too high an error rate, and reduced disruption of the ongoing long-term memory consolidation process due to redundant reviews of too many associative pairs with too low an error rate. As an example, the predetermined range is from about 5 percent to about 10 percent.
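A minimal sketch of the error-rate band control of paragraph (027), under assumed parameters: when the observed error rate leaves the 5%-10% band, the review interval is scaled to pull it back. The 0.8 and 1.25 factors and the sliding-window error computation are illustrative choices, not part of the specification.

```python
# Target error-rate band from paragraph (027); scaling factors are assumed.
ERROR_LOW, ERROR_HIGH = 0.05, 0.10

def adjust_interval(interval: float, recent_results: list) -> float:
    """recent_results holds True for each correct recall, False otherwise.
    Returns the review interval, scaled to keep the error rate in band."""
    if not recent_results:
        return interval
    error_rate = recent_results.count(False) / len(recent_results)
    if error_rate > ERROR_HIGH:
        return interval * 0.8   # review sooner: the user is forgetting
    if error_rate < ERROR_LOW:
        return interval * 1.25  # review later: avoid redundant drilling
    return interval             # within band: leave the schedule alone
```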
(028) The process optimizer can further include a learning mode for determining whether to present a past associative pair or a new associative pair to the user under a set of learning scenarios, such as: when no past associative pair is due for review; when only one past associative pair is due for review; and when a plurality of past associative pairs are due for review. Here, the learning mode updates the trial and memory history and review interval for each of the above scenarios after the user is presented with and responds to a past associative pair or a new associative pair.
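The learning-mode scenarios above can be sketched as a single decision function. The min-heap layout and the function name are illustrative assumptions; the patent leaves the concrete algorithm to the process optimizer.

```python
import heapq

# Sketch of the learning-mode decision in paragraph (028): present the
# most overdue past pair when any is due for review, otherwise introduce
# a new pair. The (due_time, pair_id) heap layout is an assumption.
def pick_next(due_heap, new_pairs, now):
    """due_heap: min-heap of (due_time, pair_id); new_pairs: unseen pairs.
    Returns the next pair to present, or None when nothing is available."""
    if due_heap and due_heap[0][0] <= now:
        return heapq.heappop(due_heap)[1]   # a past pair is due: review it
    if new_pairs:
        return new_pairs.pop(0)             # nothing due: present a new pair
    return None                             # nothing to study right now
```

When a plurality of past pairs are due at once, the heap naturally yields the one whose best review time passed first, covering the third scenario above.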
(029) The process optimizer can further include a review only mode for presenting the user with only past associative pairs thus blocking any new associative pair from being presented. The review only mode then updates the trial and memory history and review interval after the user is presented with and responds to a past associative pair. (030) These aspects of the present invention and their numerous embodiments are further made apparent, in the remainder of the present description, to those of ordinary skill in the art.
BRIEF DESCRIPTION OF THE DRAWINGS
(031 ) In order to more fully describe numerous embodiments of the present invention, reference is made to the accompanying drawings. However, these drawings are not to be considered limitations in the scope of the invention, but are merely illustrative.
Fig. 1 is a schematic block diagram of one embodiment of the overall structure of the present invention as a dynamic system to optimize human associative learning and memory exercise;
Fig. 2 shows a flowchart illustrating one embodiment of the workflow during a user's learning process under the present invention;
Fig. 3 illustrates a screenshot of the user interface showing step 208 of the flowchart of Fig. 2 wherein R1, R2 and R3 represent three different user responses according to an embodiment of the present invention;
Fig. 4 illustrates a screenshot of the user interface following step 216 of the flowchart of Fig. 2 wherein R4 and R5 are two different user responses according to an embodiment of the present invention;
Fig. 5 illustrates a screenshot of the user interface following step 214 of the flowchart of Fig. 2 according to an embodiment of the present invention; Fig. 6 illustrates some testing results of the embodiment illustrated in Fig. 1 through Fig. 5;
Fig. 7 illustrates the workflow of a spelling practice task according to an embodiment of the present invention;
Fig. 8 illustrates the workflow of a listening-comprehension task according to an embodiment of the present invention; Fig. 9 illustrates the workflow and user interfaces for a multiple- choice task according to an embodiment of the present invention; and Fig. 10 illustrates the workflow and associated user interface for a multiple choice task according to an embodiment of the present invention.
DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS
(032) The description above and below plus the drawings contained herein merely focus on one or more currently preferred embodiments of the present invention and also describe some exemplary optional features and/or alternative embodiments. The description and drawings are presented for the purpose of illustration and, as such, are not limitations of the present invention. Thus, those of ordinary skill in the art would readily recognize variations, modifications, and alternatives. Such variations, modifications and alternatives should be understood to be also within the scope of the present invention. Additionally, numerous section titles below are terse and are included for convenience only.
THE SYSTEM
(033) Fig. 1 is a schematic block diagram illustrating an embodiment of the overall structure of the present invention as a dynamic system to optimize human associative learning and memory exercise.
(034) To optimize the learning process of each user 102, each user's learning data, called the trial and memory history, are individually processed, stored and analyzed by the system. Thus, after a returning user logs in, the system can recognize the user, retrieve the user's detailed trial and memory history and optimize his/her learning process accordingly.
(035) The user interface 104 component of the system includes an output unit and an input unit as signified by the left and right pointing arrows next to the user interface 104. The output unit may include a visual display (e.g., monitor screen, LCD display) or an auditory device (e.g., speaker) of information. Thus, the user interface 104 may display a subject matter to be presented to the user for learning, information related to the user's learning progress and trial and memory history related information to inform the user 102 of his/her learning progress. The input unit is used to receive the user's responses to the presented subject matter. The input unit may include a mouse, keyboard, keypad, joystick, microphone, or other similar devices. To accommodate a blind user, the user interface 104 may even include devices for handling Braille images and characters. In general, the user interface 104 is capable of presenting multi-media information to the user 102 and receiving his/her responses.
(036) A computer processor 106 is coupled to the user interface 104. Inter alia, the computer processor 106 takes user responses from the user interface 104, processes them and delivers subject matter for learning back to the user interface 104. A contents database 108 is coupled to the computer processor 106. The contents database 108 stores the subject matter to be presented to the user 102 for learning or for memory exercise. The subject matter is usually stored in the contents database 108 in the form of numerous associative pairs, although other forms are possible. For example, in the case of multiple-choice Q&A (question and answer), a subject matter entry can include a Q-portion that is a question, followed by an A-portion that itself is a list of four candidate answers with one of the four being the correct answer to the Q-portion. Some examples of the subject matter are: alphabet learning, phonics learning, word or concept learning (including pronunciation, meaning, or its foreign language representation), sentence grammar learning, anatomy terminology learning, behavioral training (forming an association between an occasion or environment and an appropriate behavior), and training questions in various fields or tests such as the SAT, LSAT, etc. The subject matter can be built into the present invention system, stored in a database external to the present invention system, or provided on mobile memory devices removable from the present invention system such as CDs, floppy disks or memory cards.
(037) In accordance with an embodiment of the present invention, the present invention system includes a memory engine 120, a crucial component coupled to the computer processor 106, for tracking the real-time memory status of each user 102 and optimizing the learning sequence of the subject matter accordingly, to ensure that every entry of the subject matter gets reviewed at its right moment to achieve a superior long-term memory in the user 102. The memory engine 120 further includes a user history database 122, a review interval optimizer 124, a sequences database 126 and a process optimizer 128. As an alternative embodiment, the contents database 108, the user history database 122 and the sequences database 126 can be grouped into a larger database.
(038) While not specifically drawn and delineated here to avoid obscuring details, the present invention system further includes an operating program working with the computer processor 106, the user interface 104, the contents database 108 and the various components of the memory engine 120, as signified by the various arrows communicating therebetween. More details of the operating program will be presently described. A further remark is that, depending upon the desired performance of the present invention and the number of users simultaneously served by it, the computer processor 106 can be embodied with a number of computer processors as necessary. (039) The user history database 122 records and stores a user profile, a usage chronology and the trial and memory history. The user profile includes detailed personal information for identifying and characterizing a user, such as user name, account name, account number, password, demographic data, memory grade level versus subject matter, etc. The usage chronology is a set of chronologically recorded results of the user recalling each associative pair under each subject matter. The trial and memory history contains individualized evaluation results based upon user responses to the associative pairs. Additionally, the user history database 122 also sends individualized progress related information to the user interface 104.
(040) The review interval optimizer 124 takes and processes the trial and memory history from the user history database 122 to simulate the user's temporal dynamic memory process for each learned associative pair and determines, based upon a dynamic memory model, its golden sequence, that is, its best review intervals. The review interval optimizer 124 then sends these best review intervals to the sequences database 126 for storage. As an important concept of the present invention, each of the user responses can include both a user response content and a user response time. An example of the user response content is, following a multiple-choice form (choices "A"/"B"/"C"/"D"/"E") of an associative pair presentation to the user 102, a choice "E" entered by the user 102. The user response time for an associative pair is defined as the time interval between a first instant when the associative pair is presented to the user 102 and a second instant when the user response content is executed by the user 102. While the user response content is clearly an important indicator of the user's memory status of the associative pair, the user response time can also be important in determining his/her memory status as, for example, given the same correct user response content, a shorter user response time usually indicates a stronger memory status of the subject matter. For those skilled in the art, the user response time can be implemented with an event-triggered timer in hardware, software or a combination thereof. In another example of training a fast response, forming an association between the onset of a particular computer display and the user 102 pushing any key of the keyboard, the user response time even becomes the dominant indicator of the user's memory status.
Under this situation, a predetermined maximum range, called the maximum response time, can be implemented, beyond which the user response content will be defaulted into a category of no response by the operating program. The maximum response time should be appropriately set and is in general dependent upon the class of subject matter being learned. Considering most typical applications, a maximum response time range of from about 0.1 second to about 10 minutes should be sufficient. Correspondingly, the user response time should be measured with a resolution of from about 10 milliseconds to about 500 milliseconds. Thus, the review interval optimizer 124 is a computer-facilitated review time sequence optimizer based on both the user response content and the user response time. In the absence of the present invention, an un-optimized review time sequence typically results from numerous possible scenarios. In a first scenario, a user does not use a computer and repeatedly reviews a set of associative pairs randomly. In a second scenario, a user does not use a computer and repeatedly reviews the set of associative pairs sequentially without any optimization. In a third scenario, a user strives to review the set of associative pairs efficiently without the help of a computer or other automated devices. As most users, unlike the computer, cannot reliably hold a huge amount of logged data in memory, the user-achieved review time sequences are usually un-optimized, leading to an inefficient learning process. Thus, a very important ingredient of the present invention is the realization that review time sequences from a traditional, non-computer facilitated practice need to be improved and/or optimized. Furthermore, for superior results, the review time sequence should be optimized for each user and for each associative pair; thus, to be practical and reliable, it is advisable to employ a computer to facilitate such an optimization task.
While under computer facilitation, a variety of algorithms can be utilized to identify and achieve similarly optimized review time sequences as will be presently illustrated, for those skilled in the art all such algorithms and their variants are considered to be within the scope of the present invention.
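The event-triggered response timer and maximum response time discussed above can be sketched in software as follows. The 60-second default cap and the class shape are assumptions for illustration; the specification only requires a timer between presentation and response, with a cap beyond which "no response" is recorded.

```python
import time

# Sketch of an event-triggered response timer (paragraph (040)): the timer
# starts when the pair is presented and stops when the user responds.
class ResponseTimer:
    def __init__(self, max_response_time: float = 60.0):
        self.max_response_time = max_response_time  # cap in seconds (assumed)
        self._t0 = None

    def on_present(self) -> None:
        """Called the instant the associative pair is presented."""
        self._t0 = time.monotonic()

    def on_response(self, content: str):
        """Returns (content, elapsed); content is defaulted to 'no response'
        when the maximum response time has been exceeded."""
        elapsed = time.monotonic() - self._t0
        if elapsed > self.max_response_time:
            return ("no response", elapsed)
        return (content, elapsed)
```

A monotonic clock is used rather than wall-clock time so that system clock adjustments cannot corrupt the measured interval.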
(041) The process optimizer 128 retrieves the best review intervals from the sequences database 126 and determines a next associative pair and its schedule to be presented to the user 102. The process optimizer 128 contains a set of scheduling algorithms to optimize the learning process. The set of scheduling algorithms is adopted to generate the best long-term memory results in the user 102 based upon the golden sequences stored in the sequences database 126. As a practical matter for a single user 102, the best review intervals of different associative pairs within a subject matter can conflict with one another due to coincidence. The situation of a single user concurrently learning multiple subject matters would in general result in more conflicts. Weekends, holidays, vacations and other engagements and commitments of the user 102 can also cause the user 102 to miss some best review intervals. For those skilled in the art, such a user's time schedule can either be known already, as in the case of conflict among different associative pairs within a subject matter, be imported from another user schedule program into the operating program, or be manually inputted by the user 102 through the user interface 104. Using the scheduling algorithms, the process optimizer 128 can now take the golden sequences of a user and the user's time schedule as constraints and determine a next associative pair and its schedule for an optimized learning process. A specific example is presented here to illustrate the scheduling algorithm. A user is learning ten (10) words, say W1, W2, ..., W10. The best next review times for the 10 words are 1, 2, 3, ..., 10 minutes from now. Unfortunately, the user has to leave his system for 5 minutes. When he comes back 5 minutes later, there will be more than one word that needs to be reviewed, and the scheduling algorithm of the process optimizer 128 resolves the conflict and determines which word should be learned next.
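The ten-word example above can be sketched with a simple "most overdue first" rule. This tie-breaker is one plausible choice only; the patent leaves the concrete scheduling algorithm to the process optimizer 128.

```python
# Sketch of conflict resolution among overdue words (paragraph (041)):
# among the pairs whose best review time has passed, pick the one whose
# review time passed longest ago ("most overdue first", an assumed rule).
def resolve_conflict(best_times: dict, now: float) -> str:
    """best_times maps word -> best review time (minutes); now is the
    current time. Returns the most overdue word."""
    overdue = {w: now - t for w, t in best_times.items() if t <= now}
    return max(overdue, key=overdue.get)

# W1..W10 are best reviewed 1..10 minutes from t=0; the user returns at t=5,
# so W1..W5 are all due and W1, being the most overdue, is chosen.
best = {f"W{i}": float(i) for i in range(1, 11)}
print(resolve_conflict(best, now=5.0))  # prints W1
```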
(042) Therefore, an embodiment of the present invention does not force the user 102 to study at a specified time. Instead, the process optimizer 128 derives an optimized actual learning schedule to fit the user's schedule.
(043) To recapitulate on the system operation for Fig. 1 , the operating program first presents the user 102 with associative pairs retrieved from the contents database 108. The operating program then creates a trial and memory history in the memory engine 120 based upon a user response time and user response content to the associative pairs. Next, the memory engine 120 determines an optimal real-time sequence and order based upon the trial and memory history. Finally, the operating program uses the optimal real-time sequence and order and presents the user 102 with the associative pairs again during an ongoing learning process. For those skilled in the art, by now it should become clear that the present invention system can be implemented as Internet web applications on a web browser, as desktop PC applications, or as applications on various mobile handheld devices such as a personal digital assistant or a cellular phone. Similarly, the present invention system can be implemented to serve multiple users in the case of Internet web or PC applications, or only one or a small number of users in the case of personalized handheld devices. More details of the operation mechanism of the memory engine 120 will be revealed in the following description of a typical workflow.
THE WORKFLOW
(044) The following will first describe a usage session through an embodiment of the present invention system to illustrate the details of the resonant human-computer interaction process achieved by this invention. The next part will describe more details of the core component of this invention - the memory engine 120 and its functions.
(045) Fig. 2 shows a flowchart illustrating one embodiment of the workflow during a user's learning process under the present invention. As such, the workflow can be viewed as a simplified representation of the next level of detail of the operating program. The illustrated workflow is for a common associative learning task, a recall task 210 for associative pair contents directed at learning English vocabulary. Other forms of associative learning tasks will also be described herein. Correspondingly, Fig. 3 illustrates a screenshot of the user interface showing a step of the flowchart of Fig. 2.
(046) In accordance with an embodiment of the present invention, Fig. 2 illustrates a typical workflow of a user: login, build vocabulary, and logout. Upon USER LOGIN 200, the user is identified by the system. The user can then choose a subject matter with specific contents to study with CUSTOMIZE CONTENT 202. To be flexible, the system can be programmed to allow the user to further customize the learning contents, such as adding or deleting some contents, or reordering the contents by alphabetical or semantic criteria.
(047) Upon the user entering the learning process, the memory engine 120 starts to run and determines which word is to be presented in the next trial in order to achieve the best long-term memory results. This step corresponds to DETERMINE WORD FOR NEXT TRIAL 204. As an example, the memory engine 120 selects the word INQUISITIVE for the next trial and it is presented. The user is asked to recall its meaning in PRESENT AND RECALL MEANING 208. Three possible user response contents of the recall follow:
R1 RECALLED AND SURE 209: Recalled its meaning and sure about the answer;
R2 RECALLED BUT UNSURE 211: Recalled but not sure about the answer;
R3 NO IDEA 213: Have no idea what this word means.
(048) Fig. 3 illustrates the corresponding screenshot of the user interface at step 208. The user clicks on one of the three response buttons at the right side to indicate his user response content. If the user response content is R1 RECALLED AND SURE 209 or R2 RECALLED BUT UNSURE 211, the explanation of the word is presented with PRESENT WORD EXPLANATION 216. Next, Fig. 4 illustrates a screenshot of the user interface following step 216 of the flowchart of Fig. 2 wherein R4 and R5 are two different user responses. Here, the user indicates his user response content, namely whether his answer is correct, by clicking one of the two buttons R4 YES 217 and R5 NO 219.
(049) If the user response content in Fig. 3 is R3 NO IDEA 213, the explanation of the word is also presented with PRESENT WORD EXPLANATION 214. Next, Fig. 5 illustrates a screenshot of the user interface following step 214 of the flowchart of Fig. 2. Here, the user response content is simply a click on the continue button (>) R6 CONTINUE 215 to move on to the word for the next trial. Thus, in general the simple term "user response content" actually includes a sequence of logically connected responses from the user. In the above example, the user response content includes two sequential user responses: a first initial response, and a second confirmation response. The initial response is illustrated in Fig. 3 and can have three values (Know, Not sure, Don't know). The confirmation response is illustrated in Fig. 4 and can have two values (Right, Wrong), and applies only when the initial response indicates Know or Not sure. The confirmation response becomes moot when the initial response indicates Don't know.
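The two-stage user response content described above can be sketched as a small grading function. The returned labels are illustrative assumptions; the specification defines only the (initial, confirmation) structure and the mootness of the confirmation after "Don't know".

```python
# Sketch of grading the two-stage response of paragraph (049): an initial
# response (R1/R2/R3) optionally followed by a confirmation (R4/R5).
def grade_response(initial: str, confirmation: str = None) -> str:
    """initial is one of "Know", "Not sure", "Don't know"; confirmation is
    "Right" or "Wrong" unless initial is "Don't know" (then it is moot)."""
    if initial == "Don't know":
        return "failed"            # R3: confirmation moot, user continues
    if confirmation == "Right":
        return "recalled-sure" if initial == "Know" else "recalled-unsure"
    return "failed"                # the user's recalled meaning was wrong
```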
(050) An important remark about the user interface embodiments as illustrated in Fig. 2 through Fig. 5 is that, upon completion of all user responses to the presented word, the present invention system already has sufficient data to determine the user's memory status of the word. In other words, in these cases the presented information on the screenshots and the subsequently received user responses are designed to automatically realize a self-evaluation, by the user, of his memory status of the word. As a drastically contrasting example of a user interface design that does not automatically realize a self-evaluation, the user can alternatively be presented with the word and then asked to type its meaning in a short sentence. While in this case the present invention system can still incorporate an artificial intelligence program for language processing to "grade" the short sentence, thus assessing the user's memory status, the result would in general be less straightforward and less accurate.
(051 ) After each trial, the memory engine 120 saves the associated trial data for the current trial in the user history database 122. Based on the trial data, the memory engine 120 then updates the trial and memory history of the word just processed and, with DETERMINE NEXT REVIEW INTERVAL 222, determines a best review interval after which the same word should be reviewed again. This best review interval is stored in the golden sequences database 126.
(052) The system will then UPDATE PROGRESS INFORMATION 224 and send it out through the user interface 104. The progress information can display the user's real-time progress statistics and trial and memory history calculated from the user history database 122. Examples of the progress statistics are current session progress 240 and overall progress 242. Other examples are the total number of words in the selected subject matter to study and its percentage already tried. Examples of the trial and memory history include a table sequentially listing each word tried versus its cumulative trial number, number of "Know", number of "Not sure", number of "Don't know", number of "Right", and number of "Wrong", etc. After each trial or a certain number of trials, the system will EVALUATE MODEL 228 and ADJUST MODEL 226 if needed. An important remark is that, in practice, a computer-implemented finite model can only approximate, albeit closely, the golden sequence that is the idealized best review intervals for an associative pair. Furthermore, the user's brain can be regularly and uncontrollably subject to numerous other stimuli potentially affecting the memory status of the associative pair, thus its golden sequence. A simple example is that the user has been concurrently reading a story book that has the same word "inquisitive" in it. In essence, within some degree of variation the golden sequence can turn out to be dynamic in practice. Therefore, for each user and for each associative pair the review interval optimizer 124 can be further improved to regularly adjust the associated golden sequence in real-time using the results of the user recalling the associative pair and the trial and memory history from the user history database 122.
In one embodiment, the review interval optimizer 124 can compare a prediction of the dynamic memory model with an actual result from the user recalling an associative pair and adjusts the dynamic memory model accordingly for a strongest buildup of long-term memory. More details of the dynamic memory model adjustment will be described.
(053) Finally, the operating program moves on to the next word for trial unless the user chooses to exit the system via EXIT ? 232 and STOP 236. (054) The user interface 104 can also be driven by the operating program to provide additional memory cues to supplement the associative learning. For example, in the above case of vocabulary learning, the memory cues can be visually, acoustically, or semantically related words; the prefix, root and suffix of a compound word; the picture for a visible concept; the sound for an acoustical concept; or a short story, or even a short movie, to name a few. Fig. 3 includes an alphabetic pronunciation cue 244 for the word "inquisitive" and a clickable speaker icon 246 producing an audible pronunciation of the word "inquisitive" upon activation by the user. In Fig. 4, an example sentence 248 containing the word "inquisitive" is presented to help the memory.
A SIMPLIFIED SYSTEM
(055) The above-described system can be simplified to fit into easy-to-carry handheld devices such as personal digital assistants and cell phones. As most handheld devices have quite limited processing power, memory and peripheral devices, the memory engine can be simplified to handle only one user. As illustrations of implementation on a small display with low image resolution, the corresponding screenshots can be reduced into handheld screenshot 250, handheld screenshot 252 and handheld screenshot 254. To reduce memory and processing power requirements, steps 202, 224, 226 and 228 in Fig. 2 are options which may be left out for simplicity.
THE MEMORY ENGINE
(056) As described above, according to an embodiment of the present invention, the main function of the memory engine 120 is to track, in real-time, the memory status of each user, determine the golden sequence for each word learned, and accordingly optimize the review time intervals to ensure that each word will be reviewed at time intervals that closely correspond to the golden sequence for the word. Stated another way, the memory engine drives the learning process according to the memory status of the learned materials in the user's brain so that the user-computer interaction is resonant. This type of resonant interaction achieves superior memory results that are far beyond those obtainable through presently available regular human learning, such as learning without the aid of a computer or through conventional computer systems which do not have the memory engine technology according to the embodiments of the present invention.
(057) Another advantage of learning with the memory engine is that the process is not just much faster, but also easy and enjoyable. This is because the computer tracks the user's memory status of each associative pair in great detail and automatically delivers the right material for review at the right time. Learning with the memory engine can make the most tedious learning process, such as building vocabulary, easy and fun. This in turn can change the psychology of language learners and build their interest and confidence. (058) Yet another advantage is that, when the human memory mechanism is optimally stimulated, the memory mechanism becomes more robust. Just as a trainee building muscles in the gymnasium under the instruction of a personal trainer gets much stronger than when he trains by himself, learning with the memory engine enhances memory efficiently. Thus, after weeks of regularly using the embodiment of the present system, many users feel they can remember things like phone numbers and addresses much better.
(059) Therefore, according to additional embodiments of the present invention, the system can alternatively be implemented as an exercise machine for the memory mechanism of the user's brain. Embodiments of the present invention can have the following applications:
■ facilitating the development of the memory system of children;
■ enhancing the memory power of adults;
■ preventing or slowing down memory deterioration from normal aging;
■ preventing or slowing down memory loss at the very early stage of various dementias;
■ rehabilitating brain memory mechanism after various types of brain damage.
(060) Yet another advantage is that the learning process is automatic and the required operations of the users are very simple. As the learning process has been optimized for the specific user by the system, the user does not have to worry about the arrangement of the learned items. The user simply responds when instructed by the system as in a computer game situation.
(061) Yet another advantage is that very detailed information regarding the user's memory status of the materials learned and the progress statistics are displayed for the user. The memory building process is thus transparent to the user, giving him a strong feeling of achievement.
(062) Another object of the embodiment of the present invention is that the user does not forget any material learned with the system if it is used regularly. The memory engine will detect any learned material that a user is about to forget and present it to the user for review. In addition, the learning is focused on difficult contents. As difficult contents are more quickly forgotten, the memory engine will arrange for the user to review the difficult contents more frequently.
(063) Yet another advantage is that the user interface for learning is multimedia in nature, which helps the user transfer his learned skills to real life. With multimedia technology, the user engages various sensory modalities in the learning process.
(064) Yet another advantage is that the learning is aimed at building associative memory into habit. Unlike conventional learning of just recalling A given B and vice versa, learning using the present invention system will continue to build the user's long-term memory until his response becomes spontaneous and effortless.

THE REVIEW INTERVAL OPTIMIZER
(065) The review interval optimizer 124 is one of the core components of the memory engine 120 according to an embodiment of the present invention. The review interval optimizer 124 retrieves data from the user history database 122 to obtain the user's current memory status of the present word under trial. It further takes into account the current learning trial results to update the user's memory status, and then determines the best review interval for the next review. This review interval is sent to the sequence database 126, which in turn is used by the process optimizer 128 to arrange an actual learning sequence. The review interval optimizer 124 generates the intervals of the golden sequence for each word and for each user. The golden sequences are the scientific base upon which the process optimizer 128 relies to arrange the actual learning sequence for achieving the best long-term memory results.
DYNAMIC MEMORY MODEL: MEMORY MECHANISM OF INITIAL LEARNING
(066) The following sections describe various aspects of the dynamic memory model, the scientific basis upon which the review interval optimizer 124 relies to determine the best review intervals for the golden sequence. As a related reference, attention is directed to the following research paper:

(067) Lifetime of Human Visual Sensory Memory: Properties and Neural Substrate. Author: Wei Yang. A dissertation submitted to the Department of Psychology at New York University for the degree of Doctor of Philosophy in the year 1999. This paper shows the experimental and theoretical research background, in the area of human memory, of one of the inventors of the present patent application. Some aspects of the memory model in the dissertation were adopted in the model of human memory of the present patent application.
(068) References are made to Fig. 2 through Fig. 5. Suppose a new word A is presented with PRESENT AND RECALL MEANING 208 to a user and the user does not know its meaning B. This means that there is no long-term memory association between A and B, expressed as MA(0) = 0, where the user's long-term memory association (LMA) is denoted by the function of time MA(t), with MA(0) = 0 in this case. As a convention, a higher MA(t) value represents a stronger LMA. Thus, a case of MA(t1) = 8 represents a stronger LMA than a case of MA(t2) = 3, etc. The user responds with R3 NO IDEA 213 to indicate that he does not know this word. Then, the system presents its meaning B with PRESENT WORD EXPLANATION 214. When the user sees both A and B, the co-activation of A and B in the user's brain initiates a short-term memory activation trace (SMT) as a short-term association between A and B, MT(t), where the user's SMT is denoted by the function of time MT(t). As a similar convention, a higher MT(t) value represents a stronger SMT. In the absence of any future stimuli affecting the associative pair [A,B], the SMT generally decays and tapers off toward zero (0) with time (t). As an example, the time course of the persistence of the short-term memory activation trace can be modeled as:
MT(t) = Exp(-t/τs) (1)
• MT(t) is the level of the short-term memory activation trace at time t.
• Exp is the exponential function, the inverse function of the natural logarithm.
• τs is the lifetime of the short-term memory activation trace decay. In this example and for convenience, it is the time it takes for the SMT to decay from its initial full level of 1 to 1/e, where "e" is approximately 2.718.
• Note that the initial memory activation rising process is not included in function (1), as its time course is much shorter than the decay lifetime τs.
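The decay of function (1) can be sketched numerically in Python. This is an illustrative sketch, not part of the patent; the function name smt_level is hypothetical, and the 3-day lifetime is borrowed from the later numerical example.

```python
import math

def smt_level(t, tau_s):
    """Short-term memory activation trace MT(t) = exp(-t / tau_s), per eq. (1).

    t and tau_s share the same time unit (e.g. days)."""
    return math.exp(-t / tau_s)

tau_s = 3.0                               # assumed lifetime of 3 days
print(smt_level(0.0, tau_s))              # full strength 1.0 at t = 0
print(round(smt_level(tau_s, tau_s), 3))  # decayed to 1/e (~0.368) after one lifetime
```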
(069) In time, the SMT consolidates to the LMA MA(t). The time course of the consolidation can be described as:
MA(t) = 1 - exp[-(t - t0)/τc] (2)
• MA(t) is the strength of the long-term memory association at time t.
• τc is the time constant of the consolidation.
• t0 is the delay time of the consolidation process relative to the activation. For simplicity of illustration, the model assumes t0 = 0 and τc = τs.
Thus,
MA(t) = 1 - exp(-t/τs) = ∫MT(t)dt (3)
(070) Notice that, again for simplicity of illustration, a constant scaling factor of τs from the time integration of MT(t) has been ignored in the above. Therefore, the time-integration of MT(t) contributes to the time course of the buildup of MA(t). Let the symbol "∞" denote "infinity"; then exp(-∞/τs) = 0. Thus, an ultimate consolidation of the SMT results in an LMA strength of one (1). Stated within the context of the above example, after the initial learning of a new associative pair [A,B], the LMA between A and B will ultimately reach the level of MA(t) = 1. The following example is instructional:
τs = three (3) days results in: at t = τs = 3 days, MA(t) = 63%; at t = 1.6 × τs = 4.8 days, MA(t) = 80%; at t = 3 × τs = 9 days, MA(t) = 95%; at t = ∞ × τs = ∞ days, MA(t) = 100%.
Thus, while the majority (60% to 90%) of the time-consolidation from the SMT is complete within only a few periods of the decay lifetime τs, one has to wait an extremely long time for the time-consolidation to reach essentially 100%.
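The consolidation time course of function (3) can be checked with a short sketch, assuming τs = 3 days as in the example above; the function name consolidated_fraction is an illustrative choice, not the patent's.

```python
import math

def consolidated_fraction(t, tau_s):
    """Long-term association built up by time t, per eq. (3): 1 - exp(-t/tau_s)."""
    return 1.0 - math.exp(-t / tau_s)

tau_s = 3.0  # days
for multiple in (1.0, 1.6, 3.0):
    pct = 100 * consolidated_fraction(multiple * tau_s, tau_s)
    print(f"t = {multiple * tau_s:.1f} days -> {pct:.0f}% consolidated")
```

Running this reproduces the 63%, 80%, and 95% figures of the example, illustrating why waiting only a few lifetimes captures most, but never all, of the consolidation.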
DYNAMIC MEMORY MODEL: MEMORY MECHANISM OF REPETITION

(071) A review trial is a repeat trial in which the user sees the associative pair [A,B] again. One crucial difference between a review trial and an initial trial is that, by the time of the repetition, both the LMA MA(t) and the SMT MT(t) between A and B are non-zero due to the residual effect of the initial learning. Thus, when the representations of A and B are co-activated in the user's brain again, the residual MT(t) gets recharged from its current level to full strength 1. However, the decay lifetime of this short-term activation, τs, is determined by the current strength of MA(t):
τs = A*Exp(MA(t)) (4)
• Where A is in general an adjustable scaling factor. For simplicity of illustration, set A = 1 here.
(072) Function (4) implies that the lifetime of the activation of an association increases with the LMA MA(t). Note that while illustrated using an exponential function, this memory model is not restricted to this specific functional form. For example, functions (1) and (4) can be expressed as a power function, a polynomial function, a hyperbolic function of limited range, or even a trigonometric function of limited range, etc. Under the premise that all such functional forms should approximately describe the general time course of the user's memory, systematic experimental efforts are nevertheless required to identify which one works better than the others. In any case, the short-term activation now continues to consolidate into the existing LMA MA(t). Thus:
MA(t) = L + {1 - exp[-(t - t0)/τc]} (5)
• L is the current strength of the long-term association.
• MA(t) is the strength of the long-term association at time t.
• τc is the time constant of the consolidation.
• t0 is the delay time of the consolidation process relative to the activation. For simplicity of illustration, the model assumes t0 = 0 and τc = τs.
Thus,
MA(t) = L + {1 - exp(-t/τs)} = L + ∫MT(t)dt (6)
DYNAMIC MEMORY MODEL: DECAY OF LONG-TERM MEMORY
(073) The long-term association is also subject to decay but over a much longer time course:
MA(t) = MA(0)*Exp(-t/τl) (7)
• MA(t) is the strength of the long-term association at time t;
• MA(0) is the strength of the long-term association at time t = 0;
• τl is the lifetime of the long-term connection, with τl >> τs.
Thus, in general the long-term association MA(t) traces a time course that is a combination of the time-consolidation from each short-term activation MT(t) and the long-term decay of MA(t) itself. Therefore, taking the long-term memory decay into consideration, for the best results in long-term memory it is desirable to repeat an associative pair before the decay of its long-term memory sets in. An important remark is that both τs and τl are user and associative pair dependent.
DYNAMIC MEMORY MODEL: GENERATION OF GOLDEN SEQUENCE
(074) The description below explains how to determine the best time to review a word according to the above dynamic memory model, as an embodiment of the present invention. The golden sequence is the sequence of time intervals successively summing up to the best future times to review a word after its initial learning, the 1st repetition, the 2nd repetition, and so on.
(075) The dynamic memory model provides a picture of the temporal pattern of long-term memory change. Following a learning trial, the trial-produced stimulus is consolidated into the long-term memory with time constant τs. After the majority of the consolidation is accomplished, the long-term memory decay gradually starts to dominate with time constant τl. If a repetition occurs before the completion of the consolidation of the previous short-term memory activation trace, the previous trace will be recharged to full strength before a full consolidation. Thus, it is desirable to repeat a trial after the consolidation of the previous activation trace is complete but before the long-term decay starts to set in. However, as was illustrated by a previous numerical example, while the majority (60% to 90%) of the time-consolidation from the SMT is complete within only a few periods of τs, one has to wait an extremely long time for the time-consolidation to reach essentially 100%. By then the long-term decay has already taken its toll. Therefore, a good time to repeat the trial is when the majority, say 60% to 90%, of the time-consolidation from the short-term memory activation trace is complete.
(076) An exemplary single point selection within this range is where the activation has mainly consolidated, about 2/3, at t = τs; hence the decay of the long-term memory has not yet started to set in. Using function (4), the best review time is t = τs = A*Exp(MA(t)). For the first learning trial, MA(t) = 0 before the learning trial. For each of the following repeat trials, MA(t) gets incremented by one (1) following the approximately complete consolidation. Accordingly, the best review time intervals, or the golden sequence, are about:
[A*Exp(0), A*Exp(1), A*Exp(2), ..., A*Exp(n), ...] (8)
where the global scaling factor A can be further adjusted by the review interval optimizer for a strongest buildup of long-term memory. For those skilled in the art, the above exemplary sequence (8) is not and should not be the only way of mathematically expressing the golden sequence. For example, the Exp function can be substituted by other forms of mathematical functions such as a power function, a polynomial function, a hyperbolic function of limited range or even a trigonometric function of limited range, etc. In the present preferred embodiment, sequence (8) is used as the golden sequence format.
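Sequence (8) can be sketched as a short Python helper. The name golden_sequence and the pluggable parameter f, which allows swapping in the alternative functional forms mentioned above, are illustrative assumptions, not the patent's implementation.

```python
import math

def golden_sequence(n_intervals, A=1.0, f=math.exp):
    """Review intervals per sequence (8): [A*f(0), A*f(1), ..., A*f(n-1)].

    f defaults to Exp but may be replaced, e.g. by a power function,
    as the text notes other functional forms are possible."""
    return [A * f(k) for k in range(n_intervals)]

# With A = 1 this yields [1, e, e^2, e^3, ...], i.e. the default sequence (9).
print([round(x, 2) for x in golden_sequence(4)])
```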
DYNAMIC MEMORY MODEL: THE DEFAULT GOLDEN SEQUENCE
(077) As illustrated in the exemplary golden sequence (8), the crucial parameter components of the memory model that determine the golden sequence are the parameter A and the Exp function. While the golden sequence is expected to be associative pair specific and user specific, in practice the present invention system would not know such an individualized golden sequence for a new user and a new word a priori. Hence, when a new user begins his learning process, the user can start with a default golden sequence before the system has a chance to detect the user's individual memory power and the learning difficulty of each individual word for the user. As an embodiment, the default golden sequence consists of the best review intervals for a word with average difficulty learned by a user with average memory power. The default golden sequence can be experimentally and statistically determined from a representative user population before system deployment. Following similar logic, the concept of a pair-specific default golden sequence, defined as the best review time intervals for each specific associative pair learned by a user with average memory power, can also be implemented if so desired. In an exemplary embodiment, set A = 1 second; thus the default golden sequence becomes:

[1, e, e^2, e^3, e^4, ...] (9)
For a new word, the system can simply assign the next review time interval to be e^(n-1) seconds, where n is the trial sequence number. When n = 1, the associative pair is presented for the first time; when n > 1, the corresponding trials are sequential repetitions.
(078) As alluded to before, there are occasions where the golden sequence needs to be further adjusted to obtain the best memory results. Some examples are:
• The user already has some prior memory about a word when it is first presented by the system.
• The word is more difficult or easier than the average.
• The user has memory power that is either above or below the population average.
• During the learning period with the system, the user's other activities, unknown to the system, produced stimuli affecting the associative pair.
How the golden sequence accommodates for these variances will be described next.
PROCESSING OF A WORD IN AN INITIAL TRIAL
(079) In accordance with an embodiment of the present invention, not all the words in this system are completely new to the user. In many cases, the user already has some prior memory of some words through his encounter elsewhere. The system handles this situation by testing each word first and assessing its pre-existing long-term memory status before activating the following learning process.
(080) Thus, in each trial, two events occur. The first event is testing, in which the user's memory status of a word is tested. This provides the system with information about the user's long-term memory status of the presented word, based upon which the system will update the user's memory status after the second event, the learning trial. Concurrently, the system will also compare the user's actual performance with the prediction of the memory model so the memory model can be adjusted if the user's actual performance is too high or too low. The second event is the learning trial, in which the correct answer is presented for the user to learn. Through the learning trial, the short-term memory activation trace is triggered; it further consolidates and contributes to the long-term memory association in time.
(081) As illustrated in the flowchart of Fig. 2, there are five possible response scenarios from a trial. In accordance with the present embodiment, an initial MA(0) value is assigned accordingly:
1: R1 (RECALLED AND SURE 209) followed by R4 (YES 217): MA(0) = 50.
2: R2 (RECALLED BUT UNSURE 211) followed by R4 (YES 217): MA(0) = 8.
3: R3 (NO IDEA 213) followed by R6 (CONTINUE 215): MA(0) = 0.
4: R2 (RECALLED BUT UNSURE 211) followed by R5 (NO 219): MA(0) = 0.
5: R1 (RECALLED AND SURE 209) followed by R5 (NO 219): MA(0) = 0.
(082) As an example of the above assignment, if the user chooses R4 after R2, the system considers his long-term memory association to be at a level equivalent to that of a new word having experienced 8 trials with the system (one learning trial plus seven reviewing trials). While typically these initial values of MA(0) are assigned with a rough estimation, in time the system will work just as well even if these initial values are slightly off from their optimal values. This is because the memory model is self-adjusting in that, after each learning trial, MA(0) is incremented by 1 due to the learning consolidation and the next review time is set to be long enough to permit a major consolidation of the short-term memory activation trace.
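The five scenario-to-MA(0) assignments above can be sketched as a lookup table. The string labels are hypothetical encodings of the R1 through R6 responses of Fig. 2, and the function name initial_ma is illustrative.

```python
# Hypothetical encoding of the five response scenarios of paragraph (081):
# (first response, confirmation) -> initial MA(0) value.
INITIAL_MA = {
    ("RECALLED_AND_SURE", "YES"): 50,
    ("RECALLED_BUT_UNSURE", "YES"): 8,
    ("NO_IDEA", "CONTINUE"): 0,
    ("RECALLED_BUT_UNSURE", "NO"): 0,
    ("RECALLED_AND_SURE", "NO"): 0,
}

def initial_ma(first_response, confirmation):
    """Assign the initial long-term association strength on a word's first trial."""
    return INITIAL_MA[(first_response, confirmation)]

# R2 then R4: treated like a word that has already experienced 8 trials.
print(initial_ma("RECALLED_BUT_UNSURE", "YES"))
```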
ADJUSTMENT OF THE GOLDEN SEQUENCE FROM THE USER RESPONSE
(083) The following embodiment describes an example of adjustment of the memory model from a user response that includes both a user response content and a user response time. With reference again made to Fig. 2 through Fig. 5, suppose the user response content is "R1: RECALLED AND SURE 209" followed by "R4: YES 217". We maintain that if someone really knows a word of average difficulty, he should be able to recall and provide the user response content in 5 seconds. Hence, if the corresponding user response time is 5 seconds, the system can simply adjust MA(0) based upon the previously presented algorithms. But what if the user response time is less than 5 seconds, or more than 5 seconds? If it is more than 5 seconds, the system interprets that the user's knowledge of the word is not as good as a simple "RECALLED AND SURE 209" would have suggested. The system should therefore decrease MA(0) into a new value MA(NEW) from the old value MA(OLD) it would have set had the system ignored the user response time. This idea can be illustrated mathematically as follows:
MA(NEW) = MA(OLD) + min(URT, 10)/5 * ΔMA(0) if URT >= 5
MA(NEW) = MA(OLD) + max(URT, 1)/5 * ΔMA(0) if URT < 5
where
• URT is the user response time in seconds.
• ΔMA(0) denotes the regular increment for MA(0) without considering the user response time.
• min(URT, 10) is a function whose value is the minimum of URT and 10.
• max(URT, 1) is a function whose value is the maximum of URT and 1.
(084) Thus, for URT >= 5, a progressively longer URT results in an MA(NEW) that is progressively higher than MA(OLD), with the effect saturated for URT > 10 to avoid wild adjustments for very long URT, for example URT = 300 seconds. For URT < 5, a progressively shorter URT results in an MA(NEW) that is higher than MA(OLD) by a progressively lower amount, with the effect saturated for URT < 1. It is important to emphasize here that the present invention is not limited to the illustrated specific functions quantifying the co-processing of both user response content and user response time in the adjustment of the memory model. Rather, in a general sense the present invention recognizes that both the user response content and the user response time can be important indicators of the user's memory status of the subject matter. Within this context, it should be understood that numerous embodiments using other functional forms can instead be employed to simultaneously model the user response content and user response time, and these embodiments are still considered to be within the scope of the present invention.
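The adjustment formulas of paragraph (083) can be sketched as follows, transcribing the two equations exactly as given; the function name adjust_ma and its argument names are illustrative assumptions.

```python
def adjust_ma(ma_old, urt, delta_ma):
    """Co-process response content and response time, per paragraph (083).

    urt: user response time in seconds.
    delta_ma: regular increment for MA(0) ignoring response time."""
    if urt >= 5:
        # Saturate at URT = 10 to avoid wild adjustments for very long times.
        return ma_old + min(urt, 10) / 5 * delta_ma
    # Saturate at URT = 1 for very fast responses.
    return ma_old + max(urt, 1) / 5 * delta_ma

print(adjust_ma(8, 5, 1))    # the nominal 5-second response
print(adjust_ma(8, 300, 1))  # a very long response, saturated at URT = 10
```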
(085) Similarly, for other sequences of the user response content, the following exemplary adjustments can be proposed to simultaneously model the user response content and user response time:
RECALLED AND SURE → YES: Adjust MA(OLD) up for short URT.
RECALLED BUT UNSURE → YES: Do the same adjustment as above; adjust MA(OLD) up for small URT.
RECALLED AND SURE → NO: Adjust MA(OLD) down for small URT.
RECALLED BUT UNSURE → NO: No adjustment of MA(OLD).
NO IDEA: Adjust MA(OLD) up for large URT (because the user thinks he knows, but then decides that he does not know after a while).
As already mentioned before, a pre-determined maximum response time can be employed to restrict the user response time beyond which the user response content will be defaulted into a category of no response by the operating program. This no response can be equated to NO IDEA. Of course, the maximum response time can be further made specific to each associative pair and can even be dynamically adjusted afterwards.
ADJUSTMENT OF THE GOLDEN SEQUENCE ACCORDING TO WORD DIFFICULTY
(086) According to equations (8) and (9) on the default golden sequence, the best review time interval for the nth trial is A*Exp(n-1). In practice, there are non-ideal cases where such a best review time interval estimation may not be very accurate:
1. The parameter A of A*Exp(n) may not be accurately determined.
2. The functional form Exp in A*Exp(n) may not be the most appropriate for either the associative pair or the user.
3. The initial value A*Exp(MA(0)+1) for the next review time interval may not be the best estimate.
4. As will be presently described, the system does not restrict the user to review each word at exactly the next review time as determined by the golden sequence. Instead, it permits the user to follow his own study schedule; the system will then arrange the learning sequence accordingly to achieve the best result. Hence, there will be cases wherein words are overdue when being reviewed. That is, the words can actually be reviewed very late compared to their best review time. Consequently, a word's long-term memory status at its review time may not be MA(n-1) as predicted by the model.
(087) Typically, a large amount of experimentation is required to obtain the best estimate of the above functional form and parameters. Therefore, in this embodiment the best estimation is initially set forth based on prior testing experience. Afterwards, the system provides an adjusting mechanism, as illustrated below, to constantly evaluate the actual performance of the memory model and recalibrate the MA(t) value accordingly when the model does not predict the user's performance well. In a repeat trial, MA(t) can be recalibrated by the following rules:
1: R1 (RECALLED AND SURE 209) followed by R4 (YES 217): MA(t) = MA(t).
2: R2 (RECALLED BUT UNSURE 211) followed by R4 (YES 217): MA(t) = MA(t) - 0.5.
3: R3 (NO IDEA 213) followed by R6 (CONTINUE 215): MA(t) = MA(t) - 6.
4: R2 (RECALLED BUT UNSURE 211) followed by R5 (NO 219): MA(t) = MA(t) - 6.
5: R1 (RECALLED AND SURE 209) followed by R5 (NO 219): MA(t) = MA(t) - 6.
Following the calibration, MA(t) is incremented by 1 due to the major consolidation of the short-term memory activation trace accompanying a trial.
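These recalibration rules, followed by the +1 consolidation increment, can be sketched as a lookup plus update; the string labels and the function name recalibrate are hypothetical encodings, not the patent's actual identifiers.

```python
# Recalibration offsets for the five repeat-trial scenarios of paragraph (087).
RECALIBRATION = {
    ("RECALLED_AND_SURE", "YES"): 0.0,
    ("RECALLED_BUT_UNSURE", "YES"): -0.5,
    ("NO_IDEA", "CONTINUE"): -6.0,
    ("RECALLED_BUT_UNSURE", "NO"): -6.0,
    ("RECALLED_AND_SURE", "NO"): -6.0,
}

def recalibrate(ma, first_response, confirmation):
    """Apply the scenario's recalibration, then add 1 for the trial's
    own consolidation of the short-term memory activation trace."""
    return ma + RECALIBRATION[(first_response, confirmation)] + 1.0

# A forgotten word (NO IDEA) at MA(t) = 5 drops to 0 after recalibration.
print(recalibrate(5.0, "NO_IDEA", "CONTINUE"))
```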
OTHER ADJUSTMENTS
(088) Recall the global scaling factor A of the golden sequence as expressed in equation (8). In a basic memory model, A can simply be set equal to 1 second (equation (9)). However, A can be further adjusted by the review interval optimizer for the strongest buildup of long-term memory. An enhanced memory model can even adjust it dynamically, as will be presently illustrated. Take the example of someone young and someone old. In a basic model they use the same A, so, assuming all other learning-related parameters are equal, the system would suggest that they review the word at the same time. Yet apparently, as their age groups typically exhibit quite different memory power, the optimal review time intervals for the two would be quite different. Furthermore, even within the same age group, individuals can exhibit quite different memory power due to factors such as gender, demographic background, experience, the specific associative pair to be learned, etc. Hence a superior approach is to choose a different A for different people and, even for the same person, to further dynamically adjust its value. According to an embodiment of the present invention, a dynamic algorithm for probing the user's memory power and adjusting the memory model accordingly will now be described.
(089) As the parameter A is the global scaling factor of the golden sequence, if its value is not appropriate for a user, the MA(t)-value for many words will need frequent adjustment, with degraded learning performance. Thus, the correctness of the A-value determines the overall learning performance. Hence, the review interval optimizer can compare a prediction of the dynamic memory model with actual results from the user recalling an associative pair and adjust the dynamic memory model accordingly. The review interval optimizer first tracks an error rate computed from the numerous trial results of the user in the user history database. The review interval optimizer can then adjust the dynamic memory model accordingly to maintain the error rate within a predetermined range, say from about 5% to about 10%.
(090) Too high an error rate indicates that the memory model overestimated the memory power of the user, with the user frequently forgetting the subject matter being learned. This leads to repetitive relearning of previously learned yet forgotten subject matter and can cause user frustration. On the other hand, too low an error rate (say 1%) indicates that the memory model underestimated the user's memory power, with the likelihood that many subject matters are redundantly reviewed. This can cause frequent disruption of the user's ongoing long-term memory consolidation process. Thus, the system can evaluate the error rate and adjust the A-value accordingly after a certain number of trials, e.g. between 10 and 100 trials, by the following exemplary rule:
• If error rate > 10% and A > 0, A = A - 0.5.
• If error rate < 5%, A = A + 0.5.
If and when necessary, the functional form Exp can also be replaced with other forms such as a power function to afford more degrees of freedom in the adjustment and hence more accurate results.
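The exemplary A-value rule can be sketched as follows; the function name adjust_a is assumed, and the error rate is expressed as a fraction (0.10 for 10%).

```python
def adjust_a(a, error_rate):
    """Adjust the global scaling factor A to keep the user's error rate
    within roughly 5% to 10%, per the exemplary rule of paragraph (090)."""
    if error_rate > 0.10 and a > 0:
        return a - 0.5  # too many errors: review sooner (shorter intervals)
    if error_rate < 0.05:
        return a + 0.5  # too few errors: review later (longer intervals)
    return a            # within range: leave A unchanged

print(adjust_a(1.0, 0.15))  # error rate too high, A shrinks
print(adjust_a(1.0, 0.01))  # error rate too low, A grows
```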
(091) The above illustrates how the memory model can be adjusted at a global level, thus tailoring it to each individual user. As was already alluded to, at a more detailed level, the memory model can be adjusted for each word as well. For example, the error rate for reviewing each specific word can be calculated and the A-value for that word accordingly adjusted.
THE PROCESS OPTIMIZER
(092) As mentioned before, the system does not restrict the user to review each word at exactly the next review time as determined by the golden sequence. Instead, it permits the user to define his own study schedule; the process optimizer 128 will then arrange the real-time learning sequence based upon the user-defined schedule and his memory status of each word to achieve the best result. In operation, the process optimizer 128 accesses data from the golden sequences database 126 to know the best review times of each word. In combination with the user-defined schedule, the process optimizer 128 then determines which word should be presented in the next trial to achieve the best overall long-term memory results. As its internal operating mechanism, the process optimizer 128 includes a set of algorithms to arrange the real-time trial sequence.
(093) Furthermore, in another embodiment, the process optimizer 128 works in two modes, a learning mode and a review-only mode. The main difference between these two modes is that in the review-only mode, any new associative pairs are blocked from being presented, and the user only reviews the past associative pairs that were learned before. The system updates the trial and memory history and the review interval after the user is presented with and responds to each past associative pair. This can be desirable when the user wants a break for more than a few days, during which he can focus on the past associative pairs learned to avoid massive forgetting. Below are more detailed illustrative embodiments of these two modes:

Learning mode:
• If there is no past associative pair due for review, present a new associative pair using the default order or the order selected by the user.
• If there is only one past associative pair due for review:
1. Present it at the next trial, then update its trial and memory history and memory status.
2. Update its next review time by adding the next review time interval to the previous review time.
• If there are multiple past associative pairs due for review:
1. Select the due item that has the minimum review time.
2. Present it for trial, then update its trial and memory history and the MA(t)-value according to the trial result.
3. Update its next review time by adding the next review time interval to the previous review time.
4. Return to step 1 until no past associative pair is due.

Review-only mode:
1. New associative pairs are blocked. Present only past associative pairs.
2. Select the item that has the minimum review time.
3. Present the item, then adjust the MA(t)-value. The amount of adjustment is reduced by a factor of two.
4. Update its next review time by adding the next review time interval to the previous review time.
5. Repeat from step 2 until the user exits.
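The selection logic shared by the two modes above can be sketched as a single function; the names, data shapes, and scheduling granularity are illustrative assumptions, not the patent's actual implementation of the process optimizer 128.

```python
def next_trial(due_pairs, now, new_pairs, review_only=False):
    """Pick the next associative pair to present, per paragraphs (092)-(093).

    due_pairs: list of (next_review_time, pair_id) for past pairs.
    now: current time on the same clock as the review times.
    new_pairs: iterator of unseen pair ids in default (or user-selected) order.
    """
    overdue = [(t, p) for (t, p) in due_pairs if t <= now]
    if overdue:
        return min(overdue)[1]   # the due item with the minimum review time
    if review_only:
        return None              # new pairs are blocked in review-only mode
    return next(new_pairs, None) # otherwise fall through to a new pair

pairs = [(5.0, "word_a"), (2.0, "word_b"), (9.0, "word_c")]
print(next_trial(pairs, now=6.0, new_pairs=iter(["word_d"])))
```

A real scheduler would also perform the updates of steps 2 through 4 after each trial (recording the trial result, adjusting MA(t), and pushing the pair's next review time forward by its next golden-sequence interval).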
PRELIMINARY RESULTS
(094) Using the present invention as illustrated, Fig. 6 shows some testing results obtained at Northwestern Polytechnic University with 40 adult Chinese students studying in the ESL (English as a Second Language) program. The English words used in the test were the most frequently tested TOEFL (Test Of English as a Foreign Language) words and were mostly new to these students. Most students spent about half an hour per day throughout this test. Each curve terminated with a dot represents the progress of a student. The horizontal axis represents the total training time in hours, whereas the vertical axis represents the number of new words remembered by a student. The dashed straight line represents the average speed of progress amongst the 40 students, 50 new words per hour. Following are a few important remarks on the results:
1 . The learning speed ranged from 30 words per hour to 100 words per hour with an average speed of 50 words per hour.
2. While the student is learning new words, the student does not forget previously learned words because the memory engine will ensure that all the learned words are still in memory before presenting new words to the student.
3. The learning progress is substantially linear. That is, if a student acquired sixty words per hour, the student is likely to maintain this speed up to hundreds of hours of learning.
4. Two students' acquisition speed increased after hours of using the system. This indicates that the students' memory power was enhanced by the system. Furthermore, the users of this system also exhibit better memory when attempting to remember other things such as addresses and phone numbers.
VARIATIONS
(095) The above illustrations describe embodiments of the system and method with a recall task in both learning mode and review only mode. The illustrated system, however, is not the only utilization of the present invention. In another embodiment, the recall task can be substituted with other associative learning tasks as well in either learning mode or review only mode to offer a rich learning experience for the users and to provide broad training on various aspects of the associative learning. By now it should become clear that the learning contents are not limited to vocabulary building. The learning contents, for instance, can be any of the associative pair contents described before. Thus, various embodiments of the present invention may be presented with a combination of any of the associative learning tasks and any of the associative pair contents.
OTHER FORMS OF ASSOCIATIVE LEARNING
(096) An associative learning trial can take on many different forms wherein an associative pair of items is presented to the user to learn and recall. Some of the possible forms of presenting these associative pairs are listed and illustrated below according to additional embodiments of the present invention. Variants of the recall task include but are not limited to:
• Reverse recall task: Present B and ask the user to recall A. Here the user interface and workflow are the same as the recall task already illustrated in Fig. 2 through Fig. 6 as embodiments of the present invention.
• Spelling practice task: The system presents B visually or acoustically then asks the user to type or spell out A. Fig. 7 illustrates a simple workflow for this task according to an embodiment of the present invention.
• Listening-comprehension task: The system presents A acoustically and asks the user to report B. Fig. 8 illustrates a simple workflow for this task according to an embodiment of the present invention.
(097) Other associative learning tasks for presenting associative pairs may include:
• Multiple-choice task: Presenting A, its associate B together with a few non-related items C, D, and E. The user is asked to indicate, among B, C, D, and E, which one is associated with A. Fig. 9 illustrates a simple user interface to implement this task according to an embodiment of the present invention.
• Multiple-choice fill in task: Presenting contexts B for target A and leaving a space for the user to fill in A with a correct one of multiple-choices. Fig. 10 illustrates a workflow and simple user interface to implement this task according to an embodiment of the present invention.
• Game: A plurality of associative pairs [A1, B1], [A2, B2], ..., [An, Bn] can be embedded into computer-game-like scenarios to make the associative learning more fun according to an embodiment of the present invention.
EXTENSION OF THE APPLICATION
(098) According to another embodiment of the present invention, a general way of extending the system is to separate the learning trials from the reviewing trials, and to change the learning trials from a recall task to the more natural learning task of reading. Specifically, whenever the user encounters a word whose meaning the user does not know, a word that the user is not sure how to pronounce, or a sentence that the user cannot understand, the user can simply indicate so through the system interface. The system then stores these user indications and the corresponding associated contents as associative pairs for building the user's associative memory. In general, as a reading task carries a momentum that makes it undesirable to interrupt the reading frequently with reviewing trials, learning in such a general reading system can be embodied with two main working modules: one module facilitates reading the learning materials with assistance for in-process difficulties; the other is the associative memory building module, with which the user systematically builds and consolidates his associative memory for the subject matters found difficult during the reading.
(099) The system can further recommend which working module the user should be in according to a number of preset criteria. For example, when the amount of overdue associative pairs for review by a user is above a certain threshold level, the system will recommend building the associative memory. Otherwise, the system will recommend that the user continue his reading to learn new materials.
(100) While the description above contains many specificities, these specificities should not be construed as limiting the scope of the present invention but as merely providing illustrations of numerous presently preferred embodiments of this invention. For example, the present invention can be integrated with a user's reading activities to systematically handle his vocabulary building during a normal reading process. For another example, using properly simplified subject matters, the user can even be an animal instead of a human.
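The module recommendation criterion of paragraph (099) can be sketched as a single comparison. The following is an illustrative sketch only; the function name and the threshold value of 20 overdue pairs are assumptions, as the text leaves the threshold as a preset criterion.

```python
def recommend_module(overdue_pairs: int, threshold: int = 20) -> str:
    """Recommend a working module per paragraph (099): when the number
    of overdue associative pairs exceeds a preset threshold, recommend
    consolidating them; otherwise recommend continued reading."""
    if overdue_pairs > threshold:
        return "memory_building"   # review overdue associative pairs
    return "reading"               # continue learning new materials
```

In practice the threshold would itself be a tunable preference, since different users tolerate different review backlogs before reading becomes unproductive.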
(101 ) Throughout the description and drawings, numerous exemplary embodiments were given with reference to specific configurations. It will be appreciated by those of ordinary skill in the art that the present invention can be embodied in numerous other specific forms and those of ordinary skill in the art would be able to practice such other embodiments without undue experimentation. The scope of the present invention, for the purpose of the present patent document, is hence not limited merely to the specific exemplary embodiments of the foregoing description, but rather is indicated by the following claims. Any and all modifications that come within the meaning and range of equivalents within the claims are intended to be considered as being embraced within the spirit and scope of the present invention.

Claims

What is claimed is:
1. A computer facilitated system to efficiently increase and strengthen associative memory of a user, comprising: a user interface device capable of presenting multi-media information to the user and receiving a plurality of user responses; a computer processor, coupled to said user interface device, for processing said multi-media information and user responses; a contents database, coupled to said computer processor, for storing subject matter to be presented to the user in the form of an associative pair; a memory engine coupled to said computer processor; and an operating program working with said computer processor wherein firstly the operating program presents the user with a plurality of associative pairs retrieved from the contents database, secondly the operating program creates a trial and memory history in the memory engine based upon a user response to the plurality of associative pairs, thirdly the memory engine determines an optimal real-time sequence and order based upon the trial and memory history and fourthly the operating program, using said optimal real-time sequence and order, presents the user with the plurality of associative pairs again and a plurality of new associative pairs.
2. The system of claim 1 wherein said operating program further drives the user interface device in the presentation of multi-media information and the reception of said plurality of user responses so as to automatically realize a self-evaluation, by the user, of the user's memory status of the plurality of associative pairs.
3. The system of claim 1 wherein each of said user responses further includes:
a user response content; and a user response time defined as the time interval between a first instant when the plurality of associative pairs are presented to the user and a second instant when said user response content is executed by the user.
4. The system of claim 3 wherein said user response time has a predetermined maximum range, called maximum response time, beyond which said user response content will be defaulted into a category of no response by the operating program.
5. The system of claim 4 wherein said maximum response time has a range of from about 0.1 second to about 10 minutes.
6. The system of claim 3 wherein said user response time is measured, by the operating program, with a resolution of from about 10 milliseconds to about 500 milliseconds.
7. The system of claim 1, wherein each of the associative pairs is a pairing of a displayed language word and a definition.
8. The system of claim 1, wherein each of the associative pairs is a pairing of a spoken language word and a spelling.
9. The system of claim 1, wherein each of the associative pairs is a pairing of a displayed language word and a picture.
10. The system of claim 1, wherein each of the associative pairs is, for the case of a blind user, a pairing of a word in Braille and a pronunciation.
11. The system of claim 1, wherein each of the associative pairs is a pairing of a spoken language word and a choice amongst multiple text descriptions.
12. The system of claim 1, wherein each of the associative pairs is a pairing of a question and multiple answers.
13. The system of claim 1, wherein each of the associative pairs is embedded into a game scenario.
14. The system of claim 1, wherein each of the associative pairs is embedded into a user's interactive reading session with the system.
15. The system of claim 1, wherein the contents database is a unit removable from the system and the subject matter is stored on a mobile memory device.
16. The system of claim 1, wherein the memory engine further comprises: a user history database for recording a user profile, a usage chronology defined as a set of chronologically recorded results of the user recalling each associative pair, and the trial and memory history; a review interval optimizer for processing the trial and memory history and determining a best review interval for each associative pair; a sequences database for storing a plurality of best review intervals for a plurality of associative pairs presented to the user; and a process optimizer comprising a set of scheduling algorithms for retrieving data from the sequences database and for determining a next associative pair and its schedule to be presented to the user, thereby achieving an enhanced long-term memory of the user.
17. The system of claim 16, wherein the process optimizer further incorporates a user-defined study schedule into determining a real-time sequence for presenting the associative pairs to the user.
18. The system of claim 16, wherein the system is implemented in the form of a web-based service/application, a desktop application, a program/game running on a wireless device, a cellular phone, a personal digital assistant or a toy.
19. The system of claim 16, wherein the memory engine further drives, via the operating program, the user interface device to display information indicating the user's real-time progress statistics and trial and memory history, calculated from the user history database.
20. The system of claim 16, wherein the review interval optimizer further comprises a dynamic memory model based upon which the review interval optimizer determines, for each user and for each associate pair, a golden sequence defined as the best review time intervals of the repetitions of user trials of said each associate pair for generating the strongest long-term memory for said each user, and sends the golden sequence to the sequences database.
21. The system of claim 20, wherein the dynamic memory model further comprises the following functions:
a short-term memory activation trace MT(t), being a decreasing function of time describing the short-term decay of an association intensity of the associate pair initiated in the user's brain due to each presentation of the associate pair to the user and his subsequent response thereto; and a long-term memory association MA(t), being a function of time describing the long-term course of an association intensity of the associate pair formed in the user's brain as a combination of: a) a time-consolidation from each of the short-term memory activation traces, being a time-integration of MT(t); and b) a long-term decay of MA(t) itself;
wherein the lifetimes of both the short-term decay and the long-term decay are user and associate pair dependent, with the long-term lifetime being much longer than the short-term lifetime.
22. The system of claim 21, wherein the review interval optimizer, considering that the long-term lifetime is much longer than the short-term lifetime, further sets best review time intervals sufficiently long such that the majority of the time-consolidation from each short-term memory activation trace is complete, thereby maximizing the long-term memory association MA(t).
23. The system of claim 22, wherein the review interval optimizer sets best review time intervals sufficiently long such that from about 60% to about 90% of the time-consolidation from each of the short-term memory activation trace is complete.
24. The system of claim 21 wherein the functional form of MT(t) is:
MT(t) = Exp(-t/τs)
where τs is the lifetime of the short-term decay of MT(t).
25. The system of claim 24 wherein the functional form of MA(t) is:
MA(t) = Exp(-t/τl)
where τl is the lifetime of the long-term decay of MA(t), with τl ≫ τs.
26. The system of claim 25, wherein the golden sequence is about: A·Exp(0), A·Exp(1), A·Exp(2), ..., A·Exp(n), ... wherein the parameter A is further adjustable by the review interval optimizer for a strongest buildup of the long-term memory association MA(t).
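The dynamic memory model of claims 21 through 26 can be sketched numerically. The following is a minimal illustration under stated assumptions, not the patented implementation: the function names are invented, the short-term lifetime τs is normalized to 1, and the scale parameter A of the golden sequence is a placeholder for the value the review interval optimizer would tune per user and per pair.

```python
import math

def mt(t: float, tau_s: float) -> float:
    """Short-term memory activation trace MT(t) = Exp(-t/τs) of claim 24."""
    return math.exp(-t / tau_s)

def golden_sequence(a: float, n: int) -> list:
    """First n review intervals of the golden sequence of claim 26:
    A·Exp(0), A·Exp(1), ..., A·Exp(n-1). `a` stands in for the
    optimizer-tuned parameter A."""
    return [a * math.exp(k) for k in range(n)]

def consolidation_fraction(interval: float, tau_s: float) -> float:
    """Fraction of the MT(t) time-integral completed after `interval`:
    since the total integral of Exp(-t/τs) is τs, the completed fraction
    is 1 - Exp(-interval/τs). Claims 22-23 target roughly 60% to 90%."""
    return 1.0 - math.exp(-interval / tau_s)
```

For example, with τs = 1, waiting ln(5) short-term lifetimes before the next review completes 80% of the consolidation, inside the 60% to 90% band of claim 23.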
27. The system of claim 20 wherein, for a new user beginning the learning process, the review interval optimizer further sets the golden sequence for each associate pair to a predetermined pair-specific default golden sequence, defined as the best review time intervals for said each associate pair learned by a user with average memory power.
28. The system of claim 20 wherein, for a new user beginning the learning process, the review interval optimizer further sets the golden sequence for all associate pairs to a predetermined default golden sequence, defined as the best review time intervals for an associate pair with average difficulty and learned by a user with average memory power.
29. The system of claim 20, wherein for each user and for each associate pair the review interval optimizer further regularly adjusts the associated golden sequence in real-time using: the plurality of results of said each user recalling said each specific associative pair and the plurality of trial and memory history from the user history database.
30. The system of claim 20, wherein the review interval optimizer further compares a prediction of the dynamic memory model with an actual result from the user recalling an associative pair and adjusts the dynamic memory model accordingly.
31. The system of claim 20, wherein the review interval optimizer further tracks an error rate computed from the plurality of trial results of the user from the user history database and adjusts the dynamic memory model accordingly to maintain the error rate within a predetermined range, thereby reducing: user frustration from too high an error rate; and frequent disruption of the ongoing long-term memory consolidation process due to redundant reviews of too many associate pairs with too low an error rate.
32. The system of claim 31 , wherein the predetermined range is from about 5 percent to about 10 percent.
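The error-rate maintenance of claims 31 and 32 amounts to a feedback rule on the review schedule: shorten intervals when the user fails too often, lengthen them when reviews are redundantly easy. The sketch below is illustrative only; the function name and the multiplicative adjustment step are assumptions, while the 5% to 10% band comes from claim 32.

```python
def adjust_interval_scale(scale: float, error_rate: float,
                          low: float = 0.05, high: float = 0.10,
                          step: float = 1.1) -> float:
    """Feedback rule in the spirit of claims 31-32: keep the user's
    error rate within [low, high] by scaling review intervals.
    The step factor 1.1 is an assumption, not from the patent."""
    if error_rate > high:
        return scale / step   # too many errors: review sooner
    if error_rate < low:
        return scale * step   # too easy: review later
    return scale              # within band: leave the schedule alone
```

A multiplicative step keeps adjustments proportional to the current schedule, which suits the geometric golden sequence of claim 26 better than an additive offset would.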
33. The system of claim 16, wherein the process optimizer further comprises a learning mode for determining whether to present a past associative pair or a new associative pair to the user under a set of learning scenarios comprising: when no past associative pair is due for review; when only one past associative pair is due for review; and when a plurality of past associate pairs are due for review wherein the learning mode updates the trial and memory history and review interval for each scenario after the user is presented with and responds to a past associative pair or a new associative pair.
34. The system of claim 16, wherein the process optimizer further comprises a review only mode for presenting the user with only a plurality of past associative pairs whereby blocking any new associative pair from being presented, and the review only mode updates the trial and memory history and review interval after the user is presented with and responds to a past associative pair.
35. A computer implemented method for efficiently increasing and strengthening associative memory of a user, comprising:
(a) identifying a user through a profile data inputted by the user through a multi-media user interface and storing said profile data in a user history database;
(b) presenting the user with a plurality of associative pairs from a subject of study selected by the user in a contents database;
(c) requesting the user to recall an associative pair and receiving a corresponding user response through the user interface;
(d) recording the user response and updating the user's trial and memory history for the associative pair in a user history database;
(e) processing the updated trial and memory history, determining a best review interval and sending the best review interval to a sequences database;
(f) storing a plurality of best review intervals in the sequences database and sending an output from the sequences database to a process optimizer;
(g) determining an order and schedule for presenting a next associative pair to the user using a set of scheduling algorithms in the process optimizer and the output from the sequences database; and
(h) presenting the user with a past associative pair or a new associative pair as determined by the process optimizer such that the user achieves a strongest buildup of long-term memory.
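Steps (c) through (h) of the method of claim 35 form one trial loop: solicit a recall, record it, recompute the review interval, and reschedule the pair. A compact sketch follows; the data structure, the exponential interval growth on success (echoing the golden sequence of claim 26), and the reset on failure are all illustrative assumptions rather than the claimed scheduling algorithms.

```python
import math
from dataclasses import dataclass, field

@dataclass
class AssociativePair:
    prompt: str
    answer: str
    due: float = 0.0        # next scheduled review time, step (g)
    interval: float = 1.0   # current best review interval, step (e)
    history: list = field(default_factory=list)  # trial and memory history, step (d)

def run_trial(pair: AssociativePair, now: float, recalled: bool) -> None:
    """One pass through steps (c)-(h): record the response, update the
    trial and memory history, recompute the review interval, and
    reschedule the pair. Interval growth by e on success and reset to
    the base interval on failure are assumptions for illustration."""
    pair.history.append((now, recalled))                          # step (d)
    pair.interval = pair.interval * math.e if recalled else 1.0   # step (e)
    pair.due = now + pair.interval                                # steps (f)-(g)
```

A scheduler implementing step (h) would then simply present whichever pair has the earliest `due` time, falling back to a new pair when nothing is overdue.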
36. The method of claim 35 wherein receiving a corresponding user response further includes:
receiving a user response content; and receiving a user response time defined as the time interval between a first instant when the plurality of associative pairs are presented to the user and a second instant when said user response content is executed by the user.
37. The method of claim 36 wherein receiving a user response time further comprises setting a pre-determined maximum range, called maximum response time, beyond which receiving a user response content will be followed by defaulting it into a category of no response.
38. The method of claim 35, wherein each of said associative pairs has a first and a second element and the user response is selected from the user interface presenting only the first or the second element and a plurality of choices comprising: a sure recall; an unsure recall; and no recall; wherein receiving a user response of sure recall or unsure recall is followed by presenting the user with the first or second element, not presented above on the user interface, and requesting the user to indicate whether his response is correct or not; and wherein receiving a user response of no recall is followed by presenting the user with the first or second element, not presented above on the user interface, and requesting the user to continue to a next associative pair.
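Claim 38 describes a three-choice self-evaluation flow: sure and unsure recalls lead to a correctness check against the hidden element, while no recall simply reveals it and moves on. A minimal dispatch sketch follows; the response tokens and screen identifiers are hypothetical names, not terms from the patent.

```python
def next_screen(response: str) -> str:
    """Self-evaluation flow of claim 38. Screen names are illustrative:
    sure/unsure recalls are followed by revealing the hidden element and
    asking the user whether the recall was correct; no recall reveals
    the element and continues to the next associative pair."""
    if response in ("sure_recall", "unsure_recall"):
        return "show_answer_and_ask_correct"
    if response == "no_recall":
        return "show_answer_and_continue"
    raise ValueError(f"unknown response: {response}")
```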
39. The method of claim 35, further comprising: repeating steps (c) through (h) as a trial; updating the user's trial and memory history in the user history database; adjusting the best review intervals in the sequences database; adjusting the order and schedule for presenting the next associative pair; and storing the user's trial and memory history in the user history database for future retrieval by the user.
40. The method of claim 39, further comprising: activating a learning mode for determining whether to present a past associative pair or a new associative pair to the user under a set of learning scenarios comprising: when no past associative pair is due for review; when only one past associative pair is due for review; and when a plurality of past associative pairs are due for review wherein the learning mode updates the user's trial and memory history and review interval after the user is presented with and responds to a past associative pair or a new associative pair.
41 . The method of claim 39, further comprising: selecting a review only mode for presenting the user with only a plurality of past associative pairs whereby blocking any new associative pair from being presented; and for updating the user's trial and memory history and review interval after the user is presented with and responds to a past associative pair.
42. The method of claim 35, that is further implemented using a web application, a computer application, a wireless device, a personal digital assistant or a toy.
43. A computer facilitated apparatus for efficiently increasing and strengthening associative memory of a user, comprising: an interfacing means for presenting multi-media information to the user and receiving a plurality of user responses; a computer processing means, coupled to said interfacing means, for processing said multi-media information and user responses; a contents storing means, coupled to said computer processing means, for storing subject matter to be presented to the user in the form of an associative pair; a memory engine means coupled to said computer processing means, the memory engine means further comprising: a means for recording and storing a user profile, a chronologically recorded result of the user recalling each associative pair and a trial and memory history; a review interval optimizing means for processing the trial and memory history and determining a best review interval for each associative pair; a sequences storing means for storing a plurality of best review intervals for a plurality of associative pairs presented to the user; and a process optimizing means further comprising a set of scheduling algorithms for determining whether to present, with an associated schedule, a past associative pair or a new associative pair to the user
wherein as the trial and memory history is updated, the best review intervals are updated and the process optimizing means further determines an optimal order and schedule for presenting a past associative pair or a new associative pair to the user, thereby achieving an enhanced long-term memory result.
PCT/US2006/033670 2006-08-28 2006-08-28 A system and method to enhance human associative memory WO2008027033A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2006/033670 WO2008027033A1 (en) 2006-08-28 2006-08-28 A system and method to enhance human associative memory


Publications (1)

Publication Number Publication Date
WO2008027033A1 true WO2008027033A1 (en) 2008-03-06

Family

ID=39136199


Country Status (1)

Country Link
WO (1) WO2008027033A1 (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6884078B2 (en) * 2002-09-17 2005-04-26 Harcourt Assessment, Inc. Test of parietal lobe function and associated methods


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8107605B2 (en) 2007-09-26 2012-01-31 Hill-Rom Sas Memory aid for persons having memory loss
US8727788B2 (en) 2008-06-27 2014-05-20 Microsoft Corporation Memorization optimization platform
WO2016176185A1 (en) * 2015-04-27 2016-11-03 The Regents Of The University Of California Neurotherapeutic video game for improving spatiotemporal cognition
US11207010B2 (en) 2015-04-27 2021-12-28 The Regents Of The University Of California Neurotherapeutic video game for improving spatiotemporal cognition
CN109271618A (en) * 2018-08-30 2019-01-25 山东浪潮通软信息科技有限公司 A kind of form component for remembering user operation habits
CN109271618B (en) * 2018-08-30 2023-09-26 浪潮通用软件有限公司 Form component realization method capable of memorizing user operation habit
CN111126552A (en) * 2019-12-26 2020-05-08 深圳前海黑顿科技有限公司 Intelligent learning content pushing method and system
CN111126552B (en) * 2019-12-26 2023-05-26 深圳前海黑顿科技有限公司 Intelligent learning content pushing method and system


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 06813884; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
NENP Non-entry into the national phase (Ref country code: RU)
122 Ep: pct application non-entry in european phase (Ref document number: 06813884; Country of ref document: EP; Kind code of ref document: A1)