EP0576628A1 - System for interactive performance and animation of prerecorded audiovisual sequences - Google Patents

System for interactive performance and animation of prerecorded audiovisual sequences

Info

Publication number
EP0576628A1
Authority
EP
European Patent Office
Prior art keywords
animation
story
text
mode
interactive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP19920917234
Other languages
German (de)
French (fr)
Inventor
Mark Schlichting
John Baker
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Broderbund Software Inc
Original Assignee
Broderbund Software Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Broderbund Software Inc filed Critical Broderbund Software Inc
Publication of EP0576628A1 publication Critical patent/EP0576628A1/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A system for the sequential performance of a prerecorded story comprising text, animations or video, and audio. The system, preferably implemented on a personal computer, has a continuous mode, in which it performs the story in a linear, uninterrupted fashion; a wait mode, in which it plays animation loops or awaits instructions from the user; and an interactive mode, in which the system performs animations, audio, and other activities tangential to the linear story. Text is displayed on the system's screen together with graphics and/or video. The text is pronounced by the system during the sequential performance, and when the computer is in the interactive mode the user may intervene to have the computer repronounce words appearing in the text. The repeated pronunciation of a word matches its pronunciation in the original context. In both the continuous and interactive modes, words are highlighted as they are pronounced. Certain animations remain inaccessible to the user, even in the interactive mode, until the user takes the required actions; some animations are thus interdependent or nested. The performance of a given animation may depend upon the execution of a particular action, upon the prior performance of another animation, or upon a random factor generated by the computer.

Description

SYSTEM FOR INTERACTIVE PERFORMANCE AND ANIMATION OF PRERECORDED AUDIOVISUAL SEQUENCES
Background of the Invention
This invention relates to interactive audiovisual systems, and in particular to a new system which provides spoken words and other audio for performing a story in a sequential manner, coupled with interactively accessible animations. There are systems presently in use which provide animation to a user, and which play animations in response to input from the user, such as by means of a mouse in a computer system. However, animations which appear in prior story-playing systems have not been sequentially dependent upon one another, and thus have lacked flexibility.
There is also at least one system presently in use which plays an audio recording of text which appears on a display, and which allows the user of the system to select particular words to be spoken. However, the words are not spoken in the context of the text, but rather in a different and contextually irrelevant manner.
Thus, there is a lack of highly flexible and interactive linear story performance systems, which would provide multiple interactive capabilities to the user.
Summary of the Invention
It is therefore an object of this invention to provide an interactive audiovisual system which provides to the user the capability of multiple modes of animation and audio, in response to input from the user.
It is a particular object of the invention to provide such a system which also performs a story stored in memory in a sequential fashion, providing the interactive animations at appropriate places in the story. Several alternative animations may play at random times, or they appear in a particular sequence. In addition, the playing of certain animations may depend upon a series of actions taken by the user. The sequentiality of the performance of the stories is an important feature of the present invention. As discussed in greater detail below, this feature is combined in a new manner with a variety of interactive animation capabilities and audio, including contextual text pronunciation.
Brief Description of the Drawings
Figure 1 is a block diagram of a system according to the invention. Figures 2 through 26 are reproductions of actual screen-capture shots of an exemplary implementation of the invention in a personal computer, illustrating the interactive capabilities of the invention.
Description of the Preferred Embodiments
Figure 1 shows a basic system 5 for implementing the present invention, including a controller 10, a display 20, and an audio output 30. The display 20 and audio output 30 are coupled to the controller 10 by cables 40 and 50, respectively, and are driven by the controller. Input to the controller 10 is made by means of a mouse 60 having a mouse button 65; or input may be made by another conventional input device, such as a keyboard or the like. The system 5 may be implemented by means of a conventional personal computer or other microprocessor-based system.
The controller 10 displays a cursor 70 on a screen 80 of the display 20. As discussed in detail below, the cursor 70 allows the user of the system to interact with the text, animations, and other visual information displayed on the screen 80. The system 5 may be implemented by means of a conventional personal computer or other microprocessor-based system, using one of a variety of applications available for creating textual, graphic and animation or video sequences. Many such computer systems are available, such as the Macintosh™ system by Apple Computer. Performance sequences according to the invention may be implemented using conventional applications and techniques.
Figures 2 through 26 are exemplary screen captures of portions of sequences implemented in one actual embodiment of the present invention. Figure 2 shows a title page 90, which appears on the screen 80 (shown in Figure 1), along with the cursor 70. Figure 2 represents a screen which appears on the display 20, which is not separately shown. A sequence of text, graphics, animations, and audio recordings is stored in a memory in the controller 10. Starting the appropriate application causes the title page 90 to appear. In the preferred embodiment, two interactive buttons are provided on the title page: READ ONLY (100) and INTERACTIVE (110). The user of the system positions the cursor 70 over one of these buttons and clicks the mouse button 65 in order to access the chosen sequence.
Clicking on the READ ONLY button 100 causes a linear, uninterruptible story sequence to be performed, along with animations, graphics and audio. Clicking on the INTERACTIVE button 110 causes essentially the same story sequence to be performed, but in an interactive fashion.
The following description applies to both the READ ONLY and INTERACTIVE modes, with the differences being in the interactive capability of the INTERACTIVE mode. In the INTERACTIVE mode, the user is given the option of interrupting the story at various times to play animations, replay portions of the text, and again to proceed with the performance. This is discussed in detail below. Once the INTERACTIVE button 110 is clicked, the first page 120 of the story is displayed, as shown in Figure 3, and includes graphics 130, text (in this case one sentence) 140, and various "live" or interactive regions on the screen which may be clicked on to access the different features of the invention. These live regions may be defined in any of a variety of conventional manners, such as by predefining coordinates on the screen such that the desired response is generated if the user clicks within those coordinates.
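The coordinate-based "live region" scheme described above can be sketched as a simple hit test. This is a hypothetical illustration; the region names and rectangle values are assumptions for the example, not taken from the patent.

```python
# Each live region is a named rectangle (left, top, right, bottom) in screen
# coordinates; a click inside it triggers that region's response.
LIVE_REGIONS = {
    "lamp":   (40, 60, 90, 140),
    "hat":    (200, 30, 250, 70),
    "poster": (300, 50, 380, 160),
}

def hit_test(x, y):
    """Return the name of the live region containing (x, y), or None."""
    for name, (left, top, right, bottom) in LIVE_REGIONS.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None
```

A click dispatcher would call `hit_test` with the cursor position on each mouse click and run the handler associated with the returned region, ignoring clicks that land outside every rectangle.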
The story is performed according to prerecorded animation and audio sequences which are associated with one another, and which are preferably loaded into RAM in the controller or computer 10. Thus, in Figure 3, a voice pronounces the sentence appearing on the displayed page 120. As the sentence is pronounced, groups of words are highlighted in the pronounced sequence. For example, the phrase "Early in the morning" (indicated by the numeral 150 in Figure 3) is highlighted while those words are pronounced from the audio track in memory, followed by highlighting (not separately shown) of the wording "Mom wakes me" while it is pronounced, and so on. Similarly, in Figure 4, the phrase "Little Monster" (indicated by the numeral 160) is highlighted, and in the system of the invention that phrase is simultaneously pronounced, and associated animation is performed (such as the "Mom" character 170 walking in the door in Figure 4).
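The synchronized highlighting described above amounts to a timing table that maps each word group to its start offset within the sentence's audio track. The sketch below is illustrative; the phrase timings and function names are invented for the example.

```python
SENTENCE = [
    # (word group, start offset in ms within the sentence's audio track)
    ("Early in the morning", 0),
    ("Mom wakes me", 1400),
]

def highlight_schedule(groups):
    """Turn start offsets into (start_ms, end_ms, words) intervals; the last
    group's end is None, meaning it stays highlighted to the end of the track."""
    intervals = []
    for i, (words, start) in enumerate(groups):
        end = groups[i + 1][1] if i + 1 < len(groups) else None
        intervals.append((start, end, words))
    return intervals
```

During playback, the renderer would highlight whichever interval contains the audio track's current position, so the lit phrase tracks the voice.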
In this way, an entire story is performed automatically by the system of the invention. The system is at first in a continuous play mode, in which it will proceed to perform a story, absent any interruption by the user, in a predetermined, linear fashion. The story sequence can then proceed to the end, while allowing the user to interrupt at certain predetermined times to enter an interactive mode, wherein certain tangential sequences are performed.
In a preferred embodiment, the continuous play mode includes a number of performance loops, in which certain animations and sounds are repeated until the user interrupts the mode by clicking the cursor 70 on one of a plurality of "live" regions on the screen 80. The live regions (some of which will be discussed below) are preferably correlated with an identifiable graphic object displayed on the screen, such as the lamp 180, the hat 190, or the poster 200 shown in Figure 4 (as well as Figures 2 through 13). Each sentence is normally pronounced once automatically by the system, while the associated animation sequence is simultaneously performed on the screen. However, the user has the option of interrupting the continuous mode sequence and causing the system to pronounce again any words the user wishes, by clicking the cursor 70 on those words. For example, in Figure 5, the user has clicked on the word "Monster", and that word is repronounced by the system. In the preferred embodiment, the pronunciation in the interactive mode is the same as the pronunciation in the continuous mode; that is, each individual word selected for repeated pronunciation is pronounced by the system exactly as it is in the continuous mode. This can be done by accessing the same stored audio sequences in the interactive mode as in the continuous mode.
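The identical pronunciation in both modes follows from both modes resolving a word to the same stored clip. The sketch below is an assumption-laden illustration; the clip table and filenames are invented.

```python
# One stored clip per word; continuous and interactive modes share the table.
AUDIO_CLIPS = {
    "Monster": "monster.snd",
    "morning": "morning.snd",
}

def pronounce(word, mode):
    """Return the stored clip for `word`. The mode does not change the lookup,
    so a word repronounced interactively sounds exactly as it did in the
    continuous performance."""
    assert mode in ("continuous", "interactive")
    return AUDIO_CLIPS.get(word)
```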
The entire sentence 140 may be repeated by clicking on the repeat button 210, as shown in Figure 6. The word groups, such as group 150, are again highlighted, as they were during their first performance in the continuous mode, and the sentence is pronounced identically. The associated animation may also be reperformed. Figures 7 through 12 demonstrate an interdependent, hierarchical sequence of interactive animations which is achieved by the present invention. In Figure 7, the user clicks the cursor 70 on a bird 220 which appears outside a window 230. Nothing happens, because the bird 220 does not constitute a live region so long as the window 230 is closed. The user may, however, move the cursor 70 to the window 230 and open the window in a conventional drag operation. With a mouse 60 connected to a computer 10, this is typically done by holding down the mouse button 65 while the cursor 70 is in place, and then moving the cursor by dragging the mouse. The window 230 will move along with the cursor, and will remain in the position it occupied when the mouse button 65 was released.
In Figure 9, the window 230 is shown after dragging to a half-open position. With the window in this position, the bird 220 is still not a live, interactive region.
In Figure 10, the window 230 has been dragged to its fully open position. With the window in this position, the bird 220 is now a live region for interactive animation. When the cursor is moved to a position over the bird 220, as shown in Figure 11, and the mouse button 65 is clicked, the bird then performs a predetermined animation sequence, including an associated soundtrack. A frame from this animation (which is a chirping sequence) is shown in Figure 12.
Figures 7 through 12 thus illustrate an interdependent set of sequences which are executed as part of the story performed by the system of the invention. Without first opening the window 230, the user could not get the system to perform the chirping bird sequence. The chirping bird sequence is a sequence which has a condition which must be fulfilled before it can be executed, in this case an action to be taken by the user. Other conditions precedent may be used, including random events generated by the computer or points in the story which must be reached by the system before a region becomes active or before a given interactive sequence becomes available to the user. Figure 13 shows a frame from another animation sequence which may be accessed by the user. When the cursor 70 is clicked on the drawer 240, a mouse 250 appears, executes a sequence with squeaking sounds, and disappears. Any number of similar sequences may be accessed by the user in other live regions defined on the screen, such as those discussed above (the lamp 180, the hat 190, and the poster 200, as well as others).
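The condition-precedent gating of the chirping-bird sequence can be sketched as a region whose liveness depends on a predicate over the story state. The class, state keys, and predicate below are illustrative assumptions, not from the patent.

```python
class GatedRegion:
    def __init__(self, name, condition):
        self.name = name
        self.condition = condition  # callable on story state -> bool

    def is_live(self, state):
        """True when the region may respond to a click."""
        return self.condition(state)

# The bird is live only after the window has been dragged fully open; other
# gates could test a computer-generated random event or how far the story
# has progressed.
bird = GatedRegion("bird", lambda s: s.get("window") == "open")
```

The click dispatcher would simply skip any region whose `is_live` check fails, which is why clicking the bird behind a closed or half-open window does nothing.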
Figure 13 shows another live region, namely a page turn tab 260. When the user clicks on this tab, the system accesses the next page of the story. The second page 270 of the story appears in Figures 14 through 26. The sentence 280 is performed in the same manner as the sentence 140 shown in Figure 4, and again has an associated animation track and other audio which are performed along with the performance of the sentence. A repeat button 290 is provided, and serves the same function as the repeat button 210 shown in Figure 6.
Once the sentence and associated animation and audio have been performed, the system enters a wait mode of repeated animation and sounds, during which the user has the option of clicking on a variety of live regions. This wait mode may be a continuous loop, or a timeout may be provided, so that the story will continue in the continuous mode if the user does not interact with the system within a predetermined period of time.
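The three modes and the optional timeout described above can be sketched as a small state machine. This is a minimal, hypothetical model; the class and event names are illustrative, not from the patent.

```python
class StoryPlayer:
    def __init__(self):
        self.mode = "continuous"  # linear performance of the current sentence

    def on_sentence_done(self):
        # After the sentence and its animation, loop animations in wait mode.
        self.mode = "wait"

    def on_click_live_region(self):
        # A click on a live region enters the interactive mode.
        if self.mode in ("continuous", "wait"):
            self.mode = "interactive"

    def on_sequence_done(self):
        # When an interactive sequence finishes, return to the wait loop.
        if self.mode == "interactive":
            self.mode = "wait"

    def on_timeout(self):
        # If the user does nothing for the predetermined period, resume the
        # linear story in continuous mode.
        if self.mode == "wait":
            self.mode = "continuous"
```

Omitting the timeout handler yields the pure continuous-loop variant of the wait mode, in which the story advances only on a page-turn click.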
Figures 15 through 17 show an interactive animation sequence which is executed in response to the user clicking on the arm 300 of the "Mom" character 170. Mom 170 reaches up with her arm to stir the items in the pan 310, as shown by the progression between Figures 15 and 16, and then moves the pan 310 back and forth on the burner 320, as shown by the progression between the frames represented by Figures 16 and 17. This is another example of the type of interactive animation which can be implemented by the system of the invention, similar to that discussed above relative to Figure 13.
In Figure 14, the text 280 is different in a number of respects from the text 140 shown in Figures 2-13. First, there are graphic representations of eggs, cereal and milk, rather than the words themselves appearing in the text. When the system pronounces the sentence 280, it pronounces the correct names of these objects as they are highlighted. Thus, while the phrase 330 is highlighted as shown in Figure 14, the wording "cereal with milk?" is pronounced.
Secondly, the "eggs" 340 and "cereal" 350 constitute live, interactive regions. If the user clicks on the eggs 340, as in Figure 18, they are animated (again, with audio) to crack open. Mom 170 catches the falling eggs 360 in her pan 310, then turns around and cooks them on the burner 320 as shown in Figure 19. Then, she flings them over her shoulder, as shown in Figure 20, whereupon the "Dad" character 370 catches them in his plate 380, as shown in Figure 21. Dad then serves them to the Little Monster 390, as shown in Figure 22. Thus, the sequence of Figures 18 through 22 illustrates an animation sequence which is executed in response to a command by the user (in this case, a click of the mouse) during a wait mode, upon which the computer enters an interactive mode which interrupts the normal, continuous sequence of the performance of the story. The story may then automatically proceed, or may proceed after a timeout, or another page turn tab (similar to tab 260 shown in Figure 13) may be used.
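The rebus-style sentence of Figure 14, in which picture tokens stand in for words yet carry the word the voice track pronounces, can be modeled as a token list. The token sequence below is an illustrative invention, not the patent's actual sentence.

```python
SENTENCE_TOKENS = [
    ("word", "Do you want"),
    ("picture", "eggs"),      # drawn as an egg graphic, pronounced "eggs"
    ("word", "or"),
    ("picture", "cereal"),
    ("word", "with"),
    ("picture", "milk"),
]

def spoken_wording(tokens):
    """The full wording the voice pronounces: picture tokens contribute their
    object names just as ordinary words do."""
    return " ".join(text for _, text in tokens)

def live_tokens(tokens):
    """Picture tokens double as live, clickable regions in the text."""
    return [text for kind, text in tokens if kind == "picture"]
```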
An interactive animation sequence similar to the foregoing is illustrated in Figures 23 through 25. When the user clicks on the cereal 350, it is served in an animation sequence to the Little Monster 390. In the frame represented by Figure 24, the bowl of cereal 350 has moved beneath the milk 400, which is poured over the cereal. The bowl of cereal 350 then drops to the table 410 in front of the Little Monster 390. After the sequence, the cereal 350 may return to, or reappear at, its original place as in Figure 23, and the interactive animation sequence is again available to the user.
The milk 400 may fulfill the same function as actual words in the text 280 by causing the repronunciation of the word "milk", in the context of the originally pronounced sentence, when the user clicks on it. This is illustrated in Figure 26, where the cursor 70 is clicked on the milk 400, causing it to be highlighted and causing the computer 10 to reperform the pronunciation of the word. As demonstrated by Figures 18 through 26, words and their illustrations may perform similar functions in the system of the invention.
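The equivalence of words and their illustrations can be sketched as a shared lookup: both the word in the text and its pictorial stand-in map to the same pronunciation entry, so clicking either one highlights the region and replays the recorded word. The names and asset paths below are invented for illustration.

```python
# Hypothetical sketch: a text word and its graphic stand-in (e.g. the
# milk 400) share one pronunciation entry, so clicking either region
# highlights it and replays the word as originally pronounced.

pronunciations = {
    "milk": "audio/milk.snd",     # invented asset paths
    "eggs": "audio/eggs.snd",
}

# Live regions on the page; a word and a graphic can share a key.
regions = {
    "word:milk": "milk",
    "graphic:milk": "milk",
    "graphic:eggs": "eggs",
}

def on_click(region, highlight, speak):
    """Highlight the clicked region and replay its word, if any."""
    word = regions.get(region)
    if word is not None:
        highlight(region)                 # visually mark word or graphic
        speak(pronunciations[word])       # replay the recorded pronunciation
    return word
```

Because both kinds of region resolve through the same table, the repronunciation behavior of Figure 26 needs no special case for graphics.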
Thus, the present invention provides a combination of interactive features which have hitherto been unavailable, in the context of a system which performs a story in a predetermined, linear sequence from beginning to end. Coupled with this is the capability to choose between the continuous performance mode, a wait mode, and an interactive mode, in which the user can cause the computer to execute animation sequences, pronounce individual words of the text, repeat portions of the performance, and other functions. Variations on the foregoing embodiments will be apparent in light of the present disclosure, and may be constructed without departing from the scope of this invention.
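The interplay of the continuous mode, wait mode, and interactive mode summarized above can be sketched in outline as follows. This is a minimal sketch with invented names (`perform_story`, `choose_animation`), not the actual implementation; the random selection among animation sequences corresponds to the feature recited in claim 5 below.

```python
import random

# Minimal sketch of the mode structure: each page is performed
# continuously, then the system idles in wait mode; a user command
# enters the interactive mode, and a "turn" command resumes the
# linear performance with the next page.

def choose_animation(sequences, rng=None):
    """Pick one of several candidate animation sequences at random."""
    return (rng or random).choice(sequences)

def perform_story(pages, play, await_command):
    """Perform the story page by page, from beginning to end."""
    for page in pages:
        play(page["performance"])              # continuous mode
        while True:                            # wait mode
            cmd = await_command()
            if cmd == "turn":
                break                          # resume the linear sequence
            if cmd in page["animations"]:      # interactive mode
                play(page["animations"][cmd])
```

The outer loop preserves the predetermined, linear beginning-to-end sequence of the story, while the inner loop admits arbitrarily many interruptions on each page.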

Claims

What is claimed is:
1. A performance system for performing a story stored in memory, the system including a display and an audio system, including: first means for displaying on the display text relating to the story; means for generating pronunciations of words from the text on the audio system; second means for displaying on the display graphics related to the story; means for interactive animation of graphics appearing on the display; and means for carrying out the performance of the story, including said displaying of text and graphics and pronunciation of said text, in a sequential fashion from a beginning of the story to an end of the story.
2. The system of claim 1, further including means for controlling the first and second displaying means and the generating means in each of a first and second mode, wherein: the first mode is a continuous mode for performing the story in a sequential fashion; the controlling means includes means for entering the second mode which is an interactive mode, including executing interruptions to the performance of the story and executing commands during said interruptions, including a command relating to resumption of the performance of the story according to the first mode.
3. The system of claim 2, wherein said commands further include a first animation command for executing a first animation sequence relating to a graphic displayed on the screen.
4. The system of claim 3, wherein said commands further include a second animation command for executing a second animation sequence selected from a plurality of animation sequences, the selected second animation sequence being determined by the first animation sequence.
5. The system of claim 3, wherein said first animation sequence is selected from a plurality of animation sequences, the selected first animation sequence depending upon a random factor determined by the performance system.
6. The system of claim 3, including means for executing the first animation sequence repeatedly.
7. The system of claim 6, including means for ceasing the repeated execution of the first animation sequence.
8. The system of claim 2, wherein said commands further include a pronunciation command for executing pronunciations of individual words of the text.
9. The system of claim 8, wherein the pronunciations of the individual words in the interactive mode are the same as the pronunciations of the words of the text in the continuous mode.
10. The system of claim 1, further including means for generating audio on the audio system relating to graphics appearing on the display.
11. The system of claim 1, further including means for highlighting pronounced portions of the text simultaneously with their pronunciation.
EP19920917234 1991-08-02 1992-07-30 System for interactve performance and animation of prerecorded audiovisual sequences Withdrawn EP0576628A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US74038991A 1991-08-02 1991-08-02
US740389 1991-08-02

Publications (1)

Publication Number Publication Date
EP0576628A1 (en) 1994-01-05

Family

ID=24976304

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19920917234 Withdrawn EP0576628A1 (en) 1991-08-02 1992-07-30 System for interactve performance and animation of prerecorded audiovisual sequences

Country Status (3)

Country Link
EP (1) EP0576628A1 (en)
AU (1) AU2419092A (en)
WO (1) WO1993003453A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2115210C (en) * 1993-04-21 1997-09-23 Joseph C. Andreshak Interactive computer system recognizing spoken commands
DE4322562A1 (en) * 1993-07-07 1995-01-12 Muralt Pierre Damien De Process for the production of films with subtitles
US5741136A (en) * 1993-09-24 1998-04-21 Readspeak, Inc. Audio-visual work with a series of visual word symbols coordinated with oral word utterances
US5938447A (en) * 1993-09-24 1999-08-17 Readspeak, Inc. Method and system for making an audio-visual work with a series of visual word symbols coordinated with oral word utterances and such audio-visual work
US5915256A (en) * 1994-02-18 1999-06-22 Newsweek, Inc. Multimedia method and apparatus for presenting a story using a bimodal spine
US5682469A (en) * 1994-07-08 1997-10-28 Microsoft Corporation Software platform having a real world interface with animated characters
US5930450A (en) * 1995-02-28 1999-07-27 Kabushiki Kaisha Toshiba Recording medium, apparatus and method of recording data on the same, and apparatus and method of reproducing data from the recording medium
GB9606129D0 (en) * 1996-03-22 1996-05-22 Philips Electronics Nv Virtual environment navigation and interaction apparatus
US6040841A (en) * 1996-08-02 2000-03-21 Microsoft Corporation Method and system for virtual cinematography
FR2765370B1 (en) * 1997-06-27 2000-07-28 City Media IMAGE PROCESSING SYSTEM
US6324511B1 (en) * 1998-10-01 2001-11-27 Mindmaker, Inc. Method of and apparatus for multi-modal information presentation to computer users with dyslexia, reading disabilities or visual impairment
WO2009052553A1 (en) * 2007-10-24 2009-04-30 Michael Colin Gough Method and system for generating a storyline

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9303453A1 *

Also Published As

Publication number Publication date
WO1993003453A1 (en) 1993-02-18
AU2419092A (en) 1993-03-02

Similar Documents

Publication Publication Date Title
US5630017A (en) Advanced tools for speech synchronized animation
JP2677754B2 (en) Data processing method
Konopka Planning ahead: How recent experience with structures and words changes the scope of linguistic planning
US5697789A (en) Method and system for aiding foreign language instruction
US5526480A (en) Time domain scroll bar for multimedia presentations in a data processing system
US5692212A (en) Interactive multimedia movies and techniques
US6113394A (en) Reading aid
Krauss et al. Nonverbal behavior and nonverbal communication: What do conversational hand gestures tell us?
US5742779A (en) Method of communication using sized icons, text, and audio
André et al. WebPersona: a lifelike presentation agent for the World-Wide Web
JP4064489B2 (en) Method and system for multimedia application development sequence editor using time event specification function
EP0576628A1 (en) System for interactve performance and animation of prerecorded audiovisual sequences
WO1995020189A1 (en) System and method for creating and executing interactive interpersonal computer simulations
WO2000031613A1 (en) Script development systems and methods useful therefor
JPH1031662A (en) Method and system for multimedia application development sequence editor using synchronous tool
US5889519A (en) Method and system for a multimedia application development sequence editor using a wrap corral
US5999172A (en) Multimedia techniques
US20030023572A1 (en) System and method for logical agent engine
JPH11259501A (en) Speech structure detector/display
Wheeldon et al. Grammatical Encoding for Speech Production
WO2023002300A1 (en) Slide playback program, slide playback device, and slide playback method
Mikovec et al. Visualization of users’ activities in a specific environment
May et al. Characterising structural and dynamic aspects of the interpretation of visual interface objects
JPS63197212A (en) Multi-medium reproducing device
Schwartz Bruce Morrissette, Novel and Film: Essays in Two Genres

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LI LU MC NL SE

17P Request for examination filed

Effective date: 19931202

17Q First examination report despatched

Effective date: 19960514

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 19971104