WO1999050827A1 - Digitized sound management tool for computer implemented multimedia applications - Google Patents

Digitized sound management tool for computer implemented multimedia applications

Info

Publication number
WO1999050827A1
WO1999050827A1 PCT/US1999/007051
Authority
WO
WIPO (PCT)
Prior art keywords
sound
desired sound
multimedia
management module
multimedia application
Prior art date
Application number
PCT/US1999/007051
Other languages
French (fr)
Inventor
Gaston R. Cangiano
William M. Jenkins
Athanassios Protopapas
Original Assignee
Scientific Learning Corp.
Priority date
Filing date
Publication date
Application filed by Scientific Learning Corp. filed Critical Scientific Learning Corp.
Priority to AU31202/99A priority Critical patent/AU3120299A/en
Publication of WO1999050827A1 publication Critical patent/WO1999050827A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Stereophonic System (AREA)

Abstract

A sound management module executes independently of a multimedia application and serves requests for generating sounds for presentation as part of a multimedia display. To generate and play a desired sound, the multimedia application requests that the sound management module generate and play the sound (506). To formulate the request, the multimedia application determines a number of characteristics of the desired sound (502, 504). Such characteristics can include, for example, (i) component sounds which are to be concatenated to form the desired sound, (ii) a duration of a period of silence to be included between the component sounds, and (iii) sound wave characteristics of synthesized sounds. When the request is received, the sound management module creates the desired sound in accordance with the specified characteristics included in the request (512). The sound management module can be created using general purpose computer programming languages and is not limited to the particular multimedia authoring tool by which the multimedia application is created.

Description

DIGITIZED SOUND MANAGEMENT TOOL FOR COMPUTER IMPLEMENTED
MULTIMEDIA APPLICATIONS
SPECIFICATION
FIELD OF THE INVENTION
The present invention relates to computer-implemented multimedia applications and, in particular, to a mechanism for including digitized sound in multimedia applications while reducing storage requirements for such digitized sound.
BACKGROUND OF THE INVENTION
One of the fastest growing areas of computer processing is the development and proliferation of multimedia computer applications. Such multimedia computer applications include multimedia computer games, virtual reality systems, multimedia presentations, and computer implemented training systems. The combination of sound, text, and motion video in multimedia displays presents information in a particularly efficient and powerful manner such that each type of media provides a context for the others.
Multimedia computer applications are relatively new and few multimedia authoring tools are available for creating multimedia applications. A multimedia authoring tool is a computer process by which a user of a computer can create a multimedia application, i.e., a computer process which presents multimedia subject matter using one or more computer display devices including computer video display screens and/or loudspeakers. Currently, the most flexible and useful multimedia authoring tool available is the Director multimedia authoring tool available from Macromedia, Inc. of San Francisco, California.
The Director multimedia authoring tool provides a wide array of powerful multimedia processing modules but is rather limited in the manner in which a multimedia computer application created through the Director multimedia authoring tool can control the multimedia presentation created by the multimedia application. For example, the Director multimedia authoring tool provides a mechanism by which digitized audio can be included in the presentation of multimedia subject matter but generally requires that the digitized audio be complete and stored in a storage device such as a magnetic disk prior to inclusion in a multimedia document. In some multimedia applications, the various sounds which are to be included in the multimedia presentation are difficult to predict prior to presentation of the multimedia subject matter. Such is true, for example, in the case of interactive multimedia applications such as multimedia games and training systems. Therefore, when creating interactive multimedia applications using the Director multimedia authoring tool, all sounds which can potentially be included in the multimedia presentation must generally be created in their entirety and stored on mass storage media, thereby using substantial computer resources.
What is needed is a mechanism by which sound can be included in interactive multimedia presentations while reducing the amount of mass storage required to store such sound.
SUMMARY OF THE INVENTION
In accordance with the present invention, a sound management module executes independently of a multimedia application and serves requests for generating sounds for presentation as part of a multimedia display. The multimedia application is developed using the Macromedia Director multimedia authoring tool in this illustrative embodiment and therefore is not provided with access to sound generation and presentation utilities of an operating system within which the multimedia application executes.
To generate and play a desired sound, the multimedia application requests that a sound management module, which executes concurrently with and independently of the multimedia application, generate and play the sound. To formulate the request, the multimedia application determines a number of characteristics of the desired sound. Such characteristics can include, for example, (i) component sounds which are to be concatenated to form the desired sound, (ii) a duration of a period of silence to be included between the component sounds, and (iii) sound wave characteristics of synthesized sounds. Including periods of silence between component sounds in a composite, desired sound allows the amount of time between component sounds to be precisely controlled. Such is particularly important in interactive multimedia applications used for auditory training.
When the request is received, the sound management module creates the desired sound in accordance with the specified characteristics included in the request. The sound management module can be created using general purpose computer programming languages and is not limited to the particular multimedia authoring tool by which the multimedia application is created. Accordingly, the sound management module can access many sound generation and playback mechanisms provided by the operating system within which the sound management module executes.
The efficiencies afforded by the sound management module are particularly apparent when the specific sounds to be presented to a user during playback of a multimedia display by the multimedia application are dependent upon actions taken by the user, i.e., when the multimedia application is interactive. Ordinarily, every possible sound that could be included in the multimedia display would have to be complete and stored in a computer storage device prior to execution of the multimedia application. Accordingly, substantial storage resources would be required. However, the sound management module can create a composite sound from one or more component sounds in response to a request from the multimedia application. As a result, only a limited number of component sounds are needed to create a wide variety of sounds which can be included in the multimedia display of the multimedia application. Substantially less storage resources are therefore required to store sounds for inclusion in the multimedia display of the multimedia application.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a block diagram of a computer system in which a multimedia application and a sound management module execute in accordance with the present invention.
Figure 2 is a block diagram of a sound database of Figure 1 in greater detail.
Figure 3 is a logic flow diagram of the processing of the sound management module of Figure 1 in accordance with the present invention.
Figure 4 is a block diagram of a composite sound formed by the sound management module of Figure 1 in accordance with the present invention.
Figure 5 is a logic flow diagram illustrating the interaction between the multimedia application and sound management module of Figure 1 in accordance with the present invention.
DETAILED DESCRIPTION
In accordance with the present invention, a sound management module 150 (Figure 1) executes independently of a multimedia application 140 and serves requests for generating sounds for presentation through a loudspeaker 120D as part of a multimedia display. Multimedia application 140 is developed using the Macromedia Director multimedia authoring tool in this illustrative embodiment. Accordingly, multimedia application 140 includes no mechanisms by which sounds can be generated. Instead, multimedia application 140 can only initiate playback of sounds previously formed in their entirety and stored in memory 104.
The efficiencies afforded by sound management module 150 are particularly apparent when the specific sounds to be presented to a user during playback of a multimedia display by multimedia application 140 are dependent upon actions taken by the user, i.e., when multimedia application 140 is interactive. Ordinarily, every possible sound that could be included in the multimedia display would have to be complete and stored in memory 104 prior to execution of multimedia application 140. Accordingly, substantial storage resources within memory 104 would be required. However, sound management module 150 creates a composite sound 170 from one or more component sounds 202A-H (Figure 2) of a sound database 160 in response to a request from multimedia application 140. Therefore, only a limited number of component sounds 202A-H are needed to create a wide variety of sounds which can be included in the multimedia display of multimedia application 140. As a result, substantially less storage resources are required to store sounds for inclusion in the multimedia display of multimedia application 140.
Multimedia application 140 and sound management module 150 execute within a computer system 100 which is shown in Figure 1. Computer system 100 includes a processor 102 and memory 104 which is coupled to processor 102 through an interconnect 106. Interconnect 106 can be generally any interconnect mechanism for computer system components and can be, e.g., a bus, a crossbar, a mesh, a torus, or a hypercube. Processor 102 fetches from memory 104 computer instructions and executes the fetched computer instructions. Processor 102 also reads data from and writes data to memory 104 and sends data and control signals through interconnect 106 to one or more computer display devices 120 and receives data and control signals through interconnect 106 from one or more computer user input devices 130 in accordance with fetched and executed computer instructions.
Memory 104 can include any type of computer memory and can include, without limitation, randomly accessible memory (RAM), read-only memory (ROM), and storage devices which include storage media such as magnetic and/or optical disks. Memory 104 includes multimedia application 140 and sound management module 150, each of which is all or part of one or more computer processes which in turn execute within processor 102 from memory 104. A computer process is generally a collection of computer instructions and data which collectively define a task performed by computer system 100.
Each of computer display devices 120 can be any type of computer display device including without limitation a printer, a cathode ray tube (CRT), a light-emitting diode (LED) display, or a liquid crystal display (LCD). Each of computer display devices 120 receives from processor 102 control signals and data and, in response to such control signals, displays the received data. Computer display devices 120, and the control thereof by processor 102, are conventional.
In addition, loudspeaker 120D can be any loudspeaker and can include amplification and can be, for example, a pair of headphones. Loudspeaker 120D receives sound signals from audio processing circuitry 120C and produces corresponding sound for presentation to a user of computer system 100. Audio processing circuitry 120C receives control signals and data from processor 102 through interconnect 106 and, in response to such control signals, transforms the received data to a sound signal for presentation through loudspeaker 120D.
Each of user input devices 130 can be any type of user input device including, without limitation, a keyboard, a numeric keypad, or a pointing device such as an electronic mouse, trackball, lightpen, touch-sensitive pad, digitizing tablet, thumb wheels, or joystick. Each of user input devices 130 generates signals in response to physical manipulation by the listener and transmits those signals through interconnect 106 to processor 102.
As described above, multimedia application 140 and sound management module 150 execute within processor 102 from memory 104. Specifically, processor 102 fetches computer instructions from multimedia application 140 and sound management module 150 and executes those computer instructions. Processor 102, in executing multimedia application 140 and sound management module 150, retrieves component sounds 202A-H (Figure 2) from sound database 160, forms from component sounds 202A-H composite sound 170 (Figure 1), and includes composite sound 170 in a multimedia presentation in a manner described more completely below.
In one embodiment, multimedia application 140 is a limited hold reaction time procedure test which is described more completely in co-pending U.S. Patent Application S/N 08/ , filed January 23, 1998 by William M. Jenkins, Ph.D. et al. and entitled "Adaptive Motivation for Computer-Assisted Training System" and in U.S. Patent Application S/N 08/ , filed , 1998 by William M. Jenkins, Ph.D. et al. and entitled "Method and Apparatus for Training of Sensory Perceptual System in LLI Subjects" and those descriptions are incorporated herein by reference. Briefly, a user grabs an object using conventional drag-and-drop user interface techniques involving physical manipulation of a user input device. The user holds the object while a phoneme is repeatedly played for the user. For example, the audible sound, "si," can be repeated through loudspeaker 120D. After the first phoneme repeatedly plays a predetermined, randomly selected, number of times, a similar but distinct phoneme, e.g., "sti," is substituted for the repeated phoneme. The user is expected to recognize the substituted phoneme as distinct and so indicate by releasing the object using conventional drag-and-drop techniques. If the user releases the object prior to substitution of the distinct phoneme or fails to release the object within a predetermined period of time following substitution of the distinct phoneme, the user's response is characterized as incorrect. Conversely, if the user releases the object within the predetermined period of time following substitution of the distinct phoneme, the user's response is characterized as correct.
One characteristic of the repeated phonemes that controls a degree of challenge presented to the user in identifying the respective phonemes is the amount of time between phonemes, which is called herein the inter-stimulus interval (ISI). To present a specific degree of challenge to the user, it is important to accurately control the ISI. Controlling synchronization of various components of a multimedia display by multimedia application 140 is particularly difficult. Specifically, playback, through multimedia application 140, of the respective phonemes separated by a delay equal in length to the ISI yields unacceptably large variations in the actual periods of silence between the phonemes. Accordingly, multimedia application 140 plays a single composite sound 170 which includes the respective phonemes and ISI portions 402A-C (Figure 4). Specifically, composite sound 170 is a concatenation of component sound 202A, ISI portion 402A, component sound 202A, ISI portion 402B, component sound 202A, ISI portion 402C, and component sound 202B. Therefore, composite sound 170 includes three occurrences of a first phoneme represented by component sound 202A and a single occurrence of a second phoneme represented by component sound 202B separated by periods of silence as ISI portions 402A-C. Inclusion of ISI portions 402A-C in composite sound 170 precisely controls the length of the period of silence between the respective phonemes.
However, the number of occurrences of the first phoneme during interactive play of the multimedia display of multimedia application 140 varies randomly between a minimum number and a maximum number. In addition, the specific phonemes also vary during playback of the multimedia display of multimedia application 140. Accordingly, forming composite sounds analogous to composite sound 170 for all possible permutations of component sounds and pseudo-random number of occurrences of the first phoneme and various ISI values would require an inordinate amount of storage resources of computer system 100.
To reduce the amount of storage resources required to present a wide variety of precisely controlled phoneme sequences to the user, sound management module 150 forms composite sound 170 upon request by multimedia application 140 from component sounds 202A-H. In one embodiment, sound management module 150 is an Xtra as used by the Macromedia Director multimedia authoring tool. Xtras are known and are described only briefly herein for completeness. Sound management module 150 is dynamically loaded and executed in response to a request by multimedia application 140. Sound management module 150 shares a memory address space with multimedia application 140 within memory 104 but has its own execution state and is scheduled for execution within computer system 100 concurrently with and independently of multimedia application 140.
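The Xtra mechanism itself is specific to the Director authoring tool; as a rough, hedged analogy only, the relationship between a requesting application and an independently scheduled, request-serving sound module can be sketched with a worker thread and a request queue. All names here (request_queue, sound_worker, handle_request) are illustrative assumptions, not identifiers from the patent.

```python
import queue
import threading

def handle_request(request: dict) -> None:
    # Placeholder for the composite-sound construction sketched after
    # logic flow diagram 300 below.
    print("serving sound request:", request)

# Hypothetical request queue shared by the multimedia application (producer)
# and the sound-management stand-in (consumer).
request_queue: "queue.Queue[dict]" = queue.Queue()

def sound_worker() -> None:
    # Runs concurrently with and independently of the requesting code,
    # loosely analogous to the dynamically loaded, independently scheduled Xtra.
    while True:
        request = request_queue.get()   # block until a request arrives
        if request is None:             # sentinel value shuts the worker down
            break
        handle_request(request)
        request_queue.task_done()

threading.Thread(target=sound_worker, daemon=True).start()
request_queue.put({"first": "si", "second": "sti", "repeat": 3, "isi_ms": 500})
```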
To create composite sound 170, multimedia application 140 sends a request to sound management module 150. The request includes identification of first and second component sounds, e.g., component sounds 202A-B, a number of times the first component sound is to be repeated, and an amount of time of silence between the component sounds. In response to the request, sound management module 150 forms composite sound 170 in a manner illustrated as logic flow diagram 300 (Figure 3).
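As a minimal sketch of one possible shape for such a request (the class and field names are assumptions for illustration, not taken from the patent), the three items named above might be carried as:

```python
from dataclasses import dataclass

@dataclass
class CompositeSoundRequest:
    # One possible shape for the request described above; names are assumed.
    first_sound_id: str   # identifier of the repeated component sound, e.g. "si"
    second_sound_id: str  # identifier of the trailing component sound, e.g. "sti"
    repeat_count: int     # number of occurrences of the first component sound
    isi_ms: int           # silence, in milliseconds, between component sounds

request = CompositeSoundRequest("si", "sti", repeat_count=3, isi_ms=500)
```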
In step 302, sound management module 150 (Figure 1) clears composite sound 170. Sound management module 150 can clear composite sound 170 by deleting any data stored in composite sound 170 or, alternatively, by creating a new, initially empty composite sound 170 within memory 104.
Loop step 304 (Figure 3) and next step 310 form a loop in which each occurrence of each component sound specified in the request received from multimedia application 140 (Figure 1) is processed. During each iteration of the loop of steps 304-310 (Figure 3), the particular occurrence of a component sound is referred to as the subject component sound. For each occurrence of each component sound, processing transfers to step 306.
In step 306, sound management module 150 (Figure 1) appends the subject component sound to composite sound 170. In this illustrative example, sound management module 150 appends a first occurrence of component sound 202A (Figure 4) to composite sound 170. In step 308 (Figure 3), sound management module 150 (Figure 1) appends an ISI of the duration specified in the request received from multimedia application 140, e.g., ISI portion 402A (Figure 4), if the subject component sound is not the last occurrence of the last component sound. Processing transfers through next step 310 (Figure 3) to loop step 304 in which the next occurrence of the component sounds is processed according to the loop of steps 304-310. In this illustrative embodiment, sound management module 150 (Figure 1) appends two more occurrences of component sound 202A, each of which is followed by a respective ISI portion, i.e., ISI portions 402B-C, and a single occurrence of component sound 202B.
After all occurrences of all component sounds have been processed according to the loop of steps 304-310 (Figure 3), processing transfers to step 312 in which sound management module 150 (Figure 1) plays composite sound 170 by sending composite sound 170 and control signals to audio processing circuitry 120C for playback on loudspeaker 120D. After step 312 (Figure 3), processing according to logic flow diagram 300 completes.
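A minimal sketch of the loop of steps 302-312, assuming the component sounds are available as equal-format 16-bit PCM arrays keyed by identifier; the dictionary, sample rate, and function names are assumptions rather than details from the patent, and playback (step 312) is left to whatever audio API is available.

```python
import numpy as np

SAMPLE_RATE = 22_050  # assumed sample rate shared by all component sounds

def build_composite(component_ids, isi_ms, sound_database):
    """Steps 302-310: start from an empty composite, append each occurrence of a
    component sound, and append an ISI of silence after all but the last one."""
    composite = np.zeros(0, dtype=np.int16)                              # step 302: clear
    silence = np.zeros(int(SAMPLE_RATE * isi_ms / 1000), dtype=np.int16)
    for i, sound_id in enumerate(component_ids):                         # loop 304-310
        composite = np.concatenate([composite, sound_database[sound_id]])  # step 306
        if i < len(component_ids) - 1:                                   # step 308
            composite = np.concatenate([composite, silence])
    return composite                                                     # step 312 plays it

# Three occurrences of "si" and one of "sti" with 500 ms ISIs, given a dictionary
# of 16-bit PCM arrays: build_composite(["si", "si", "si", "sti"], 500, sounds)
```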
Thus, sound management module 150 (Figure 1) produces composite sound 170 from component sounds 202A-B according to parameters specified by multimedia application 140 in response to a request received from multimedia application 140. As a result, all composite sounds which can be specified by varying the length of ISI portions 402A-C (Figure 4) and/or by varying the number of times component sound 202A is repeated can be derived by sound management module 150 (Figure 1) from component sounds 202A-B (Figure 2). Therefore, only component sounds 202A-B need to be stored persistently in memory 104. In addition, component sounds 202A-B require substantially less storage space than any one variation of composite sound 170. Accordingly, substantial storage resources are saved by dynamic creation of composite sound 170 in accordance with the present invention.
While it is described herein that composite sound 170 includes a number of occurrences of a first component sound and a single occurrence of a second component sound, it is appreciated that composite sound 170 can be formed from any combination of component sounds 202A-H (Figure 2) and can even be created as a synthesized sound from a number of sound-wave characteristics specified by multimedia application 140 (Figure 1). For example, component sounds 202A-H of sound database 160 can be individual words which can be concatenated with one another by sound management module 150 to form sentences. Multimedia application 140 can specify a particular sentence by identifying, in sequence, the particular ones of component sounds 202A-H to be concatenated to form composite sound 170. In another embodiment, multimedia application 140 sends a request to sound management module 150 to generate and play a synthesized sound. The request specifies a number of characteristics of the sound. For example, multimedia application 140 specifies a pair of frequency sweeps by specifying for each of the sweeps a starting frequency, an ending frequency, and a duration, and specifying as an ISI a period of time to elapse between the two frequency sweeps. In response to such a request, sound management module 150 constructs composite sound 170 to include (i) a first frequency sweep of the first duration from the first starting frequency linearly progressing to the first ending frequency, (ii) a period of silence of the specified duration as an ISI, and (iii) a second frequency sweep of the second duration from the second starting frequency linearly progressing to the second ending frequency.
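A hedged sketch of the synthesized-sound case described above: two linear frequency sweeps separated by an ISI of silence, each sweep specified by a starting frequency, an ending frequency, and a duration. The sample rate, amplitude, parameter values, and the linear-chirp formulation are illustrative assumptions; the patent does not give a synthesis formula.

```python
import numpy as np

SAMPLE_RATE = 22_050  # assumed output sample rate

def linear_sweep(f_start, f_end, duration_s):
    """Sine tone whose instantaneous frequency moves linearly from f_start to f_end."""
    t = np.linspace(0.0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    # Phase of a linear chirp: the integral of the instantaneous frequency.
    phase = 2 * np.pi * (f_start * t + (f_end - f_start) * t ** 2 / (2 * duration_s))
    return (0.5 * 32767 * np.sin(phase)).astype(np.int16)

def sweep_pair(first, second, isi_ms):
    """First sweep, a period of silence equal to the ISI, then the second sweep."""
    silence = np.zeros(int(SAMPLE_RATE * isi_ms / 1000), dtype=np.int16)
    return np.concatenate([linear_sweep(*first), silence, linear_sweep(*second)])

# e.g. an upward 500->1000 Hz sweep and a downward 1000->500 Hz sweep, 250 ms apart
composite = sweep_pair((500, 1000, 0.1), (1000, 500, 0.1), isi_ms=250)
```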
Sound management module 150 enables a multitude of various complex sounds with accurate timing and synchronization parameters such as ISIs to be prepared from a relatively small number of component sounds 202A-H or by specification of a relatively small number of characteristics of the complex sound. Therefore, the amount of storage resources required to store sufficient audio data to represent all possible sounds to be played by multimedia application 140 is substantially reduced.
The interaction between multimedia application 140 and sound management module 150 is illustrated in logic flow diagram 500 (Figure 5). In step 502, multimedia application 140 determines the characteristics of a desired sound. The particular sound desired by multimedia application 140 depends upon the particular behavior designed to be exhibited by multimedia application 140 according to the computer instructions of multimedia application 140. In one embodiment, the desired sound is determined by generating a pseudo-random number between three and seven to represent the number of times a first phoneme, e.g., "si," is to be played before a second phoneme, e.g., "sti," is to be played and by determining an appropriate ISI according to a user's past performance in recognizing the two phonemes. The desired sound thus includes the pseudo-random number of occurrences of the first phoneme followed by a single occurrence of the second phoneme with pauses therebetween in accordance with the appropriate ISI. Of course, multimedia application 140 can determine that other complex sounds, such as sentences and/or synthesized sounds, are desired.
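A small sketch of how step 502 might be realized, assuming the three-to-seven repetition range described above and a hypothetical function that maps the user's past performance to an ISI; the adaptive rule itself is an assumption, since the patent does not specify it.

```python
import random

def choose_isi_ms(past_correct_rate: float) -> int:
    # Hypothetical adaptive rule: shorter (harder) ISIs as the user's accuracy improves.
    clamped = min(max(past_correct_rate, 0.0), 1.0)
    return int(1000 - 600 * clamped)

repeat_count = random.randint(3, 7)              # pseudo-random occurrences of "si"
isi_ms = choose_isi_ms(past_correct_rate=0.8)    # ISI chosen from past performance
component_ids = ["si"] * repeat_count + ["sti"]
# component_ids and isi_ms are then packaged into the request of step 504.
```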
In step 504 (Figure 5), multimedia application 140 formulates a request which includes all necessary components of the request, including specification of one or more of component sounds 202A-H if needed. In step 506 (Figure 5), multimedia application 140 (Figure 1) sends the formulated request to sound management module 150. The interface by which multimedia application 140 sends and sound management module 150 receives the request is known and conventional. In step 508 (Figure 5), multimedia application 140 (Figure 1) continues processing while sound management module 150 executes concurrently and independently.
Sound management module 150 receives the request in step 510 (Figure 5). In step 512, sound management module 150 (Figure 1) generates composite sound 170 in accordance with the request and plays composite sound 170 through loudspeaker 120D. In one embodiment, sound management module 150 generates and plays composite sound 170 in the manner described above with respect to logic flow diagram 300 (Figure 3).
The above description is illustrative only and is not limiting. The present invention is limited only by the claims which follow.

Claims

What is claimed is:
1. A method for including a desired sound in a multimedia presentation by a multimedia application created by a multimedia authoring tool, the method comprising: forming a request which specifies one or more characteristics of the desired sound; and sending the request to a sound management module which executes concurrently with the multimedia application in such a manner that causes the sound management module to create the desired sound.
2. The method of Claim 1 wherein the one or more characteristics of the desired sound include one or more component sounds of the desired sound.
3. The method of Claim 2 wherein the one or more characteristics of the desired sound further include a duration of a period of silence to be included between respective ones of the one or more component sounds.
4. A method for including a desired sound in a multimedia presentation by a multimedia application created by a multimedia authoring tool, the method comprising: receiving, within a sound management module which executes concurrently with the multimedia application, a request from a multimedia application wherein the request specifies one or more characteristics of a desired sound; and constructing the desired sound in accordance with the one or more characteristics.
5. The method of Claim 4 further comprising: playing the desired sound.
6. The method of Claim 4 wherein the one or more characteristics of the desired sound include one or more component sounds of the desired sound.
7. The method of Claim 4 wherein the one or more characteristics of the desired sound include two or more component sounds of the desired sound; further wherein constructing the desired sound comprises concatenating the two or more component sounds to form the desired sound.
8. The method of Claim 7 wherein the one or more characteristics of the desired sound further include specification of one or more periods of silence to be placed between the two or more component sounds; further wherein concatenating the two or more component sounds includes concatenating the two or more component sounds with one of the one or more periods of silence concatenated between respective ones of the two or more component sounds to form the desired sound.
9. A computer readable medium useful in association with a computer which includes a processor and a memory, the computer readable medium including computer instructions which are configured to cause the computer to include a desired sound in a multimedia presentation by a multimedia application created by a multimedia authoring tool by: forming a request which specifies one or more characteristics of the desired sound; and sending the request to a sound management module which executes concurrently with the multimedia application in such a manner that causes the sound management module to create the desired sound.
10. The computer readable medium of Claim 9 wherein the one or more characteristics of the desired sound include one or more component sounds of the desired sound.
11. The computer readable medium of Claim 10 wherein the one or more characteristics of the desired sound further include a duration of a period of silence to be included between respective ones of the one or more component sounds.
12. A computer readable medium useful in association with a computer which includes a processor and a memory, the computer readable medium including computer instructions which are configured to cause the computer to include a desired sound in a multimedia presentation by a multimedia application created by a multimedia authoring tool by: receiving, within a sound management module which executes concurrently with the multimedia application, a request from a multimedia application wherein the request specifies one or more characteristics of a desired sound; and constructing the desired sound in accordance with the one or more characteristics.
13. The computer readable medium of Claim 12 wherein the computer instructions are further configured to cause the computer to include a desired sound in a multimedia presentation by a multimedia application created by a multimedia authoring tool by: playing the desired sound.
14. The computer readable medium of Claim 12 wherein the one or more characteristics of the desired sound include one or more component sounds of the desired sound.
15. The computer readable medium of Claim 12 wherein the one or more characteristics of the desired sound include two or more component sounds of the desired sound; further wherein constructing the desired sound comprises concatenating the two or more component sounds to form the desired sound.
16. The computer readable medium of Claim 15 wherein the one or more characteristics of the desired sound further include specification of one or more periods of silence to be placed between the two or more component sounds; further wherein concatenating the two or more component sounds includes concatenating the two or more component sounds with one of the one or more periods of silence concatenated between respective ones of the two or more component sounds to form the desired sound.
17. A computer system comprising: a processor; a memory operatively coupled to the processor; and a multimedia application (i) which executes in the processor from the memory and (ii) which is created by a multimedia authoring tool and (iii) which, when executed by the processor, causes the computer to include a desired sound in a multimedia presentation by a multimedia application created by a multimedia authoring tool by: forming a request which specifies one or more characteristics of the desired sound; and sending the request to a sound management module which executes concurrently with the multimedia application in such a manner that causes the sound management module to create the desired sound.
18. The computer system of Claim 17 wherein the one or more characteristics of the desired sound include one or more component sounds of the desired sound.
19. The computer system of Claim 18 wherein the one or more characteristics of the desired sound further include a duration of a period of silence to be included between respective ones of the one or more component sounds.
20. A computer system comprising: a processor; a memory operatively coupled to the processor; and a sound management module (i) which executes in the processor from the memory and (ii) which, when executed by the processor, causes the computer to include a desired sound in a multimedia presentation by a multimedia application which is created by a multimedia authoring tool and which executes concurrently with the sound management module by: receiving a request from a multimedia application wherein the request specifies one or more characteristics of a desired sound; and constructing the desired sound in accordance with the one or more characteristics.
21. The computer system of Claim 20 wherein the sound management module further causes the computer to include a desired sound in a multimedia presentation by a multimedia application created by a multimedia authoring tool by: playing the desired sound.
22. The computer system of Claim 20 wherein the one or more characteristics of the desired sound include one or more component sounds of the desired sound.
23. The computer system of Claim 20 wherein the one or more characteristics of the desired sound include two or more component sounds of the desired sound; further wherein constructing the desired sound comprises concatenating the two or more component sounds to form the desired sound.
24. The computer system of Claim 23 wherein the one or more characteristics of the desired sound further include specification of one or more periods of silence to be placed between the two or more component sounds; further wherein concatenating the two or more component sounds includes concatenating the two or more component sounds with one of the one or more periods of silence concatenated between respective ones of the two or more component sounds to form the desired sound.
PCT/US1999/007051 1998-03-31 1999-03-30 Digitized sound management tool for computer implemented multimedia applications WO1999050827A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU31202/99A AU3120299A (en) 1998-03-31 1999-03-30 Digitized sound management tool for computer implemented multimedia applications

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US5314198A 1998-03-31 1998-03-31
US09/053,141 1998-03-31

Publications (1)

Publication Number Publication Date
WO1999050827A1 true WO1999050827A1 (en) 1999-10-07

Family

ID=21982198

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/007051 WO1999050827A1 (en) 1998-03-31 1999-03-30 Digitized sound management tool for computer implemented multimedia applications

Country Status (2)

Country Link
AU (1) AU3120299A (en)
WO (1) WO1999050827A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5530898A (en) * 1991-03-13 1996-06-25 Casio Computer Co., Ltd. Digital recorder for storing audio data on tracks with specific operation modes inputted manually where soundless portion data is inserted based on respective operation modes
US5659793A (en) * 1994-12-22 1997-08-19 Bell Atlantic Video Services, Inc. Authoring tools for multimedia application development and network delivery
US5713021A (en) * 1995-06-28 1998-01-27 Fujitsu Limited Multimedia data search system that searches for a portion of multimedia data using objects corresponding to the portion of multimedia data

Also Published As

Publication number Publication date
AU3120299A (en) 1999-10-18

Similar Documents

Publication Publication Date Title
US10656718B2 (en) Device and method for outputting a series of haptic effects defined in a timeline effect definition
US8036766B2 (en) Intelligent audio mixing among media playback and at least one other non-playback application
US7096416B1 (en) Methods and apparatuses for synchronizing mixed-media data files
US5913258A (en) Music tone generating method by waveform synthesis with advance parameter computation
André et al. Perceval: a computer-driven system for experimentation on auditory and visual perception
WO2005001661A3 (en) Method and apparatus and program storage device including an integrated well planning workflow control system with process dependencies
WO2020108102A1 (en) Vibration method, electronic device and storage medium
JP2020174339A (en) Method, device, server, computer-readable storage media, and computer program for aligning paragraph and image
EP3462442A1 (en) Singing voice edit assistant method and singing voice edit assistant device
Park et al. A physics-based vibrotactile feedback library for collision events
Schertenleib et al. Conducting a virtual orchestra
WO1999050827A1 (en) Digitized sound management tool for computer implemented multimedia applications
Petit et al. Composing and Performing Interactive Music using the HipHop.js language
US11024340B2 (en) Audio sample playback unit
KR102395540B1 (en) Device and method for storing and playing simulation files
US6728664B1 (en) Synthesis of sonic environments
JPH09244650A (en) Musical sound synthesizing device and method
JP2005339693A (en) Disk information display device
JP2005099264A (en) Music playing program
CN104298435A (en) Input interface processing method and device
US6317123B1 (en) Progressively generating an output stream with realtime properties from a representation of the output stream which is not monotonic with regard to time
WO1999051020A2 (en) Animation synchronization for computer implemented multimedia applications
CN112734940B (en) VR content playing modification method, device, computer equipment and storage medium
JPH0553705A (en) Operating method for information processor
CN113724673B (en) Method for constructing rhythm type editor and generating and saving rhythm by rhythm type editor

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM HR HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
NENP Non-entry into the national phase

Ref country code: KR

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase