EP1365386A1 - Digital sound management


Info

Publication number: EP1365386A1
Application number: EP02425318A
Authority: EP (European Patent Office)
Prior art keywords: sound, time, digital sound, fragment, sound data
Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: German (de), English (en)
Inventor: Alessandro Nannicini
Current Assignee: Jinglebell Communication Srl (the listed assignees may be inaccurate)
Original Assignee: Jinglebell Communication Srl
Application filed by Jinglebell Communication Srl
Priority to: EP02425318A

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00: Details of electrophonic musical instruments
    • G10H1/0033: Recording/reproducing or transmission of music for electrophonic musical instruments
    • G10H1/0041: Recording/reproducing or transmission of music for electrophonic musical instruments in coded form
    • G10H1/0058: Transmission between separate instruments or between individual components of a musical system

Definitions

  • the present invention relates to the field of digital sound management in an electronic processor system (briefly, a computer).
  • a first technique consists of sending, from the server to the client, a whole sound passage (music or speech) split up into packets, as is done for transmission of data of any kind whatsoever via the Internet.
  • the packets, which are structured and sent in a sequential order by the server but, because of the very nature of the Internet, not necessarily received in the same sequential order by the client, are reconstructed on the client computer before the passage is played out. More specifically, the client computer waits until receipt of all the packets has been completed, reconstructs the digital sound file by sequentially connecting the packets, and only then does it play out the sound file without any breaks in it.
  • the aforesaid technique, with or without compression, is well suited for sending music or other digital sound files to a user (at the client computer) who deliberately decides to listen to them, and who is therefore willing to wait for the whole time required for all the packets making up the digital sound file to be received (downloading). It must be pointed out, however, that the time required depends on many factors, including the speed of the physical link, the sending speed (related to the workload of the server), the speed of transmission (related to the receiving and forwarding times of the various different computers along the path that each packet follows within the network) and, to a certain extent, the speed of reception (conditioned by the workload of the client).
  • the aforesaid technique is not suitable for sending a digital sound file regardless of the will of the user to listen to it, such as background music associated with a web page. This is because there is a risk of the user navigating elsewhere before the process of reception of all the packets has been completed, and therefore of not enjoying the digital sound file associated with that web page being played out. It is obvious, however, that providing a soundtrack for web pages is desirable in order to make them more appealing.
  • plug-ins are genuine programs that are installed in the client computer and that are provided with one or more sound libraries.
  • the web pages have to be written specifically for sending instructions to the plug-in, which instructions have to either identify the sounds to be played out among those available in the libraries already contained in the client computer, or require additional sound libraries to be downloaded and installed before use.
  • the web page itself must be interspersed with special instructions.
  • a further technique used at times consists of associating a small digital sound file, often tailored specifically for this purpose, with a web page; this file is played out repeatedly in loop fashion by the client computer to form a sound passage.
  • playing out a sound passage so conceived may sometimes have the opposite effect to that desired, which would be to improve the communication impact of the web page by making it more pleasant to visit and more appealing.
  • the digital sound file to be played out cyclically must also be fairly long, which may result in the initial waiting time between accessing the page and playing out the soundtrack still being considerable, especially for low-speed links or if the hardware and software resources of the client computer are limited.
  • the technical problem underlying the present invention is that of improving the playing out of digital sound data in an electronic computer system, in particular when there are limited storing and/or loading resources, and also that of making more dynamic management of the digital sound data available.
  • the term "electronic computer system” or “computer” is used to encompass stand-alone or networked computers, such as personal computers (PC), laptops, workstations, et cetera, as well as dedicated processing units such as, by way of example, consoles for video games, interactive decoders and the like.
  • the technical problem described above may arise both as a result of the lack of immediate availability of the whole file of digital sound data, due for example to insufficient bulk storage capacity, and as a result of the unavailability of other resources for playing out the digital sound data sequentially and uninterruptedly, due for example to overloading of the temporary storage facility (RAM) in which the digital sound data are loaded in order to be played out.
  • the problem referred to above is solved, according to the invention, by means of a method for playing digital sound at an electronic computer system according to claim 1.
  • the invention concerns, furthermore, a computer program including a code for implementing said method when it is carried out within an electronic computer system, according to claim 24, a corresponding computer program product, according to claim 25, and an electronic computer system including means for implementing the method, according to claim 30.
  • the invention also concerns a method for causing the method of playing out to be executed, according to claim 31.
  • the invention also concerns a parametric computer program according to claim 33, a corresponding program product according to claim 36 and an electronic computer system in which such a parametric computer program is loaded, according to claim 37.
  • the invention also concerns a dynamic data structure in an electronic computer system according to claim 38.
  • the Applicant perceived that the technical problem described above may be solved in the following manner: ad hoc fragmentation of the digital sound file representative of the desired sound passage; loading of the digital sound fragments into the computer and in particular downloading of the fragments from the Internet; and the playing out or performance of the digital sound fragments in a manner that is suitably controlled by software, in particular by immediately playing out the first (down)loaded fragment, at least once and preferably several times cyclically, without waiting for (down)loading of the second and of the subsequent fragments to be completed, and more preferably before initiating (down)loading of the second and of the subsequent fragments.
  • the first digital sound fragment is, therefore, played out almost instantaneously, thus basically eliminating the waiting times.
  • a first digital sound fragment lasting only for a short time and therefore of a small size, is loaded (in particular downloaded from the network, the Internet or other) and played out immediately, preferably in loop fashion, for the purpose of simulating continuity by means of its repetition, assuming naturally that the musical nature of said sound fragment enables it to be played out in loop fashion.
  • other fragments are (down)loaded and played, overlapping or replacing said first digital sound fragment or other fragments that have been (down)loaded, as explained more precisely elsewhere.
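Purely by way of illustration (the patent defines no code), the loading strategy just described, playing each fragment as soon as it has been (down)loaded instead of waiting for the whole passage, can be sketched as a small JavaScript simulation. The function name and the abstract time units are hypothetical, not part of the invention:

```javascript
// A tiny synchronous simulation: "loading" a fragment costs its size in
// abstract time units; events are recorded on a timeline for inspection.
function simulateLoadAndPlay(fragmentSizes) {
  const timeline = [];
  let clock = 0;
  fragmentSizes.forEach((size, i) => {
    // Load fragment i (takes `size` time units).
    clock += size;
    timeline.push({ t: clock, event: `loaded_fragment_${i + 1}` });
    // Play it immediately, without waiting for the remaining fragments.
    timeline.push({ t: clock, event: `play_fragment_${i + 1}` });
  });
  return timeline;
}

const tl = simulateLoadAndPlay([3, 10, 10]);
// The small first fragment starts playing at t=3, long before the larger
// second fragment has finished loading at t=13.
```

With a short first fragment this is why the initial waiting time effectively disappears: playback begins as soon as the first, small download completes.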
  • the size of a fragment lasting 3 seconds will be only 40 Kbytes even if it is coded with a high resolution in order to preserve the listening quality, such as by way of example in MP3 (joint stereo) format at 64 or 80 Kbps.
  • the average time required for downloading such a fragment corresponds therefore to the average time required for downloading an image with a size, for example, of 320x240 pixels with medium to high compression. In practice, this lapse of time is therefore imperceptible on the part of the user or in any case more than acceptable even in the least favourable links.
  • each digital sound fragment constitutes one sound "tessera" of a sound "mosaic" that constitutes the final sound passage, so that in this way the passage acquires an almost infinite musical potential.
  • the arrangement is progressively enriched by playing, also cyclically if required, several fragments downloaded one at a time from the network.
  • a sound engine is provided that seizes and "masks" the activities of (down)loading of the sound fragments by playing out those that have already been (down)loaded, possibly in the recursive mode.
  • a further advantage of the present invention as compared with the known techniques referred to above consists of the fact that it does not need pre-installing in the client computer either sound libraries or other software, apart of course - as far as concerns the preferred use of the method according to the invention for broadcasting over the Internet - from a web browser, sound management hardware/software such as a sound card and loudspeakers and, possibly, a plug-in enabling the browser to manage the sound.
  • One plug-in that is particularly useful in the framework of the present invention, as will be understood later, is Macromedia Flash Player™ by Macromedia, Inc. of San Francisco, California, U.S.A., which has become so widespread that it is a de facto standard for transmitting sound and animated graphics via the Internet.
  • since the Macromedia Flash™ program also contains, in addition to programming tools, all the functions of the Macromedia Flash Player™ program or plug-in, in the rest of the present description and in the attached claims the expression Macromedia Flash (Player)™ will at times be used to refer indiscriminately to one or the other.
  • the various different digital sound files forming the sound fragments that are part of the complex passage are (down)loaded once only to the (client) computer, and stored immediately in the "cache" memory of the browser (that is to say in the area or directory intended for containing temporary Internet files), enabling the digital sound fragments that have already been played out to be retrieved at a later time with no need to trigger (down)loading process once again.
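A minimal sketch of this once-only loading behaviour, with a `Map` standing in for the browser cache and an injected `fetchFn` standing in for the actual network transfer (both names are hypothetical):

```javascript
// Each fragment URL is fetched at most once; replaying a fragment that has
// already been played out is served from the cache, as the browser's
// temporary Internet files area does, without triggering a new download.
function makeFragmentLoader(fetchFn) {
  const cache = new Map(); // URL -> digital sound data
  let fetchCount = 0;
  return {
    load(url) {
      if (!cache.has(url)) {
        fetchCount += 1;
        cache.set(url, fetchFn(url));
      }
      return cache.get(url);
    },
    get fetchCount() { return fetchCount; },
  };
}
```

Loading `frag1.mp3` twice and `frag2.mp3` once results in only two actual transfers; the repeat is retrieved locally.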
  • by organising the digital sound files according to a special data structure that will be described below, it is possible to:
  • the various different fragments can be played out on a single channel, providing merging of the digital data of two (or more) fragments by specific software programs.
  • the various different fragments are played out on several different sound channels, more preferably on several stereo sound channels.
  • Multi-channel playing out of several sound files is a capability offered by most current computers, and is managed directly by the hardware and software installed in the client computer, in particular by the operating system and by the specific sound controller (sound board).
  • Multi-channel playing out of several digital sound files is currently exploited in the field of multimedia products such as video games, in order to enable the various effects to be managed separately depending on the events occurring in the game being played and, possibly, on the user's own preferences.
  • the various different digital sound fragments are suitably correlated with one another while being played out.
  • the co-ordination pattern is drafted with the help of a time grid.
  • Each of the digital sound fragments is planned for playing out once or more in this time grid.
  • the time grid is split up according to a primary unit of time (LOOP), the length of which is expressed typically in milliseconds.
  • in order to cut down the waiting times as much as possible, it is preferable for at least the first digital sound fragment to be intended for cyclic repetition. It will therefore typically be given a length equal to one primary unit of time, that is to say equal to 1 LOOP, and be played for a given number of primary units of time LOOPs along the time grid.
  • a secondary unit of time CYCLE is also established in the time grid, being a multiple of the primary unit LOOP and therefore expressed by a dimensionless number.
  • the secondary unit of time or CYCLE is particularly useful, since it limits the amount of information needed to express the points in time at which the playing of each digital sound fragment is scheduled: the schedule need only refer to positions within a CYCLE.
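Assuming 0-based LOOP counting (an illustrative convention, not one fixed by the patent), the position of an absolute LOOP index within the two-level grid is simple integer arithmetic:

```javascript
// Time is counted in LOOPs; a CYCLE is a fixed (dimensionless) number of
// LOOPs. Integer division gives the CYCLE index, the remainder gives the
// LOOP position inside that CYCLE.
function gridPosition(absoluteLoop, loopsPerCycle) {
  return {
    cycle: Math.floor(absoluteLoop / loopsPerCycle), // 0-based CYCLE index
    loop: absoluteLoop % loopsPerCycle,              // 0-based LOOP inside the CYCLE
  };
}

// With CYCLE = 8 LOOPs, absolute loop #19 falls at loop 3 of cycle 2.
```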
  • the time grid, defined at least by the primary unit of time LOOP but preferably also by units of time of a higher order (CYCLEs, GROUPs, etc.), also enables easy management of the synchronism between the digital sound fragments being played out and graphic parts, in particular animated graphics. It is thus possible to associate an image with a specific digital sound fragment and to control the display of the image in the Graphical User Interface, in particular on the web page, so that its selected frames are displayed while the digital sound fragment associated with it is being played out.
  • Speech is a manner of expression that is linear by nature, and therefore is not suitable for being played out recursively; however, it can be partitioned into sentences, each of which can be stored in a digital sound fragment.
  • the integration of digital sound fragments representative of speech with digital sound fragments representative of music and their management according to the present invention is advantageous in that the music fragments being played out cover the time required for (down)loading the digital sound fragments representative of the speech. It should also be noted that the time required for (down)loading the digital sound fragments representative of the speech is in any case short, since the human voice is often monophonic and its tone colour covers a more limited frequency range, and it can be compressed to a greater extent than digital sound files representative of music.
  • Management of the sequence according to which the digital sound fragments are played out can also be controlled by events determined by interaction of the user on the Graphical User Interface (GUI), in particular on the web page.
  • interaction with a web page is intercepted by assigning events to the objects concerned, one by one, in the HTML source code.
  • This universal statement enables the HTML page to be activated automatically in order to dialogue with the sound engine in charge of starting to play out the digital sound fragments, making it possible to control the way in which the digital sound fragments are played out so that the sound environment can undergo changes depending on the action taken by the user within the graphical interface corresponding to the HTML page itself, for example on the actions mentioned above.
  • Such a universal statement may be included in each HTML page.
  • a separate file will be created, written for example in Javascript, containing this universal statement, and only a line of code, for example Javascript, capable of causing the contents of said separate file to be run, will be included in each HTML page.
  • Figure 1 is a block diagram representative of a method for playing out digital sound in an electronic computer system and of a software structure 1 of a form of embodiment of a computer program comprising code for implementing said method, according to the present invention.
  • Figure 1 shows the functional blocks that are made active in the electronic computer system in which digital sound management according to the invention is implemented, for example in the client computer upon linking up via the Internet to a web page that implements digital sound management according to the invention.
  • an initialisation block INIT 10 is indicated as a whole.
  • the initialisation process represented schematically by the INIT block 10 supervises the definition of some global variables.
  • the information referring to the primary units of time LOOPs during which a particular digital sound fragment has to be played is conveniently managed by including the digital sound fragment in a special set before the start of the first secondary unit of time CYCLE in which it has to start being played out, removing it from that set before the start of the first secondary unit of time CYCLE in which it does not have to start being played out, and including it once again in said set before the start of the first secondary unit of time CYCLE in which it has to start being played out again, as illustrated more precisely below.
  • the sound engine needs only to take care of controlling the various different points in time at which the various different digital sound fragments must start to be played out, in particular those fragments that are present in the above set each time; that is to say, it needs only to track the time on the basis of the primary units of time LOOPS and, if any, of the secondary units of time CYCLES and of the units of time of a higher order GROUPs, ....
  • thanks to the time grid, a more or less pyramid-like structure is provided, allowing the digital sound fragments to be played out with less repetition; this structure also leads to particularly effective management of the start of playing out the digital sound fragments by means of the sound engine, which will only have to take care of controlling the various different points in time at which the digital sound fragments have to start (START_POINTS).
  • each pre-recorded digital sound fragment is produced (composed or adapted) in relation to the units of time and the complex passage desired or, vice versa, the units of time are defined on the basis of a pre-existing complex passage.
  • the digital sound fragments are indicated below as SOUND_FRAGMENT_X, in which X represents a numerical value.
  • Each digital sound fragment SOUND_FRAGMENT_X may last for one or more primary units of time LOOPs and may be repeated cyclically up to a number of times equal to a secondary unit of time, that is to say up to CYCLE times within a secondary unit of time CYCLE, so as to create continuity (even until it occupies a whole CYCLE, if required, in which case it will be repeated constantly).
  • thanks to the secondary unit of time CYCLE, it is possible to program the start of playing out each digital sound fragment SOUND_FRAGMENT_X at the time of any primary unit of time LOOP of a secondary unit of time CYCLE, and not necessarily at the time of the first one.
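Such per-CYCLE scheduling can be sketched as follows; the field names (`start`, `loops`) are illustrative stand-ins for the data associated with a fragment, not the patent's own identifiers:

```javascript
// `start` is the 0-based LOOP of the CYCLE at which the fragment first
// plays; `loops` is how many consecutive LOOPs it is (re)started for,
// up to a whole CYCLE.
function startsAtLoop(fragment, loopInCycle) {
  const offset = loopInCycle - fragment.start;
  return offset >= 0 && offset < fragment.loops;
}

// A fragment with start = 2 and loops = 3 starts at LOOPs 2, 3 and 4 of
// every CYCLE, and is silent at the other LOOPs.
```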
  • each digital sound fragment SOUND_FRAGMENT_X has properties of volume, start, length, repetitions, panpotting etc., as described more precisely below.
  • the functions or processes that can be called up during actuation of the method according to an embodiment of the invention are preferably defined, such as a process for creating a sound object MAKE(SOUND_OBJECT_X), a process for removing a sound object KILL(SOUND_OBJECT_X), a process for resuming a sound object RESUME(SOUND_OBJECT_X), a process for random mixing of sound objects SCRAMBLE(), a process of synchronism with moving graphic elements SPIN(), a sound fading process FADER(), one or more processes of management of special effects FX_CONTROL(), all described in greater detail below.
  • 11 indicates an engine (DOWN)LOAD_ENGINE for (down)loading the digital sound fragments SOUND_FRAGMENT_X and for creating sound objects SOUND_OBJECT_X.
  • the block (DOWN)LOAD_ENGINE 11 supervises downloading of the digital sound fragments SOUND_FRAGMENT_X from the network, starting the download from the physical location of the web page (host computer) and checking its progress assiduously by comparing the size in KBytes of the digital sound file and the quantity of data already downloaded, in a manner that is well known per se.
  • the download engine (DOWN)LOAD_ENGINE 11 therefore includes a set of loading sub-blocks 121, 122, 123, 124, ... labelled as LOAD_SOUND_FRAGMENT_1, LOAD_SOUND_FRAGMENT_2, LOAD_SOUND_FRAGMENT_3, LOAD_SOUND_FRAGMENT_X... in Figure 1.
  • Each loading sub-block 121, 122, 123, 124, ... is associated with a loading check sub-block 131, 132, 133, ... labelled as CHECK_LOAD_1, CHECK_LOAD_2, CHECK_LOAD_3, ... in Figure 1. It will be understood that the loading sub-blocks 121,... and the loading check sub-blocks 131,... may also be lacking, in which case checking of the process of downloading data from the web is taken care of by the browser itself, in a manner that is well known per se.
  • upon completion of loading of each digital sound fragment SOUND_FRAGMENT_X, as carried out by the loading sub-blocks 121, ... and as checked by the loading check sub-blocks 131, ..., a respective process of creation of a sound object is carried out.
  • the processes for creating sound objects are represented schematically in Figure 1 by the sub-blocks for creating sound objects 141, 142, 143, ... MAKE_SOUND_OBJECT_1, MAKE_SOUND_OBJECT_2, MAKE_SOUND_OBJECT_3,...
  • the processes of creation of sound objects MAKE_SOUND_OBJECT_X 141 associate the file containing the digital sound fragment SOUND_FRAGMENT_X with an instance, attributing thereto a name and a set of properties until the sound object SOUND_OBJECT_X is created.
  • each sound object SOUND_OBJECT_X 2 includes, in a preferred embodiment, the data described below with reference to Figure 2, such as an identifier ID 20, timing data, volume and panpotting data, level data and animation data.
  • the animations are structured by means of a series of frames that are run sequentially at a given speed expressed in frames per second (fps). Some of the frames are created explicitly and these are defined as key frames. If two successive key frames are linked, the program itself takes care of creating the intermediate frames between the two linked successive key frames, by means of a sort of linear interpolation. If displaying of a particular key frame is forced so that it will be in sync with a specific primary unit of time LOOP in the manner expressed by the SPIN_POINTS data 252, the linear or sequential nature of the animation may be altered.
  • the SPIN_POINTS sequence indicated above may provide a continuous display of the graphics moving between the second and the third, between the sixth and the seventh, and between the seventh and the eighth primary units of time LOOP of a secondary unit of time CYCLE, and between the eighth primary unit of time LOOP of a secondary unit of time CYCLE and the first primary unit of time LOOP of a subsequent secondary unit of time CYCLE, while the image will appear unconstrained by the sync with the sound between the other primary units of time LOOPs.
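The linear interpolation of intermediate frames between two linked key frames can be sketched as below; a single numeric property is tweened here, whereas a real player interpolates several properties at once (the function name is hypothetical):

```javascript
// Generate `frameCount` frames running linearly from one key frame's value
// to the next key frame's value, endpoints included.
function tweenFrames(fromValue, toValue, frameCount) {
  const frames = [];
  for (let i = 0; i < frameCount; i++) {
    const t = frameCount === 1 ? 0 : i / (frameCount - 1);
    frames.push(fromValue + (toValue - fromValue) * t);
  }
  return frames;
}

// tweenFrames(0, 100, 5) -> [0, 25, 50, 75, 100]
```

Forcing a particular key frame at a LOOP boundary, as the SPIN_POINTS data do, simply overrides this linear sequence at that instant.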
  • the objects created by the sub-blocks for creating objects MAKE_SOUND_OBJECT_X 141 are stored in a set SOUND_OBJECTS 15, for example an array, for storing sound objects SOUND_OBJECT_X 2 and/or in a current set SOUND_OBJECTS_TO_PLAY 16, for example an array, of sound objects SOUND_OBJECT_X 2 referred to digital sound fragments being played out.
  • the term "array" is used to indicate a data structure capable of including the data of each sound object SOUND_OBJECT_X 2, but said structure need not necessarily be in the form of a table.
  • Other suitable structures such as stacks, several aggregated tables, pointers et cetera may be used.
  • a block MAIN_CONTROL 17 has the function of main controller for the management of the digital sound fragments SOUND_FRAGMENT_X and of the sound objects SOUND_OBJECT_X 2 associated with them.
  • said block supervises the execution of the various different processes described above (MAKE, KILL, RESUME, SPIN, SCRAMBLE, FX_EFFECTS) or, in other words, supervises use of the functions defined in the initialisation block INIT 10.
  • the block MAIN_CONTROL 17 supervises management of the digital sound fragments SOUND_FRAGMENT_X as a whole as far as concerns playing them out according to the pre-established time grid, to form the desired complex passage, as well as supervises management of other sound and graphic elements. In other words, one or more of the following sub-blocks could be contained in the block MAIN_CONTROL 17.
  • a sub-block SEQUENCE_CONTROLLER 171 has the function of controller of the sequence of digital sound fragments to be played out or de-activated, removing them from the array SOUND_OBJECTS_TO_PLAY 16 by means of the KILL(SOUND_OBJECT_X) process and inserting them back into the array SOUND_OBJECTS_TO_PLAY 16 by means of the RESUME(SOUND_OBJECT_X) process, in particular taking them from the array SOUND_OBJECTS 15.
  • the KILL(SOUND_OBJECT_X) process does not fully eliminate the sound object SOUND_OBJECT_X, but merely removes it from the array SOUND_OBJECTS_TO_PLAY 16, leaving it in the array SOUND_OBJECTS 15.
  • the process of resuming a sound object, RESUME(SOUND_OBJECT_X) must therefore simply reinsert the sound object SOUND_OBJECT_X into the array SOUND_OBJECTS_TO_PLAY 16, taking it from the array SOUND_OBJECTS 15.
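The MAKE/KILL/RESUME lifecycle over the two arrays can be sketched as follows; the factory name and the use of a `Map` plus an array of identifiers are illustrative choices, not the patent's:

```javascript
// KILL only removes a sound object from SOUND_OBJECTS_TO_PLAY (held here as
// identifiers), leaving it intact in SOUND_OBJECTS so that RESUME can
// reinsert it by ID without rebuilding it or re-downloading its fragment.
function makeSoundManager() {
  const soundObjects = new Map();   // SOUND_OBJECTS: id -> full sound object
  let toPlay = [];                  // SOUND_OBJECTS_TO_PLAY: identifiers only
  return {
    make(id, props) {               // MAKE(SOUND_OBJECT_X)
      soundObjects.set(id, { id, ...props });
      toPlay.push(id);
    },
    kill(id) {                      // KILL: drop from the play set only
      toPlay = toPlay.filter((x) => x !== id);
    },
    resume(id) {                    // RESUME: take it back from SOUND_OBJECTS
      if (soundObjects.has(id) && !toPlay.includes(id)) toPlay.push(id);
    },
    playing() { return [...toPlay]; },
    known() { return [...soundObjects.keys()]; },
  };
}
```

After `kill('s1')` the fragment is silent but still known; `resume('s1')` brings it back into the play set immediately.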
  • the array SOUND_OBJECTS_TO_PLAY 16 need not include the whole data structure of a sound object 2 described above with reference to Figure 2; instead, it may include merely pointers to the objects of the array SOUND_OBJECTS 15, as illustrated in Figure 2 itself, in particular their identifiers ID 20.
  • alternatively, said array SOUND_OBJECTS_TO_PLAY 16 contains all the data, that is to say all the data of the data structure described with reference to the array SOUND_OBJECTS 15 (Figure 2), of the sound objects 2 contained in it, or at least all such data with reference to the sound objects 2 intended to be played out once only.
  • the process of removal of a sound object KILL(SOUND_OBJECT_X) causes a digital sound fragment SOUND_FRAGMENT_X to stop being played out by removing the sound object SOUND_OBJECT_X 2 corresponding to it - or the pointer to said sound object, in particular its identifier ID 20 - from the array SOUND_OBJECTS_TO_PLAY 16. Playing out may, furthermore, be stopped immediately, or if said digital sound fragment SOUND_FRAGMENT_X is currently being played out, it may be allowed to finish.
  • playing out of the various different digital sound fragments SOUND_FRAGMENT_X is started at the start of each primary unit of time LOOP. In the event that playing out is not stopped immediately, it will be stopped at the end of the relevant digital sound data or, in any case, it will not be started again at the start of subsequent primary units of time LOOP; at the latest, it will end at the start of the next secondary unit of time CYCLE if the playing out that has already started is programmed to be repeated cyclically for a certain number of primary units of time LOOPs, as expressed by the datum LOOPS 222 of the sound object SOUND_OBJECT_X 2 associated with the digital sound fragment SOUND_FRAGMENT_X.
  • the sequence controller SEQUENCE_CONTROLLER 171 may also provide a process of creation of a sound object MAKE(SOUND_OBJECT_X), for example if the process of removal of a sound object KILL(SOUND_OBJECT_X) completely removes the sound object, or in order to vary some data among the data 22-25 associated with a digital sound fragment SOUND_FRAGMENT_X.
  • the main controller MAIN_CONTROL 17 may furthermore include a random controller of the sequence of digital sound fragments SOUND_FRAGMENT_X to be played out, represented by block RANDOM_CONTROLLER 172.
  • This controller manages the sequence of digital sound fragments SOUND_FRAGMENT_X in the same way as the sequence controller SEQUENCE_CONTROLLER 171 (that is to say by means of the KILL and RESUME processes), but in a random manner.
  • said random controller RANDOM_CONTROLLER 172 may be activated, starting from a certain point in time, thus replacing the sequence controller SEQUENCE_CONTROLLER 171, or it may act in parallel thereto.
  • with the random controller RANDOM_CONTROLLER 172 an action range is associated, either pre-established or tunable by the main controller MAIN_CONTROL 17, so that it will be able, for example, to remove and/or insert, by means of the RESUME and KILL processes, up to a given number of sound objects from/into the array SOUND_OBJECTS_TO_PLAY 16 each time it is run, for example each time a secondary unit of time CYCLE is started.
  • the random controller RANDOM_CONTROLLER 172 can also provide for a new process of creation of a sound object MAKE(SOUND_OBJECT) in order to vary some data among the data 22-25 associated with a digital sound fragment SOUND_FRAGMENT_X, regardless of whether the previous sound object SOUND_OBJECT_X associated with said digital sound fragment SOUND_FRAGMENT_X is removed or not.
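A sketch of the action range of such a random controller; the seeded linear-congruential generator replaces `Math.random` only so that the behaviour is reproducible, and all names are illustrative:

```javascript
// Deterministic pseudo-random generator (classic LCG constants) so that the
// random toggling below can be replayed and tested.
function lcg(seed) {
  let s = seed >>> 0;
  return () => {
    s = (s * 1664525 + 1013904223) >>> 0;
    return s / 0x100000000;
  };
}

// At each CYCLE start, toggle up to `actionRange` sound objects between the
// playing set (KILL moves them out) and the silent set (RESUME moves them in).
function randomToggle(playingIds, silentIds, actionRange, rand) {
  const playing = new Set(playingIds);
  const silent = new Set(silentIds);
  const pool = [...playingIds, ...silentIds];
  const count = Math.min(actionRange, pool.length);
  for (let i = 0; i < count; i++) {
    const id = pool[Math.floor(rand() * pool.length)];
    if (playing.has(id)) { playing.delete(id); silent.add(id); }      // KILL
    else if (silent.has(id)) { silent.delete(id); playing.add(id); }  // RESUME
  }
  return { playing: [...playing], silent: [...silent] };
}
```

No sound object is ever created or destroyed by the toggling; it only migrates between the two sets, exactly as KILL and RESUME do.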
  • the main controller MAIN_CONTROL 17 may furthermore include an event controller, represented by the block EVENT_CONTROLLER 173.
  • Said controller manages events that can condition playing out of the digital sound fragments SOUND_FRAGMENT_X and the displaying of the animations ANIMATION_Y, such as events generated by the user interacting with the Graphical User Interface (GUI), such as for example a shift from the left channel to the right channel in response to a tracking device of the client computer (a mouse, for example) being moved from the left towards the right, to a specific key on the keyboard being pressed, to pre-established points in time or to other internal events of the client computer such as other applications being launched, the current window being moved or minimised, et cetera.
  • the main controller MAIN_CONTROL 17 may furthermore include a special effects controller, represented by block FX_CONTROLLER 174.
  • Said special effects controller FX_CONTROLLER 174 manages, for example, fade-in, fade-out and cross-fade effects, effects of transition from one channel to another of a multi-channel playing system, et cetera.
  • the special effects controller FX_CONTROLLER 174 can, furthermore, act on the parameters for playing out the sound objects present in the array SOUND_OBJECTS_TO_PLAY 16, in particular on the data 23 concerning volume and panpotting.
  • Such additional special effects controllers ADDITIONAL_FX_CONTROL 18 condition the sound objects SOUND_OBJECTS 2 contained in the array SOUND_OBJECTS_TO_PLAY 16 directly, for example by changing the level data 24, the volume data 231 and the panpotting data 232 thereof.
  • Actual playing out of the digital sound fragments SOUND_FRAGMENT_X corresponding to the sound objects 2 contained in the array SOUND_OBJECTS_TO_PLAY 16 at each point in time is managed by a sound engine represented by a block SOUND_ENGINE 19.
  • the sound engine SOUND_ENGINE 19 manages solely the sound objects contained in the array SOUND_OBJECTS_TO_PLAY 16 at each point in time.
  • the sound engine SOUND_ENGINE 19 manages more specifically the synchronism for starting to play out the digital sound fragments SOUND_FRAGMENT_X, to play which it resorts for example to other hardware and software components for managing the sound that are present in the client computer, in particular interacting with a sound board.
  • the sound engine SOUND_ENGINE 19 takes the properties expressed by the data 22-25 associated with each digital sound fragment SOUND_FRAGMENT_X in a corresponding sound object SOUND_OBJECT_X 2 into consideration; these properties include the level 24 at which the fragment has to be played out in an application providing objects on more than one level such as, for example, the Macromedia Flash (Player)TM application referred to above, the sound volume 231, the panpotting level 232, synchronism with the units of time LOOPs and CYCLES and synchronism with the animations.
  • the sound engine SOUND_ENGINE 19 tracks the time on the basis of the units of time LOOPs and CYCLES, and starts to play out the digital sound fragments SOUND_FRAGMENT_X at the start of each primary unit of time LOOP (during each secondary unit of time CYCLE) in which each fragment has to be played out, as expressed by their respective data 221. Furthermore, if an animation ANIMATION_Y is associated with the digital sound fragment SOUND_FRAGMENT_X by means of the data 251 in the corresponding sound object SOUND_OBJECT_X 2, the sound engine SOUND_ENGINE 19 takes care of forcing displaying the key frame of the animation ANIMATION_Y, the label of which is indicated by the data 252 at each start of a LOOP.
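The scheduling just described can be sketched in JavaScript (chosen for its closeness to the ActionScriptingTM language discussed further below). The object layout and the helper name here are hypothetical, shown only to illustrate how the START_POINTS data 221 drive the start of playing out at each primary unit of time LOOP:

```javascript
// Hypothetical sketch: deciding which sound objects to start at a given LOOP.
// The field name `startPoints` mirrors the START_POINTS data 221 but is
// illustrative, not the patent's actual code.
function fragmentsToStart(soundObjectsToPlay, loopIndex) {
  // loopIndex counts primary units of time LOOP within the current CYCLE (0-based)
  return soundObjectsToPlay.filter(function (so) {
    return so.startPoints.indexOf(loopIndex) !== -1;
  });
}

var playList = [
  { id: 'SO1', startPoints: [0, 1, 2, 3] }, // started at every LOOP
  { id: 'SO3', startPoints: [0] }           // started once, then self-looping
];

fragmentsToStart(playList, 0).map(function (so) { return so.id; }); // ['SO1', 'SO3']
fragmentsToStart(playList, 2).map(function (so) { return so.id; }); // ['SO1']
```

In the real implementation the start of playing out is then delegated to the sound board via the client's sound software, as the description notes.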
  • the primary unit of time LOOP must be small enough to allow maximum freedom of composition of the complex passage. It must be stressed in this respect that the sound engine SOUND_ENGINE 19 takes care of managing only the start of playing out of a digital sound fragment SOUND_FRAGMENT_X, the actual playing out of which, and any repetitions, are controlled subsequently directly by the appropriate software and hardware.
  • the shorter fragments will be intended, preferably, for being (down)loaded and therefore played out first, so as to provide sound immediately.
  • the longer fragments will, on the other hand, be preferably intended for being (down)loaded and therefore played out at a later stage in the performance of the complex passage, when one or several fragments are already being played out and their (down)loading in the background will therefore be "invisible" to the user.
  • the fragments intended to be played out cyclically should last preferably for 1 LOOP exactly.
  • a method of preparation applicable to certain digital sound fragments intended to be played out cyclically, for example as pads for providing an atmosphere, will be described further below, with reference to Figure 5.
  • Since the sound engine SOUND_ENGINE 19 is responsible only for the start of the playing out of the sound objects contained in the array SOUND_OBJECTS_TO_PLAY 16, but not for checking their repetition, it is advantageous to exploit as much as possible, in the playing sequence data 22, the repetition in loop fashion represented by the parameter LOOPS 222 rather than the sequence of primary units of time for starting playing out, the START_POINTS data 221.
  • Such a sequence means that the sound engine SOUND_ENGINE 19 must start the playing out of the digital sound fragment SOUND_FRAGMENT_X each time a primary unit of time LOOP starts, i.e. four times in one secondary unit of time CYCLE.
  • the sound engine SOUND_ENGINE 19 also takes care of synchronising the displaying of any graphic images or animations ANIMATION_Y with the start of the primary units of time LOOP, on the basis of the animation data 25 of the sound objects SOUND_OBJECT_X 2 present in the array SOUND_OBJECTS_TO_PLAY 16; that is to say it forces, at each primary unit of time LOOP, the display of the key frame of the animation ANIMATION_Y whose label, as indicated by the SPIN_POINTS data 252, is to be displayed at said primary unit of time LOOP.
  • For example, for a digital sound fragment SOUND_FRAGMENT_X that has, in the corresponding sound object SOUND_OBJECT_X 2, the SPIN_POINTS data 252 set to the sequence of labels 1, 2, 1, 3, the following key frames will be displayed in each secondary unit of time CYCLE in which said sound object SOUND_OBJECT_X 2 is present in the array SOUND_OBJECTS_TO_PLAY 16: the key frame of the animation ANIMATION_Y labelled as 1 at the start of the first primary unit of time LOOP, the key frame labelled as 2 at the start of the second primary unit of time LOOP, again the key frame labelled as 1 at the start of the third primary unit of time LOOP and the key frame labelled as 3 at the start of the fourth primary unit of time LOOP.
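The mapping from primary units of time LOOP to animation key-frame labels can be sketched as follows; the variable and function names are hypothetical, with the SPIN_POINTS data 252 modelled simply as an array of labels, one per LOOP of the CYCLE:

```javascript
// Illustrative model of the SPIN_POINTS data 252: the key-frame label of
// ANIMATION_Y whose display is forced at the start of each LOOP.
var spinPoints = [1, 2, 1, 3]; // one label per primary unit of time LOOP

function keyFrameLabelAt(loopIndex) {
  // wraps around so the same sequence repeats in every CYCLE
  return spinPoints[loopIndex % spinPoints.length];
}

keyFrameLabelAt(0); // 1
keyFrameLabelAt(1); // 2
keyFrameLabelAt(2); // 1
keyFrameLabelAt(3); // 3
```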
  • Figure 2 is a schematic illustration of the environment in the (client) computer at a given point in time while the digital sound management program according to a preferred embodiment of the invention is running, that is to say the files, variables and processes as a whole and the relationships among them.
  • the set 31 of the files (down)loaded to the (client) computer is shown.
  • This set includes first and foremost a file that is representative of the Graphical User Interface within which the program is implemented, in particular web page Page.html 6.
  • Said set 31 also includes the files of the digital sound fragments SoundFragment1.swf 5a, SoundFragment2.swf 5b, SoundFragment3.swf 5c, SoundFragment4.swf 5d,..., and the files of the animations AnimationA.swf 253a, AnimationB.swf 253b.
  • The extension .swf, which is typical of MacroMedia Flash (Player)TM files, is only used here by way of example, and should not be construed as a limitation to this invention, which can be applied to various different sound management programs and plug-ins, as mentioned above.
  • the set 31 also includes a further file indicated as Main.swf 7.
  • Said file, likewise not necessarily a MacroMedia Flash (Player)TM file, includes, in a particularly preferred embodiment, the programming code of all the processes defined above, that is to say of the initialisation process INIT 10, of the (down)load engine (DOWN)LOAD_ENGINE 11, of the various different processes of the main controller MAIN_CONTROL 17, of the sound engine SOUND_ENGINE 19 and of the additional special effects controllers ADDITIONAL_FX_CONTROL 18, as well as of the necessary data structure, in particular for creating the arrays SOUND_OBJECTS 15 and SOUND_OBJECTS_TO_PLAY 16.
  • The relationships between such processes and the code contained in the file Main.swf 7 are not shown in Figure 2.
  • the variables of the software environment of the (client) computer include first of all the array SOUND_OBJECTS 15, already described above, and the array SOUND_OBJECTS_TO_PLAY 16. This latter array is shown more specifically as a solid line at a first moment in time, and as a dotted line at two other moments in time (blocks 16a and 16b). These three moments in time correspond to specific moments of an example that will be described subsequently with combined reference to Figures 2 to 4.
  • the variables of the software environment of the client computer also include several global variables 41, defined in the initialisation block INIT 10, in particular the length of the primary unit of time LOOP and that of the secondary unit of time CYCLE, in the case illustrated.
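As a minimal sketch (in JavaScript, standing in for the ActionScriptingTM of the implementation), the global variables 41 defined by the initialisation block INIT 10 might look as follows. The 1600 msec LOOP value is the one used later in the description, and the four-LOOP CYCLE matches Figure 4; both are assumptions for illustration:

```javascript
// Sketch of the global variables 41 declared by INIT 10.
var LOOP = 1600;                     // length of the primary unit of time, in ms
var LOOPS_PER_CYCLE = 4;             // LOOPs per secondary unit of time (Figure 4)
var CYCLE = LOOP * LOOPS_PER_CYCLE;  // length of the secondary unit of time, in ms
```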
  • Figure 2 is now described in greater detail in combination with Figure 3, that illustrates schematically an example of embodiment of the method for managing digital sound files according to the present invention implemented using the MacroMedia Flash (Player)TM software.
  • Examples will furthermore be provided of the programming of the various different functional blocks and processes, using the proprietary ActionScriptingTM programming language of that software. Said language is, moreover, very similar to JavaScript language, which is so widespread in the Internet environment that it has now become a de facto standard.
  • MacroMedia FlashTM is an object-oriented programming tool particularly suitable for the management of audio-visual communications, that is to say of sound and animated graphics. It is based on a timeline TimeLine, and its design is therefore basically linear, although more than one independent time line can be launched in parallel. It provides advanced possibilities of improving performance by programming in the proprietary ActionScriptingTM language.
  • the programming environment provides for creation of files with the extension .FLA, that are then compiled into files with the extension .SWF.
  • the source code and the digital sound data contained inside them are protected (that is to say they are not visible without decompiling or reverse engineering): this is yet another advantageous aspect of this implementation of the present invention.
  • Each .FLA or .SWF file has two separate sections.
  • a first section consists of a library of sounds (SoundLibrary), that is to say it can contain one or more sound passages (digital sound fragments in the case of the invention), typically non-compressed passages having the extension .WAV in the .FLA file, that are then exported in the form of MP3-type compressed passages to the .SWF file.
  • a second section is constituted by the aforesaid time line (TimeLine).
  • the time line is structured in a series of frames, of which some are key frames (KeyFrames), and can be labelled so that reference can be made to them. It is possible to insert a particular frame of an animation, sound data and/or code in ActionScriptingTM language into each key frame.
  • .swf files can be shared with other .swf files, setting up a specific sharing property and attributing sharing names to these files.
  • an .swf file can access images, sounds and code portions stored in another shared .swf file, and can cause another .swf file to be run.
  • the key frames can stop the timing of the time line from running (stop key frames).
  • the key frames can start display of images, sounds being played or other .swf files being run, on a particular layer or level, to which reference is made by means of a number.
  • This expression is used to indicate another .swf file, or another element of the same .swf file having its own time line, that can be controlled by means of a code or by means of the key frames of the main time line, that is to say of the time line of the .swf file that made it start to run.
  • pairs of key frames of an animation can be interpolated by the program to display the graphics in the frames intervening between the key frames of the pair. For example, to move a small image such as a star from a position X,Y on the screen to a position X',Y', it is possible to insert a first key frame containing the image of the star in the position X,Y and a second key frame containing the image of the star in the position X',Y' and to connect the two key frames. At runtime, between the first key frame and the second key frame, the image of the star is displayed in a series of intermediate positions, thus achieving an almost continuous movement of the star with only two key frames. A similar sort of interpolation takes place as far as concerns changes in size or colour of an image.
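The interpolation described above can be sketched as a simple linear tween; this is an illustrative helper, not Macromedia's actual implementation:

```javascript
// Linear interpolation between two key frames, as Flash does when tweening
// the position of an image between a pair of connected key frames.
function tween(from, to, t) {
  // t runs from 0 (first key frame) to 1 (second key frame)
  return {
    x: from.x + (to.x - from.x) * t,
    y: from.y + (to.y - from.y) * t
  };
}

// Halfway between the star at (0, 0) and the star at (100, 50):
tween({ x: 0, y: 0 }, { x: 100, y: 50 }, 0.5); // { x: 50, y: 25 }
```

The same principle applies to size or colour changes, with the interpolated quantity being the scale factor or the colour components instead of the coordinates.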
  • Figure 3 illustrates schematically the structure of the various different files in an example of embodiment of digital sound management according to the present invention by means of Macromedia Flash (Player)TM.
  • Said Figure also shows the software environment existing in the computer (data available in the server and downloaded to the client) in the case of implementation on the Internet.
  • Said Figure is based on an example of embodiment that is congruent with Figure 2 already discussed and with Figure 4, which shows the various different digital sound fragments SOUND_FRAGMENT_X and the various different key frames of animations ANIMATION_Y being run in various different primary units of time LOOPS along the time line; the time grid according to the invention is indicated schematically on the time line.
  • Each of these files SOUND_FRAGMENT_X.SWF 5a, 5b,...,5n includes more specifically one sound fragment .WAV (FRAGMENT_X_SOUND_DATA), previously prepared in the manner described below, in its sound library SoundLibrary 51, while it contains nothing in its own time line TimeLine 52.
  • a sharing name is assigned to each of the files SOUND_FRAGMENT_X.SWF, so that its elements and in particular its sounds FRAGMENT_X_SOUND_DATA can be accessed by other .swf files.
  • the sound fragments .WAV are prepared in such a way as to optimise the quality-to-data-weight ratio of the digital sound. It is possible, for example, to use compressed formats such as the MP3 format at a certain number of Kbps (Kbit per second). Generally speaking, 80 Kbps is a good compromise, but higher-pitched tones may require a higher Kbps rate, while lower tones may be reproduced well at a lower definition.
  • the file Main.swf 7 is embedded as an object inside the BODY of page Page.html 6, for example with the following HTML code, that is supported by both the Internet ExplorerTM browser by Microsoft Corporation of Redmond, WA, U.S.A., and by the Netscape NavigatorTM browser by Netscape Communications Corporation, Mountain View, California, U.S.A.
  • file Main.swf 7 illustrated here does not contain anything in its sound library SoundLibrary 71.
  • the time line TimeLine 72 of the file Main.swf 7 contains a plurality of key frames numbered as KeyFrame_n in Figure 3.
  • the first key frame of TimeLine 72 of the file Main.swf 7, KeyFrame_1, contains a portion of ActionScriptingTM code implementing the functional blocks illustrated below.
  • the KILL function and the subsequent RESUME function only manage the array SOUND_OBJECTS_TO_PLAY 16.
  • a flow chart of the SCRAMBLE function is illustrated in Figure 7.
  • In a first block 701, the removal from the array SOUND_OBJECTS_TO_PLAY 16 of a pre-established number of randomly chosen sound objects SOUND_OBJECT_X 2 is caused.
  • a second block 702 an attempt is made to restore a second number of sound objects SOUND_OBJECT_X 2, randomly chosen, from the array SOUND_OBJECTS 15 to the array SOUND_OBJECTS_TO_PLAY 16. Since it is possible, when randomly choosing the sound objects SOUND_OBJECT_X 2 , that they have already been restored from said array SOUND_OBJECTS_TO_PLAY 16, a check is made in a block 703 to determine whether this has occurred. In block 703 it is also possible to provide a step of incompatibility checking for some digital sound fragments that do not have to be removed. In the event of a negative result (exit NO from block 703) resumption takes place.
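The SCRAMBLE flow of Figure 7 (blocks 701-703) might be sketched as follows; the function signature and helper logic are hypothetical, and the incompatibility check mentioned for block 703 is omitted for brevity:

```javascript
// Sketch of the SCRAMBLE function of Figure 7: randomly remove some sound
// objects from SOUND_OBJECTS_TO_PLAY (block 701), then try to restore others
// from SOUND_OBJECTS (block 702), skipping any already present (block 703).
function scramble(soundObjects, toPlay, nKill, nResume) {
  // block 701: remove nKill randomly chosen objects
  for (var i = 0; i < nKill && toPlay.length > 0; i++) {
    toPlay.splice(Math.floor(Math.random() * toPlay.length), 1);
  }
  // blocks 702-703: try to restore nResume randomly chosen objects
  for (var j = 0; j < nResume; j++) {
    var candidate = soundObjects[Math.floor(Math.random() * soundObjects.length)];
    if (toPlay.indexOf(candidate) === -1) { // block 703: already restored?
      toPlay.push(candidate);               // exit NO: resumption takes place
    }
  }
  return toPlay;
}
```

Whatever the random choices, the resulting play array contains no duplicates and only objects drawn from SOUND_OBJECTS, which is the invariant the block 703 check preserves.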
  • the second key frame KeyFrame_2 shown in Figure 3 is in charge of (down)loading a first file SOUND_FRAGMENT_1.SWF containing a digital sound fragment FRAGMENT_1_SOUND_DATA and of checking the (down)loading, that is to say it represents the functional blocks LOAD_SOUND_FRAGMENT_1 121 and CHECK LOAD_1 131 of Figure 1. This is a stop key frame.
  • the third key frame KeyFrame_3 in Figure 3 implements the process of creation of a sound object MAKE(SOUND_OBJECT_1) associated with the digital sound fragment SOUND_FRAGMENT_1 that has just been downloaded, that is to say that it implements block 141 of Figure 1:
  • the sound object 2 SOUND_OBJECT_1 associated with the first digital sound fragment SOUND_FRAGMENT_1 has the following values of the various different data (see the corresponding line in the array SOUND_OBJECTS 15 in Figure 2):
  • The parameter store is a Boolean parameter. If it takes on a first Boolean value, the value "1", for example, it indicates that the sound object SOUND_OBJECT_X 2 must be inserted into the array SOUND_OBJECTS 15 so as to be able to use it again by means of a process of RESTORE(SOUND_OBJECT_X) subsequent to a process of KILL(SOUND_OBJECT_X); if, on the other hand, it takes on a second Boolean value, the value "0", for example, it indicates that the sound object SOUND_OBJECT_X 2 does not have to be inserted into the array SOUND_OBJECTS 15, but only into the array SOUND_OBJECTS_TO_PLAY 16: thus, following a process of KILL(SOUND_OBJECT_X), it may not be restored by means of a process of RESTORE(SOUND_OBJECT_X).
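The effect of the store flag on the KILL/RESUME cycle can be sketched as follows. This is a deliberately simplified model (the real KILL function takes further parameters, and the function names are lower-cased as in the ActionScripting examples); it only illustrates the two-array semantics described above:

```javascript
// Sketch of the `store` Boolean: with store = 1 the object is kept in
// SOUND_OBJECTS and can be resumed after a KILL; with store = 0 it lives
// only in SOUND_OBJECTS_TO_PLAY and a KILL is final. Hypothetical code.
var soundObjects = [];       // array SOUND_OBJECTS 15
var soundObjectsToPlay = []; // array SOUND_OBJECTS_TO_PLAY 16

function makeSoundObject(id, store) {
  var so = { id: id, store: store };
  if (store) soundObjects.push(so); // only storable objects are kept for reuse
  soundObjectsToPlay.push(so);
  return so;
}

function kill(so) {
  var i = soundObjectsToPlay.indexOf(so);
  if (i !== -1) soundObjectsToPlay.splice(i, 1);
}

function resume(so) {
  // resumption only works for objects still present in SOUND_OBJECTS
  if (soundObjects.indexOf(so) !== -1 &&
      soundObjectsToPlay.indexOf(so) === -1) {
    soundObjectsToPlay.push(so);
  }
}
```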
  • the key frame KeyFrame_4 in Figure 3 immediately starts playing the digital sound fragment that has just been (down)loaded and inserted into the array SOUND_OBJECTS_TO_PLAY 16.
  • said key frame is a "stable" key frame that implements the sound engine SOUND_ENGINE 19:
  • a first block 601 is run each time the frame containing the code implementing the sound engine SOUND_ENGINE 19 is entered, that is to say at a first frequency f1 expressed in frames per second (since the sound engine is in a stable key frame).
  • the first frequency f1 is relatively low, for example 15 times per second (15 fps).
  • the time MS lapsed since the start of the current primary unit of time LOOP is calculated.
  • a comparison is made between the time that it is estimated will have passed when exiting the current frame and the value equivalent to the primary unit of time LOOP (for example, 1600 msec), as declared in the process of INIT 10.
  • the estimate is made by adding the time MS that has passed since the start of the current primary unit of time LOOP to the time MSPF that has passed while running the previous frame, up-dated in a block 607.
  • the time MSPF that has passed during the running of the previous frame is multiplied by a safety threshold value EDGE, set for example at a value of 1.2.
  • a check is carried out in block 602 to see whether, at the next comparison made at the relatively low first frequency f1, the value of the primary unit of time LOOP would be exceeded.
  • block 607 is run to up-date the variable MSPF to the time that has passed during the current frame and, in the subsequent frame (block 608), block 601 is run again.
  • a check is carried out to see whether said time MS is greater than or equal to the value equivalent to the primary unit of time LOOP. As long as the time MS is shorter than the LOOP value (exit NO from block 604), blocks 603 and 604 are repeated. This check is therefore carried out during the current running of a frame, at a relatively high frequency f2, equal to the time required to carry out the instructions of blocks 603, 604.
  • After block 605 has been run, the value of the time MS that has passed since the start of the current primary unit of time LOOP is reset in block 606, since the next run of block 601 will be related to a new primary unit of time LOOP. Then block 607 is run again to up-date the time MSPF that has passed while running the frame, and block 608 for passing on to the next frame.
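The decision made at the low frame frequency f1 (blocks 601-602) can be sketched as a pure function; the helper name is hypothetical, and the 1600 msec LOOP and 1.2 EDGE values are the examples from the description:

```javascript
// Sketch of the check of blocks 601-602: at the low frequency f1, estimate
// whether the LOOP boundary would be missed before the next frame check,
// using the previous frame's duration MSPF inflated by the threshold EDGE.
var LOOP = 1600; // primary unit of time, in ms
var EDGE = 1.2;  // safety threshold applied to the per-frame time MSPF

function mustBusyWait(ms, mspf) {
  // ms: time passed since the start of the current LOOP (block 601)
  // ms + mspf * EDGE: estimated time when the next f1 check would run
  return ms + mspf * EDGE >= LOOP; // block 602
}

mustBusyWait(1400, 66); // false: the next f1 check still falls inside the LOOP
mustBusyWait(1550, 66); // true: switch to the tight f2 check of blocks 603-604
```

When the function returns true, the engine falls into the high-frequency polling of blocks 603-604 until MS reaches the LOOP value, which is what guarantees that playing out starts synchronously with the LOOP boundary.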
  • Figure 4 illustrates, for some secondary units of time CYCLES of the time line, the objects (digital sound fragments SOUND_FRAGMENT_X and key frames of animations ANIMATION_Y), the playing out and displaying of which is started at the start of the primary units of time (LOOP).
  • the digital sound fragments SOUND_FRAGMENT_X are represented by their identifier number X
  • the key frames of an animation ANIMATION_Y are represented by the letter Y identifying the animation and by the label of the key frames.
  • the digital sound fragment SOUND_FRAGMENT_1 and the key frame of the animation AnimationA labelled as 1 are started at the start of the first primary unit of time LOOP L1
  • the digital sound fragment SOUND_FRAGMENT_1 and the key frame of the animation AnimationA labelled as 2 are started at the start of the second primary unit of time LOOP L2
  • the digital sound fragment SOUND_FRAGMENT_1 and the key frame of the animation AnimationA labelled as 3 are started at the start of the third primary unit of time LOOP L3
  • the digital sound fragment SOUND_FRAGMENT_1 and again the key frame of the animation AnimationA labelled as 3 are started at the start of the fourth and last primary unit of time LOOP L4.
  • Key frames KeyFrame_5 and KeyFrame_6 are similar to key frames KeyFrame_2 and KeyFrame_3, but they refer to a second digital sound fragment SoundFragment2.swf that is associated with a second sound object SOUND_OBJECT_2, therefore implementing the functional blocks 122, 132, 142 of Figure 1.
  • the second secondary unit of time CYCLE shown explicitly as C2 in Figure 4 corresponds to the situation existing when running of the key frame KeyFrame_6 has been completed.
  • the array SOUND_OBJECTS 15 and the array SOUND_OBJECTS_TO_PLAY 16 will therefore have the contents illustrated in Figure 2. More specifically, the array SOUND_OBJECTS 15 contains four sound objects SOUND_OBJECT_1, SOUND_OBJECT_2, SOUND_OBJECT_3 and SOUND_OBJECT_4, each having some values of its respective data 20-25, and the array SOUND_OBJECTS_TO_PLAY 16 contains pointers to all said sound objects SOUND_OBJECT_1, SOUND_OBJECT_2, SOUND_OBJECT_3 and SOUND_OBJECT_4. The pointers are indicated in Figure 2 by the names of the respective identifiers ID 20: SO1, SO2, SO3, SO4.
  • playing out of the digital sound fragment SOUND_FRAGMENT_3 is also started at the start of the first and of the second primary units of time LOOPS L1 and L2
  • playing out of the digital sound fragment SOUND_FRAGMENT_4 is started at the start of the second, third and fourth primary units of time LOOPs L2, L3, L4, and displaying of the frames labelled with 1, 1, 2 and 3, respectively, of the animation ANIMATION_B is started or forced at the start of the four primary units of time LOOP L1-L4.
  • the sound engine SOUND_ENGINE 19 starts to play out the third and the fourth digital sound fragments SOUND_FRAGMENT_3 and SOUND_FRAGMENT_4 (assumed to last for a time equal to one primary unit of time LOOP) only once within a secondary unit of time CYCLE, exploiting the self-repeating property LOOPS 222 of their respective sound objects 2 SOUND_OBJECT_3 and SOUND_OBJECT_4; the second digital sound fragment SOUND_FRAGMENT_2 (whatever its length), on the other hand, is started to be played out twice by the sound engine SOUND_ENGINE 19.
  • the key frame KeyFrame_i in Figure 3 implements a process of removal KILL(SOUND_OBJECT_3) of the sound object SOUND_OBJECT_3. Accordingly, at the subsequent key frame KeyFrame_i+1, the array SOUND_OBJECTS_TO_PLAY, shown by a dotted line as 16a in Figure 2, only contains the pointers SO1, SO2 and SO4 to the objects SOUND_OBJECT_1, SOUND_OBJECT_2 and SOUND_OBJECT_4. The relevant code is simply a call to the function KILL(SOUND_OBJECT_X) defined in KeyFrame_1: kill(soundobject3,0,1,1);
  • the array SOUND_OBJECTS_TO_PLAY shown in dotted line as 16b contains once again the pointers SO1,SO2,SO3 and SO4 (ID 20) to all the sound objects SOUND_OBJECT_1, SOUND_OBJECT_2, SOUND_OBJECT_3 and SOUND_OBJECT_4.
  • the relevant code is simply a call to the function RESUME(SOUND_OBJECT_X) defined in KeyFrame_1: resume(soundobject3);
  • key frame KeyFrame_k in Figure 3 represents a stable key frame that implements altogether the various different additional controllers of the main controller MAIN_CONTROL 17.
  • As part of the special effects controller FX_CONTROLLER 174, the code that implements a function of fading-in and fading-out transition between the volumes of groups of digital sound fragments SOUND_FRAGMENT_X being played out is illustrated:
  • the digital sound fragments to be faded out are inserted into the array-type variable tofade , while those to be faded in are inserted into the array-type variable toraise . It should be noted that the references for return of each digital sound fragment to its original volume were previously inserted into the array-type variable vols.
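One step of such a transition can be sketched as follows. The array names tofade, toraise and vols follow the description above; the step logic itself is an illustrative assumption, not the patent's ActionScripting code:

```javascript
// Sketch of one cross-fade step of FX_CONTROLLER 174: objects in `tofade`
// are stepped towards volume 0, objects in `toraise` towards the original
// volumes previously saved in `vols`, keyed by the object identifier.
function fadeStep(tofade, toraise, vols, step) {
  tofade.forEach(function (so) {
    so.volume = Math.max(0, so.volume - step);       // fade out, clamp at 0
  });
  toraise.forEach(function (so) {
    var target = vols[so.id];                        // original volume
    so.volume = Math.min(target, so.volume + step);  // fade in, clamp at target
  });
}
```

Calling such a step repeatedly, for example once per frame, produces the gradual transition; once every object has reached its target volume the transition is complete.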
  • the following code implements the random controller RANDOM_CONTROLLER 172, periodically using the function SCRAMBLE defined in KeyFrame_1:
  • the key frames of the animation files can in turn contain other secondary sounds, that enrich the environment with additional effects such as, by way of example, pads for creating an atmosphere that are played out cyclically independently of the sound engine SOUND_ENGINE 19, or random melodies.
  • a key frame of the time line TimeLine 72 refers to the once-only running of a given sound file, for example a compressed .MP3 file, containing a random note or phrasing.
  • the sequence described above for invoking the various different processes is given by way of example only and is in no way binding.
  • the sound engine SOUND_ENGINE 19 and the various different controllers of the main controller MAIN_CONTROL 17 could be arranged on the time line TimeLine 72 immediately, even in the same initial key frame KeyFrame_1 or, on the contrary, they could be arranged on the time line TimeLine 72 only if and when it is necessary to make use of them.
  • the first key frame KeyFrame_1 need not contain those functions that are never called up in a specific embodiment.
  • a particularly preferred aspect of the present invention enables any HTML page whatsoever to be made interactive with the sound engine SOUND_ENGINE 19 in a manner that will depend on the user's behaviour at the graphical interface representing it, in particular in the page Page.html 6, it being unnecessary to insert proper controls into the BODY of the actual HTML page.
  • This is particularly advantageous also for providing soundtracks for existing web sites, since it is not necessary to make any large-scale changes to the body of the HTML pages.
  • The JavaScriptTM code indicated below is included in an HTML page, for example at the bottom of its body section (<BODY>); this code is supported both by the Internet ExplorerTM browser by Microsoft Corporation of Redmond, WA, U.S.A. and by the Netscape NavigatorTM browser by Netscape Communications Corporation of Mountain View, California, U.S.A.
  • the code shown below allows stating once only which categories of elements in an HTML page are capable of interacting and what type of interaction this is. All the objects belonging to these categories are indexed and one or more interaction criteria are attributed to them.
  • the files Main.swf 7 and SoundFragment1.swf 5a could be combined in a single file containing both the first digital sound fragment Fragment_1_Sound_Data in the sound library and the various different KeyFrame_X described above in the time line.
  • the music genres that are most suitable for being played out according to the digital sound management technique of the present invention are undoubtedly dance genres, such as techno, jungle, hip hop and funky, in which the philosophy of cyclic repetition is already a strong feature. Unexpectedly, however, excellent experimental results have been achieved even with piano scores of the minimalist avant-garde type, with jazz sessions having recurrent broken chords, with rock arrangements, and even with parts that are sung. Like all genres that are not based on a constant rhythmic grid, classical music is the most difficult to adapt to the present invention, although it is not impossible.
  • the digital sound fragments that are not intended to be recursively repeated do not necessarily have to be cut off so that they last for exactly one primary unit of time LOOP; instead, they may last less than 1 LOOP and even overflow into the next primary unit of time LOOP or beyond, considering that the sound engine SOUND_ENGINE 19 only controls their starting.
  • The first sounds to be (down)loaded and played out should preferably be designed in such a way that they are as light as possible, so as to keep the waiting times down to the bare minimum. They can then be followed by more substantial sounds for which the (down)load time is by then covered and justified by sound already being played out.
  • PADs for creating the atmosphere can be reduced to a very small number of milliseconds in order to save bulk and can be played out cyclically independently rather than putting a burden on the sound engine SOUND_ENGINE 19: they can be launched, for example, by a dedicated key frame on the time line of any one of the .swf files present in a practical application.
  • the digital sound fragment 500 is cut to a length exceeding one primary unit of time LOOP by a quantity equal to a fade effect, and is cut with a fade-in effect 501 at the initial edge and with a fade-out effect 502 at the final edge.
  • the digital sound fragment 500 is then cut into two parts 503 and 504 at a point 505 that is approximately half way through its length, in which the wave form 500 preferably has a volume of zero decibels.
  • the two parts 503 and 504 are then reversed, placing the second part 504 in front of the first part 503.
  • the first and the second parts 503 and 504 are overlapped for the length of the fade-out 502 of the second part 504 and of the fade-in 501 of the first part 503, thus giving rise to cross-fading of the two parts 503 and 504.
  • the joint between the two parts played out in succession is far less perceptible due to the fact that it corresponds to the point in time at which the original waveform 500 was cut. If, in particular, the cut is made at a point in time in which the original waveform has a value of 0 dB, the joint is even less perceptible.
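The preparation of Figure 5 can be sketched on a plain array of samples. This is an illustrative model, not a DSP library: the function name and the linear fade ramps are assumptions, and real preparation would be done in an audio editor on the actual waveform:

```javascript
// Sketch of the loop preparation of Figure 5: the fragment 500 is cut
// slightly longer than one LOOP, faded in (501) and out (502) at the edges,
// cut in half at point 505, the halves swapped, and the faded edges
// overlapped so the loop seam falls at the original cut point.
function prepareLoop(samples, fadeLen) {
  var faded = samples.slice();
  for (var i = 0; i < fadeLen; i++) {
    var g = i / fadeLen;                 // linear fade ramp
    faded[i] *= g;                       // fade-in 501 at the initial edge
    faded[faded.length - 1 - i] *= g;    // fade-out 502 at the final edge
  }
  var mid = Math.floor(faded.length / 2); // cut point 505, roughly half way
  var first = faded.slice(0, mid);        // part 503
  var second = faded.slice(mid);          // part 504
  // place part 504 in front and cross-fade its faded tail (502)
  // with the faded head (501) of part 503
  var out = second.slice(0, second.length - fadeLen);
  for (var j = 0; j < fadeLen; j++) {
    out.push(second[second.length - fadeLen + j] + first[j]);
  }
  return out.concat(first.slice(fadeLen)); // length: original minus fadeLen
}
```

Played cyclically, the resulting fragment repeats seamlessly: the only remaining joint is at the original cut point 505, which is why choosing a zero-volume point there makes the loop essentially imperceptible.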
  • a computer program for digital sound management may conveniently be designed as a programming kit or a parametric program including the functions or processes described above, or at least the process of creation of a sound object MAKE(SOUND_OBJECT) and the sound engine SOUND_ENGINE 19, and having means for receiving as inputs parameters such as the various data 21-25 of the sound objects 2, including the names of the files containing the digital sound fragments and, possibly, the global variables 41.
  • a parametric program or programming kit will take care of generating in particular the file Main.swf 7 and, possibly, of incorporating it into a page Page.html 6.

EP02425318A 2002-05-20 2002-05-20 Gestion de son numérique Withdrawn EP1365386A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP02425318A EP1365386A1 (fr) 2002-05-20 2002-05-20 Gestion de son numérique


Publications (1)

Publication Number Publication Date
EP1365386A1 true EP1365386A1 (fr) 2003-11-26

Family

ID=29286262

Family Applications (1)

Application Number Title Priority Date Filing Date
EP02425318A Withdrawn EP1365386A1 (fr) 2002-05-20 2002-05-20 Gestion de son numérique

Country Status (1)

Country Link
EP (1) EP1365386A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8062089B2 (en) 2006-10-02 2011-11-22 Mattel, Inc. Electronic playset
US8292689B2 (en) 2006-10-02 2012-10-23 Mattel, Inc. Electronic playset

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0855697A1 (fr) * 1996-12-27 1998-07-29 Yamaha Corporation Real-time transmission of musical information
US5886275A (en) * 1997-04-18 1999-03-23 Yamaha Corporation Transporting method of karaoke data by packets
US5977468A (en) * 1997-06-30 1999-11-02 Yamaha Corporation Music system of transmitting performance information with state information
WO2001016931A1 (fr) * 1999-09-01 2001-03-08 Nokia Corporation Method and device for providing customised audio characteristics to cellular terminals



Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

AKX Designation fees paid
REG Reference to a national code

Ref country code: DE

Ref legal event code: 8566

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20040527